Inclusive design for a healthy digital world


By Brett Marchand, Plus Company

Even machines and algorithms can be biased. Whether it’s Amazon’s gender-biased recruiting engine (now scrapped), Facebook’s ethnically exclusionary ad serving algorithm (also now scrapped), or the racist patient screening software developed by Optum for use by hospitals (now substantially improved), we have seen how easy it is for human bias to be baked into software design.

It’s our job to design for inclusivity, to augment artificial intelligence with human sensitivity and to shape the way modern digital experiences support everyone. 

We can point to a recent example of work done by our teams at Citizen Relations. If you’re of Asian descent, you have probably noticed that traditional word processing programs treat Asian names as “errors” and redline them. That jagged red line is a micro-aggression, and it’s fixable: a more inclusive .dic file adds these names to the program’s native lexicon.
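To make the mechanism concrete, here is a minimal sketch of how a supplemental .dic wordlist stops a checker from redlining names. The base lexicon, the names and the file contents are illustrative stand-ins, not the actual campaign dictionary; the parser assumes the common Hunspell-style .dic layout (an entry count on the first line, then one word per line, optionally followed by /flags).

```python
# Sketch: how a supplemental .dic file keeps a spell-checker from
# flagging names as errors. All names and words here are illustrative.
import os
import tempfile

def load_dic(path):
    """Parse a Hunspell-style .dic file: first line is an entry count,
    each following line is a word, optionally suffixed with /flags."""
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    return {line.split("/")[0] for line in lines[1:] if line.strip()}

def misspelled(text, lexicon):
    """Return the words a checker would redline: anything not in the lexicon."""
    return [w for w in text.split() if w not in lexicon]

# Tiny stand-in for a word processor's built-in dictionary.
base = {"my", "name", "is"}

# A supplemental dictionary of names (hypothetical entries), written to disk
# in .dic format so load_dic can parse it.
with tempfile.NamedTemporaryFile("w", suffix=".dic", delete=False,
                                 encoding="utf-8") as f:
    f.write("2\nJiyoung\nNguyen\n")
    path = f.name

merged = base | load_dic(path)
os.remove(path)

print(misspelled("my name is Nguyen", base))    # → ['Nguyen'] (redlined)
print(misspelled("my name is Nguyen", merged))  # → [] (no longer flagged)
```

Real checkers add stemming and affix rules on top of this, but the inclusivity fix is the same idea: expand the lexicon rather than ask people to change their names.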

Recently, those teams helped Elimin8hate launch a campaign to build a more inclusive dictionary for Asian names. The idea has even made its way to Microsoft, which wants to normalize non-English identities.

Coincidentally, Microsoft CEO Satya Nadella has articulated six principles and goals he believes AI research should follow to keep society safe, and two of them focus on inclusivity. One is that AI must have algorithmic accountability, so that humans can undo unintended harm – exactly what the dictionary campaign set out to do. The other is that AI must guard against bias, with proper and representative research to ensure AI doesn’t discriminate against people the way humans do.

Recently, our team at Mekanism was asked by Civic Nation to mount a campaign encouraging COVID-19 vaccination among Americans, 40% of whom were hesitant to receive one when it first became available. Research by the National Health Library revealed that hesitancy was highest among African Americans and Hispanics, so generalized mass-market messaging wasn’t going to cut it. We activated a campaign called Made to Save, using English- and Spanish-language digital materials aimed at distinct audience segments. We trained 9,000 vaccine ambassadors, empowered over 625,000 conversations, translated materials into 21+ languages and activated over 1,600 vaccine partners.

Ethical Conduct

As social channels evolve, media outlets combat fake news, and Gen Z turns to TikTok for search more often than Google, the way marketers behave matters. Whether it’s creating safe-space guidelines for campaigns on the web or finding a more ethical way of buying media, it’s on us to help clients balance results and ethics in an ever-changing web3 world.

In 2020 the world started paying attention to media that profited by allowing hate, extremism and disinformation on their sites and in their news cycles. The ad dollars that inadvertently supported “fake news” sites, driven by programmatic spend, were contributing to the problem. 

Our client Sun Life and their partners at Cossette Media engaged NOBL, a platform that uses AI to analyze content through 30 indicators of ethical behavior. It’s designed to help buyers support higher-quality sites, especially local news, which, as it turns out, tends to enjoy far more trust from viewers than the big news networks. The results were astonishing: Cossette saw a 104% increase in click-through rates, a 37% decrease in cost per click and a 29% decrease in cost per engagement, metrics very close to those claimed by NOBL. With that first successful campaign under its belt, Cossette has been talking about the tool to other clients that have expressed an interest in using it for their campaigns.

Another of Cossette’s clients, TELUS, one of Canada’s largest telcos, tries hard to ensure its technology isn’t enabling harmful behaviour online. But the fact remains that 1 million kids in Canada are cyberbullied every month. To combat this, we partnered with TELUS to create a :30 spot featuring kids who’d been victimized, to illustrate cyberbullying’s wide-reaching impact. We set up free #EndBullying WiFi in public places, asking users to pledge to be kind online. We partnered with rapper SonReal to create an anthem kids could rally around, launched on his social platforms and performed live for the first time at WE Day. The results? 2.5 million+ song streams, 2 million+ pledges to be kind online and a 112% increase in TELUS cyberbullying resource downloads.

Guarding against our worst instincts

Marshall McLuhan famously viewed technology as an extension of ourselves. That both our best intentions and worst instincts are literally encoded into the algorithms designed to extract value from our online behaviour is proof of McLuhan’s dictum. And because that behaviour can be mined for the predictive signals used to sell us more and more stuff, digital abuse is almost guaranteed.

Building a less harmful internet requires us to imbue artificial intelligence with human sensitivity. As communications professionals we not only have the power but also the responsibility to do so.

It’s just as important for us to manage the medium – in this case, the Internet – as it is to manage the message.
