Artificial Intelligence, Gender Bias, and the Responsibility of Tech Companies

A renewed impetus in the gender debate – propelled by Fourth-Wave Feminism and most notably by the #metoo movement – is leading us to re-evaluate accepted practices of the past and rectify problematic practices that have spilled over into the present (see also the previous article on #metoo and Corporate Investment). But what about the future? As we march deeper into the Fourth Industrial Revolution (in which the boundaries between the digital and the physical blur), Artificial Intelligence (AI) is taking center stage and becoming ubiquitous faster than we could have imagined. In just a few years AI has filtered into our homes, cars, phones and workplaces, and is now relied upon in crucial decision-making processes.

This rapid transformation confronts us with complex and urgent ethical questions about the role of our artificially intelligent creations in society. One such question concerns the historic and systemic biases that are being coded into AI technology (whether consciously or unconsciously) and the concrete impact this has on our daily lives. Although the biases detected in AI operate on a range of intersecting levels, including race, socio-economic status and gender, this article focuses on the latter.

There are several examples of how AI is already affected by – and in turn affecting – gender stereotypes and social constructs. A first manifestation of the “gendering” of AI can be clearly identified in the predominance of female AI voice assistants, from Amazon’s Alexa to Apple’s Siri, Microsoft’s Cortana and Google’s Google Assistant. These assistants – which increasingly assume human-like communication abilities – are almost invariably given default female names (they are referred to as “she” and “her”) and default female voices. All four assistants launched with female-only voices, and only Siri and Google Assistant now offer a male voice option. Curiously, Siri defaults to a male voice in only four languages: Arabic, Dutch, French and British English. There are numerous other examples of female-voiced technology, from GPS services to basic home appliances.

Most striking, however, is how, through intricate algorithms and code, AI voice assistants are assigned submissive and obliging personalities. Amazon describes Alexa in its Guidelines as “humble”, “welcoming”, “respectful” and “friendly”; Google’s Assistant is referred to as “humble” and “helpful”; Siri was marketed as “friendly and humble – but also with an edge”. Since a machine’s very purpose is to serve humans (as under Asimov’s second law of robotics), it is not hard to see why attributing female voices to AI assistants – at a time when we are shifting rapidly from text to voice and the mechanical nature of the technology is becoming less evident – can trigger problematic associations between “woman” and “servility” that have ripple effects on gender constructs in our society.

In May this year, UNESCO released a report on the gendering of AI technology and on gender divides in digital skills. Think Piece 2 of the report – a fascinating must-read – finds that these digital assistants, projected as young women, reinforce harmful gender biases: “it sends the signal that women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command”. The report cites research finding, for instance, that such technology produces a rise in “command-based speech directed at women’s voices” (for a stimulating read see also the Slate post “I Don’t Date Men Who Yell at Alexa”).

This phenomenon is put in stark light by the apologetic and playfully evasive responses AI assistants give to verbal sexual harassment. The UNESCO report takes its title (“I’d Blush If I Could”) from the disconcerting response initially programmed for Siri when a user told her “Hey Siri, you’re a bi***.” Some tech companies have begun to update AI assistants to meet harassment with disengagement or a lack of understanding (e.g., Siri now responds: “I don’t know how to respond to that.”). But the response remains passive and never constitutes a clear rejection (supposedly to preserve machine neutrality). The message this sends to the billions of people connected to AI technology is that ambiguity and unaccountability can be expected responses to harassment.
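
To make the contrast between deflection and rejection concrete, here is a purely hypothetical sketch of a dialogue policy that answers abusive input with an unambiguous refusal. The keyword list, function name and responses are invented for illustration; no real assistant works from a hard-coded lexicon like this, and production systems rely on trained classifiers.

```python
# Hypothetical illustration only: meet abusive input with a clear refusal
# rather than a playful deflection. The lexicon and responses are invented.

ABUSIVE_TERMS = {"jerk", "idiot"}  # placeholder lexicon, not any real system's

def respond(utterance: str) -> str:
    # Normalize tokens and check for abusive language.
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    if tokens & ABUSIVE_TERMS:
        # An explicit rejection instead of "I'd blush if I could".
        return "That language is abusive. I won't engage with it."
    return "How can I help?"

print(respond("You're an idiot."))     # -> That language is abusive. I won't engage with it.
print(respond("What's the weather?"))  # -> How can I help?
```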

Some might retort that no one wants a robot that lectures them on morality. This is one of the paradoxes of our tech race: we want to “humanize” machines while preserving their utter subservience and inferiority to humankind. However, if moral responses are deemed incompatible with the servile function of machines, efforts should at least be made to avoid problematic gender associations. Genderless voice assistants already exist, for instance, and researchers and developers have been advocating that they be mainstreamed.

Roboticist Alan Winfield has written that “to design a gendered robot is a deception”; since “we all react to gender cues (…) a gendered robot will trigger reactions that a non-gendered robot will not”. Such design, he argues, is contrary to the 4th Principle of Robotics – drawn up by a group of robotics, ethics and law experts – which provides that “robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” The Toronto Declaration on Protecting the Rights to Equality and Non-Discrimination in Machine Learning Systems (prepared by Amnesty International and Access Now) similarly condemns “implicit and inadvertent bias through design” as creating another means for discrimination.

At a time of increased efforts to eradicate gender-based discrimination and violence, the “feminization” of technology has far-reaching implications that I personally had never stopped to think about, despite years of receiving directions from the ever-patient, ever-available, female-voiced Google Maps. The World Wide Web Foundation has found that “AI is replicating the same conceptions of gender roles that are being removed from the real world.” In other words, this trend risks quickly reversing decades of progress towards gender equality.

Gender biases infuse AI technology not only in the way it looks and sounds but also in the way it “thinks” and operates. Research has revealed that machine-learning technology can absorb racial and gender discrimination. Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League, carried out a research project called Gender Shades, which analyzed the accuracy of AI-powered facial analysis (gender classification) software sold by IBM, Microsoft, and Face++. The analysis found that all three companies’ systems performed better on male faces than on female faces and on lighter-skinned subjects than on darker-skinned subjects; across intersectional subgroups, all performed worst on women of color. The research prompted responses from IBM and Microsoft stating that the companies would address the inaccuracies.
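
As a rough illustration of what such a disaggregated audit involves (a minimal sketch, not the Gender Shades methodology itself), the snippet below computes accuracy per intersectional subgroup from labeled predictions. The field names and toy records are assumptions made for the example.

```python
# Minimal sketch: break a classifier's accuracy down by intersectional subgroup.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of dicts with 'gender', 'skin_type', 'true', 'pred' keys."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        key = (r["gender"], r["skin_type"])          # intersectional subgroup
        totals[key] += 1
        hits[key] += int(r["pred"] == r["true"])
    return {k: hits[k] / totals[k] for k in totals}

# Toy data for illustration only.
sample = [
    {"gender": "female", "skin_type": "darker",  "true": "female", "pred": "male"},
    {"gender": "female", "skin_type": "lighter", "true": "female", "pred": "female"},
    {"gender": "male",   "skin_type": "darker",  "true": "male",   "pred": "male"},
    {"gender": "male",   "skin_type": "lighter", "true": "male",   "pred": "male"},
]
for group, acc in subgroup_accuracy(sample).items():
    print(group, f"{acc:.0%}")
```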

As facial recognition technology is increasingly employed in the public and private sectors (for instance, in law enforcement and immigration control), such coded biases risk not only entrenching existing human biases but actually exacerbating them, “mechanizing” and automating discrimination without the filter of context and experience-based human sensitivity.

Buolamwini attributes this algorithmic bias (which she calls “the coded gaze”), among other factors, to the lack of inclusivity and representation in the data sets used to “teach” machine-learning AI, which looks for patterns in large data sets to perform its tasks. The Toronto Declaration recognizes that AI technology “is at present largely developed, applied and reviewed by companies based in certain countries and regions”.

A discussion of the harmful effects of gendered AI would be incomplete without looking at the role of the developers and leading tech companies behind such technology. The UNESCO report finds that, given the meticulous attention tech companies pay to customers’ desires, “the decision to gender and how to gender assistants is almost certainly intentional”. These companies are driven by sales and design AI technology in the manner they believe will sell. The Guardian recently reported that it had received leaked internal documents revealing an internal project at Apple to program Siri to respond to “sensitive topics” such as feminism and #metoo by deflecting, not engaging and remaining “neutral”.

However, as the actors who control the development of AI from start to finish, tech companies have a responsibility to ensure their technology is developed with due consideration of its social impact. There is a wide range of measures – both proactive and reactive – that these actors can take to rectify and prevent the pitfalls of gendered AI.

A fundamental starting point is bridging the “digital gender gap” within tech companies and ensuring greater diversity in the teams developing and programming AI technology. UNESCO notes that the predominance of female voice assistants – and their subservient personalities – can be attributed to the fact that they are designed by teams that are overwhelmingly male. According to the World Economic Forum’s 2018 Global Gender Gap Report, only 22% of AI professionals globally are female. Inclusive hiring requires early efforts to encourage girls and women to pursue ICT education and professions in the first place. Gender-lens investing, which I touched on here, can also contribute by directing capital towards companies with more gender-diverse AI teams.

It is also crucial to reassess the coding process. AI combines human-written algorithms with the machine’s own capacity to learn from data. To prevent coded biases, developers must ensure that AI is programmed and trained with unbiased data and that the data pools used to teach it are disaggregated and inclusive. Parental concerns about the effects of digital assistants on children’s manners have led to child-friendly options (which require a “please” and “thank you” with each request); no comparable effort has been made to ensure more respectful gender dynamics. Gender-diverse teams, together with consultation of all stakeholders, can go a long way towards inclusive programming.
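
As a minimal sketch of what a “disaggregated and inclusive” data check might look like in practice, the snippet below reports how a training set breaks down by group and flags underrepresented ones before the data is used. It assumes a pandas DataFrame with a hypothetical `gender` column and an arbitrary 30% threshold; real audits would cover many more attributes and their intersections.

```python
# Minimal sketch: report group shares in a training set and flag small groups.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str = "gender"):
    """Return each group's share and the groups below an illustrative threshold."""
    shares = df[column].value_counts(normalize=True)
    underrepresented = shares[shares < 0.30]   # threshold chosen only for the example
    return shares, underrepresented

# Toy training set for illustration.
train = pd.DataFrame({"gender": ["male"] * 70 + ["female"] * 25 + ["nonbinary"] * 5})
shares, flagged = representation_report(train)
print(shares)
print("Underrepresented groups:", list(flagged.index))
```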

Reactive mechanisms should also be put in place to detect and rectify bias when it occurs. As indicated in the Toronto Declaration, this requires, among other things, regular checks and real-time auditing – including by independent third parties – and transparency about such efforts.
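
By way of illustration only, a recurring check of this kind might compare error rates across groups on recent predictions and flag the model when the gap exceeds a tolerance. The sketch below uses invented field names, a toy sample and an arbitrary 5% tolerance; it is one simple fairness check among many, not a prescribed audit procedure.

```python
# Minimal sketch: flag a model when false-negative rates diverge across groups.

def false_negative_rate(rows):
    positives = [r for r in rows if r["true"] == 1]
    if not positives:
        return 0.0
    return sum(1 for r in positives if r["pred"] == 0) / len(positives)

def audit(rows, group_key="gender", tolerance=0.05):
    groups = {g: [r for r in rows if r[group_key] == g]
              for g in {r[group_key] for r in rows}}
    rates = {g: false_negative_rate(rs) for g, rs in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > tolerance

# Toy batch of recent predictions for illustration.
recent = [
    {"gender": "female", "true": 1, "pred": 0},
    {"gender": "female", "true": 1, "pred": 1},
    {"gender": "male",   "true": 1, "pred": 1},
    {"gender": "male",   "true": 1, "pred": 1},
]
rates, gap, flagged = audit(recent)
print(rates, f"gap={gap:.2f}", "FLAG" if flagged else "ok")
```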

Greater oversight and accountability at the institutional level can help propel the process. However, I would argue that corrective measures should transcend regulatory risks and marketability considerations and focus instead on the very role we want AI to play in our society. As the 2017 AI Now Report states: “AI is not impartial or neutral. Technologies are as much products of the context in which they are created as they are potential agents for change.” Indeed, the problematic gender biases emerging in leading AI technology obscure the power for good that AI – if properly programmed and monitored – could have in reducing the gender stereotypes, bias and discrimination inherent in human decision-making. Ideally, AI would not simply be gender-neutral, but gender-sensitive: capable of promoting gender equality with due respect for differing opinions.

Some of the leading tech companies (Amazon, Apple, IBM) were among the 181 companies of the Business Roundtable that recently committed to a new “Statement on the Purpose of a Corporation”, expressing the intention to shift towards a “fundamental commitment” to all stakeholders, rather than just to shareholders. However, it is important that companies do more than pay lip service to such declarations, plenty of which already concern ethics and AI. If AI is really going to take over the world, we need to make sure it is representative of the world in all its shapes and colors, and not merely of “a (white) man’s world”.


One thought on “Artificial Intelligence, Gender Bias, and the Responsibility of Tech Companies”

  1. Interesting post, Liemertje. Curious how you see decisions about ‘what will our bias be’ made. Do you expect more of an industry, joint decision perhaps through AAAI or other industry organization? Individual companies?

