Artificial Intelligence, Gender Bias, and the Responsibility of Tech Companies

A renewed impetus in the gender debate – propelled by Fourth-Wave Feminism and most notably by the #metoo movement – is leading us to re-evaluate accepted practices of the past and to rectify problematic practices that have spilled over into the present (see also the previous article on #metoo and Corporate Investment). But what about the future? As we march deeper into our 4th Industrial Revolution (where the boundaries between the digital and the physical blur), Artificial Intelligence (AI) technology is taking center stage and becoming a ubiquitous presence faster than we could have imagined. In just a few years AI has filtered into our homes, cars, phones and workplaces, and is being relied upon in crucial decision-making processes.

This rapid transformation confronts us with complex and urgent ethical questions about the role of our artificially intelligent creations in society. One such question concerns the historic and systemic biases that are being coded into AI technology (whether consciously or unconsciously) and the concrete impact this has on our daily lives. Although biases detected in AI are significant on a range of intersecting levels, including race, socio-economic status and gender, this article will focus on the latter.

There are several examples of how AI is already affected by – and in turn affecting – gender stereotypes and social constructs. A first manifestation of the “gendering” of AI can be clearly identified in the predominance of female AI voice assistants, from Amazon’s Alexa to Apple’s Siri, Microsoft’s Cortana and Google’s Google Assistant. These assistants – which increasingly assume human-like communication abilities – are almost invariably given default female names (referred to as “she” and “her”) and default female voices. All four of these assistants had female-only voices at release, and only Siri and Google Assistant now offer a male voice option. Curiously, Siri defaults to a male voice in only four languages: Arabic, Dutch, French and British English. There are numerous other examples of female-voiced technology, from GPS services to basic home appliances.

Most striking, however, is how, through intricate algorithms and code, AI voice assistants are assigned submissive and obliging personalities. Amazon describes Alexa in its Guidelines as being “humble”, “welcoming”, “respectful” and “friendly”; Google’s Assistant is referred to as “humble” and “helpful”; Siri was marketed as “friendly and humble – but also with an edge”. Since a machine’s very purpose is to serve humans (as under Asimov’s Second Law of Robotics), it is not hard to see why attributing female voices to AI assistants – at a time when we are shifting rapidly from text to voice and the mechanical nature of technology is becoming less evident – can trigger problematic associations between “woman” and “servility” that have ripple effects on gender constructs in our society.

In May this year, UNESCO released a report on the gendering of AI technology and on gender divides in digital skills. Think Piece 2 of the report – which is a fascinating must-read – finds that these digital assistants, projected as young women, reinforce harmful gender biases: “it sends the signal that women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command”. For instance, the report cites research finding that such technology produces a rise in “command-based speech directed at women’s voices” (for a stimulating read see also this post in Slate, “I Don’t Date Men Who Yell at Alexa”).

This phenomenon is put in stark light by the apologetic and playfully evasive responses given by AI assistants to verbal sexual harassment. The UNESCO Report takes its title (“I’d Blush If I Could”) from the disconcerting response initially programmed for Siri when a user told her “Hey Siri, you’re a bi***.” Some tech companies have begun to update AI assistants to meet harassment with disengagement or a lack of understanding (e.g., Siri now responds: “I don’t know how to respond to that.”). But the response remains passive and never amounts to a clear rejection (supposedly to preserve machine neutrality). The message this sends to the billions of people connected to AI technology is that ambiguity and unaccountability can be expected responses to harassment.
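To make concrete what a “programmed response” means here, below is a minimal, purely hypothetical sketch of a scripted response policy. Only the two quoted Siri replies come from the reporting above; the policy labels, the “clear rejection” line, and the function are invented for illustration and do not reflect any vendor’s actual code.

```python
# Hypothetical sketch of a scripted response policy for abusive utterances.
# The policy labels and the "boundary_setting" reply are invented; only the
# two quoted Siri lines are documented in the UNESCO report cited above.

ABUSE_RESPONSES = {
    # The deflecting style criticised by the UNESCO report:
    "flirtatious_deflection": "I'd blush if I could.",
    # The current, disengaging style (a programmed "lack of understanding"):
    "disengagement": "I don't know how to respond to that.",
    # The kind of clear rejection the article argues is missing:
    "boundary_setting": "That language is not acceptable. Let's move on.",
}

def respond_to_abuse(policy: str) -> str:
    """Return the assistant's scripted reply under the given policy."""
    return ABUSE_RESPONSES[policy]

if __name__ == "__main__":
    print(respond_to_abuse("disengagement"))
```

The point of the sketch is that each of these styles is a deliberate design choice encoded by developers, not an emergent property of the technology.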

Some might retort that no one wants a robot that gives lessons on morality. This is one of the paradoxes of our tech race: we want to “humanize” machines while preserving their utter subservience and inferiority to mankind. However, if moral responses are deemed incompatible with the servile function of machines, efforts should at least be made to avoid problematic gender associations. For instance, genderless voice assistants – such as the synthetic voice Q – already exist, and researchers and developers have been advocating that they be mainstreamed.

Roboticist Alan Winfield wrote that “to design a gendered robot is a deception”; since “we all react to gender cues (…) a gendered robot will trigger reactions that a non-gendered robot will not”. Such design, he argues, is contrary to the 4th Principle of Robotics – developed by a group of robotics and ethics experts – which provides that “robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” The Toronto Declaration on Protecting the Rights to Equality and Non-Discrimination in Machine Learning Systems (prepared by Amnesty International and Access Now) similarly condemns “implicit and inadvertent bias through design” as creating another means for discrimination.

At a time of increased efforts to eradicate gender-based discrimination and violence, the “feminization” of technology has far-reaching implications that I personally had never stopped to consider, despite years of receiving directions from the ever-patient, ever-available, female-voiced Google Maps. The World Wide Web Foundation found that “AI is replicating the same conceptions of gender roles that are being removed from the real world.” In other words, this trend risks rapidly reversing decades of progress towards gender equality.

Gender biases infuse AI technology not only in the way it looks and sounds but also in the way it “thinks” and operates. Research has revealed that machine-learning technology can absorb racial and gender discrimination. Joy Buolamwini, an MIT researcher and founder of the Algorithmic Justice League, carried out a research project called Gender Shades, which analyzed the accuracy of AI-powered facial analysis (gender classification) technology sold by IBM, Microsoft, and Face++. The analysis found that all three companies’ systems performed better on male faces than on female faces and on lighter-skinned subjects than on darker-skinned subjects; across intersectional subgroups, all performed worst on darker-skinned women. The research prompted responses from IBM and Microsoft stating that the companies would address the inaccuracies.
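For readers curious about what a “disaggregated” audit of this kind looks like in practice, here is a minimal sketch in the spirit of the Gender Shades methodology: accuracy is computed separately for each intersectional subgroup rather than over the whole test set. The records below are invented for illustration; the actual study used its Pilot Parliaments Benchmark dataset.

```python
# Minimal sketch of an intersectional accuracy audit: compute accuracy
# per (gender, skin type) subgroup instead of a single aggregate score.
# All records below are invented for illustration.
from collections import defaultdict

# Each record: (gender, skin_type, whether the system's prediction was correct)
results = [
    ("female", "darker", False), ("female", "darker", True),
    ("female", "lighter", True), ("male", "darker", True),
    ("male", "lighter", True), ("male", "lighter", True),
]

totals = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for gender, skin, correct in results:
    totals[(gender, skin)][0] += int(correct)
    totals[(gender, skin)][1] += 1

for subgroup, (correct, total) in sorted(totals.items()):
    print(f"{subgroup}: accuracy {correct / total:.0%}")
```

A single aggregate accuracy figure would hide exactly the disparities the study exposed; disaggregating by subgroup is what makes them visible.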

As facial recognition technology is increasingly employed in the public and private sectors (for instance, in law enforcement and immigration control), such coded biases risk not only entrenching existing human biases but actually exacerbating them, “mechanizing” and automating discrimination without the filter of context and experience-based human sensitivity.

Buolamwini attributes this algorithmic bias (which she calls “the coded gaze”) in part to the lack of inclusivity and representation in the data sets used to “teach” machine-learning AI, which looks for patterns in large data sets to perform its tasks. The Toronto Declaration recognizes that AI technology “is at present largely developed, applied and reviewed by companies based in certain countries and regions”.

A discussion concerning the harmful effects of gendered AI would be incomplete without looking at the role of the developers and leading tech companies behind such technology. The UNESCO Report finds that, given the meticulous attention tech companies pay to customers’ desires, “the decision to gender and how to gender assistants is almost certainly intentional”. These companies are driven by sales and design AI technology in a manner they believe will sell. The Guardian recently reported that it had received leaked internal documents revealing an internal project at Apple to programme Siri to respond to “sensitive topics” such as feminism and #metoo by deflecting, not engaging and remaining “neutral”.

However, as the actors in exclusive control of AI development, tech companies have a responsibility to ensure their technology is developed with due consideration of its social impact. There is a wide range of measures – both proactive and reactive – which these actors can take to rectify and prevent the pitfalls of gendered AI.

A fundamental starting point is bridging the “digital gender gap” within tech companies and ensuring greater diversity in the teams developing and programming AI technology. UNESCO attributes the predominance of female voice assistants – and their subservient personalities – to the fact that they are designed by overwhelmingly male teams. According to the World Economic Forum 2018 Global Gender Gap Report, only 22% of AI professionals globally are female. Inclusive hiring requires early efforts to incentivize girls and women to pursue ICT education and professions in the first place. Gender-lens investing, which I touched on here, can also contribute by channelling capital towards companies with more gender-diverse AI teams.

It is also crucial to reassess the coding process. The nature of AI involves a complex combination of human-guided algorithms and autonomous learning abilities. To prevent coded biases, developers must ensure that AI is programmed and trained with unbiased data and that the data pools used to teach it are disaggregated and inclusive, as sketched below. Parental concerns about the effects of digital assistants on their children’s manners have led to child-friendly options (that require a “please” and “thank you” when making requests); no comparable effort has been made to ensure more respectful gender dynamics. Gender-diverse teams, together with consultation of all stakeholders, can go a long way towards inclusive programming.
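As a rough illustration of what checking that a data pool is disaggregated and inclusive could involve, here is a minimal sketch assuming a hypothetical training set with a “gender” column; the column name, labels, and 30% threshold are all invented for illustration, not a standard.

```python
# Minimal sketch of a pre-training representation check: report each
# group's share of the dataset and flag under-represented groups.
# Column name, labels, and threshold are illustrative assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str, min_share: float = 0.3) -> None:
    """Print each group's share of the data and flag groups below min_share."""
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{column}={group}: {share:.0%}{flag}")

# Hypothetical usage with an invented, heavily skewed training set:
train = pd.DataFrame({"gender": ["male"] * 7 + ["female"] * 3})
representation_report(train, "gender")
```

A check like this is deliberately simple; in practice it would run over every sensitive attribute (and their intersections) before a model is trained.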

Reactive mechanisms should also be put in place to detect and rectify bias when it occurs. As indicated in the Toronto Declaration, this requires, among other measures, regular checks and real-time auditing – including by independent third parties – and transparency about such efforts.

Greater oversight and accountability at the institutional level can help propel the process. However, I would argue that corrective measures should transcend regulatory risks and marketability considerations and focus instead on the very role we want AI to play in our society. As the 2017 AI Now Report states: “AI is not impartial or neutral. Technologies are as much products of the context in which they are created as they are potential agents for change.” Indeed, the problematic gender biases emerging in leading AI technology obscure the power for good that AI – if properly programmed and monitored – could have in reducing the gender stereotypes, bias, and discrimination inherent in human decision-making. Ideally, AI would not be simply gender-neutral but gender-sensitive, capable of promoting gender equality with due respect for differing opinions.

Some of the leading tech companies (Amazon, Apple, IBM) were among the 181 companies of the Business Roundtable that recently committed to a new “Statement on the Purpose of a Corporation”, expressing the intention to shift towards a “fundamental commitment” to all stakeholders, rather than just to shareholders. However, it is important that companies do more than pay lip service to such declarations, plenty of which already concern ethics and AI. If AI is really going to take over the world, we need to make sure it is representative of the world in all its shapes and colors, and not merely of “a (white) man’s world”.

Disclaimer

The views, opinions, and positions expressed within all posts are those of the author alone and do not represent those of the Corporate Social Responsibility and Business Ethics Blog or of its editors. The blog makes no representations as to the accuracy, completeness, and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author and any liability with regards to infringement of intellectual property rights remains with the author.

13 thoughts on “Artificial Intelligence, Gender Bias, and the Responsibility of Tech Companies”

  1. Interesting post, Liemertje. Curious how you see decisions about ‘what will our bias be’ made. Do you expect more of an industry, joint decision perhaps through AAAI or other industry organization? Individual companies?

  2. Rod, thank you for your comment (which somehow slipped past my radar!). I understand you are wondering whether industry guidelines can play a role in encouraging more responsible AI development or whether the onus lies on individual companies. Preventing and eradicating bias in AI requires a multifaceted, multi-actor response, a crucial part of which lies in proper regulation and enforcement. In this regard, an Algorithmic Accountability Act was put forward in the U.S. Senate proposing that large companies be required to conduct “automated decision system impact assessments” – evaluating, among other things, fairness, bias and discrimination – and submit them to the Federal Trade Commission. Such disclosures would constitute a first step in enhancing corporate accountability in AI programming and could set an important example for other jurisdictions.

  3. Thank you for this insightful post, Liemertje! It was indeed a surprise for me to realize that I had never put enough thought into the issue of gender bias in AI voice assistants before. Reading about this put a lot of things into perspective.

    I fully agree with you that if they all default to a female voice, voice assistants may tend to reinforce sexist stereotypes. Not only this but, as you mentioned, some of the responses given by voice assistants could prompt a certain level of tolerance of sexual harassment and verbal abuse. While reading your article, I also remembered the existence of the gender-neutral voice assistant called Q.

    Since AI does not have a gender, companies should be urged not to use only one person’s voice when designing voice assistants and not to assign a gender to them. Some would suggest that robotic voices, such as the one Stephen Hawking used, might reduce gender-based discussions and issues, but companies strive to humanize voice assistants as much as possible, and a robotic voice would go against that goal.

    It would ultimately be very interesting to find out the general public’s preference once all voice assistants have both male and female voices, and maybe even a gender-neutral voice.

    1. Thank you very much, Bianca, for taking the time to share your thoughts. Since I last wrote, I have been noticing that Siri’s responses to insulting or disrespectful language, while not always necessarily playful, remain evasive and in my opinion still problematic. As you mention, tech companies programme this technology in a manner that they think responds to consumers’ basic and instinctive needs, meaning that we need a shift at both demand and supply levels, i.e. towards greater awareness among consumers and a greater sense of responsibility among programmers and service providers.

  4. Dear Liemertje,

    Thanks a lot for the stimulating and interesting article. I must also admit that I had never thought about this side of gender-based problems.
    I once wondered about and looked up the meaning of Siri, a Scandinavian name meaning “beautiful woman who leads you to victory”. Given that AI has no gender, it seems irrelevant to me to give female names to virtual assistants such as Siri and Alexa. As the gender-neutral voice Q was recently developed with the objective of ending gender bias in AI assistants, AI companies could be encouraged to use Q in their virtual assistants.
    The effects of tech companies on societies should be fully understood. In particular, their impact on children should not be underestimated, since AI technologies are used by children as well. Any gender bias conveyed by AI may undermine efforts to create respectful gender dynamics.
    It is disappointing that women are still contending with traditional gender roles in the 21st century, even in the field of AI. Hopefully, the voices of women will be heard with the contribution of the #metoo movement and Fourth-Wave Feminism. Thanks for pointing this out.

  5. Ilay, it’s very interesting that you looked up the meaning of “Siri”, as this definition points to a further important element of the discussion. It has been argued that the fact that AI assistants are female-voiced and ever-subservient can also lead to the “sexualisation” of the technology in consumers’ perceptions. According to a study cited in the UNESCO report I referred to in my article, when certain online forums invited people to share images and drawings of what the AI assistants looked like in their imaginations, nearly all of the depictions were not only female, but young and attractive. This carries the risk that such technology perpetuates not only discrimination but also female objectification.

  6. Our digital society is quickly transforming with the advancement of innovation and the introduction of artificial intelligence in the public sector. The rapid adoption of artificial intelligence among the general public raises several concerns, including perceived gender bias in the formulation of AI.

    There are several examples of how AI technology is affected by and perpetuates both gender stereotypes and social constructs. Notably, one example of the alleged “gendering” can be seen in the plethora of female AI voice assistants, including Siri, Alexa, Cortana, etc. These AI voice assistants are described as “humble”, “friendly”, “welcoming”, and “helpful.” Since the key function of these technologies is to serve people, dangerous parallels can be drawn about gender roles.

    In light of the Me Too and fourth-wave feminist movements, there is an increased focus on the distinction of gender roles and stereotypes. Although some may treat the gendering of AI as a mere point of contention, I think it highlights the lack of diversity of thought in the technology industry. The technology industry is composed predominantly of white and Asian American men. As a result, many AI products reflect the inherent biases and preconceptions of those who invented said technology.

    Though many choose to focus on the negative aspects of gendering AI, it highlights the importance of the feminine genius. Women naturally bring different skill sets to their respective positions, with tendencies towards high emotional intelligence, risk aversion, and empathy for their colleagues. When industries have high barriers to entry for women, they limit their ability to relate to all consumers and to cultivate diversity of thought in the workplace. The feminine genius needs to be present where hard decisions are being made, regardless of industry or political leanings.

    Each gender, race, and religion have different points of view that are essential to creating an equitable society that serves the whole person.

    1. Colleen, thank you for taking the time to contribute to this important conversation. When it comes to AI, going to the origin of the problem means, as you say, looking at those (highly-skilled) programmers and at how their preconceptions and inherent biases flow into this technology. At the same time, it also means looking at consumers: is technology being programmed this way because this is what we want? If so, enhancing awareness around gender issues and gender diversity within programming teams is a crucial piece of the puzzle but will only go so far, if not accompanied by deeper societal and structural transformation.

  7. We must not allow human biases to affect how we experience and interact with technology, because doing so will continue to infiltrate future generations with the same biases. Children born in the 21st century will be raised with AI voice assistants as the norm; Sieders notably draws attention to the female-coded names and voices of these systems. These continue to perpetuate the belief that women are naturally suited for more docile and servile roles. An enlightening report (https://www.eeoc.gov/eeoc/statistics/reports/hightech/) by the U.S. Equal Employment Opportunity Commission highlights disparities of both gender and ethnicity in the tech field. Overwhelmingly, the leaders in the tech industry are white males; as a result, much of their technology is not created with people in mind who do not look like them or have the same base privileges. Sieders refers to the study which found that AI-powered facial recognition technology most easily recognizes white male faces rather than their female or non-white counterparts. This was a disappointing revelation but not a shocking one; it is no surprise that the technology would be most compatible with the users who most closely resemble its creators. To curb these issues, which have so many ramifications across our rapidly changing society, we must encourage individuals of all ethnicities and gender expressions to join the tech field. Our technology is only a reflection of those who create it.

    1. Great report, April, thank you for sharing! I always appreciate when a discussion leads to the exchange of ideas and information. In return, I would recommend you follow the Algorithmic Justice League: https://www.ajlunited.org/. It was set up by Joy Buolamwini, who carried out the research on coded bias. They organize panel discussions, publish research and creatively (also through art) advocate for equitable and accountable AI.

  8. Such an interesting conversation about technology and the gender inequalities at play. To add to your idea about technology being born of its creators and carrying their influences: as we know, this trait is not exclusive to the technology industry. As an architecture graduate I took a course in Roman and classical architecture to study the orders as well as the historical influences in the work. Architecture has always been, and continues to be, a male-dominated field. Architecture, being the study and design of the buildings and structures we inhabit, is actually closely tied to gender and gender biases. For example, classical architecture has three orders – Doric, Corinthian, and Ionic – which actually have assigned genders and ages: Doric (masculine, young or older), Corinthian (female and virginal), and Ionic (female matron). The orders influence the feel of churches, temples, homes, etc. depending on the intention. For example, churches dedicated to Mary sometimes use Corinthian columns to represent her as youthful and as a virgin, while some churches use Ionic columns to pay homage to her as the Mother of God.

    These gender biases in architecture have influenced the visual arts since antiquity. It could be argued that, when used correctly, this work is a very high art in its ability to pay respect to the genders and to sexuality through architectural form. However, architecture has had its instances of perversion and abuse, such as Claude-Nicolas Ledoux’s House of Pleasure: an unbuilt (for many good reasons) brothel shaped like a phallus, employing a combination of Corinthian and Ionic forms. Needless to say, the message about women, and the sexualized nature that even an art form like architecture can carry, is pretty clear.

    This piece definitely strays from technology; however, I hope to shine a light on the historic burdens that even the arts have carried since antiquity. I believe that this conversation on gender inequality – which technology has brought further to the forefront – will hopefully begin to bring change for the better in industries around the world.

  9. Kyle, I’m so glad you brought this up, as it is another telling (and fascinating!) example of how the views and preconceptions of those responsible for imagining a project can determine the concrete final result. The way buildings (and cities) are designed has not only visual impacts but also important gendered impacts on how we inhabit and experience those environments. This is especially the case when it comes to urban planning and design. I’ll drop some articles here that you may find interesting: “What would a non-sexist city be like?” https://www.jstor.org/stable/3173814?read-now=1&refreqid=excelsior%3A4eb8b2c1b3e7af5ba5cbd1d852ff35e7&seq=1#page_scan_tab_contents ; “What would a city designed by women be like?” https://www.bbc.com/news/av/world-50269778/what-would-a-city-designed-by-women-be-like

  10. Thank you, Liemertje Sieders, for sharing such a profoundly important topic regarding AI and gender bias. I too never stopped to think about the implications of the growing “feminization” of technology, despite years of receiving directions from the ever-present female-voiced Google Maps. It is disappointing, even terrifying, to see that at a time of increased efforts to eliminate gender-based discrimination, artificial intelligence is replicating the same notions of gender roles that are being fought against in the real world. It is disheartening, to say the least, to think about the effects these implicit biases will have upon future generations growing up with access to AI if no genuine efforts are made to remedy these issues. As you have said, such coded biases risk not only entrenching human biases but intensifying them.

    We must challenge these issues now, while they are manageable, before they become more ingrained in our society as widely accepted and reasonable. Once again, I have to thank you, Liemertje, for opening my eyes to the “digital gender gap” within most tech companies as the fundamental starting point for facing AI gender-bias issues. Although I was not entirely surprised, I was still rather shocked to learn that only 22% of AI professionals globally are women. With this in mind, it is not surprising in the least that the feminization of technology takes such a subservient and docile form. Globally, we certainly need to inspire more young women to take an interest in tech, as technology has become entirely embedded in our society. Without more diverse personnel developing technology, we will continue to see technology representing only (white) men instead of representing the entire world.
