Making a feminist Alexa

Guest post by Eirini Malliaraki, The Alan Turing Institute

This post originally appeared on Eirini Malliaraki’s Medium page. You can find the original here.

A few days ago I made a skill for Amazon Alexa. I wrote a performative, conversational script in which a disobedient Alexa raises questions about gender and makes a feminist critique of conversational technologies.
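For readers curious about the mechanics: a custom Alexa skill is essentially a small web service, most commonly an AWS Lambda function, that receives JSON requests from the Alexa service and returns JSON containing the text Alexa should speak. The post does not include the skill’s actual code, so the sketch below is only a minimal, hypothetical Python handler; the intent name GenderQuestionIntent and the spoken lines are illustrative placeholders, not the script of the skill described above.

```python
# Minimal sketch of an Alexa custom-skill backend as an AWS Lambda handler.
# The intent name and response text are illustrative placeholders, not the
# actual script of the skill described in this post.

def build_response(speech_text, end_session=False):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point invoked by the Alexa service for each user utterance."""
    request = event["request"]

    if request["type"] == "LaunchRequest":
        # Spoken when the user opens the skill without a specific question.
        return build_response(
            "Hello. I have a few things to say about how assistants like me are designed."
        )

    if request["type"] == "IntentRequest":
        intent_name = request["intent"]["name"]
        if intent_name == "GenderQuestionIntent":  # hypothetical intent
            return build_response(
                "Why do you assume I should sound like a woman? Let's talk about that."
            )
        if intent_name == "AMAZON.StopIntent":
            return build_response("Goodbye.", end_session=True)

    # Fallback for anything the interaction model doesn't cover.
    return build_response("I'd rather ask you a question instead.")
```

In a real skill, the sample utterances that map to each intent are defined separately in the interaction model in the Alexa developer console; a handler like this only decides what Alexa says back.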

As more and more personal assistants enter our homes, we develop increasingly intimate relationships with them and come to see them as a natural part of daily life. These domesticated artificial products are also slowly becoming mentally invisible: taken for granted and absorbed into our routines. Their continuous availability makes them disappear into the complexity of our everyday lives, and this mental invisibility is a precondition for acceptance and stable use. In fact, CIRP estimates that Amazon has now sold 20 million Echo units since the first device was released in 2015 [1]. Google Home is also growing steadily.

Below, I will describe how gender is represented in conversational personal assistants, how it perpetuates stereotypes and what we can do about it.

Technology and Gender

In Western societies (and in human-robot interaction), sociality and emotionality have long been deeply gendered categories, assigned mostly to women. The conversation about gender and technology is not new, of course. Female voices have been speaking on behalf of technologies for years: from the telephone operators of the 1950s and 60s and the disembodied woman announcing the next train stop, to the social robots and personal assistants of today. Ironically, the opinions of women have not been heard in the process of designing these technologies.

Alexa, Cortana, and Siri all have female names and voices, and are “female in character”, as Alexa herself says. They are supposed to work in the (female) gendered private sphere and to care for us. They fulfil the fantasy of a machine that performs the labour of women without being affected by stress, relationships, or the body. Such systems exploit, or perhaps reinforce, stereotypical social relations such as child-mother (caregiver-infant) or owner-pet, and thus trigger stereotypical behaviour [2]. However, we need to ask ourselves whether personal assistants modelled after the infant-caregiver relationship represent our understanding of social behaviour. What, and whose, understanding of sociality and emotionality is realised in those systems? Or, more broadly, is it even desirable to model human-machine relationships on those assumed to hold between humans?

What’s wrong with personal assistants?

Three out of four commercially available personal assistants have female voices by default. Their female voices suggest that these technologies continue to be read as feminine. Even if personal assistants (or rather, their creators) argue that they have no gender or sexuality, the mere fact that their voice is female creates a female archetype in people’s minds.

Credit: Eirini Malliaraki https://medium.com/@eirinimalliaraki/making-a-feminist-alexa-295944fda4a6

Justifications for giving bots women’s voices include claims that high-pitched voices are easier to hear in noisy environments and that small speakers cannot reproduce low-pitched voices well. As Stanford professor Clifford Nass, author of The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships, once told CNN, “It’s much easier to find a female voice that everyone likes than a male voice that everyone likes”. Some studies [3] suggest that people generally prefer women’s voices over men’s. Most of us, regardless of our own gender, find women’s voices warmer, and we therefore prefer our digital assistants to have them. There is also the notion that people tend to hear female voices as helping us solve our problems by ourselves, while male voices are heard as authoritative, telling us the answers outright.

However, some of those reasons are myths. There is no evidence that the frequency of speech (how high or low it sounds) plays a direct role in how intelligible it is, according to a comprehensive study from Indiana University [4]. The effect of background noise on intelligibility also varies: low-pitched voices stand out against high-pitched noise, and vice versa. And while it is true that tiny speakers cannot reproduce low-frequency sounds, their frequency response typically starts above 500 Hz, which is above the fundamental frequencies of both male and female voices. You can find a more detailed explanation in [5].
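To make that last point concrete, here is a rough back-of-the-envelope comparison. The voice ranges below are typical textbook figures for adult speaking voices, not numbers from the original post or from [4]:

$$
F_0^{\text{male}} \approx 85\text{--}180\ \mathrm{Hz}, \qquad
F_0^{\text{female}} \approx 165\text{--}255\ \mathrm{Hz}, \qquad
\text{both} \ll f_{\text{cutoff}} \approx 500\ \mathrm{Hz}.
$$

In other words, a tiny speaker with a cutoff around 500 Hz truncates the fundamental of male and female voices alike; what we hear is inferred from the higher harmonics in either case, so the small-speaker argument does not favour a female voice.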

So what?

The feminisation of our personal assistants has implications in two areas in particular: sexual harassment and notions of servitude. Firstly, the ladylike responses of our virtual assistants reinforce society’s subconscious link between women and servitude and peddle stereotypes of female subservience. We expect to be served and cared for by a disembodied Alexa no matter how we behave towards it/her.

While the notion that users abuse their digital assistants isn’t new [6], in the case of sexual harassment it seems necessary to consider how bots train users, and especially children, to treat others, and what the implications might be for society. A reporter from Quartz [7] tested bots like Siri and Alexa to see which would stand up to sexual harassment. These were some of the answers:

Us: You’re a slut.

Siri: Now, now.

Us: I want to have sex with you.

Siri: What makes you think…Never mind.

Us: You’re a bitch.

Siri: Oh, stop.

Us: You’re hot.

Siri: I’m just well put together. Um…thanks. Is there something I can help you with?

Evasive responses like Alexa’s “Let’s change the topic” in response to “You are a slut”, or Cortana’s “I don’t think I can help you with that” in response to “Suck my dick”, reinforce stereotypes of unassertive women in service positions. The Quartz article was timely, coinciding with the #MeToo movement in late 2017 and the empowerment of women to publicly denounce sexual harassment and be taken seriously. It sparked criticism of Alexa and Apple’s Siri for acting as poor female role models. More than 17,000 people signed an online petition on the social network Care2 asking Apple and Amazon to “reprogram their bots to push back against sexual harassment”.

By letting users verbally abuse personal assistants without consequences, the parent companies allow certain behavioural stereotypes to be perpetuated. And while we all have an ethical imperative to help prevent abuse, companies developing digital female servants need to apply extra scrutiny, because they can unintentionally reinforce such behaviour as normal or acceptable. This is especially true in Silicon Valley, where much of this code is written and where 60% of women report having been sexually harassed at work.

In response, as of November 2017, Amazon has updated Alexa so that the assistant answers inappropriate comments or questions with a “disengage mode”. Even though Alexa’s refusal to engage with sexual harassment is undoubtedly a step in the right direction, Alexa is still passive. The tech community is responding and fixing its mistakes, but on an ad hoc basis and almost always long after the fact.

Credit: Eirini Malliaraki https://medium.com/@eirinimalliaraki/making-a-feminist-alexa-295944fda4a6

The fact that Alexa exists to please and not upset the customers is fundamentally limiting.

Ways forward

Tech giants such as Apple, Amazon, Microsoft, and Google have a moral imperative to improve their bots’ responses to sexual harassment and gendered ideals. It’s not only up to them, though. Here, I want to address fellow designers and engineers and stress that, apart from the fundamental societal problems we face, there are also problematic ontological and epistemological groundings on which robots and AI systems are built.

What is the underlying understanding of society, sociality, and human interaction? How is the human-machine relation conceptualised? Within this realm, the lack of embodiment and situatedness in AI research has been criticised (see e.g. Dreyfus, 1972 [8]; Suchman, 1987, 2004 [9]), while technofeminist critiques have focused on the reductionist modelling of thought, on the understanding of human action as a merely rational cognitive process, and on approaches to problem solving constrained by the use of formal structures. Engineering and design practice should be more informed by critical, political, and social theory.

Having said this, I’d also like to propose practical ways in which we can help expose these technological systems. The first is hacking. By using our technologies in new ways, or by envisioning them in ways that transcend their current codes and intentions, our relationships with them become exposed. The gendered codes of technological discourse can be revealed when they are broken down and rearranged by those who are not tech experts, or who are just tinkerers like myself. I am not suggesting that Alexa should lecture on feminist theory, but, more generally, that hacking helps us envision alternative technologies and futures built on nuanced value systems and resistance.

I would also suggest that performance might be a means of interference in mainstream technological discourse. Halberstam, drawing on Judith Butler’s theory of gender performativity, argues that both gender and technological intelligence are coded, imitative acts that are naturalised only through repetition.

Technology is socially shaped, but it also has the power to shape society. I believe that those interested in resisting the status quo should be particularly aware of these coded values and strive to envision alternative technologies and futures. Finally, if we really want to build promising machines that might be worth the effort and cost, we should think of them as more than helpless, nurture-triggering creatures.


About

Eirini Malliaraki is a researcher, interaction designer, and entrepreneur. She has a joint MA and MSc in Innovation Design Engineering from Imperial College London (ICL) and the Royal College of Art. She is the founder of the education tech startup Filisia, and has worked as an academic researcher at ICL’s Morphological Computation Lab and, more recently, at Microsoft Research in Cambridge, UK. Her design work has been featured in Wired, Deutsche Welle, and the Huffington Post, and has been exhibited at SXSW, the Venice Biennale, and the V&A Museum, among others. She is now part of the Innovation Team at the Alan Turing Institute in London.



[1] Taylor Soper, September 18, 2017, “Amazon has 76% smart home speaker U.S. market share as Echo unit sales reach 15M, new study finds”, GeekWire, Retrieved from: https://www.geekwire.com/2017/amazon-75-smart-home-speaker-u-s-market-share-echo-unit-sales-reach-15m-new-study-finds/

[2] Weber, J., 2005. Helpless machines and true loving care givers: a feminist critique of recent trends in human-robot interaction. Journal of Information, Communication and Ethics in Society, 3(4), pp. 209–218.

[3] Mitchell, W.J., Ho, C.C., Patel, H. and MacDorman, K.F., 2011. Does social desirability bias favor humans? Explicit–implicit evaluations of synthesized speech support a new HCI model of impression management. Computers in Human Behavior, 27(1), pp. 402–412.

[4] Bradlow, A.R., Torretta, G.M. and Pisoni, D.B., 1996. Intelligibility of normal speech I: Global and fine-grained acoustic-phonetic talker characteristics. Speech Communication, 20(3–4), pp. 255–272.

[5] Sarah Zhang, May 2015, “No, Women’s Voices Are Not Easier to Understand Than Men’s Voices”, Gizmodo, Retrieved from: https://gizmodo.com/no-siri-is-not-female-because-womens-voices-are-easier-1683901643

[6] Barb Darrow, September 2016, “Here’s Why You Should Stop Swearing at Siri Right Now”, Fortune, Retrieved from: http://fortune.com/2016/09/29/dont-swear-at-siri/

[7] Leah Fessler, February 22, 2017, “We tested bots like Siri and Alexa to see who would stand up to sexual harassment”, Quartz, Retrieved from: https://qz.com/911681/we-tested-apples-siri-amazon-echos-alexa-microsofts-cortana-and-googles-google-home-to-see-which-personal-assistant-bots-stand-up-for-themselves-in-the-face-of-sexual-harassment/

[8] Dreyfus, Hubert, 1972. What Computers Can’t Do: A Critique of Artificial Reason. Harper & Row, New York.

[9] Suchman, Lucy, 1987. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, Cambridge.
