In the age of technological naivety, robots and artificial intelligence were seen as raceless, objective machines. However, as anthropomorphic technology advances and more and more robots are designed, a critical question has emerged: why is AI so white?
Most AI is racialised as white: from the white humanoid Sophia of Hanson Robotics and the blonde-haired, blue-eyed Cindy Smart Doll to Kismet, a robot from the dawn of AI, with its white colour, blue eyes and light eyebrows. We see it in popular culture too, with Schwarzenegger’s Terminator, Alicia Vikander as the tragic yet threatening robot Ava in Ex Machina, RoboCop and the Blade Runner franchise. In a particularly pertinent line, Hanson Robotics states on Sophia’s website that she “personifies our dreams for the future of AI.” That future, evidently, is white.
Whiteness has become universally normalised in technology and artificial intelligence. Stephen Cave and Kanta Dihal explore this in their paper The Whiteness of AI, arguing that, as long as artificial intelligence is synonymous with high-functioning autonomy, power and agency, it will be imagined as white. They discuss how technological superiority has always been essential to the logic of colonialism and, thus, depicting AI as white stands to assert Western dominance. Moreover, when given the choice to imagine what the future of Man will look like, the West envisions Whiteness or, that is to say, itself. This occurs to such an extent that Whiteness in AI is interpreted as the absence of race, while minority-racialised AI sticks out for its absence of Whiteness. In these ways, Whiteness has quietly become the assumed image of AI.
Ruha Benjamin, in Race After Technology, quotes a former Apple employee who questioned why Siri didn’t speak African American English as it did other English vernacular dialects. His boss replied, “Well, Apple products are for the premium market.” In cultural policy, this sentiment is often referred to as ‘the deficit model’: the assumption that Black and minority communities are not engaged in art and culture simply because they are not interested or intelligent enough. This reasoning provides the justification for white cultural outputs that purposefully ostracise minority groups, and forms the bedrock of racist ideals of progress. The same is found in the tech industry, and Benjamin shows how the architecture of artificial intelligence is exclusionary by design, even when a well-meaning employee wishes to change that. The assumption of supreme Whiteness in production and distribution is deeply entrenched in the field of AI, allowing for what Cave and Dihal call “a full erasure of people of colour from the western utopian imaginary.”
In many respects, Whiteness takes on the invisible default as Blackness is erased while, in others, Blackness becomes hypervisible. For example, one study found that black robots garner more negative commentary than white or Asian ones, and that black and Asian robots were subjected to twice as many dehumanising comments as white robots (Strait et al.). Another shows that patterns of shooter bias, in which African Americans are more likely to be shot than white subjects in both armed and unarmed conditions, persist in videos featuring racialised robots instead of real people (Correll et al. 2012). The hypervisibility of Blackness in AI becomes apparent when it is to its detriment, showcasing how pre-existing racial biases are re-enacted through seemingly objective technology.
Don’t let the shiny, detached language of Silicon Valley fool you: AI does not build itself. Behind every robot is a room of real people, and with them the many flaws and biases we have yet to resolve as a society. It should come as no surprise, then, that supposedly objective AI is, in actuality, a Pandora’s box of racism and exclusionary assumptions. However good the intentions behind its conception, representation matters in AI; it matters who is in the room, who it is made for, and who it is made in the image of. Benjamin tells us, “The public must hold accountable the very platforms and programmers that legally and invisibly facilitate the new Jim Code,” and what I take from this growing field of research is that the racialisation of AI was always going to be impactful; we are now being asked to notice it.