Robotic racists: AI technologies could inherit their creators' biases

04 September 2017 - 10:10
By Yolisa Mkele
What if AIs began influencing our decision-making without our knowledge?

Artificial intelligence (AI) is a lovely idea. It translates languages, diagnoses illness more efficiently than a human doctor and finds new music for you based on your previous selections.

But if we've learnt anything from The Matrix, it's that it's just a matter of time before AI turns us all into batteries to fuel its own existence.

Now some creepy data from the tech world suggest that it's going to come for people of colour and women first.

As it stands, AI hasn't reached its diabolically sentient final form yet. For now it relies on algorithms that crunch current and historical data to "learn" the best way to fulfil its function, and even that is already leading us down some rather dark paths.

A risk-analysis program used by US courts was found to be mistakenly flagging black defendants as likely re-offenders at almost twice the rate of their white counterparts.

Another program, used by some US police departments to predict where crime will occur, got stuck in a feedback loop that led to the over-policing of black and brown neighbourhoods.
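
The mechanics of that loop are simple enough to sketch. In the toy simulation below, where every number is invented for illustration, two neighbourhoods have identical underlying crime rates but one starts with more recorded arrests; because the model only sends patrols where the data says crime is highest, and patrols are what generate new records, the gap widens on its own.

```python
# Toy simulation of a predictive-policing feedback loop.
# All numbers are invented; both neighbourhoods have the SAME true crime rate.

true_crime_rate = {"A": 0.10, "B": 0.10}  # identical underlying rates
recorded = {"A": 120, "B": 80}            # historical records already skewed towards A

for year in range(1, 6):
    # The "predictive" model sends all 100 patrols wherever the data
    # says crime is highest -- which is wherever it patrolled before.
    target = max(recorded, key=recorded.get)
    # Patrols generate new arrest records at the true crime rate.
    recorded[target] += 100 * true_crime_rate[target]
    share = recorded["A"] / sum(recorded.values())
    print(f"Year {year}: A's share of recorded arrests = {share:.0%}")
```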

The job-searching site LinkedIn ran into problems when its AI was found to favour male names in searches; a Google image-recognition tool was found to be labelling certain black people as gorillas; and Microsoft's attempt at creating a Twitter bot that could tweet like a person failed spectacularly when it began spewing anti-Semitic, racist and sexist posts.

The problem is that the information AIs learn from is laced with our own prejudices.
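
How easily that happens can be shown with a toy example. The sketch below uses entirely synthetic data and a hypothetical hiring scenario: a model trained on past decisions that favoured men learns to reproduce that preference for otherwise identical candidates.

```python
# Synthetic, hypothetical example: a hiring model trained on biased history.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, is_male]. Labels: past hiring decisions
# in which equally experienced women were mostly rejected.
X = [[5, 1], [6, 1], [4, 1], [7, 1],
     [5, 0], [6, 0], [4, 0], [7, 0]]
y = [1, 1, 1, 1,   # the men were hired
     0, 1, 0, 0]   # the equally experienced women mostly were not

model = LogisticRegression().fit(X, y)

# Two candidates identical in everything except gender:
print(model.predict_proba([[5, 1]])[0][1])  # hire probability, male
print(model.predict_proba([[5, 0]])[0][1])  # hire probability, female
# The second number comes out far lower: the model has "learnt"
# the historical prejudice, not merit.
```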

Kristian Lum, the lead statistician at the San Francisco nonprofit Human Rights Data Analysis Group, told UK newspaper The Guardian: "If you're not careful, you risk automating the exact same biases these programs are supposed to eliminate."

Echoing the sentiment was Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford.

"The world is biased, the historical data are biased, so it's not surprising that we receive biased results."

Fixing the problem is proving a little trickier than simply calling in the IT guy. As AI learns and adapts beyond its original coding, it becomes more complex and opaque.

Last month Facebook had to shut down an experiment after two of its AI programs developed a language that only they could understand. The danger is still hypothetical at this point, but the question it raises is clear: what if prejudiced AIs began communicating with each other and influencing our decision-making without our knowledge?

According to Wachter, however, there is a silver lining.

"At least with algorithms we can potentially know when the algorithm is biased.

"Humans are able to lie about the reasons they didn't hire someone, for example, but we don't expect algorithms to lie or deceive us," Wachter said.

She maintained that, in principle, systems could be put in place to detect bias in an AI.
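
One such check is easy to sketch. The snippet below, using hypothetical logged decisions, shows the kind of audit Wachter has in mind: because an algorithm's outputs can be recorded, its selection rate per group can be measured directly, something a human interviewer's private reasoning never exposes. The 80% threshold is borrowed from the US "four-fifths" rule of thumb for adverse impact.

```python
# Hypothetical audit: measure a model's selection rate per group
# from its logged decisions, then apply the US "four-fifths" rule
# of thumb for adverse impact.

def selection_rates(decisions):
    """Fraction of positive decisions per group."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [d for g, d in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

# (group, decision) pairs logged from an imaginary hiring model.
log = [("men", 1), ("men", 1), ("men", 1), ("men", 0),
       ("women", 1), ("women", 0), ("women", 0), ("women", 0)]

rates = selection_rates(log)
print(rates)  # {'men': 0.75, 'women': 0.25}

if min(rates.values()) < 0.8 * max(rates.values()):
    print("Possible bias detected: audit the model.")
```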

But until then, there's a good chance that if your science-loving daughter gets career guidance from some newfangled AI at her school, it will tell her to become a nurse or a housewife.

This article was originally published in The Times.