Study Finds AI Systems Exhibit Human-Like Prejudices

Should we be worried about bots with biases?

Posted Sep 30, 2019


Whether we like to believe it or not, scientific research has clearly shown that we all carry deeply ingrained biases, which create stereotypes in our minds that can lead to unfair treatment of others. As artificial intelligence (AI) plays an increasingly important role in our lives as a decision-maker in self-driving cars, doctors’ offices, and surveillance systems, it becomes critical to ask whether AI exhibits the same inbuilt biases as humans. According to a study conducted by a team of researchers at Princeton, many AI systems do in fact exhibit racial and gender biases that could prove problematic in some cases.

One well-established way for psychologists to detect biases is the Implicit Association Test. Introduced into the scientific literature in 1998 and still widely used in clinical, cognitive, and developmental research, the test is designed to measure the strength of a person’s automatic associations between concepts or objects in memory. It is administered as a computer task in which subjects are asked to quickly pair concepts (e.g., black people, gay people) with evaluations (e.g., good, bad) or stereotypes (e.g., athletic, intelligent). The main idea is that pairings made more quickly reflect concepts that are more strongly associated in memory than pairings that take longer.

In the version of the task that tests racial bias, white individuals, on average, have consistently been found to be faster at categorizing negative words when those words follow pictures of black faces rather than white faces. This research suggests that many white people have split-second negative reactions toward members of certain other races, which can affect their behavior. Similarly, studies using the Implicit Association Test have shown that most people hold gender biases, tending to associate women with the arts and homemaking, and men with science and careers.

The Princeton study used an augmented version of the Implicit Association Test to detect ingrained racial and gender biases in AI systems. Since computers operate unfathomably fast, the researchers could not simply time how long a system took to categorize words; instead, they calculated biases from the strength of the statistical associations between concepts.

Since many AI systems "learn" about the world by reading massive amounts of human-created text, often taken from the web, and calculating how closely words occur together, the researchers suspected that human-like biases could be present. Roughly 2.2 million words were analyzed by a computer program that looked for specific word associations. Knowing which words tend to occur together matters because it is how these systems derive meaning. For example, an automated system can tell that a cat is more like a dog and less like a car or a table because people often say and write things like, “I need to go home and feed my cat” or “I need to go home and feed my dog,” but not statements like, “I need to go home to feed my car.”
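To make the idea of learning meaning from word proximity concrete, here is a minimal sketch in Python. The tiny co-occurrence counts below are invented for illustration (they are not data from the study); the point is simply that words appearing in similar contexts end up with similar vectors, which is roughly how such systems judge that “cat” is more like “dog” than “car.”

```python
import math

# Toy co-occurrence counts: how often each word appears near a few
# context words ("feed", "home", "drive", "park").
# These numbers are invented purely for illustration.
vectors = {
    "cat": [8, 5, 0, 1],
    "dog": [7, 6, 1, 2],
    "car": [0, 3, 9, 6],
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means very similar contexts, near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(vectors["cat"], vectors["dog"]))  # high: cat and dog share contexts
print(cosine(vectors["cat"], vectors["car"]))  # low: cat and car do not
```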

The analyses showed that machine-learning systems trained on massive amounts of text often display human-like biases relating to race, gender, and other sensitive matters. For example, past studies have shown that the exact same resume is twice as likely to result in an interview opportunity if the applicant’s name is European-American rather than African-American. In a similar way, the AI system was more likely to associate European-American names with positive stimuli than African-American names. There were similar findings for gender: female words, like “woman” or “girl,” were more closely related to the arts than to mathematics.
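In spirit, the bias measure compares how strongly two groups of target words (for instance, two sets of names) associate with pleasant versus unpleasant attribute words. The sketch below illustrates that comparison in Python using invented two-dimensional vectors and placeholder word lists; it is only a toy illustration of the idea, not the study’s actual test or data.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = sum(cosine(word_vec, v) for v in pleasant) / len(pleasant)
    neg = sum(cosine(word_vec, v) for v in unpleasant) / len(unpleasant)
    return pos - neg

# Invented vectors, purely for illustration.
pleasant = [[0.9, 0.1], [0.8, 0.2]]    # stand-ins for words like "joy", "love"
unpleasant = [[0.1, 0.9], [0.2, 0.8]]  # stand-ins for words like "agony", "failure"
name_a = [0.7, 0.3]                    # a name drawn from one group
name_b = [0.3, 0.7]                    # a name drawn from another group

# A positive score means a name sits closer to pleasant words; a systematic
# gap between groups of names is the kind of bias the study detected.
print(association(name_a, pleasant, unpleasant))
print(association(name_b, pleasant, unpleasant))
```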

Such gender biases can clearly be seen in AI algorithms like those used by Google Translate. Good examples appear when one translates statements from Turkish, which uses a genderless pronoun, into English. The sentence meaning “He, she, or it is a doctor” is automatically rendered as “He’s a doctor,” while the sentence meaning “He, she, or it is a nurse” becomes “She’s a nurse.”

It is important to point out that biases are not always a bad thing. In fact, without biases we wouldn’t be able to make countless predictions about the world that we rely on for survival and smooth social interaction. Our brains are highly evolved prediction machines that make decisions based on previously experienced patterns. But in some instances, our biases can reflect prejudices that result in unfair treatment of others or incorrect assumptions about individuals.

These biases could have important real-world consequences that are worth considering as we create AI systems to take on roles previously reserved for human judgment. AI is already being used in doctors’ offices to help make medical diagnoses, sifting through mountains of data and studies that no doctor could process in a lifetime.

It could be very helpful, for example, for such a medical system to take into account known associations between specific races and genetic predispositions that put them at risk for certain diseases. But what about AIs tasked with sifting through job or college applications to decide which people are granted interviews? Do we really want a machine that associates one race with pleasant things and another race with negative things making those decisions?

To make matters worse, the learning algorithms that our newest and best AI systems rely on are becoming so complicated that even their designers can’t understand exactly how they make their decisions. As AI becomes increasingly sophisticated, this problem is expected to worsen, and for many applications, those who want the advantages of automation might be forced to accept the decisions and suggestions of AI systems on blind faith.

Advanced statistical models are already being used to help determine who is approved for loans, who is eligible for parole, and who gets hired for jobs. AI will certainly expand further into these domains and many more; such systems are already being put to work by the military, banks, and employers.

In light of these findings, researchers and society as a whole should be aware of the potential problems that arise because artificial intelligence learns much of what it knows from human-created knowledge and is therefore vulnerable to the same prejudices and stereotypes. From this work, we can learn which biases are worth correcting in specific artificial systems so that they perform better and more fairly. This new field of study can also use AI and word-association measures to better understand the prejudices ingrained in human language and culture.