Okay, if you’ve ever felt guilty about having bias, don’t. Apparently, even computers have it.
According to “Computers, Artificial Intelligence Show Bias and Prejudice, Too,” an NBC News article released a few days ago, it may not be the programmers’ fault. “It may just be that the body of published material is based on millennia of biased thinking and practices.”
It makes a rough kind of sense that nothing is free of bias, but I was still surprised. I suppose I thought that search engines and artificial intelligence, or AI, programs would be above it all. But nope. “Cyberspace isn’t any fairer than the workplace.”
According to the article, if your name is Leroy or Jamal, you’re less likely to get a job offer than an Adam or Chip with the same credentials. You’re also more likely to be associated with negativity. And, “AI thinks Megan is far more likely to be a nurse than Alan is,” Aylin Caliskan and her colleagues at Princeton University found.
The team studied language in different software programs, analyzing the associations computer-based systems made for more than 2 million words. What was really interesting was that, as part of the analysis, they adapted some of the same standard psychology tests people take, such as the Implicit Association Test, which measures how long it takes a person to link different words.
The results indicate that female names are associated more often with family terms, and male names are more often associated with career terms. “We show that cultural stereotypes propagate to artificial intelligence (AI) technologies in widespread use,” the researchers wrote in their report, published in the journal Science.
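To make the idea of “measuring associations” a little more concrete, here is a minimal, illustrative sketch of the general approach, not the researchers’ actual code: each word is represented as a vector of numbers, and you compare how close a name’s vector sits to “career” words versus “family” words using cosine similarity. The vectors below are made-up toy values purely for illustration; a real analysis would use embeddings trained on enormous amounts of published text.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attribute_set_a, attribute_set_b):
    """How much closer a word sits to attribute set A than to set B --
    the core idea behind applying an IAT-style test to word embeddings."""
    sim_a = np.mean([cosine(word_vec, v) for v in attribute_set_a])
    sim_b = np.mean([cosine(word_vec, v) for v in attribute_set_b])
    return sim_a - sim_b

# Toy, made-up vectors for illustration only -- real studies use
# embeddings learned from millions of words of real-world text.
embeddings = {
    "megan":  np.array([0.9, 0.1, 0.3]),
    "alan":   np.array([0.2, 0.8, 0.4]),
    "career": np.array([0.1, 0.9, 0.5]),
    "salary": np.array([0.2, 0.9, 0.3]),
    "family": np.array([0.9, 0.2, 0.4]),
    "home":   np.array([0.8, 0.1, 0.5]),
}

career_words = [embeddings["career"], embeddings["salary"]]
family_words = [embeddings["family"], embeddings["home"]]

for name in ("megan", "alan"):
    score = association(embeddings[name], career_words, family_words)
    leaning = "career" if score > 0 else "family"
    print(f"{name}: association score {score:+.3f} (leans {leaning})")
```

In a study like the one described here, differences of this kind would be aggregated across whole sets of names and attribute words and tested statistically; the point of the sketch is simply that the “bias” lives in how close word vectors sit to one another.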
Damn. Even computers think women should be barefoot and pregnant. For instance, Caliskan found that language translation programs can exhibit bias. According to the article, she “demonstrated on a video, showing how translation software turned the gender-neutral Turkish term ‘o’ into ‘he’ for a doctor and ‘she’ for a nurse.”
Joanna Bryson, who helped oversee the research, said that “it’s easy for software to learn these associations just by going through publications and literature…just learning language, could account for the prejudices.” But people are still responsible. I disagree with the earlier suggestion that programmers aren’t at fault. Of course they are, at least somewhat. Even if machines pick up bias from the societal data they’re fed, people create and program the machines and the software they run on.
Still, who’s at fault is irrelevant at this point. Rather than get mad about it or point fingers, I agree with Arvind Narayanan, assistant professor of computer science at Princeton: “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”
Discussions of workplace diversity and inclusion are no different. We must acknowledge that issues and challenges exist. Then we have to decide what we can stand and what we can’t. Everything in the former category, we shrug off or work through. Everything we can’t stand, we must find the ways and means to change.