AI Created To Identify ‘Hate Speech’ — Backfires On African-Americans

After universities created artificial intelligence systems to identify “hate speech,” a study of the results raised concerns. The systems apparently backfire on African-Americans, and many quickly declared the AI “biased.”

Cornell University researchers studied the artificial intelligence systems unveiled by many universities, which were created to monitor social media websites and flag potentially offensive online content. The results were troubling, especially for African-Americans, leading many to deem the AI systems “biased against black people,” as one NY Post headline read.

Researchers at Cornell discovered that AI systems designed to identify “hate speech” flagged comments purportedly made by minorities “at substantially higher rates” than those made by whites. More specifically, the study found that, by the AI systems’ own standard of abusive speech, “tweets written in African-American English are abusive at substantially higher rates.” The classifiers also rated “black-aligned tweets” as “sexist at almost twice the rate of white-aligned tweets.”
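
In practical terms, an audit of this kind comes down to comparing how often a trained classifier flags tweets from each dialect group. The sketch below is a minimal illustration under assumed names: the classifier object, its predict_abusive_probability method, and the tweet samples are hypothetical stand-ins, not the study’s actual code or data.

```python
# Hypothetical sketch of a dialect-disparity audit for an abusive-language
# classifier. The classifier and the tweet samples are illustrative
# placeholders, not the Cornell study's actual models or data.

def flag_rate(classifier, tweets, threshold=0.5):
    """Fraction of tweets the classifier scores as 'abusive' at or above a threshold."""
    if not tweets:
        return 0.0
    flagged = sum(
        1 for t in tweets
        if classifier.predict_abusive_probability(t) >= threshold  # assumed method name
    )
    return flagged / len(tweets)

def disparity_ratio(classifier, aae_tweets, white_aligned_tweets):
    """How many times more often AAE-aligned tweets are flagged than white-aligned ones."""
    aae_rate = flag_rate(classifier, aae_tweets)
    white_rate = flag_rate(classifier, white_aligned_tweets)
    return aae_rate / white_rate if white_rate > 0 else float("inf")

# Example usage (with placeholder objects):
# ratio = disparity_ratio(my_classifier, aae_sample, white_aligned_sample)
# print(f"AAE-aligned tweets flagged {ratio:.1f}x as often")
```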

Based on the results, the Cornell University study suggested some artificial intelligence systems might be racially biased and that their implementation could backfire, leading to the over-policing of minority voices online, Campus Reform reported. According to the study abstract, the machine learning practices behind AI may actually “discriminate against the groups who are often the targets of the abuse we are trying to detect.”

One of the study’s authors explained that when people “may see language written in what linguists consider African American English,” their “internal biases” may make them “more likely to think that it’s something that is offensive.” In addition, the research team asserted that the findings could be explained by “systematic racial bias” on the part of the human annotators who helped label content as offensive.
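
To see how bias at the annotation stage can carry through to a model, consider the toy simulation below. The error rates are invented for illustration and are not figures from the study; the point is only that a classifier trained on skewed labels has no way to correct for the skew, because the labels are the only ground truth it ever sees.

```python
# Hypothetical simulation of annotator bias leaking into training labels.
# All numbers are illustrative only; they are not from the Cornell study.
import random

random.seed(0)

def biased_annotate(tweet_is_actually_abusive, dialect):
    """Simulated annotator who mislabels benign AAE-aligned tweets more often."""
    base_error = 0.05                                  # chance of mislabeling any benign tweet
    extra_error = 0.15 if dialect == "aae" else 0.0    # assumed extra false positives for AAE
    if tweet_is_actually_abusive:
        return True
    return random.random() < base_error + extra_error

# Label two equally benign samples; only the dialect tag differs.
labels_aae = [biased_annotate(False, "aae") for _ in range(10_000)]
labels_white = [biased_annotate(False, "white") for _ in range(10_000)]

print("benign AAE-aligned tweets labeled abusive:  ", sum(labels_aae) / len(labels_aae))
print("benign white-aligned tweets labeled abusive:", sum(labels_white) / len(labels_white))
# A model trained to reproduce these labels would learn the same disparity.
```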

“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates,” the study’s abstract reads. “If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users.”

The Cornell study wasn’t the only one to find alleged “racial bias” in leading AI models for processing hate speech. Both Vox and the NY Post cited an additional study with similar findings:

Researchers in one study discovered that leading AI models for processing hate speech were one and a half times more likely to flag tweets as offensive or hateful when they were written by African-Americans and 2.2 times more likely to flag those written in the variation known as African-American English — commonly used by black people in the US.

The second study [referring to the Cornell University study] found racial bias against African-Americans’ speech in five academic data sets for studying hate speech.

The allegations of racial bias in social media monitoring are nothing new. Activist groups such as “Color of Change” have long said that social media platforms, including Facebook and Twitter, police the speech of black people more rigidly than that of white people. In one example, Reveal reported that a black woman was banned from Facebook for posting the same “Dear White People” note that her white friends posted without consequence.

Meanwhile, tech giants like Google, Facebook, Twitter, and YouTube are banking on AI systems to weed out not only “hate speech” but also potential “misinformation,” “fake news,” and “misleading” content. Unfortunately, those AI systems show the same problems: an inability to understand context where it matters, and the unfair stifling of voices online. Simply put, most machine learning systems can’t analyze the nuances that might make a difference in the message.
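
To see why context gets lost, consider a toy bag-of-words scorer of the kind that simple text classifiers are built on. The words and weights below are invented for the example and do not represent any platform’s actual system; the point is that word order and negation are invisible to this style of model.

```python
# Minimal illustration of why bag-of-words models miss context: the same words
# in a different arrangement receive an identical score, even when the meaning
# changes. Weights are invented for this example.
from collections import Counter

# Hypothetical per-word "offensiveness" weights a simple linear model might learn.
WEIGHTS = {"stupid": 2.0, "not": 0.0, "you": 0.1, "are": 0.0}

def bag_of_words_score(text):
    """Sum learned weights over word counts, ignoring word order entirely."""
    counts = Counter(text.lower().split())
    return sum(WEIGHTS.get(word, 0.0) * n for word, n in counts.items())

print(bag_of_words_score("you are stupid"))      # 2.1, likely flagged
print(bag_of_words_score("you are not stupid"))  # 2.1, identical score
# "not" carries no weight here, so negation and other context never register.
```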

We should ask ourselves whether they should even be asked to. “We want people annotating data to be aware of the nuances of online speech and to be very careful in what they’re considering hate speech,” Cornell study author Thomas Davidson said. But perhaps there’s a better idea. Instead of using subjective and imperfect measures to decide which voices have a right to be heard, maybe we should allow free speech across all platforms and let the users decide what to do about it. After all, there is a “block” function if you really don’t want to hear or see what someone has to say.