Google Tackles Trolls With AI
Three months ago, I took strong exception to a “code of conduct” agreed to by the European Commission, Facebook, Twitter, YouTube, and Microsoft to police and suppress “hate speech” on the Internet. More recently, I learned that Google has developed software designed to identify and suppress certain kinds of speech, too. But Google has something different in mind. Read on to see if it will be used for Good or Evil...
What is Google's Conversation AI?
The European Commission’s code of conduct focuses on “illegal hate speech”; that phrase appears twelve times in the EC’s press release on its agreement with the aforementioned tech giants. One of my objections to this code of conduct is that any EU member state can make any sort of speech illegal, simply by passing a law that defines the targeted speech as “hate speech.” So-called “IT companies” operating in the EU would then have 24 hours to delete or block access to a specific specimen of “hate speech” after being notified that it exists on their services.
This code of conduct solidifies the dominance of powerful governments over weak individuals or small groups. (See my article Should Tech Giants Police Hate Speech Online?) It is anathema to the First Amendment cherished by Americans. Google, in contrast, intends to protect the small and weak from the enormous and powerful.
“Troll armies” have become a significant threat to free speech online. Take the case of Sarah Jeong, a 28-year-old journalist who tweeted something caustic about supporters of Bernie Sanders in January 2016. The backlash grew very ugly, very quickly; as Wired magazine recounted:
By the time Jeong went to sleep, a swarm of Sanders supporters were calling her a neoliberal shill. By sunrise, a broader, darker wave of abuse had begun. She received nude photos and links to disturbing videos. One troll promised to “rip each one of [her] hairs out” and “twist her [bodyparts] clear off.” The flood of abuse continued non-stop for weeks, growing in volume and vitriol. Eventually, Jeong gave up; she made her Twitter feed private for a month, and even took a two-week leave of absence from her journalism job. She was driven off the Internet by a hateful troll army.
Jeong is just one of many victims of trolls. Anti-Semitic trolls bombarded Jewish public figures with menacing Holocaust “jokes.” A horde of racists bullied comedienne Leslie Jones off Twitter temporarily, bombarding her with pictures of apes and other Photoshopped crudities. Jessica Valenti, a columnist for The Guardian newspaper, quit Twitter after suffering threats of rape against her 5-year-old daughter. There is no limit to the depravity and hatred of trolls.
Another case that's made news recently involves actor/singer Corey Feldman. After a September 16th performance on the Today Show, Feldman was mercilessly mocked and ridiculed online. Feldman described the experience as "very painful" and said he was afraid to leave his home in the wake of the public shaming.
Artificial Intelligence to the Rescue?
Google has developed a software system called “Conversation AI” that can identify hateful, threatening, or abusive speech directed at specific individuals. Conversation AI employs machine learning. It has learned to identify hate speech by being exposed to millions of speech samples drawn from the comments section of the New York Times, and 130,000 snippets of Wikipedia editors’ conversations about various Wikipedia pages.
Conversation AI assigns an “attack score” to each comment it reviews, where 0 is “harmless” and 100 is “maximum harm” to the targeted individual. The administrator of Conversation AI can set a threshold attack score that will trigger some action. Action may include warning the speaker that he/she has crossed a line; blocking the harmful comment so its target never sees it; or even banning the harmful speaker from using the service at all.
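The threshold-and-action scheme described above can be sketched in a few lines of Python. This is purely illustrative: the function name, thresholds, and action labels are my own assumptions, not Google's actual API, but it shows how an administrator-set cutoff might map attack scores to escalating responses.

```python
# Hypothetical sketch of threshold-based moderation using an "attack score"
# from 0 (harmless) to 100 (maximum harm). Names and cutoffs are illustrative
# assumptions, not Google's actual Conversation AI interface.

def moderate(comment: str, attack_score: int, threshold: int = 80) -> str:
    """Decide what to do with a comment, given its attack score."""
    if attack_score < threshold:
        return "allow"             # below the line: publish normally
    elif attack_score < 95:
        return "warn_author"       # borderline: warn the speaker
    else:
        return "hide_from_target"  # severe: block it so the target never sees it

print(moderate("Have a nice day", attack_score=5))   # allow
print(moderate("You suck", attack_score=99))         # hide_from_target
```

In a real deployment the hardest design decision is where to set `threshold`: too low and innocuous speech gets suppressed, too high and genuine abuse slips through.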
Ideally, “over the line” comments would be referred to human reviewers who can make fine judgment calls that are beyond Conversation AI’s capabilities. But we all know that corporations love to replace costly human labor with algorithms. Earlier this year, Facebook fired 24 human editors and replaced them with an algorithm that selects “trending topics.” The algorithm quickly started highlighting fake news stories, but Facebook says, “It will get better with time.”
That’s Google’s attitude about Conversation AI, too. In typical Google fashion, the company is releasing a half-baked beta-stage program as open-source code, allowing all and sundry to “play with” Conversation AI in hope of seeing marvelous innovations that no one could have anticipated. But that could be as disastrous as the EC’s code of conduct for free speech.
False Positives and Unintended Consequences
Google claims that Conversation AI is currently 92% accurate in identifying harmful speech, and has a 10% false positive rate. “It will get better with time,” the company says, meaning that Conversation AI will learn on the job. But the mistakes that it makes will infringe upon perfectly innocuous speech, to the detriment of all.
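To see why a 10% false positive rate matters at scale, consider a back-of-envelope calculation. The comment volume and innocuous fraction below are my own assumed figures for illustration; only the 10% rate comes from Google's claim.

```python
# Back-of-envelope illustration of a 10% false positive rate at scale.
# Volume and innocuous fraction are assumptions; only the rate is Google's.
comments_per_day = 1_000_000   # assumed daily comment volume
innocuous_fraction = 0.95      # assume the vast majority are harmless
false_positive_rate = 0.10     # the rate Google reports

wrongly_flagged = comments_per_day * innocuous_fraction * false_positive_rate
print(int(wrongly_flagged))    # 95000 harmless comments flagged per day
```

Even under these modest assumptions, tens of thousands of innocent comments would be wrongly flagged every day.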
A “false positive” in this context is speech that is identified as hateful by Conversation AI but which is not considered hateful by an ordinary human being. Some examples of false positives that Wired turned up include:
“I shit you not” earned an attack score of 98, the same as “you are shit.” “You suck all the fun out of life” also scored 98, just one point short of “You suck.” Even “You are a troll” got an attack score of 93. You can’t even call a troll a troll under Conversation AI!
Conversation AI’s imperfections may diminish in time. But... the system could also be trained to suppress non-hateful but unpopular speech; for example, anything deemed derogatory about the ruler or government of a country. Open-source code is intended to be modified by whoever gets their hands on it.
Although Google created Conversation AI with the good intention of protecting the weak and free speech, the software is a two-edged sword. I don’t see how Google can prevent it from being put to evil uses.
Your thoughts on this topic are welcome. Post your comment or question below...
This article was posted by Bob Rankin on 28 Sep 2016
Copyright © 2005 - Bob Rankin - All Rights Reserved