Can AI Rescue Democracy? Nope, It’s Not Funny Enough
Originally published on March 11th, 2024 by Tech Policy.Press
Suddenly AI tools can use language fluently – just about as well as most humans. So shouldn’t you let ChatGPT take over whenever communication is annoying? It can argue with your spouse, nag your children, or reply to trolls.
No. Your family’s up to you, but online debate shouldn’t be outsourced to AI, even though there’s excited buzz about this prospect, and several university teams are building AI tools to respond to digital hatred.
Meanwhile, people are vigorously practicing counterspeech, which we define as responses to hatred that are meant to undermine it. In spite of the “don’t feed the trolls” nostrum, many people respond constructively to online hatred all the time. They should be encouraged to continue, and more of us should join them. It’s true that bots have certain big advantages – they can’t be disgusted, frightened, or hurt, and they can operate at a scale that people can’t match. But AI can’t do the thing that is essential for democratic civic and political life: engaging and debating other people, including some whose ideas you abhor.
“[P]ublic discussion is a political duty,” as US Supreme Court Justice Louis Brandeis famously pointed out in his opinion in the 1927 case Whitney v. California. The founders of the United States believed, he went on, “that it is hazardous to discourage thought, hope and imagination…” (all precursors to discussion that AI can’t manage) “...that the path of safety lies in the opportunity to discuss freely supposed grievances and proposed remedies, and that the fitting remedy for evil counsels is good ones.” Those ideas constitute a case for counterspeech, and they are the basis for US constitutional law’s zealous protection of freedom of speech.
Empty engagement
Imagine organizing a protest for a cause you believe in – recruiting other marchers, painting signs, borrowing a megaphone. Then on the day of the event, it rains. You don’t want to get wet, so you decide to send robots to march instead, while you watch a livestream.
No matter how many robots go or how loudly they yell slogans, you know this is a bad idea since the act of responding matters, not just the content of the response. It’s a civic act that has tangible effects on the person or people who act, and also on the audience: the people who are reading or listening.
Counterspeech can have strong favorable effects on the audience and on discourse norms, according to our research. Counterspeech can create more counterspeakers even without changing anyone’s mind, by disinhibiting silent “lurkers”: when people see others counterspeaking, they often feel it’s safe to chime in. Also, for those who are denigrated and attacked online, counterspeech can deliver much-needed support and reassurance. Messages produced and delivered by AI, no matter how warmly they are worded, can’t give the same sense of solidarity.
Finally, counterspeakers themselves often benefit from speaking up, as we have learned from interviewing many of them. It feels like a duty, they say, and it often makes them braver and more engaged, not only online but also offline, where they challenge hatred more readily thanks to their practice online.
Authentic engagement can be effective
At the Dangerous Speech Project, our independent research team, we have found thousands of active counterspeakers around the world, and have studied many of them and their efforts. In general their goals are similar – their main objective is not to transform the people to whom they respond, but instead to shift discourse norms among the people who witness their work. Their methods, however, are strikingly different. They are variously wily, ingenious, funny, empathetic, mocking, exasperated, and sometimes unpredictable, as they switch from one technique or tone to another. Those are qualities that AI cannot match without heavy participation from people.
Take Hasnain Kazim, a German journalist who, after receiving torrents of messages from readers attacking him for his name, skin color, and what they presumed to be his Muslim faith, set himself the task of responding to as many of them as possible. He sometimes writes long, detailed letters, intended to educate people who ask questions like, “Are you in favor of the headscarf?” And he is sometimes irreverent, to put it mildly. More than one reader demanded to know whether he eats pork, apparently considering that to be a reliable metric of German-ness. “No,” Kazim responded to one of them. “Only elephant (well done) and camel (bloody).”
When another reader inquired, “Do I understand correctly that you are against Islamism?” Kazim replied, “Yes, I am, except where radical militant veganism gains the upper hand. There, I’m counting on the Salafists to put a stop to it.”
“Salafists are supposed to stop this with violence?” replied the reader (exhibiting, it seems fair to note, an AI-level sense of humor).
“No, with salamis,” responded Kazim.

“Salami is pork, Mr. Kazim, you should know that! Salafists do not eat salamis!” the reader shot back.
Kazim’s dialogues with his German readers so entertained other Germans that when he published a book in 2018, compiling and commenting on them, it became a bestseller. The title (Post von KarlHeinz: Wütende Mails von richtigen Deutschen und was ich ihnen antworte, which means “Mail from KarlHeinz: Angry messages from real Germans and how I replied to them”) comes from the pseudonym used by a reader who sent Kazim a vicious note, excoriating him for trying, as an ostensible foreigner, to “instruct us Germans.”
“Come to where I live and I’ll show you what a real German is!” the reader added. In fact, Kazim was born and raised in a small German town and served as an officer in the German navy. Instead of noting any of that in his reply, he simply said he was delighted to accept the invitation, and would soon arrive with three of his four wives, eight children, 17 cousins, 22 of their children, and three goats – all in two large buses. “We are all very excited to learn from you what a ‘real German’ is!”
So far, at least, AI cannot write like that. Nor can it organize thousands of counterspeakers to act together, as the group #iamhere has done. It cannot express the “radical empathy” that the writer and actor Dylan Marron names and practices in his podcast and book, both titled Conversations with People Who Hate Me. We have found myriad other examples of counterspeech that are too nimble, subtle, empathetic, funny – too human – to be replicated by AI.
This is not to say that AI can’t be helpful in countering digital hatred. Counterspeech can be time consuming and emotionally taxing for those who do it. Well-designed AI tools could ease some of these burdens. But they should be used to scale the efforts that thousands of people are already making and, critically, AI tools should be offered to counterspeakers (and would-be counterspeakers) in ways that would actually help them. To discover what those are, one must ask them – a step that nearly all developers and researchers have skipped. We haven’t.
We have been working with a team led by Professor Maarten Sap of Carnegie Mellon University and Professor Joshua Garland of Arizona State University to interview a diverse and experienced group of counterspeakers and to survey everyday social media users. In a study led by PhD student Jimin Mun, to appear at CHI 2024, we found that, contrary to what many other researchers assume, most experienced counterspeakers do not struggle with what to write, so they don’t need an AI coach to suggest language for their posts. Instead, they said, they want help finding the content to which they wish to respond, since that would allow them to operate much more efficiently.
Take, for example, members of #iamhere, the world’s largest group of counterspeakers: an international coalition of over 150,000 people in 16 countries who respond collectively to comments on public Facebook pages that they consider hateful. Group moderators spend many hours each week finding such content. AI could be used to help locate hateful speech, increasing the number of posts to which the group responds.
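To make that concrete, here is a minimal sketch of what such triage could look like, written in Python. It is not a tool #iamhere uses; it assumes the open-source Hugging Face transformers library, and the model name and score threshold are illustrative choices, not recommendations.

```python
# A minimal sketch of AI-assisted triage - not a tool #iamhere actually uses.
# Assumes the Hugging Face `transformers` library; the model name and the
# 0.8 threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def triage(comments, threshold=0.8):
    """Flag comments the classifier scores as likely hateful, so that
    human moderators can review them - people still write every response."""
    flagged = []
    for comment in comments:
        # top_k=None returns a score for every label the model knows.
        scores = {r["label"]: r["score"] for r in classifier(comment, top_k=None)}
        if scores.get("toxic", 0.0) >= threshold:
            flagged.append((scores["toxic"], comment))
    # Highest-scoring candidates first, to save moderators' time.
    return sorted(flagged, reverse=True)
```

The design choice is in the docstring: the model only narrows the search, while people decide what, and whether, to answer.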
Many social media users who are not (yet) counterspeakers, on the other hand, indicated that they would welcome an AI coach that could make suggestions for how to respond. A tool like this could help would-be counterspeakers overcome their inhibitions against getting involved.
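As an equally rough sketch, a coach like that could be built on any large language model. The version below assumes OpenAI’s Python client; the prompt wording and model name are our illustrative assumptions, not a description of any tool the researchers named here have built.

```python
# A minimal sketch of an AI coach for would-be counterspeakers. Assumes
# OpenAI's Python client and an OPENAI_API_KEY in the environment; the
# prompt and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

COACH_PROMPT = (
    "You help people respond constructively to hateful posts. Suggest "
    "three short replies that are empathetic and non-insulting, and that "
    "address the silent audience rather than trying to convert the "
    "original poster. These are drafts for a human to revise or discard."
)

def suggest_replies(hateful_post: str) -> str:
    """Return draft replies for a person to edit - never to auto-post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": COACH_PROMPT},
            {"role": "user", "content": hateful_post},
        ],
    )
    return response.choices[0].message.content
```

The point is to lower the barrier to a first reply, not to replace the person making it.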
Those we interviewed and surveyed also wondered about the ethics of automating counterspeech and whether AI-generated responses would be as effective as those written by a human. But most agreed that AI should be used to support, not supplant, humans.
Future work
More work should be done to assess the needs of counterspeakers in different contexts and the impact of automated responses to hatred. For example, how are the barriers faced by counterspeakers responding to hatred under authoritarian governments different from the ones that democratic societies present? And can audiences distinguish counterspeech written by a bot from messages written by humans? If so, are there differences in effectiveness?
It is essential to study questions such as these in diverse settings and with counterspeakers working in a variety of languages in order to create useful tools that can operate at scale, without undermining what human counterspeakers are already doing.