A team of researchers led by Dominik Hangartner, IPL co-director and professor of public policy at ETH Zurich, has joined forces with colleagues at the University of Zurich to investigate what kinds of messages could lead authors of hate speech to refrain from such postings in the future.
Using machine learning methods, the researchers identified 1,350 English-speaking Twitter users who had published racist or xenophobic content. They randomly assigned these accounts to a control group or to one of the following three counterspeech strategies: messages that elicit empathy with the group targeted by racism; humor; or a warning of possible consequences.
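The assignment step described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual code: the account IDs, the `assign_conditions` helper, and the even round-robin split are assumptions for the sketch; the study's real randomization procedure may differ.

```python
import random

# The four experimental arms described in the study:
# a control group plus three counterspeech strategies.
CONDITIONS = ["control", "empathy", "humor", "consequences"]

def assign_conditions(account_ids, seed=0):
    """Shuffle accounts, then deal them round-robin into conditions,
    so each arm receives a near-equal share."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    shuffled = list(account_ids)
    rng.shuffle(shuffled)
    return {acc: CONDITIONS[i % len(CONDITIONS)]
            for i, acc in enumerate(shuffled)}

# 1,350 flagged accounts, as in the study
groups = assign_conditions([f"user_{i}" for i in range(1350)])
```

Because 1,350 is not divisible by four, a round-robin split leaves two arms with 338 accounts and two with 337; any comparable blocking or stratified scheme would serve the same purpose.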
The results were clear: Only counterspeech messages that elicit empathy with the people affected by hate speech are likely to persuade the senders to change their behavior. An example of such a response could be: “Your post is very painful for Jewish people to read…” Compared to the control group, the authors of hateful tweets posted around one-third fewer racist or xenophobic comments after such an empathy-inducing intervention. Additionally, the probability that a hate tweet was deleted by its author increased significantly. In contrast, the authors of hate tweets barely reacted to humorous counterspeech. Reminding senders that their family, friends and colleagues could see their hateful comments was not effective, either. This is striking because these two strategies are frequently used by organizations that are committed to combatting hate speech.
| Organization Type | Academic / research organization |
|---|---|
| Status | Active |
| Related Links | |
| Parent Organization | Immigration Policy Lab |
| Last Modified | 9/12/2024 |
| Added on | 3/17/2023 |