Harmful online content – including hate speech, false news, cyberbullying, and inflammatory rumors – can spread quickly and reach millions. While these dynamics are well documented, researchers, tech practitioners, and policymakers know far less about how individuals and groups living in conflict settings respond to harmful content online.
Search for Common Ground aimed to address this knowledge gap by exploring the experiences of social media users in seven countries: El Salvador, Guatemala, Honduras, Kenya, Tanzania, Myanmar, and Kyrgyzstan. The study revealed a wide range of social media tactics used by individuals and groups in close proximity to violent conflict. Although responses varied, the majority of participants indicated that they want to feel a sense of ownership and agency when tackling harmful content online. The full report is available here.
What do you think of the report and its findings?
I’m so thankful for this discussion topic and for the insight I gained from the presentation. I asked a question in reference to an article in the New England Journal of Medicine that I wanted to share as we all think about different approaches to countering harmful online content: https://www.nejm.org/doi/full/10.1056/NEJMp2103798 It’s a short read, so if you have time I definitely recommend reading it in detail.
For the sake of this conversation, and in case some aren’t able to access the article, the highlights include:
“We believe the intertwining spreads of the virus and of misinformation and disinformation require an approach to counteracting deceptions and misconceptions that parallels epidemiologic models by focusing on three elements: real-time surveillance, accurate diagnosis, and rapid response.”
“So-called infodemiologists — modeled on the CDC’s corps of Epidemic Intelligence Service (EIS) officers — can counteract misinformation in traditional media sources and online using evidence-based methods, including empathetic engagement, motivational interviewing, leveraging trusted sources, and pairing rebuttals with alternative explanations. Drawing on intelligence gathered from surveillance and identification systems, infodemiologists can inoculate people against dangerous deceptions.”
While I’m a fan of this novel approach, I believe it needs further development and input from several fields, such as conflict analysis and resolution and cybersecurity, to determine in which contexts something like this could be implemented.
I think the most important missing piece is a shared definition of what constitutes “harmful content,” in the same way that we have shared definitions of epidemic and pandemic, or of universal human rights. While agreeing on that definition is a critical step, time is of the essence, and bad actors all over the world have already had a head start.