(Editor’s Note: This article introduces the Just Security symposium “Thinking Beyond Risks: Tech and Atrocity Prevention,” organized with the Programme on International Peace and Security (IPS) at the Oxford Institute for Ethics, Law and Armed Conflict. Links to each installment can be found below as they are published. The symposium emerges from research conducted by IPS on the role of technology in atrocity prevention and response.)
Technological advances are creating both new opportunities and new risks for atrocity prevention. On the one hand, artificial intelligence (AI) and digital technologies present urgent challenges by providing tools to incite or perpetrate mass violence. Social media platforms, for instance, have become notorious hotbeds of misinformation and disinformation. Through engagement-based algorithms, they often amplify hate speech and polarizing content that can fuel the commission of atrocities in the real world, as was starkly demonstrated in Myanmar.
Similarly, new surveillance technologies are facilitating the large-scale repression and targeting of vulnerable groups globally. A notable example is China’s high-tech surveillance system, which leverages AI-powered facial recognition technology, mass surveillance apps, and big data analytics, among other technologies, to monitor and target its Uyghur minority population. Likewise, Israel’s controversial “Gospel” platform, an AI-driven system used to generate military targets and likely linked to Gaza’s extraordinarily high civilian death toll, was reportedly trained on data gathered through the mass surveillance of Palestinians.
The Gospel example underscores another risk of civilian harm from emerging technologies: the growing integration of AI into military operations. Beyond its role in target generation, AI is powering a new generation of drones capable of selecting and attacking targets autonomously. These systems can produce unpredictable outcomes on the battlefield, potentially with devastating consequences for civilians, and create an “accountability gap” that complicates efforts to ensure compliance with international law and bring wrongdoers to justice.
At the same time, advances in AI and other technologies are expanding the toolkit for atrocity prevention. AI is supporting early warning systems – for instance, through machine learning models that forecast atrocity risks. During and after atrocity episodes, governments and civil society groups are using geospatial intelligence to document and expose evidence of crimes, while also equipping local actors to collect evidence firsthand. New technologies are also contributing to justice and accountability, from helping verify evidence of atrocities in real time to sorting and analyzing digital evidence from Myanmar, Syria, and beyond for use in criminal proceedings.
Crucially, conventional technologies also hold significant potential for atrocity prevention. Internet access, for example, can be a life-saving tool in wartime, enabling civilians to communicate and obtain vital information about resources, aid, safe escape routes, and more. Donated eSIMs, while far from a perfect solution, have helped Gazans stay connected, while Starlink satellite internet has provided Sudanese civilians a lifeline during internet blackouts imposed by the warring parties and allowed aid groups to continue operating.
The recent surge of debate over new technologies such as AI, as well as developments in social media, has often – and understandably – focused on their potential to heighten atrocity risks. But it is equally important to consider how these tools can advance prevention and protection efforts, and how existing technologies can make a difference before, while, and after atrocities unfold.
This symposium seeks to fill that gap by identifying opportunities for governments and civil society to harness both new and established technologies for atrocity prevention, and to proactively mitigate the associated risks. Experts will examine, for example, the impact of technology on early warning, how social media has affected atrocity dynamics and how it might be harnessed to further adherence to the laws of war, and even how camera-fitted drones can aid accountability.
The symposium features the following articles. The list will be updated as each installment is published:
- Federica D’Alessandra and Ross Gildea, “Early Warning in Atrocity Scenarios Must Account for the Effects of Technology, Good or Bad”
- Shannon Raj Singh, “How Social Media Interventions Can Aid Atrocity Prevention”
- Miguel Moctezuma and Karina García-Reyes, “Camera-Fitted Drones May Help Locate Graves of Mexico’s Disappeared”