AI-4-Good in War

The United Nations campaign #AI4Good highlights positive ways artificial intelligence (AI) can be used for the good of humanity. The #AI4Good Summit in Geneva this week showcases many such applications, both now and in the future. The agenda includes positive applications of AI in medicine, education, economics, and law enforcement. How could AI relate to those areas? Here are some examples:

  • Medicine: AI can be used to design more effective medicines and obtain more accurate diagnoses from medical scans;
  • Education: AI can enable more effective, adaptive curricula and allow broader access to educational resources;
  • Economics: machine learning can provide more insight into the root causes of complex occupational trends, identify biases (e.g., gender and age), and suggest opportunities for those out of work;
  • Law enforcement: AI can help identify victims of human trafficking and crack cold cases to enable justice.

These examples illustrate that artificial intelligence is a powerful technology that can be used for good. But this general realization has so far not extended to an important human endeavor: the waging of war. History teaches us something very clearly: war is a tragic reality for civilians caught in the battlespace. The UN reports that two billion people live in countries affected by conflict, violence, and fragility. Yet even as so many civilians bear the humanitarian tolls of war, there is no conversation about how applying artificial intelligence to the waging of war could ease its tragedies. The Geneva #AI4Good conference is likewise silent on this topic.

It would be worthwhile to think more deeply about how to use AI to reduce the humanitarian tolls of warfare. Because of its ability to process large data sets and rapidly integrate disparate information sources, AI can improve decision-making and better protect civilians in armed conflict. For example, AI could be used in the following ways to reduce civilian casualties in war:

  • Using AI technologies to improve distinction and reduce the number of civilians misidentified as combatants;
  • Tasking AI systems to monitor targeted areas and detect when existing processes have underestimated collateral damage, avoiding potential civilian casualties;
  • Using AI-driven unmanned systems to allow greater tactical patience and reduce potential risk to civilians;
  • Leveraging AI to detect risks to civilian infrastructure in conflict areas and to reduce those risks through more precise use of force and the identification of alternatives, avoiding longer-term negative effects (such as loss of power, water, and food supply) for local populations;
  • Leveraging AI to improve military training so that combatants are better prepared to mitigate civilian harm.

These are just a few ways AI can be used for good in the waging of war. Applying AI for positive outcomes in war does not require turning a blind eye to the potential risks. Rather, there are specific ways to improve the safety of AI in military systems. For example, human-machine teaming leverages the strengths of both humans and machines, yielding better outcomes than either alone, much as in centaur chess. Other steps can be taken that collectively create a safety net for the use of AI in war.

AI holds promise for saving lives, in war just as in medicine. This promise could be realized if States choose to have their militaries pursue humanitarian gains from prudent use of AI, if international forums make such positive outcomes a collective goal, and if open society advocates for such goals. As many seek to use AI for the good of the world, leveraging AI to protect civilians on the battlefield is another, thus far untapped, way to pursue AI4good.

About the Author(s)

Larry Lewis

Director of the Center for Autonomy and Artificial Intelligence at the Center for Naval Analyses. Lewis spent a decade analyzing real-world operations as project lead and primary author for many of the Department of Defense's Joint Lessons Learned studies.