Deepfakes 2.0: The New Era of “Truth Decay”

“An unexciting truth may be eclipsed by a thrilling lie.” — Aldous Huxley

Deepfake technology has exploded in the last few years. Deepfakes use artificial intelligence (AI) “to generate, alter or manipulate digital content in a manner that is not easily perceptible by humans.” The goal is to create digital video and audio that appear “real.” A picture used to be worth a thousand words – and a video worth a million – but deepfake technology means that “seeing” is no longer “believing.” From fake evidence to election interference, deepfakes threaten local and global stability.

The first generation (Deepfakes 1.0) was largely used for entertainment purposes. Videos were modified or made from scratch in the pornography industry and to create spoofs of politicians and celebrities. The next generation (Deepfakes 2.0) is far more convincing and readily available, and is poised to have profound impacts. According to some technologists and lawyers who specialize in this area, deepfakes pose “an extraordinary threat to the sound functioning of government, foundations of commerce and social fabric.”

The Scope of the Problem

Truth is under attack. In this post-truth environment, one person’s truth is no longer another’s truth, and information can be weaponized to inflict financial or reputational harm. While the harmful use of (mis)information has been around for centuries, technology now allows this to happen at a speed and scale never before seen. With the proliferation of technology, a teenager sitting at home can create and distribute a high-quality deepfake video on her smartphone in a single afternoon. According to Matthew Turek, a program manager for the Defense Advanced Research Projects Agency (DARPA), “We don’t know where this is going to end. We may be headed toward a zero trust environment.”

Criminals could use deepfakes to defraud victims, manipulate markets, and submit false evidence to courts. Authoritarian governments could use deepfakes to target public opinion, and foreign adversaries could use them to erode trust in governments. The proliferation of Deepfakes 2.0 technology allows this to be done easily, cheaply, and on a grand scale. RAND recently called this “truth decay.” In fact, the mere idea that this technology could be used to manipulate public opinion is already leading some to question the validity of real events and un-doctored video.

Imagine the following possibilities:

  • Fake Evidence: Manipulated videos used as evidence in court.
  • Sparking a War: A fake video of Israeli soldiers physically assaulting a Palestinian child that ignites a new wave of violence in Israel.
  • Manipulating Markets: Fake videos of a CEO used to disrupt an initial public offering.
  • Creating Political Fissures: Fake videos intended to sow discord between foreign allies.
  • Influencing Elections: A doctored video of a politician looking sick, designed to tip the scales of an election.

Deepfakes 2.0 pose a massive threat to the United States and other Western democracies that value truth, individual liberties, and the independence of the media.

Solutions — A Holistic Framework

How do we prepare for this new era of disruptive technology? It will take a whole-of-society approach in which government, academia, and corporations work collaboratively with international partners and individual citizens. This comprehensive method recognizes that each sector possesses unique strengths, capabilities, and limitations. Finland is widely viewed as the gold standard for this approach in confronting sophisticated disinformation efforts. In 2014, the Finnish elections were the target of a disinformation campaign widely attributed to Russia. The Finnish government took note and began to aggressively formulate a national strategy, including a national education initiative. As the Finns recognized, “[i]t’s not just a government problem, the whole society has been targeted.”

The Finnish model includes both technical and non-technical solutions. Finnish schools stress critical thinking and media literacy, teaching students of all ages to be discerning consumers of information. The Finns have also established a non-partisan journalistic fact-checking service, FaktaBaari. The Finnish model provides a useful starting point for a U.S. model tailored to our unique social, cultural, and legal considerations.

Technical Solutions

Detection. The plan to counter Deepfakes 2.0 must start with detection. Several companies are already developing algorithms using AI to detect deepfakes. For example, Facebook recently announced a partnership with Microsoft and academia to invest in AI systems that identify, flag, and remove harmful deepfakes. The Pentagon is also investing heavily in deepfake-detection technologies such as DARPA’s Media Forensics (MediFor) program to fight AI with AI.
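At a high level, many detection systems score individual video frames with a trained classifier and then aggregate those scores into a verdict for the whole clip. The sketch below is purely illustrative and is not drawn from MediFor or any real product: the per-frame scores are hypothetical stand-ins for a neural network’s output, and the function name and threshold are invented for the example.

```python
# Illustrative only: a real detector would run a trained neural network
# over decoded video frames. Here, hypothetical per-frame "manipulation
# scores" in [0, 1] are given, and we show only the aggregation step.

def aggregate_scores(frame_scores: list[float], threshold: float = 0.5) -> dict:
    """Flag a video if the average frame score crosses the threshold."""
    avg = sum(frame_scores) / len(frame_scores)
    return {
        "mean_score": round(avg, 3),
        "suspect_frames": sum(s > threshold for s in frame_scores),
        "flagged": avg > threshold,
    }

# Hypothetical scores for a six-frame clip; high values mark likely edits.
scores = [0.1, 0.2, 0.9, 0.95, 0.88, 0.15]
report = aggregate_scores(scores)
print(report)  # {'mean_score': 0.53, 'suspect_frames': 3, 'flagged': True}
```

The design question a real system must answer is where to set the threshold: too low and authentic footage is flagged, too high and convincing fakes slip through.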

Authentication. We need to establish a credible organization, perhaps through a public-private partnership, to report deepfake detection results. Blockchain technology can create digital fingerprints that help authenticate media, allowing videos and photos to be publicly verified.
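The fingerprinting idea can be sketched in a few lines: hash the original file’s bytes at release time, publish the digest in a tamper-evident ledger, and let anyone later compare a copy against it. This is a minimal illustration, not any specific product’s API; the `ledger` dictionary, `record_fingerprint`, and `verify_media` names are invented for the sketch, and a real system would use an append-only blockchain rather than an in-memory dictionary.

```python
import hashlib

# Stand-in for a tamper-evident public ledger (e.g., a blockchain).
ledger = {}

def fingerprint(media_bytes: bytes) -> str:
    """Return a SHA-256 digest that uniquely identifies the content."""
    return hashlib.sha256(media_bytes).hexdigest()

def record_fingerprint(media_id: str, media_bytes: bytes) -> None:
    """Publish the original file's fingerprint when it is released."""
    ledger[media_id] = fingerprint(media_bytes)

def verify_media(media_id: str, media_bytes: bytes) -> bool:
    """Check a copy against the published digest. Any alteration,
    even a single flipped bit, changes the hash completely."""
    return ledger.get(media_id) == fingerprint(media_bytes)

original = b"frame data of the authentic video"
record_fingerprint("press-briefing", original)

print(verify_media("press-briefing", original))   # True
tampered = original.replace(b"authentic", b"doctored")
print(verify_media("press-briefing", tampered))   # False
```

Note what this does and does not solve: hashing proves a copy matches what was originally published, but it cannot prove the original footage was truthful in the first place.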

Non-technical Solutions

Education. Over half of Generation Z gets its news and information primarily from social media and messaging apps on their smartphones. Therefore, schools must prioritize critical thinking and media literacy tailored to this new reality. In the decentralized American education system, this requires commitment and resources from federal, state, and local governments.

Media Policy. Traditional and social media should establish criteria for evaluating suspicious or unverified content that may be a deepfake and could harm society. Some social media sites have already shown a willingness to take down accounts linked to disinformation.

Legislation. Congress is considering multiple legislative proposals, including the DEEPFAKES Accountability Act. Congress should also consider a Finnish-style independent entity that provides confidence or credibility scores for digital content. State governments also play an important role. For example, California recently passed a law restricting the use of deepfakes for political purposes.

Conclusion

There is no doubt that criminals, our adversaries, and other malign actors will use deepfakes to harm the public and manipulate its sense of reality. We need a comprehensive plan to counter this threat. It requires the government, academia, and private industry to work together on both technical and non-technical solutions. Given that it is difficult to change a person’s view once it is formed, speed is a virtue when it comes to detecting deepfakes and educating the public. As the saying goes, “A lie can travel halfway around the world before truth puts on its boots.”

Image: An AFP journalist views an example of a “deepfake” video manipulated using artificial intelligence, by Carnegie Mellon University researchers, from his desk in Washington, DC January 25, 2019. Photo by ALEXANDRA ROBINSON/AFP via Getty Images


About the Author(s)

Brig. Gen. R. Patrick Huston

Assistant Judge Advocate General, U.S. Army

Lt. Col. M. Eric Bahm

Chief, Intelligence and Cyber Law Branch, National Security Law Division, Office of The Judge Advocate General