Protecting the Information Space in Times of Armed Conflict

Information warfare has generated growing international concern in recent years as allegations of adversarial foreign influence operations – directed against democratic decision-making processes and public information spaces – have surged. So far, however, the ensuing debate among scholars and policymakers has focused on international human rights law (IHRL) and other questions of peacetime international law. The legal implications of digital information warfare in the context of armed conflict – in which the applicability, or at least the extent of the application, of IHRL remains contentious – have received less attention. As part of an ongoing project at the Geneva Academy of International Humanitarian Law and Human Rights on “Disruptive Military Technologies,” we have published a research paper in the hope of filling this gap and serving as a starting point for further debate. This post presents a condensed version of our argument.

To illustrate what is potentially at stake, imagine the following fictitious but, we think, realistic scenario:

During an armed conflict between State A and State B, the military information operations unit of State B launches an open propaganda campaign through social media, video streaming platforms, and State-owned TV channels. As part of the campaign, which is designed to undermine public support for the military campaign of State A, the military information operations unit of State B spreads a video via social media – using networks of fake accounts that appear to belong to ordinary citizens of State A – that ostensibly shows a high-ranking political leader admitting that the armed conflict was actually initiated by State A under false pretenses. Shortly thereafter, the military of State B starts a large-scale cognitive warfare operation aimed at the distortion of the entire online media ecosystem of State A. The content on the websites of all of the most important public broadcasting services and the leading newspaper publishers is subtly, and at first virtually imperceptibly, falsified and manipulated, in line with the official position of State B. Employing micro-targeting algorithms and bots, susceptible parts of State A’s population are flooded with incendiary political messages that contradict the official government position, exploiting preexisting rifts in the country’s social fabric. The military information operations unit even gains access to the servers of several think tanks and research institutes in State A by using sophisticated email spear phishing to install backdoors. It then carefully rewrites the main points of already published expert opinions and academic studies dealing with political issues that are points of contention between the two countries. The combined epistemic assault leads to a lasting corrosion of the media ecosystem of State A and results in widespread and sustained confusion and uncertainty among the civilian population. 
Even though the original content can gradually be reinstated and it eventually turns out that the video had been fabricated using “deep fake” algorithms, support for the government and the war effort in State A drop significantly. Eventually, the military of State A is forced to retreat due to mounting internal pressure. The upheaval in State A caused by the corrosion of public trust in both the media and political structures proves to be lasting, resulting in a sustained period of political instability that State B further exploits to achieve its strategic goals vis-à-vis State A.

What should we make of such a scenario from the perspective of the laws of armed conflict? Given the recent surge of allegations of adversarial foreign influence operations, the scenario demonstrates the possible real-world implications of such attacks and the need to address protections for these information spaces. So far, however, the debate has focused on peacetime international law questions, such as whether and under which circumstances an online disinformation campaign targeting audiences abroad may amount to a violation of the target state’s sovereignty, the principle of non-intervention, or even – in extreme cases – the prohibition of the use of force. The unprecedented surge of misinformation surrounding the COVID-19 pandemic has added a new sense of urgency, demonstrating the numerous negative consequences such campaigns can have in times of crisis, while at the same time expanding the scope of legal questions.

What, if any, limits exist concerning digital information operations in armed conflict? Does the humanitarian legal framework adequately capture the humanitarian protection needs that arise from these types of military conduct? Where should the protective bounds of international humanitarian law extend with regard to the effects and side effects of digitized information warfare? What are, or what should be, the limits of disinformation campaigns, “fake news,” deep fakes, and the systematic manipulation of a given information space in times of armed conflict? Does IHL, which is traditionally and primarily focused on preventing physical harms, sufficiently account for the potentially far-reaching societal consequences of operations whose immediate effects are limited to the content layer of network infrastructures? Is it capable of addressing this issue? If not, should it?

While the laws of armed conflict have proven flexible enough to accommodate technological innovation in general and are applicable to new means and methods of warfare, as thoroughly discussed in relation to the application of IHL to cyber warfare, it is less obvious whether the protection they provide remains adequate in all instances in which novel forms of warfare are employed. And while it is certainly true that disinformation campaigns, ruses, and other methods of deception and propaganda have always been part of warfare, recent technological developments, especially in the fields of cyber and artificial intelligence, may fundamentally change the game of information warfare. Considering the scale, scope, and far-reaching effects of current peacetime disinformation operations and the constantly increasing level of military cyber capabilities, the traditional assumption that all types of disinformation operations short of prohibited perfidy are generally permissible during armed conflict should be revisited.

Information Operations under International Humanitarian Law

Propaganda and influence operations, including operations directed towards the civilian population, have been a common and widely accepted feature of warfare throughout the ages. The applicable legal frameworks address communication and information activities only tenuously and non-systematically, a consequence of IHL’s traditional focus on the physical effects of armed conflicts. Accordingly, the Tallinn Manual 2.0 submits that generally, “psychological operations such as dropping leaflets or making propaganda broadcasts are not prohibited even if civilians are the intended audience” (rule 93, para. 5). In line with this, it has been suggested that “through the longstanding, general, and unopposed practice of States, a permissive norm of customary law has emerged, which specifically permits” such operations “as long as [they] do not violate any other applicable rule of IHL.”

The rules that are potentially applicable to certain information operations, or aspects thereof, include the prohibition of perfidy (Article 37(2) Additional Protocol I (AP I)), the prohibition to terrorize the civilian population (Article 51(2) AP I), the prohibition to encourage violations of IHL (derived from the obligation to respect and ensure respect for the rules of IHL “in all circumstances” pursuant to common Article 1 of the Geneva Conventions (GC I-IV) and Article 1(1) AP I), and the obligation to treat civilians and persons hors de combat humanely. Finally, information operations that qualify as military operations – and especially information operations that amount to an attack even under current IHL – are subject to additional legal constraints, most importantly the rules on targeting, such as the principles of distinction, proportionality, and precautions in attack.

The rules on targeting are the exception in this context. But these only apply when the information operation reaches the “attack” threshold. Whether that is conceivable at all opens up difficult questions regarding the required causal nexus between conduct (dissemination of a piece of disinformation) and consequence (e.g. physical harm to an individual civilian). Although it has recently been argued that even the dissemination of false health care information could in fact qualify as a use of force in the sense of Article 2(4) of the UN Charter, the determination of a sufficiently proximate causal relationship is, in any case, not straightforward. If a targeted individual is exposed to a piece of disinformation and, because of it, engages in harmful conduct, for instance by ingesting pure methanol in the mistaken belief that it will help against COVID-19, that person must still make the decision to act on the disinformation.

That is not to say that the cited rules do not impose important limits on certain types of digital information campaigns in armed conflict; the underlying considerations continue to be relevant. However, none of the current rules are aimed at maintaining the integrity of civilian information spaces, or the collection of different media, platforms, and channels where communication and exchange of information between citizens occur, including for the purpose of collective decision-making.

This is particularly significant when discussing cognitive warfare operations that aim at degrading information spaces during armed conflict and causing instability, confusion, and loss of trust in a country’s public institutions, media, and democratic decision-making processes – as exemplified in the scenario sketched at the start of this post. To be sure, since the raison d’être of IHL is to mitigate the most severe humanitarian impacts of warfare, but not all impacts of conflict, it might be argued that such effects should remain outside the law’s protective scope even under the conditions of 21st century warfare. Clearly, overly restrictive limits on information operations during armed conflict would be utterly unrealistic. At the same time, however, the unprecedented nature, scope, and impact of manipulative information operations occurring in peacetime, and their long-lasting divisive and corrosive effects on public trust and societal stability, require that more attention be given to these types of operations during armed conflict.

Should IHL Protect Against the Novel Consequences of Information Warfare?

It is precisely for this reason that we believe a thorough reappraisal of the subject is overdue, considering how powerful and consequential such military operations have potentially become in the wake of the digital transformation of society.

Let us recall that a central object and purpose of IHL is the protection of civilian populations against the worst consequences of armed conflict. IHL’s anchoring in 20th century kinetic warfare and its traditional focus on the physical impact of military operations still pervade contemporary understandings and interpretations of the humanitarian legal framework. But shifts in the nature of conflict have seen the emergence of new modes of hybrid warfare combining the employment of traditional kinetic force, cyber operations, and disinformation campaigns to destabilize or gradually demoralize the adversary – the diffuse conflict that has been afflicting Ukraine since 2014 is the most glaring example. Digital technologies allow for information operations that can deeply affect targeted civilian populations and public structures in ways that were hitherto inconceivable.

At the same time, it remains an open question whether the adverse intangible consequences on modern interconnected societies and information spaces are genuine humanitarian concerns, with the implication that IHL should address them. Are the potential harms laid out in this article reflective of protective gaps that humanitarian law should fill? If so, should such protection be achieved on the basis of existing rules, by linking the harms to traditional forms of violence or physical or mental impacts on individuals? Or should systemic values, such as ‘the integrity of national or global information spaces,’ ‘the integrity of public sector services,’ or ‘public trust,’ be seen as 21st century humanitarian values that IHL should protect as such? And is it conceivable that we would leave them legally unprotected in spite of the prospects for increasingly digitalized warfare in the 21st century?

Advancing the Debate

There are essentially two paths available to move forward from here. One is to accept the adverse consequences of information warfare as, in principle, within the ambit of the rationale of international humanitarian law, implying the need for a more progressive re-interpretation and development of the existing body of IHL. The alternative is to consider threats from contemporary information operations to lie beyond the – deliberately limited – reach of IHL.

In the latter case, other rules would have to be developed to protect these values. If they are not, civil societies will be left without any legal protection against some of the most consequential forms of modern conflict, such as those exemplified in the fictitious scenario above. The long-running but as yet unsettled questions of the extraterritorial (including virtual “territory”) and substantive reach of international human rights law in situations of armed conflict, however, suggest that States remain reluctant to proceed with the second option. For the time being, clear and undisputed protection in times of armed conflict cannot be derived from human rights law alone.

To date, States do not seem to be prepared to treat the consequences of information warfare as humanitarian concerns either. This may be due, in part, to the difficult line-drawing and definitional questions inherent in any attempt at broadening the traditional understanding of IHL. In fact, despite growing engagement within the community of international legal scholars, there is a palpable hesitation to address the issue of information warfare within the framework of international law at all. While there is an increasing trend among States to publish their position on the application of international law to cyber operations, the same cannot be said about the growing phenomenon of adversarial conduct against a State’s information ecosystem.

But in light of recent developments that suggest a shift towards more pervasive epistemic attacks that may lead to a large-scale corrosion of public information spaces without pursuing discernible military objectives, it is time to start discussing the implications for the laws of armed conflict. With the ever-increasing digitalization of societies across the globe, the adverse impacts of such conduct are too grave to remain unaddressed by IHL. In particular, the adverse consequences of full-spectrum, nation-wide information warfare orchestrated by a militarily capable State in times of war, affecting all layers of a target State’s information ecosystem, remain underestimated and underexplored. The State-led disinformation campaigns that we have been witnessing over the past few years, and that are already raising so much concern, might well seem comparatively tame should similar attacks occur during a future armed conflict. Given the further observation that, in information warfare, the lines between times of war and times of peace become increasingly blurred, there even appears to be an emerging need – and room – for a broader rule against systematic and highly corrosive military information operations against civilian information spaces that is not limited to situations of armed conflict but spans the entire spectrum of peace and war.

To be sure, we are under no illusion about the prospects of such a rule materializing any time soon. In our view, all of this first and foremost calls for a policy debate about humanitarian values and protection needs on the future digital battlefield. But at a minimum, we need to move on from the current widespread, instinctive perception that any type of information operation not amounting to prohibited perfidy would automatically be permissible during armed conflict. In view of the possibilities and adverse impacts of contemporary information warfare and for the sake of protecting civilian societies in the digital era, such an approach can no longer reasonably be upheld. The protection and preservation of civilian information spaces should become a humanitarian objective in 21st century armed conflict.

Image: An AFP journalist views a video manipulated with artificial intelligence to potentially deceive viewers – a “deepfake” – at his newsdesk in Washington, DC, on January 25, 2019. “Deepfake” videos that manipulate reality are becoming more sophisticated and realistic as a result of advances in artificial intelligence, creating a potential for new kinds of misinformation with devastating consequences. (Photo credit: ALEXANDRA ROBINSON/AFP via Getty Images).
About the Author(s)

Robin Geiss

Director of the Glasgow Centre for International Law and Security (GCILS), University of Glasgow, and Swiss Chair of International Humanitarian Law at the Geneva Academy of International Humanitarian Law and Human Rights.

Henning Lahmann

Henning Lahmann (@h_lahmann) is a Senior Researcher at the Digital Society Institute at the European School of Management and Technology Berlin, and Associate Researcher at the Geneva Academy of International Humanitarian Law and Human Rights.