We commend Professor Charles Dunlap for his excellent recent post on international law and public support for drone strikes. As he notes, there are many points of agreement among him, Professor Goodman, and ourselves, chief among them that when it comes to drone strikes, the American public is interested not just in being safe, but also in being compliant with international law. Of course, he points to a number of differences, and we appreciate the opportunity to respond to what we take to be the main criticisms he raises.

The first set of criticisms deals with the reliability of Amazon’s Mechanical Turk (mTurk), the survey platform used in the first study, recently published in the scholarly journal Research and Politics. mTurk, as Prof. Dunlap notes, is an online service designed to recruit and pay individuals to carry out particular tasks, which in the case of social science research often means taking surveys. Its greatest virtues are its speed and low cost, which is one reason social scientists have increasingly turned to mTurk to conduct experimental research across a variety of empirical domains. Articles using mTurk have appeared in all of the top political science journals, including the American Political Science Review, the American Journal of Political Science, and International Organization, among others. While mTurk subjects tend to be younger, more educated, more liberal, and, in the case of studies on conflict, more likely to be men, meta-studies comparing mTurk with more representative samples have shown that the two produce roughly comparable treatment effects. To the extent that we are interested in treatment effects — here, the effect of international legal compliance on public attitudes toward drone strikes — rather than in generalizing about overall levels of support in the population, we consider the platform’s advantages to be well worth its potential limitations in representativeness.

Nonetheless, it is worth addressing the questions of demographics and representativeness that Prof. Dunlap raises. While the earlier study relied on 2008 ANES data for a baseline because of when the study began (in fall 2013, before the 2012 ANES release was available and widely used), it is quite possible to re-run the analysis using a different distribution of partisanship, which we did for the purposes of this exercise. The second figure below shows the treatment effects assuming the 2014 Pew data that Prof. Dunlap cites — 48% Democrat and 39% Republican by identification — and reveals generally comparable effects to the original analysis (the first figure), despite weighting the sample differently based on partisanship. If anything, the updated figure points to greater concern with domestic authorization than in the original analysis (and somewhat less, though still not statistically significant, concern with international authorization). Prof. Dunlap’s concerns, however, deal less with questions of legal authorization than with issues of distinction and proportionality, which we address more substantively below.
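For readers curious about the mechanics of such a reweighting, the sketch below illustrates one common approach: post-stratification weights that align a sample’s partisan composition with population targets (here, the 2014 Pew figures) before re-estimating a treatment effect. To be clear, the data, column names, and effect sizes in the sketch are entirely hypothetical and are not drawn from our studies; the code is included only to make the procedure concrete.

# Illustrative sketch only: hypothetical data, not the authors' actual analysis.
# Shows one common way to reweight a convenience sample so its partisan
# composition matches population targets (2014 Pew: 48% Democrat, 39% Republican)
# before re-estimating a treatment effect.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Hypothetical respondent-level data: party ID, treatment assignment, support for strikes (0/1)
df = pd.DataFrame({
    "party": rng.choice(["Democrat", "Republican", "Other"], size=n, p=[0.55, 0.25, 0.20]),
    "treated": rng.integers(0, 2, size=n),
})
df["support"] = rng.binomial(1, 0.6 - 0.15 * df["treated"])

# Post-stratification weights: population share divided by sample share within each party group
targets = {"Democrat": 0.48, "Republican": 0.39, "Other": 0.13}
sample_shares = df["party"].value_counts(normalize=True)
df["weight"] = df["party"].map(lambda p: targets[p] / sample_shares[p])

# Weighted difference in mean support between treated and control respondents
def weighted_mean(group):
    return np.average(group["support"], weights=group["weight"])

effect = weighted_mean(df[df["treated"] == 1]) - weighted_mean(df[df["treated"] == 0])
print(f"Reweighted treatment effect: {effect:.3f}")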

 

Figure 1. Original figure showing treatment effects.

Figure 2. Revised figure based on 2014 Pew partisanship data (48% Democrat, 39% Republican).

While this reweighting exercise increases confidence in the robustness of the results, we also designed a follow-up study, applying for and receiving support from the National Science Foundation (NSF)-funded Time-sharing Experiments for the Social Sciences (TESS) project, which fields experiments on representative samples of adults in the United States.

The results from this second study, which more explicitly probed key principles of international law rather than behavior compliant with the norms underlying the laws of war (the focus of the first study), lent further support to the initial findings. Support for drone strikes declines, relative to a baseline condition in which respondents believe the strikes are killing terrorists, once some of the international legal concerns are raised. That a series of experiments on whether individuals are moved by international law produced consistent results suggests that the findings of any one experiment were not a fluke. Rather, the results are consistent with a larger dynamic in which the public is not simply interested in whether drone strikes work militarily, but also in whether they comply with international law. In other words, while any one study might theoretically be an aberration or skewed, the triangulation of multiple studies reveals a more compelling pattern of attitudes.

Having discussed the question of sampling, we now turn to the more substantive matters raised in Prof. Dunlap’s critique. His main concerns in this regard are with experimental design, including the framing and premise of the treatments.

Underlying both sets of experiments that Prof. Dunlap cites (here and here) is the assumption that while polls have consistently registered high support for drone strikes among Americans, these polls have restricted the frame of reference within which individuals evaluate the policy. Many existing surveys on drones have tended to sidestep controversial aspects of the policy, such as compliance with international law. We wondered whether raising these considerations would affect how individuals think about drone strikes, and in particular whether questions about international legal compliance condition their support. Across all of the surveys we conducted, we found strong evidence that questions of legal compliance dampen individual support for drone strikes.

To probe this question about how international legal compliance matters, we focused on both the types of legal arguments that critics of the drone program have tended to make, as well as the voices that have most frequently made them. We thus sought to evaluate whether specific values or voices in this debate affect public support for drone strikes. In particular, given that outside critiques of the US drone program have revolved around aspects of jus ad bellum (the recourse to force) and jus in bello (generally the principles of distinction and proportionality), we articulated the type of concerns that have been most commonly raised. Prof. Dunlap takes more issue with the latter question of distinction and proportionality, so this is where we will also focus our remarks.

First, Prof. Dunlap questions the wording from the first study suggesting that strikes have “often caused a number of civilian casualties.” We can quibble over the use of the word “often” to describe the extent of the casualties, but while we cite figures from the Bureau of Investigative Journalism (BIJ), other sources of casualty estimates suggest that BIJ’s numbers are far from outliers. A systematic review by Micah Zenko suggests that the BIJ estimates may be on the higher side compared to the other two main sources, the New America Foundation and the Long War Journal. Yet even by the lower estimates, civilian casualties comprised about 7% of total fatalities between 2002 and 2014. That lower estimate comes from the Long War Journal, run by the Foundation for the Defense of Democracies, not exactly a left-wing organization. The table below, showing a meta-study of estimates for Pakistan in 2011, again suggests that BIJ’s civilian casualty figures are not outliers. To be sure, the rate of civilian casualties appears to have declined, especially since 2013. Even so, critics have cited many strikes since then (not only in Pakistan, but also in several other countries, such as Yemen) in which there was “credible evidence of civilian harm from US strikes.” We need look no further than President Obama’s press conference in April 2015, where he admitted civilians were killed in a January 2015 drone strike, to see that civilian casualties are not a thing of the past. Thus, it is not inappropriate to say that these strikes have — fill in the blank: “often,” “not infrequently,” “sometimes” — killed civilians, and it is quite reasonable to probe how the public reacts to such situations.

Variation in fatality estimates across studies of drone strikes in Pakistan in 2011

Another concern arises with respect to the treatment involving “signature strikes” in the first study, a term the treatment does not use explicitly but invokes implicitly by referring to targeting “individuals who appear to behave in similar ways as terrorists — for example, going to a meeting with community elders — but who may not be confirmed terrorists. Such a broad definition may mean there are more civilian deaths than are actually reported.” The question of signature strikes has been the subject of considerable debate, since the government does not acknowledge the practice, but there is a widespread understanding that the government targets individuals who associate with militant groups, who are present in suspected terrorist compounds, or, by some accounts, who are military-aged males in a combat environment. While there is some belief that the rules of engagement tightened after a major Obama policy speech in May 2013, some accounts suggest that operations in Pakistan were exempted and that the US appears to have continued killing individuals without sufficient evidence that they were members of organized armed groups or directly participating in hostilities, which would not be considered lawful. Again, we can quibble with the specific evidentiary standards used to engage a target, but it is nearly indisputable that the US has carried out “signature strikes,” and we were interested in whether the public finds that sort of practice acceptable; the empirical evidence strongly suggests it does not.

Next, in reference primarily to our second study dealing with outside “voices” on the drone program, we were quite interested in whether the groups that Prof. Dunlap implicitly discredits — Human Rights Watch, for example — can nonetheless shape the debate. We were agnostic about whether these groups are in fact credible; we probed whether the public would see them as credible and whether they would have an impact on wider attitudes. They did. While these groups were not seen as rivaling the credibility of the government, they nonetheless had a larger impact on attitudes, which should be reassuring to the organizations expending considerable time and resources producing and disseminating work on the consequences and implications of drone strikes. Prof. Dunlap mentions that the government rarely defends its practices in the same way that critics assail them, but if so, it may be missing an opportunity. We find that the public views the government as credible on this issue, which offers it an opportunity to be more transparent and might also defang some of the criticisms that gain traction in the current vacuum of information.

Despite some of the differences outlined above, we are ultimately in agreement with Prof. Dunlap that questions concerning international law and public opinion toward drone strikes remain important. While in our work we seek to offer a window into some of the public’s thinking about the use of drones in counterterrorism operations, we fully acknowledge that much more work remains to be done. We again thank Prof. Dunlap for his serious engagement with our work and look forward to benefiting from future exchanges.