Does government need academia? Do theories matter in the pragmatic, day-to-day grind of national-security policymaking?

I think they do. Academics and other independent analysts have advantages that policymakers often lack. These include critical distance and the broader perspective it provides; deeper grounding in theory and conceptual frameworks; and the luxury of time to think about tomorrow’s problems and opportunities, not just today’s.

During my tenure as Chairman of the Privacy and Civil Liberties Oversight Board, my colleagues and I benefited greatly from the contributions of outside experts: think-tank researchers, professors, technologists, legal scholars, civil-society groups, and former officials.

Some topics that we reviewed had been well covered by researchers. Surveillance experts have for years debated issues surrounding Sections 702 and 215, two of the main authorities at issue in the Snowden leaks. Encryption and the “going dark” debate are evergreens—unresolved for nearly three decades and likely to be with us for several more. The Court of Justice of the European Union’s decisions on transatlantic data transfers have elicited torrents of commentary from academics, lawyers, and other experts.

Other areas within our jurisdiction, however, had not been comprehensively studied and seemed—forgive the jargon—undertheorized. Existing frameworks, such as the Fair Information Practice Principles, are quite general and map awkwardly onto classified surveillance programs. (For instance: What does “Individual Participation” mean when the whole point is to conceal the surveillance from the subject?)

Why does this matter? Without some generalized but practical framework to guide decisions, oversight bodies and policymakers alike have little choice but to rely on unstructured intuitions (informed, to be sure, by the best information and advice available). “Does this seem excessive to me?”—the cleared person’s gut-check, as proxy for the uncleared public—is better than nothing. But augmenting that intuition with a considered schema for determining what is reasonable will produce better results. Those judgments are likely to be more defensible, persuasive, and consistent over time.

In this article, I identify four opportunities for greater scholarly attention to help improve policy:

  1. More rigorous conceptual frameworks for analyzing screening and vetting programs.
  2. Broader, more systematic research on comparative surveillance law.
  3. Blue-sky thinking on FISA and other foundational elements of our regime for regulating intelligence programs.
  4. A well-theorized taxonomy of privacy interests that arise in the distinctive context of clandestine surveillance programs.

I. Screening and Vetting Against Large Datasets

As recently as thirty years ago, the idea that millions of ordinary Americans would be “screened” or “vetted” by security agencies of the federal government would have seemed farfetched. Today, it is routine. Quietly, screening and vetting have become part of the furniture of post-9/11 life in the United States.

What do we mean by screening and vetting? These programs share common elements: a population subject to vetting; a transaction or event that triggers the vetting; criteria for an adverse result (and, explicitly or implicitly, standards of proof for meeting those criteria); and some set of government holdings (watchlists, datasets, etc.) against which the person will be screened for derogatory findings.

Americans most frequently undergo screening and vetting when traveling by air. TSA’s Secure Flight program “compares passenger and non-traveler information to the No Fly and Selectee List components of the Terrorist Screening Database (TSDB) and, when warranted by security considerations, against other watch lists maintained by TSA or other federal agencies.”
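The common elements described above can be sketched as a simple data model, using Secure Flight as the worked example. This is purely illustrative: the field names and values below are my own shorthand for the framework in the text, not terms drawn from any statute, directive, or actual program.

```python
from dataclasses import dataclass

@dataclass
class VettingProgram:
    # Illustrative field names only -- not official terminology.
    population: str          # who is subject to vetting
    trigger: str             # the transaction or event that initiates vetting
    criteria: list[str]      # what counts as an adverse (derogatory) finding
    standard_of_proof: str   # how strongly the criteria must be established
    holdings: list[str]      # watchlists/datasets screened against

# A hypothetical instance loosely modeled on the Secure Flight description:
secure_flight = VettingProgram(
    population="airline passengers and certain non-travelers",
    trigger="booking or attempting to board a flight",
    criteria=["match to No Fly List", "match to Selectee List"],
    standard_of_proof="identity match under program rules",
    holdings=["TSDB (No Fly and Selectee components)",
              "other watchlists maintained by TSA or other agencies"],
)
```

Laying the elements out this way makes it easier to compare disparate programs: each of the examples that follow (security clearances, the National Vetting Center, continuous vetting) fills in the same five slots differently.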

Screening or vetting is also required for some government employment. For example, people seeking security clearances are vetted before being entrusted with classified information. TSA also vets people seeking access to secure areas of airports and maritime facilities.

Screening and vetting are a growth industry—and, in recent years, an urgent priority at the highest levels of government. The Trump administration’s NSPM-9 directed the establishment of a “National Vetting Center” to “coordinate agency vetting efforts to identify individuals who present a threat to national security, border security, homeland security, or public safety.” In December 2018, the NVC “began supporting its first vetting program,” the Electronic System for Travel Authorization, which permits many foreign travelers to visit the United States without obtaining a visa.

The focus on screening and vetting has continued into the new administration. The Biden administration recently enlisted the Departments of Homeland Security, State, and Defense “to process, screen, and vet Afghans who have worked for and on behalf of the United States and for other vulnerable Afghans” as part of “Operation Allies Refuge.”

And it’s not just Afghans. The new administration’s National Strategy for Countering Domestic Terrorism lists enhanced screening and vetting as one of its strategic goals, promising to “[e]nsure that screening and vetting processes consider the full range of terrorism threats.” Earlier this year, the deadly January 6 riot at the U.S. Capitol prompted “the FBI to vet all of the 25,000 National Guard troops coming into Washington” for the inauguration two weeks later. And all Department of Defense personnel already undergo “continuous vetting,” which will eventually include screening “troops’ and DOD employees’ social-media posts for extremist views or behavior.”

Disparate as they are, these screening and vetting programs share common elements—yet we lack a unified theoretical framework for evaluating them. How should we assess whether it’s appropriate to screen a particular class of people, upon a particular triggering event, in order to search for particular types of derogatory findings, in particular government holdings or watchlists?

Presumably such a framework would account for the different privacy and legal concerns attaching to different types of government holdings: reviewing publicly available information, for example, may be less sensitive than querying databases of information collected under FISA. Watchlists have their own limitations and attendant concerns.

The identity of the subject is another factor. Different populations receive different levels of protection under the Constitution and various statutes. Nationality, location, and connection to the United States will likely be important variables in evaluating any national-security vetting program.

II. Comparative Surveillance Law

Global interest in U.S. surveillance practices and laws spiked after 2013’s Snowden leaks. President Obama’s Presidential Policy Directive 28, the passage of the CLOUD Act, and revisions to Section 702 attracted considerable attention overseas. U.S. surveillance law has featured prominently in the two Schrems decisions issued by the Court of Justice of the European Union. As a result, European experts in intelligence and digital privacy closely follow U.S. legal developments.

Other countries’ surveillance laws and practices are less comprehensively documented. Why?

To some extent, this reflects the greater transparency that the United States has provided about its laws and practices, particularly since the disclosures of 2013. (To be sure, those leaks and the ensuing blowback stimulated many of these transparency initiatives.) The copious public materials about U.S. intelligence programs provide ample fodder for domestic and foreign researchers.

Another factor: Our materials are in English—a language that is almost universally read and spoken by Europe’s researching class. As a result, it’s much easier to examine and critique U.S. surveillance law than that of Hungary or Finland.

Why does this discrepancy matter?

First, it fosters the perception that U.S. law is uniquely permissive. The Schrems decisions, which upended two painstakingly negotiated transatlantic deals, rested largely on this belief. These consequences might be warranted if U.S. law is indeed notably looser. But without more rigorous comparative study, how can observers assert with confidence that this is so?

Second—and perhaps more importantly—Americans can learn from other countries’ attempts to reconcile the power reposed in secret intelligence services with privacy and democratic accountability. As long as these agencies exist, we will be refining the laws and mechanisms that constrain them. As we do so, we can surely benefit by learning from others’ successes and mistakes.

This calls for genuine comparison between systems—understanding how each country’s laws and institutions would handle similar scenarios. It will also be helpful to know more about other countries’ capabilities and practices, though these are inevitably clouded by secrecy, and most governments are less leaky than ours on classified matters.

The good news is that cross-border research on surveillance law has accelerated in the past several years. For example, aboutintel.eu, a project of Berlin’s innovative Stiftung Neue Verantwortung, publishes an eclectic mix of expert commentaries on European and American surveillance programs, law, and institutions. Oversight, which is naturally more visible than operations, has received considerable attention from cross-border researchers. The next step for scholars: weaving insights about particular systems into a broader discipline of comparative surveillance law.

III. Big-Picture Thinking on FISA

For years, the Foreign Intelligence Surveillance Act has rarely been out of the news. Since 9/11, successive enactments have expanded some powers, trimmed back others, and modestly strengthened the law’s oversight mechanisms.

Controversy has never been far behind. The NSA’s bulk collection of call detail records, which rested on an aggressive, secret interpretation of one provision of FISA, was trimmed back by 2015’s USA Freedom Act and ultimately abandoned. More enduring have been the PRISM (now “downstream”) and Upstream programs conducted under Section 702, which yield great intelligence value but have provoked concerns about “incidental collection” of information about Americans.

Most recently, the DOJ Inspector General’s discovery of numerous serious flaws in applications to surveil Trump campaign aide Carter Page — plus subsequent revelations of non-compliance with internal procedures — has brought renewed scrutiny to FISA’s oldest mechanism: court orders based on probable cause.

The accumulated doubts contributed to Congress’s allowing several provisions of FISA to lapse last year. That left a gap in the government’s toolkit for monitoring foreign agents in the United States. Yet there seems to be little appetite in Congress to take up renewal.

Where should FISA go from here?

Commentary on FISA has tended to take as a given the law’s fundamental structure, focusing instead on important but narrow issues within it. Should agencies be permitted to query data acquired under Section 702 to search for information about Americans? Should Congress circumscribe the purposes for which foreign persons can be targeted under 702? Should it strengthen the requirement that the Department of Justice disclose when information “derived from” FISA is used in criminal prosecutions? When should the FISA Court be required to appoint an amicus curiae to present alternative views?

This preference for fine-tuning makes sense: proposals for statutory tweaks are typically more useful to policymakers than big-picture theorizing. Lawmakers are understandably loath to open long-serving statutes to fundamental changes. Modest bills are far more likely to successfully run the legislative gauntlet.

And FISA has worked. Over four decades, the Act has largely succeeded in erecting guardrails around sensitive intelligence powers. Before FISA and other reforms of its era, intelligence powers were repeatedly abused to collect political kompromat and surveil nonviolent political movements, often in direct contravention of established law or policy. The controversies around FISA today, though important, should not deceive: there is no comparison between the pre- and post-FISA eras of national-security surveillance.

Still, few statutory schemes endure forever, and some of FISA’s pillars look increasingly anachronistic. The most important reason involves dramatic changes in the technological environment in which surveillance programs operate.

FISA is more than forty years old. Communications networks are vastly more complex than in 1978, and there is exponentially more data available to be collected. Those networks also transcend national borders, commingling Americans’ digital traffic with the world’s.

The privacy stakes are also different. The scale of potential collection in the digital age, and the ease of querying and analyzing the data collected, would have been difficult to imagine in the era of copper wires and paper files. With a wider aperture for collection on the front end, back-end limits on how data can be used take on outsized importance. Yet FISA says comparatively less about those.

Institutions, and what the public expects from them, have also evolved. Now under the DNI, the intelligence community is more genuinely integrated than before 9/11, when triple-hatted Directors of Central Intelligence struggled to forge unity of effort. Meanwhile, as the public’s trust in institutions has generally declined, expectations for transparency and accountability have increased.

Will an opportunity arise for a broader rethinking of FISA? It seems farfetched today. In a crisis, however, reforms that were previously unimaginable can become unavoidable. When that happens, legislators leap to make major changes that address the felt needs of the moment. Non-governmental experts, who have the time and distance to consider issues remote from policymakers’ day-to-day concerns, are well-positioned to generate ideas that may be useful in a crisis.

If a window opened to reconsider the fundamental structure of FISA, what would we change? Some questions that might arise:

  • Does it make sense for FISA’s processes to be so similar for foreign nationals and Americans (more accurately, “U.S. persons,” a statutory term of art that includes lawful permanent residents, companies, and some unincorporated associations)? Or should the protections available to U.S. persons differ more markedly from those provided to temporary foreign visitors?
  • Should a more permissive model exist for targets whose status as foreign powers or agents of a foreign power is openly acknowledged?
  • Do FISA’s rigid geographic distinctions still make sense? As David Kris has noted, “packets from domestic and international communications may increasingly be found in the same locations.” To some extent, this makes the site of collection less predictive of whether domestic content will be ensnared.  International travel has also become more common since the late 1970s.  Given these shifts, a target’s connection to the United States and the manner of collection, one could argue, better indicate the requisite degree of protection than the place of collection, which is an imperfect proxy for other concerns.  Section 704, added in the FISA Amendments Act, implicitly reflects this approach: It requires a full-dress FISC order to target a U.S. person anywhere in the world if the acquisition would require a warrant if conducted inside the United States for law-enforcement purposes.  Could this logic be extended to other categories of FISA targets?
  • Should the definition of electronic surveillance, which assumes electronic communications are most usefully categorized as either “wire” or “radio,” be updated (or replaced altogether)? Would criteria based on Fourth Amendment standards make more sense than fixed technological categories?
  • Should other surveillance activities that aren’t currently regulated by statute be included?
  • Should the “certification” model used in Section 702 be extended to other activities currently requiring individualized orders?
  • Should requirements for internal executive branch oversight be spelled out in the statute, rather than left largely to the discretion of agencies?

To be sure: a fundamental overhaul of FISA is probably not imminent. Nor, however, is it wildly improbable that FISA will be substantially revised in the years ahead. If a major crisis arises, legislators will reach for existing proposals to address the needs of the moment. What will they find?

IV. A Taxonomy of Privacy Interests Affected by National-Security Surveillance

We tend to presume that collecting personal information harms privacy. At a very basic level, that intuition is probably right. To weigh the costs and benefits of a program, however, oversight bodies and other analysts need to ask more granular questions: which aspects of the program harm privacy, in what respects, and how significant are those harms?

In his article A Taxonomy of Privacy, Professor Daniel Solove set out to “define the activities [that affect privacy] and explain why and how they cause trouble.” That is a helpful way of describing the task, and Solove’s taxonomy contains many useful concepts for analyzing surveillance programs—for example, his distinction between dignitary harms and “architectural” harms, which enhance the risk that a harm will occur or alter the balance of social relations in some undesirable way.

Intelligence programs, of course, are different from other privacy intrusions. Unlike most commercial data collection, national-security surveillance serves a public function legitimated by our democratic processes.

Another important difference: It is usually done in secret. Concepts like consent, notice, and choice may be relevant in analyzing commercial and other governmental data collection, but have little purchase when it comes to clandestine surveillance. Other principles, like transparency and accountability, are still relevant, but must be implemented in ways that reflect the distinctive needs of the classified realm. For these reasons, general privacy frameworks may be a poor fit for analyzing intelligence programs.

A taxonomy designed for assessing clandestine intelligence programs would consider the following factors, and others like them:

  • The difference between scanning or filtering by automated systems and review by humans.
  • The difference between transitory scanning and long-term storage.
  • Whether data is actually queried during long-term storage or, alternatively, remains unreviewed before being deleted (recognizing that, at the time of collection, whether a record will be called up by a human before being deleted is a question of probability).
  • Gradations of harm (or risk) based on whether, how, and how widely information is disseminated.
  • Risks arising from the possibility of abuse by malicious insiders.
  • Risks arising from the possibility that data will be leaked, inadvertently exposed, or stolen (taking into account that copies of records collected by the government may also exist in less-secure private databases).
  • Risks arising from the possibility that compliance practices or other norms will erode over time, exposing the data to uses unintended at the time of collection.
  • Risks arising from the possibility that later-invented technologies will enable the government to learn more from the data than was believed possible when it was collected.
  • Whether the possibility of being caught in a crime by foreign-intelligence surveillance is a cognizable harm (on the theory that standards for approving foreign-intelligence surveillance are more permissive than for criminal wiretaps).
  • The degree to which safeguards within an agency, or imposed from outside an agency, reduce privacy harm by mitigating risk.
  • How to categorize and weigh intangible or ambient harms (the eerie sensation of being watched, chilling effects, creeping mistrust of government, etc.).

That last point is important: to be cognizable in this policy balancing, harms to privacy need not be “concrete,” linked to some real-world outcome, or even individualized.  We understand intuitively that surveillance exerts these intangible effects, though they can be difficult to measure or even define with precision.  Policymakers, whose judgment is not bound by the Article III standing principles that limit the jurisdiction of federal courts, can and should consider such harms.
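One bullet in the taxonomy above notes that, at the time of collection, whether a record will ever be reviewed by a human is a question of probability. A toy expected-harm calculation makes that point concrete. The function and all numbers here are invented for illustration; nothing below proposes actual weights or a scoring methodology.

```python
def expected_harm(p_human_review: float,
                  harm_if_reviewed: float,
                  harm_of_mere_storage: float) -> float:
    """Toy model: expected privacy harm of collecting one record,
    given the probability that a human reviews it before deletion.
    All inputs are illustrative magnitudes, not real measurements."""
    return harm_of_mere_storage + p_human_review * harm_if_reviewed

# A record almost certain never to be reviewed carries mostly storage harm...
low = expected_harm(p_human_review=0.001,
                    harm_if_reviewed=10.0,
                    harm_of_mere_storage=1.0)

# ...while a routinely queried record carries most of the review harm as well.
high = expected_harm(p_human_review=0.9,
                     harm_if_reviewed=10.0,
                     harm_of_mere_storage=1.0)
```

The point of the sketch is only that two programs collecting identical data can impose very different expected harms depending on back-end practices, which is why the taxonomy treats querying, retention, and dissemination as distinct factors rather than folding everything into the moment of collection.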

* * *

Without well-conceived frameworks to guide their analysis, policymakers, judges, and overseers will tend to rely on unstructured intuitions about surveillance and privacy. Supported by good scholarship, their work, and therefore our institutions, will be stronger.
