New Technologies, New Problems – Troubling Surveillance Trends in America

There is growing concern that government surveillance in the United States is headed in a dangerous direction. Accelerated by publicity from the government’s heavy-handed response to the Black Lives Matter (BLM) protests, media outlets have reported on extensive aerial surveillance carried out by law enforcement agencies, increased police reliance on artificial intelligence (AI)-powered social media surveillance, and the use of facial recognition technology to identify individuals suspected of crimes. Some commentators even suggest that the surveillance techniques deployed in the United States are starting to resemble tactics used in far more repressive countries, such as China or Russia.

David Carroll, for example, writes in Quartz that the main difference between U.S. and Chinese surveillance programs is that the Chinese are honest about the dystopian elements of their strategy, whereas the United States does a better job of “camouflaging it” and pretending that its surveillance programs still uphold civil liberties. Similarly, Derek Thompson compares two “surveillance states” in The Atlantic: Xinjiang, China and Brooklyn, New York. While he clarifies that nothing in the United States quite compares to the surveillance apparatus implemented by Chinese authorities, “the use of novel surveillance tools to monitor, terrify, and even oppress minority citizens is not a foreign concept.”

Such comparisons can be highly misleading. Technological capacity alone is not determinative of whether a country will use surveillance repressively. As a democracy, the United States has many more rights protections in place than does China and fundamentally greater accountability between citizens and leaders.

Nevertheless, as America’s history of surveillance abuses indicates – such as the FBI and National Security Agency’s (NSA) actions in the 1960s and 1970s to spy on civil rights leaders like Martin Luther King Jr. – even well-established democracies struggle to maintain an appropriate balance between law enforcement imperatives, on the one hand, and citizens’ rights on the other. The rapid advent of powerful new digital technologies that enable greater mass surveillance raises the question of whether the United States will be able to maintain such a balance in the years ahead. Several troubling trends make that question especially pertinent now.

Troubling Surveillance Trends

To start, many advanced technologies that U.S. police departments are adopting for surveillance purposes are not yet well understood, contain significant flaws, and are highly intrusive.

Take biometric technologies like facial recognition, which police departments have embraced with startling rapidity and minimal oversight. The New York Police Department (NYPD) has made nearly 3,000 arrests based on facial recognition searches in the first five and a half years of using the technology. Florida law enforcement offices run an average of 8,000 searches per month using Pinellas County’s facial recognition system. More broadly, a 2016 report from Georgetown’s Center for Law and Technology found that one in two American adults are searchable within a law enforcement facial recognition network.

Yet the technology displays alarming deficiencies. A landmark study released by the National Institute of Standards and Technology in 2019 found that certain algorithms were more likely to misidentify African American or Asian individuals than White males “by factors of 10 to beyond 100 times.” This startling error rate is particularly worrisome considering that police departments are relying on facial recognition technology to facilitate arrests, charges, detentions, and criminal convictions. Technologist Roger McNamee describes the problem well: “The flaws of new products like facial recognition and AI are not inevitable; they result from a culture that ships products at the earliest possible moment, without consideration for the impact on the people who use or are affected by them.” In the context of policing, this all but guarantees that companies will sell products – in this case, to law enforcement agencies eager for powerful technological tools – without adequate consideration of the due process rights of suspects, because the market does not provide incentives to prioritize such concerns.

Second, the design logic of information and communications technology, from smartphones to digital platforms, is oriented toward maximizing the collection of user data with little oversight or transparency. While authoritarian and democratic states employ varying surveillance approaches, the end result in both cases is a trove of personal data ripe for commercial and government exploitation.

In authoritarian states, such as China, apps feature built-in censorship and surveillance components to comply with government regulations. When a user logs into WeChat, for example, their postings, messages, and transactions undergo content surveillance. The information they access and the content they send out is filtered through a system that prevents the reception and dissemination of politically sensitive material.

In democracies, governments frequently exploit user data by piggybacking on market-based surveillance models. Smartphone apps and phone carriers accumulate a surprising amount of information on a user’s physical location and activity on their device. This facilitates what Shoshana Zuboff describes as a “surveillance capitalism” ecosystem of commercial exploitation, which then makes it possible for governments to acquire this data for surveillance and investigative purposes.

During the BLM demonstrations, for example, AI startup Dataminr scanned the contents of millions of social media posts, forwarding crucial information to police departments so agents could track and surveil protests. Additionally, major technology companies maintain their own databases of user information, which law enforcement agencies can later access. Last year, The New York Times reported how Google’s Sensorvault database collects location information derived from hundreds of millions of devices. Law enforcement agencies access Sensorvault’s records by filing “geofence warrants” specifying a geographic area and timeframe of interest, and then use that information to narrow down potential suspects or witnesses. Such methods essentially allow officers to work backwards from particular locations and times to identify suspects, the very sort of fishing expedition prohibited by the Fourth Amendment.

Facial recognition systems likewise engage in massive and intrusive data collection, leaving citizens vulnerable to abuse. These systems are designed to facilitate the widespread collection and mass monitoring of sensitive personal data without individualized suspicion. Facial recognition-powered cameras in public squares can be used to quickly pull up a trove of personal information – citizenship, age, educational status, criminal history, employment, and even political affiliation – on individual citizens, without their knowledge.

Third, the decentralized nature of policing in the United States means that many local jurisdictions, particularly those with weak accountability standards or a history of questionable practices, are driving the deployment of advanced surveillance tools.

The NYPD has been one of the most aggressive adopters of new digital tools. In addition to widely using facial recognition techniques and aerial surveillance, it has pioneered the use of predictive policing algorithms, such as PredPol, despite concerns about the technology’s racial bias. Its “gang database” relies heavily on social media monitoring to populate its entries. Other jurisdictions in the United States also display troubling surveillance conduct, from Baltimore’s experimentation with aerial surveillance drones and Houston’s bid to install citywide video surveillance, to a “secretive police intelligence agency” in Maine that has expended considerable resources monitoring racial justice protests.

Efforts to gain democratic oversight of police use of novel technology for surveillance have so far yielded mixed results. For example, the NYPD has for years stonewalled efforts by civic groups to obtain basic information about its inventory of tools. However, the BLM protests have sparked some promising reforms; New York City recently passed a public disclosure act mandating that the NYPD report on all of the surveillance technologies it employs and publish a “surveillance impact and use policy” for each tool. Overall, these transparency efforts remain patchy across thousands of local jurisdictions.

Finally, the illiberal leanings of the Trump administration have exacerbated surveillance trends. Under Trump’s leadership, law enforcement agencies, particularly those with an immigration mandate, have been empowered to procure high-tech instruments and granted sweeping authority to carry out their missions.

On the U.S.-Mexico border, the government uses digital “sentry towers” that rely on laser sensing and artificial intelligence to spot illegal border crossers from as far away as two miles. Away from the border, persistent pressure from the White House to increase deportation numbers has led agencies like Immigration and Customs Enforcement (ICE) to turn to big data tools to identify undocumented immigrants for arrest and deportation. According to a New York Times investigative piece, ICE “sucks up terabytes of information,” drawing upon hundreds of sources, from private data brokers and social networks to state and local government databases, to bolster its program.

More recently, Trump’s incendiary rhetoric in response to BLM protests has paralleled a sharp uptick in surveillance activity. Federal agencies have authorized domestic surveillance and intelligence collection to counter “threats to damage or destroy any public monument, memorial, or statue.” Journalists who reported on DHS surveillance operations have in turn become the subjects of intelligence reports compiled from their social media activity (subsequent reporting reveals that DHS agents may have also accessed encrypted communications from the digital platform Telegram). In municipalities like Portland, unidentified federal agents have cracked down on ongoing protests using a variety of surveillance techniques, including monitoring YouTube and other livestream feeds, in order to identify and arrest suspects.

Thinking Through Next Steps

The growing availability of new digital technologies to enhance surveillance raises important questions for law enforcement in the United States, and for all democracies determined to maintain a workable balance between citizens’ rights and law enforcement imperatives. It is crucial, therefore, that U.S. authorities develop appropriate frameworks to guide their use of these new technologies in ways that strike that balance correctly.

Devising responsible guidelines for using complex technologies like facial recognition will take considerable time and effort. As a start, policymakers could adopt a simple standard for digital surveillance tools: technology known to contain structural flaws or biases should not be used for any decisions that meaningfully affect people’s lives until those issues are rectified. (Or as Will Douglas Heaven asserts about predictive policing algorithms, “if we can’t fix them, we should ditch them.”) This means, for example, that police reliance on facial recognition technology should undergo a moratorium until companies can demonstrate that such technology will not result in inordinately high false positive rates for certain demographic groups. It also means that extensive police use of social media surveillance targeted against certain individuals should cease until there are clear privacy guidelines about what content is appropriate for law enforcement to access and what content necessitates additional approvals. Aside from algorithmic bias and the equality issues that these biases raise, new surveillance technologies bring serious civil liberties implications that require critical public examination. To initiate these conversations, policymakers should request a transparent accounting from law enforcement agencies of the types of technologies currently in use and the guidelines for their deployment.

The recent announcements that Amazon, IBM, and Microsoft have voluntarily suspended selling facial recognition technology to the police are encouraging, albeit temporary, steps. But such decisions should not be outsourced to corporations. The federal government has an obligation to set appropriate rules of use for digital surveillance technology – its failure to do so represents a severe abdication of its responsibilities. It also behooves law enforcement agencies to become radically more transparent about which tools they are deploying and for what ends. Taking such steps is the only way to restore public trust and legitimacy in surveillance measures and law enforcement agencies.

Digital technology is not destined to do harm. But a failure to establish clear and enforceable guidelines about how law enforcement agencies can operate powerful new surveillance tools will make it more difficult to protect citizens’ rights as these new technologies are increasingly deployed. In the current polarized climate, with the Trump administration directing federal agents to adopt militarized postures against civilian protesters in U.S. cities, the risk of surveillance abuse demands immediate public attention and congressional action.

(Editor’s note: Readers may also be interested in this related article: “In the Drive to Curb Police Abuses, Rein in Their Tech Too,” by Lauren Sarkesian.)

Image: peterhowell – Getty Images

About the Author(s)

Steven Feldstein

Senior fellow at the Carnegie Endowment for International Peace’s Democracy, Conflict and Governance Program. You can follow him on Twitter (@SteveJFeldstein).

David Wong

David Wong is a former James C. Gaither Junior Fellow with the Carnegie Endowment for International Peace’s Democracy, Conflict and Governance Program.