Coming to Europe from the United States to talk about law enforcement use of facial recognition technology (FRT) at multi-stakeholder gatherings is like walking through the looking glass. It’s not clear exactly which metaphor fits best. Where Europeans have a lush forest of legal regulation for police use of technology and data—and still feel they lack what they need—we in the United States live in a desert landscape bereft of law, where police do what they wish with virtually no regulation at all.

To be clear—and much more on this below—there’s every reason to be skeptical of some of the legal justifications offered for FRT, particularly in the United Kingdom. And some European countries may be moving too quickly, beyond even where police in the United States tread. But that, in its own way, is the point: legal justifications are required, and given, and people know what they are and can call them out as insufficient if they don’t measure up. In the United States, it is all hush-hush, maybe even with a dose of deceiving the public mixed in, making it nearly impossible to hold law enforcement to account.

Facial recognition in practice

For the past couple of years now, the United States has been in mini-turmoil over police and government use of FRT. San Diego-area departments used it and were forced to stop. Washington County, Oregon, and Orlando, Florida, jumped in using Amazon’s Rekognition, creating a national stir. Members of Congress held hearings and expressed outrage. San Francisco and a few other municipalities have banned it. The latest scandal—and it is indeed a scandal—surrounds Clearview AI, the company that scraped social media platforms like Twitter and Facebook to build a database of millions of faces (with name and geolocation identifiers), and then “licensed” police departments to use it. Many departments were tight-lipped or denied participation until Clearview’s client list was hacked and BuzzFeed and the New York Times started to tell us what was really up.

The controversy about FRT is driven by a set of reasonable concerns about use of the technology—from intrusions into privacy and the chilling of constitutional rights like protest and free speech, to overcriminalization and racial injustice. These concerns are compounded by the fact that FRT algorithms have greater difficulty identifying women and people of color than white men, which can lead to enforcement disparities.

Europe and the United Kingdom have seen their own share of controversy over the practice. The South Wales Police have run live FRT trials several times, including at a major soccer championship, where an FRT van scanned thousands of faces to see if any matched a “watchlist” of several hundred people. The High Court in Cardiff said this was lawful, a decision now on appeal. London’s police department, the Met, has since run its own very public trials. Police Scotland announced that it would be operational with FRT by 2026, until a parliamentary sub-committee trashed the idea and, under pressure, the police backed off—for now. The European Commission issued a White Paper on the regulation of AI; an early draft hinted at a moratorium or ban on FRT, but that has gone by the wayside.

Legal differences and indifference

There’s a decided difference, though, between things at home and abroad, and that difference is law. In the United States, the police are using FRT somewhat defiantly, and not always openly or candidly, while politicians (or at least a few of them) wring their hands about what is to be done. Challenges, to the extent they exist, will largely be based on the Constitution’s Fourth Amendment, which notoriously has little to say about things that happen out in the open, like capturing images of faces on display in public(ish) places.

In Europe, there’s lots of law in place governing police use of surveillance techniques, and still a concession in many places that there is not yet enough legal regulation to move forward. There’s the European Convention on Human Rights, the General Data Protection Regulation, the Law Enforcement Directive, and all the data protection measures of member states. Still, the Berlin police ran a trial of FRT with a volunteer “watchlist” but concluded they could not use it for real because the legislature must decide the criteria by which people are included on watchlists. France’s gendarmes have used FRT in investigations but likewise concluded that any live use lacked legislative authorization. And the European Commission clearly recognizes that serious regulation of AI is needed.

A case in point is the decision rendered on September 4, 2019, by the High Court sitting in Cardiff in Bridges v. South Wales Police, the challenge to the South Wales Police’s (SWP) trials of live FRT. The court upheld police use, a decision that is now on appeal. But that opinion sounds like nothing you’re likely to hear around policing in the United States. “The debate in these proceedings has been about the adequacy of the legal framework in relation to AFR Locate,” the court explained, setting out in detail all the relevant provisions that had been offered up, among them not only Europe’s Law Enforcement Directive and the Data Protection Acts of 1998 and 2018, but a number of government guidelines and the SWP’s own written policy on the use of FRT. (How much of this sort of analysis survives Brexit is someone else’s field, not mine, and seems in some respects to be anyone’s guess.)

The court set out methodically what was required for AFR Locate to have a “sufficient legal framework,” including the “necessary qualities of foreseeability, predictability, and hence of legality” that Bridges argued were required. Before the police could proceed, there had to be “some basis in domestic law.” What the police were doing had to be “accessible,” which is to say, “published and comprehensible, and it must be possible to discover what its provisions are.” It had to be “possible for a person to foresee its consequences for them.” And, finally, the governing rules “should not confer a discretion so broad that its scope is in practice dependent on the will of those who apply it, rather than on the law itself.”

A visitor from the United States would have to be forgiven for falling off her proverbial chair hearing of such regulation of the police. And here’s something else you’re unlikely to hear about policing in the United States. The court recognized that the SWP’s written policy on FRT might be amended as learning progressed, but “[n]onetheless, for the duration of their lives, such policy documents provide legally enforceable standards against which South Wales Police use of AFR Locate can be judged.” Police policies being enforceable in court?

To be clear—and it is important to be very clear here—things in Europe and the UK are decidedly not rosy on the FRT front. Whereas most U.S. departments are using FRT in a static way, for example by comparing an image taken at a crime scene against a pre-existing database of photos in order to identify the person pictured, the trials in Europe run the algorithm on live camera feeds to search for people on watchlists (a rough sketch of the distinction appears below). And, to be even clearer, the UK court’s conclusion that the SWP’s use was “according to law” was far too generous to the police, resting as it did on a fallback to “common law” authority. So too its assessments of the “necessity” of using FRT to combat crime and of its “proportionality”: the court conducted an analysis of competing costs and benefits that mostly ignored the costs and had little rigor to it.
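To make the static-versus-live distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the function names, the match threshold, the toy embeddings); real systems rely on trained face-embedding models and carefully tuned operating points. The sketch illustrates only the difference between a one-off database search and a continuous scan of live feeds.

```python
# Hypothetical sketch: static one-off search vs. live watchlist scanning.
# Embeddings here are plain NumPy vectors; a real system would produce
# them with a trained face-recognition model.
import numpy as np

MATCH_THRESHOLD = 0.9  # hypothetical operating point

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def static_search(probe, gallery):
    """Static use: compare one crime-scene image's embedding against a
    pre-existing database (e.g., mugshots) and return candidate names."""
    return [name for name, emb in gallery.items()
            if similarity(probe, emb) >= MATCH_THRESHOLD]

def live_watchlist_scan(frames, watchlist):
    """Live use: every face in every camera frame is checked against a
    watchlist in real time, yielding an alert for each possible hit."""
    for faces_in_frame in frames:      # one list of face embeddings per frame
        for face in faces_in_frame:
            for name, emb in watchlist.items():
                if similarity(face, emb) >= MATCH_THRESHOLD:
                    yield name         # possible watchlist match
```

The difference matters: the static search processes only the probe image, while the live scan processes the face of every passerby, matched or not.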

The key takeaway here, though, is that the court’s weak reasoning was out in the open to criticize. So too SWP’s policy. And what SWP did was done publicly, notoriously, and transparently—and had it not been, even this court would have declared it unlawful. But for these requirements, one can only imagine what the police would be doing.

There’s simply no reason that police use of FRT in the United States—or any other technology, for that matter—should be utterly without legal authorization. The requirements set out by the High Court were, after all, nothing more than what we here in the United States think of as bedrock rule of law. The South Wales court said the question before it was whether “AFR Locate [was] ultra vires the SWP”—meaning outside its power. That’s a basic concept in United States administrative law too—agencies can’t just do what they want without legislative authorization, and agency policies themselves are subject to judicial review. And police departments are agencies of government like any other. Except we don’t apply these basic concepts of the rule of law to the police. We never really have. We give the police broad authority to enforce the law, and then they just do as they will unless they trip over the Constitution’s very generous (to the police) limits.

Requiring that the police use technology “according to law” at least ensures the sort of transparency and regulatory guidance that allows the public to have input. Not just informally, as with this article, but in the formal houses of government. It requires legislators and bureaucrats to do studies, take positions, and draft rules.

Some of the new policing technologies hold real promise for keeping us safe, which explains why segments of the public support them and the police are eager to try them. But they also pose a grave threat to individual liberty, privacy, and racial justice. A balance needs to be struck.

But it will not be struck by continuing to act lawlessly, which is to say without real legal authorization.
