(Editor’s Note: This article is the third installment of the Symposium on AI Governance: Power, Justice, and the Limits of the Law.)

The use of AI in government is a response to the problem of how to dispense justice at scale. It takes many forms, but two main ones stand out.

The first occurs by outsourcing. Digital platforms have now become significant agents of the state. This is not necessarily an AI- or platform-specific phenomenon — it can be seen in the United States’ recruitment of big oil, big pharma, and big banking, as well as big tech, to enforce its laws. The big tech variation might look something like Google, rather than a country’s Data Protection Authority, being the first port of call for an E.U. citizen with a complaint under the General Data Protection Regulation. You could think of this as the “privatization of privacy.”

The second occurs via GovTech — the procurement of technology from small and medium enterprises and start-ups to help plug gaps in public service delivery or otherwise automate aspects of public administration through the use of AI-enabled technologies.

But when big tech and GovTech change the way a state does its job, the state changes its relation to the private sector. This is nothing new. The relation between rights, the public interest, and the private sector has had to be recalibrated on many occasions throughout history. The welfare state greatly curbed the freedom of contract, and rightly so. Environmental imperatives have steadily eroded the scope of proprietary rights in many countries over the past century. Streamlined procedures, class actions, mass torts, and the availability of administrative remedies all effected subtle and not-so-subtle changes in the relation between individuals, the public, and the private firm.

Someone who holds that rights are inviolable might be perplexed by this framing. But the inviolability of rights is one thing, and their amenability to social evolution — to revision and recalibration in light of changing social conditions — is quite another. Both are social facts that require accommodation in any adequate theory of rights and society.

But with new forms of adjudication and claims settlement, new problems arise. The most acute of these problems is perhaps the easiest to recognize — how, in an already crumbling liberal order, can the state preserve basic liberal values? Rights may be refined and redefined (including scaled back), new ones may be forged (such as the “right to be forgotten”), but amid all these changes in form, are individuals still being afforded the protections they have enjoyed, in principle, since the mid-twentieth century?

Rather than answer this question upfront, I’ll describe one set of changes that public law must contend with, and ultimately accommodate, if it is to remain in good working order. These changes do involve refining (and possibly limiting) the scope of certain rights — ones that are in fact much older than the Universal Declaration of Human Rights. But I’m going to suggest that this is no bad thing, and, in fact, more like business as usual. The rights may change their forms, but they won’t for that reason alone be a spent force. They may, for instance, avail in different circumstances than hitherto, or the values underpinning them may be realized in novel ways. Of course, it may be that the scope of these rights will in the end diminish considerably. But that remains to be demonstrated.

Process Rights in the Firing Line?

The rights in question are what are often called “procedural” or “due process” rights. They generally apply when scarce resources stand to be allocated to individuals by a person or public authority invested with discretion for that purpose. Allocation procedures of this kind conventionally attract the right to be heard, the right to impartial adjudication, the right to reasonable (or rational) decision-making, the right to duly authorized decisions, and the right to have one’s case considered on its merits. Sometimes the right to reasons is discussed in connection with procedural justice, but it is not universally recognized as a right in either common law or civil law systems.

Not all such rights are in the firing line. But the rationale behind three of them, in particular, may become increasingly ill-suited to the conditions of any society that has decided — as I think those with the means to do so will — that machines are better at handling many resource allocation questions than human beings. I’m not saying that questions of distributive justice can be fobbed off on machines — the ethical and political principles by which resources are allocated to individuals will remain in human hands for the foreseeable future. But once a principle of allocation is settled for a particular administrative agency dealing with a particular resource allocation problem, automated procedures that optimize for that principle will almost invariably be better at handling the allocation than a human or human team acting alone.

The first process right in the firing line is the right to reasonable, rational decision-making. Who knows, perhaps artificial intelligence isn’t so alien to our own. In October, a group from DeepMind purported to show that, with sufficient compute and the right objective function, a generative image classifier can exhibit human-like shape bias, out-of-distribution accuracy, and understanding of perceptual illusions (bistable illusions and pareidolia — the phenomena by which humans can shift between percepts when viewing Jastrow’s duck-rabbit figure and intuit objects in cloud formations, respectively). This suggests that the line pushed until now, by me among others, that machine learning algorithms don’t “reason” the way we do, needs peddling with greater care.

But so long as we bear that in mind, it’s just a fact that most GovTech systems on the market don’t reason the way humans do. A machine learning algorithm might recommend a course of action on the weirdest of grounds. (Think, for example, of the famous “Move 37” played by AlphaGo in its successful 2016 match against world Go champion Lee Sedol — a move that struck keen observers of the game as a most unhumanlike maneuver.) The null hypothesis should be that future AI-enabled systems won’t think as humans do either. And the moral here must be that an AI’s irrationality or “unreasonableness,” as judged by the prevailing legal (read: human) standard, shouldn’t invalidate its recommendations ipso facto. But under the law as it now stands, it does — unreasonableness is one of several “grounds of judicial review,” and there is no recognized exemption for technologically mediated decision-making (short of legislative intervention).

The right to properly authorized decisions also needs to be reconsidered. Only someone with proper authority can make allocation decisions. What happens when the proper authority improperly delegates that authority to a machine through automation bias (the tendency to defer uncritically to a machine’s outputs)? The machine’s decision becomes a nullity. But here’s the rub: as colleagues and I conjectured, there will come a point where optimal human-AI (HAI) team performance demands that we ignore automation bias, because a system may be so much better than its human teammates that allowing humans to intervene will compromise overall HAI team performance. Adherence to the old rule — continuing to honor the old right — risks degrading the quality of public services. The public interest will be at odds with the individual’s right.

Lastly, consider the right to personalized decisions — the right of a decision subject to have their case addressed on its merits, so that a decision-maker doesn’t tie their hands (“fetter their discretion”) when allocating benefits and burdens. There are some contexts where, the existence of discretionary power notwithstanding, decisions must be made uniformly and with as little concession to individual circumstances as possible. Where the context is ill-suited to such uniform dispensation, a court may view the deployment of an AI with more scepticism. But at least where technology is concerned, there may eventually be little point in maintaining a presumption that discretion should be unfettered. Why should a public body that routinely relies on a system it has gone to every trouble to ensure meets impact assessment, third-party auditing, and accreditation standards be under constant suspicion of having surrendered its judgment? The question has greater salience when one appreciates that fettering its discretion may be the best thing a public body can do (for the same reason it may eventually be best to ignore automation bias).

The risk of human judgment is the unequal treatment of equals (noise), while the risk of machine judgment is the equal treatment of unequals (rigidity). It’s the latter of these two risks that the fettering doctrine tries to prevent, and by relaxing the application of that doctrine, a revamped public law would in effect be saying that it’s marginally better to treat people equally regardless of their individual circumstances than to tailor outcomes on a case-by-case basis. The net result would be a small step away from the right to individualized justice and a small step toward a very pure form of equality. But if the technology does reach a point where human intervention degrades service quality, the rule against fettering will be counterproductive.

Rethinking AI in Public Services

The straightforward workaround in all these cases is for legislation to authorize the use of new technology, either en masse or for each department or agency separately. Another option would be to have legislation confer discretion on heads of department, allowing them to sanction the use of specialist systems within the organizations they lead as they see fit.

But I believe a more fundamental rethink of the rules may be required. In a world where data-driven technology will transform public administration, it seems fitting for public law to accommodate that transformation in a less ad hoc and unprincipled fashion. Public (or “administrative”) law can, after all, be aptly thought of as the “law of public administration.” The current working assumption that all delegation and fettering of discretion is impermissible absent clear legislative sanction will soon look anachronistic.

Preoccupation with the proper authorization of technology in public administration should at some point give way to a concern with its legitimate procurement and validation — a different side of the nitty-gritty business of public administration, to be sure, but one which should be within the purview of a true law of public administration. The law would then be less concerned with whether the use of an AI system constituted a fetter on a decision-maker’s discretion and more concerned with the existence of documentation certifying the technology’s provenance and efficacy, the disclosure of any commercial interests involved, and an evaluation of its likely impacts on affected subjects, particularly by reference to the specific context in which the technology is deployed. This is where public law should go: into the regions of procurement practice that, up until now, it has hardly touched.

In this way, the fundamental concerns implicit in the old rights will have been channelled elsewhere. My best guess is that the rights will survive in spirit in the ways I suggested earlier. And this is because I suspect that concern with procurement and documentation ultimately derives from the same values that justify concern with the fair and legitimate exercise of power. But this is, as I say, only conjecture. Time will tell.

Image: Human-AI collaboration (via Getty Images).