Editor’s Note: The author participated in an event titled “Protecting and Promoting AI Innovation: Patent Eligibility Reform as an Imperative for National Security and Innovation” as part of a symposium on Security, Privacy, and Innovation: Reshaping Law for the AI Era, co-hosted by the Reiss Center on Law and Security, the National Security Commission on Artificial Intelligence, the Berkman Klein Center, and Just Security.

The ability, or rather inability, to reliably obtain patents covering artificial intelligence (“AI”) inventions is a serious concern. Although the importance of AI technology, and its ubiquity in all areas of life, should continue to increase dramatically over the coming years, the lack of patent protection (and the consequent inability of companies that invest in developing these technologies to recoup their expenses) could end up slowing what should be a dramatic shift in how things work. Specifically, AI patent applications are often denied because the technology falls outside of current patent eligibility rules. I suggest that part of the problem lies in the way AI has been described and the metaphors that have been conjured to explain it. To solve the problem of patent eligibility for AI inventions, it’s time to change that story.

There is no question that AI and AI-based inventions are becoming more important, particularly as the technology matures. Applications range from playful, like an AI that has learned to play Go, to critical, including AI that has learned to interpret CT scans. Functional applications vary widely and include computer vision, natural language processing, robotics, and more. AI is also appearing in increasingly diverse arenas – from the rice cooker or washing machine in the home kitchen, to autonomous vehicles, to factory floors and medical facilities, and, of course, national security. While innovations in the AI field are in part driven by rapidly improving computing capabilities, the incentives to produce these innovations are lagging because patent protection is often unavailable.

Patent eligibility in the United States is based on section 101 of the Patent Act, which provides that “[w]hoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent.” This broad conception of eligible subject matter in the statute is limited by three exceptions created by the Supreme Court – laws of nature, natural phenomena, and abstract ideas. While computer-related inventions had generally been eligible for patenting through the late 1990s and early 2000s, a set of Supreme Court opinions issued between 2010 and 2014 called the patent eligibility of these types of inventions into question. The final decision in this set, Alice Corp. v. CLS Bank International, created significant uncertainty surrounding the patent eligibility of many of today’s newest technologies. Of most relevance to AI, the Alice decision revived the “mental steps” doctrine as one way of demonstrating that an invention is a patent-ineligible abstract idea. This doctrine attaches a presumption of ineligibility for patenting if the steps of a method or process patent claim can be characterized as something a human could do in her mind or with paper and pencil.

The reason behind the Supreme Court’s exception of abstract ideas from patent eligibility is largely based on preemption. If an idea is truly abstract and untethered to a particular application, then to grant a patent on that abstract idea would create an exclusive right in the patent holder that would preempt the use of the idea, even for applications that the patent’s inventor had never envisioned. For abstract ideas that are described as falling under the mental steps doctrine, there is the secondary concern that a patent could be used to accuse a person of infringement based merely on his or her thinking or scribbling on a piece of paper.

For a simple example, consider an algorithm to add two two-digit numbers (in the old-school way). The steps may include adding the two digits that are in the ones column, subtracting ten from any sum that is ten or greater, putting the result in the ones column of the answer, adding the two digits that are in the tens column, adding one to that sum if the sum of the digits in the ones column was ten or greater, and putting the result in the tens column of the answer (carrying a one into a hundreds column if that sum is itself ten or greater). This algorithm, if someone tried to patent it, would be an abstract idea – because it is an idea that is untethered from a particular application. The original inventor may have contemplated its use for balancing financial books, but it is also used as the first step in calculating an average. To protect the abstract idea would preempt its use in other applications. Moreover, because a human being – following the above instructions – could perform the algorithm (via mental steps or on a scrap of paper), a person who was, for example, figuring out the total cost of supplies for a project could be found to infringe.
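For readers who want to see the procedure written out concretely, here is a minimal sketch in Python of the column-addition steps described above (the function and variable names are purely illustrative and are not drawn from any patent claim):

```python
# Illustrative sketch of the "old-school" column-addition procedure described above.
def add_two_digit_numbers(a, b):
    """Add two two-digit numbers by working the ones column, then the tens column."""
    assert 10 <= a <= 99 and 10 <= b <= 99

    # Add the digits in the ones column; subtract ten if the sum is ten or greater.
    ones_sum = (a % 10) + (b % 10)
    carry = 1 if ones_sum >= 10 else 0
    ones_digit = ones_sum - 10 * carry

    # Add the digits in the tens column, plus one if the ones column carried.
    tens_sum = (a // 10) + (b // 10) + carry

    # Any final carry becomes a hundreds digit (e.g., 60 + 50 = 110).
    return 100 * (tens_sum // 10) + 10 * (tens_sum % 10) + ones_digit

print(add_two_digit_numbers(47, 38))  # 85
```

The point is that the procedure is trivially simple, which is exactly why a human can perform it mentally or on paper, and why a patent on it would sweep in everyday mental activity.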

An example of an invention deemed an ineligible abstract idea comes from the 1972 case of Gottschalk v. Benson. The patent claims in that case were directed toward an algorithm for converting binary-coded decimal numerals into pure binary numbers. The claimed method was not limited to any particular application – nor was it limited to a specially programmed machine. The Supreme Court held that this was an unpatentable abstract idea and stated that to allow patent protection would preclude anyone from using the algorithm in any field. The case is a valuable illustration of the mental steps doctrine: by following the patent claims, I can (and have) performed the claimed algorithm on a chalkboard in front of students – which would have constituted patent infringement if a patent had been issued on the algorithm.
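By way of illustration only, the kind of conversion at issue in Benson can be sketched in a few lines of Python. This is not the patent’s exact claimed shift-and-add procedure, just a minimal example of turning a binary-coded decimal numeral into a pure binary number:

```python
# Minimal sketch: converting a binary-coded decimal (BCD) numeral to pure binary.
# This illustrates the general conversion at issue in Benson, not a reproduction
# of the specific steps recited in the patent claims.
def bcd_to_binary(bcd_digits):
    """Convert a sequence of decimal digits (each a 4-bit BCD group) to one binary integer."""
    value = 0
    for digit in bcd_digits:
        assert 0 <= digit <= 9
        # Multiply the running value by ten using shifts and adds:
        # value * 10 == (value << 3) + (value << 1)
        value = (value << 3) + (value << 1) + digit
    return value

print(bin(bcd_to_binary([5, 3])))  # BCD digits 5, 3 -> 0b110101 (decimal 53)
```

Because nothing in a routine like this is tied to any machine or field of use, each step can also be carried out by hand, which is precisely what exposes such claims to the preemption and mental steps rationales.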

Artificial intelligence, in part because it began with the idea of training a computer to act like a human, essentially walks itself into the mental steps doctrine. In the early days, AI was defined as “the ability of machines to do things that people would say require intelligence.” More modern definitions have coalesced around the idea of machines that, with little human oversight, can perform a task otherwise performed by a human as well as, or even better than, the human would. But while AI uses computational methods to reproduce the results of human mental activity, the means by which the machine achieves those results is often quite different from the biological and cognitive processes a human would undertake to reach the same conclusion. “[T]he parallels between artificial intelligence and human mental steps are ultimately superficial. There is a fundamental conceptual difference between a claimed invention that seeks to emulate or replace, rather than simply cover, functions ordinarily carried out by a human.” The actual computational procedures performed by a computer differ in form and process from how a human performs the same task. For example, a computer will not go about the task of adding two two-digit numbers using the steps described above; it will obtain the result in a different (and likely more efficient) way. The denial of patent protection based on the mental steps doctrine is inapposite for AI inventions.
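To make that contrast concrete, consider how a machine might actually carry out the earlier addition example: in binary, through carry logic, rather than by working down decimal columns. The following is a simplified, purely illustrative software sketch of that idea (real hardware does the equivalent in circuitry):

```python
# Illustrative contrast with the column method above: adding in binary using
# only bitwise operations, a simplified software stand-in for what adder
# circuitry does. Names are illustrative.
def bitwise_add(a, b):
    """Add two non-negative integers using only XOR, AND, and shifts."""
    while b != 0:
        carry = (a & b) << 1   # bit positions that generate a carry
        a = a ^ b              # sum of the bits, ignoring carries
        b = carry              # feed the carries back in
    return a

print(bitwise_add(47, 38))  # 85, reached by a process no human would follow mentally
```

The result is the same, but the process bears no resemblance to the mental or pencil-and-paper steps a person would take.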

Moreover, the concern about preemption that underlies the patent ineligibility of abstract ideas is not present for AI inventions. Unlike the mathematical algorithm for converting one kind of numeral into another at issue in Gottschalk v. Benson, AI, especially in its current state, is directed towards a particular application – playing Go or interpreting CT scans. Even as AI technology improves to realize more generalized inventions, however, the claimed technology will not be something that I (or anyone else) could readily perform on a chalkboard in real time. This is not just because the AI allows the result to be found quickly, but because the process used will not be one that any human could, or would, follow to solve the problem.

Even as the technology that underlies AI innovations evolves away from the manner in which a human would go about solving a problem – we’re no longer trying to build a computer that mimics a human being in terms of process – there still exists the problem of metaphor. Metaphors help convert abstract or conceptual ideas into more tangible ones, grounded in common experiences that most of the listening audience will understand. One example from the patent space comes from Justice Stevens, who, writing in dissent, explained software thus: “It is more like a roller that causes a player piano to produce sound than sheet music that tells a pianist what to do.” Unfortunately, the “how” of AI is often difficult to explain or understand, and so it is natural to resort to metaphorical explanation. To help explain the inexplicable, the explanation tends to fall back on notions related to human thinking. This, however, brings the focus – wrongly – back to the mental steps doctrine and ineligible abstract ideas.

Today’s AI inventions, not to mention those in the future, have less in common with Gottschalk v. Benson and more in common with innovations in computer technology, medicine, industrial applications, and more – few of which would ever fall under the mental steps doctrine. But as long as AI inventions are described, either literally or metaphorically, in terms of mimicking human thinking, they are likely to be found ineligible for patenting under Alice v. CLS Bank. It’s time to change the story and start talking about AI inventions as the truly revolutionary advances that they are.

Image: Getty