The use of applications such as Signal, WhatsApp, iMessage, and Facebook Messenger for communications secured by end-to-end encryption has exploded over the past few years. Today, regular users of these and similar services number in the billions around the world. The U.S. Justice Department and the FBI have claimed repeatedly that the extensive use of such services hampers their ability to conduct investigations, because they cannot access encrypted communications. Officials in other countries have issued similar complaints, and Ian Levy and Crispin Robinson from the U.K.’s Government Communications Headquarters (GCHQ, roughly equivalent to the U.S. National Security Agency) have put forward a proposal they say would provide the needed access. But due to the fundamentals of public key encryption, this regime would end up raising most of the same concerns as other encryption back-door proposals floated in recent years.
Levy and Robinson suggest authorizing government officials to force the companies that operate these secure communications services to surreptitiously add another party to an encrypted chat and suppress any notification to the users about the existence of that party. This method has been dubbed the “ghost key” or “ghost user” solution.
An article by Nate Cardozo on Just Security argued against Levy and Robinson’s assertion that they are “not talking about weakening encryption or defeating the end-to-end nature of the service.” The article touched briefly on the technical steps that would be necessary to implement ghost key functionality, but Cardozo’s main argument focused on how the proposal would present the same threats to security and privacy as an encryption back door of the sort that GCHQ claimed to be avoiding.
This article delves into greater detail on the technology behind the encryption these messaging services use, outlines how the ghost key proposal would operate, and further explores some of Cardozo’s reasons why the proposal would still pose serious risks to privacy and digital security.
Levy and Robinson should be commended for their effort to open a rational discussion about government access to encrypted communications, and the six principles they set forth at the beginning of their article are good ones. However, the following will show that their specific proposal is highly problematic.
(A note on terminology: When discussing encryption, the word “key” can be amorphous and thus intimidating. At its simplest, an encryption key is just like its physical equivalent — a piece of digital data that, because of its unique “shape,” can be used to lock or unlock something.)
Public Key Encryption
To understand the ghost key proposal, some background knowledge of a technology called “public key cryptography” is useful. Prior to the 1970s, private communications used a process called symmetric key encryption: the same key that was used to scramble a message was used to unscramble it. This raises the unavoidable “chicken and egg” problem of getting the secret key to an intended recipient securely, if you don’t already have an encrypted means of doing so. Spies during the Cold War often set out from home with long pages of what looked like gibberish — their copies of the symmetric keys needed to communicate with their home government. While this was a secure method, it was inefficient.
To make encrypted communications easier to accomplish, researchers in the late 1970s developed a system in which each person creates two mathematically interrelated keys (really just very large numbers), usually called a “public key” and a “private key.” In today’s end-to-end services that implement this type of system, a key pair is usually generated for each device, even when several devices belong to the same person. In this system, called “public key” or “asymmetric” cryptography, the public half of the pair is usually distributed as widely as possible and is used, along with an agreed-upon method of scrambling the data being transmitted, to encrypt messages destined for its owner. The term “public,” however, does not mean the key must be broadcast widely; it means only that possession of the public half does not provide access to an encrypted message.
The private key is kept as secret as possible and is used to decrypt messages received by the owner. Typically, the private key is safely stored on each device. One might assume today’s spies no longer need to carry those long pages of secret keys, since they can simply use their private keys on their devices. Or, if their device were somehow lost or compromised, they could generate a new public and private key pair and use their government’s public key to re-establish communications.
Another way of understanding public key encryption is to introduce two friends well known in the field of cryptography: Alice and Bob. Alice and Bob are two people who want to communicate securely. They both use their computers to generate a key pair and exchange their public keys with each other using whatever method they have available. When Alice wants to send Bob a message, she takes Bob’s public key and her message and applies a certain mathematical operation to them, resulting in what looks like a random string of characters. When Bob receives Alice’s message, he combines his private key with the message using another mathematical operation to reveal Alice’s original message.
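The Alice-and-Bob exchange above can be sketched in a few lines of code. This is a toy illustration using textbook RSA with tiny, insecure numbers; real services use keys thousands of bits long, padding schemes, and entirely different protocols, but the public/private mechanics are the same: anyone can encrypt with Bob’s public key, and only Bob’s private key can decrypt.

```python
# Toy public key encryption (textbook RSA, insecurely small numbers --
# for illustration only, NOT usable cryptography).

p, q = 61, 53          # Bob's secret primes
n = p * q              # 3233, the public modulus shared by both keys
phi = (p - 1) * (q - 1)
e = 17                 # public exponent; (n, e) is Bob's PUBLIC key
d = pow(e, -1, phi)    # private exponent; (n, d) is Bob's PRIVATE key

message = 65                       # Alice's message, encoded as a number
ciphertext = pow(message, e, n)    # Alice encrypts with Bob's public key
recovered = pow(ciphertext, d, n)  # Bob decrypts with his private key

assert recovered == message
print(ciphertext)  # looks like a meaningless number to an eavesdropper
```

Note that the modular-inverse form of `pow()` requires Python 3.8 or later; the interrelation of the two keys is exactly the “mathematical operation” the paragraph above describes.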
This type of encryption surrounds us every day: online commerce, government, banking, social media, journalism, and other fields all use public key encryption for security.
While advancements continue to make these systems faster and more secure, the math and implementation of public key encryption were more or less solved problems by 2019. The true difficulty in secure online communications today is being sure that the public key you hold actually belongs to the person you are trying to reach. In other words, public key cryptography has solved the problem of ensuring that a message can be read only by the holder of the matching private key, but we still need to make sure that the holder is the intended recipient. This authentication problem remains the subject of much discussion and innovation, as so much of our day-to-day lives has moved online.
The problem of how to make sure everyone has the right public keys for everyone else has two major solutions today. Either you trust some third party to keep a database of everyone’s public keys and hand them out properly when two people want to communicate, or you don’t trust anyone and you have to find another way to confirm a person’s public key, such as meeting them in person and comparing notes. In practice, most end-to-end encrypted services today rely on a little of both methods.
Most people don’t think deeply about encryption when they start a new conversation using one of these services. They trust the service provider to connect them securely with the correct person and only the correct person. That trust is generally good enough for most uses.
For other circumstances where people can’t trust the service unequivocally, such as journalists trying to protect sources, human rights advocates, or members of persecuted groups in some countries, some services provide another alternative. People can verify that they have the correct keys by comparing a certain string of letters and numbers derived from the public keys of the participants.
This code is available within end-to-end encrypted apps, though it is called different things in different services, such as a “safety number” in Signal and a “security code” in WhatsApp. It is the way to be sure that the person on the other end is who they say they are. Two people can verify that they have accurate safety numbers for each other by sitting down together with their devices, or by talking over another trusted form of communication like a phone call for people who know each other’s voices, or a different encrypted service. Most people don’t bother verifying these numbers because they aren’t concerned about this kind of attack.
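A simplified analogue of such a code can be sketched as follows. This is an assumption-laden toy, not Signal’s actual algorithm (which derives safety numbers through iterated hashing of identity keys and user identifiers); it only shows the core idea that both devices can independently compute the same short, human-comparable string from the conversation’s public keys.

```python
import hashlib

def safety_code(public_key_a: bytes, public_key_b: bytes) -> str:
    """Derive a short, human-comparable code from both public keys.

    Toy analogue of a 'safety number': the keys are sorted so both
    devices compute the identical code, hashed, and rendered as groups
    of digits that are easy to read aloud or compare on screen.
    """
    digest = hashlib.sha256(b"".join(sorted([public_key_a, public_key_b]))).digest()
    digits = "".join(str(byte % 10) for byte in digest[:15])
    return " ".join(digits[i:i + 5] for i in range(0, 15, 5))

# Each app computes the code locally; a match means both sides hold the
# same pair of public keys, so no third key has been substituted.
code_on_alice = safety_code(b"alice-public-key", b"bob-public-key")
code_on_bob = safety_code(b"bob-public-key", b"alice-public-key")
assert code_on_alice == code_on_bob
```

Because the code is derived from the keys themselves, any substitution of a key by the service (or anyone else) produces a mismatch that two careful users can detect.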
Sometimes a person’s public and private keys change, most often when they get a new phone. When that happens, or when a new person is added to a group chat, the service sends the new public key to the devices of everyone in a conversation with that person. The app then displays a message to everyone that the key has changed, or that a new participant has joined the chat. This also gives people who don’t feel they can trust the service the opportunity to compare the authentication codes again and reaffirm the security of the conversation.
Other services, such as Apple’s iMessage, instead associate a cluster of keys (one key per device owned by a person) with a user identity. If a new device is attached to an account, all of the other devices on the account are notified.
How Ghost Keys Would Work
The Levy and Robinson “ghost keys” proposal would require end-to-end service providers to undermine the mechanisms described in the previous section to enable government officials to gain access to the encrypted content. Under the proposal, a government actor — in this case GCHQ — would require encrypted messaging services to add a new key owned by law enforcement to a target’s conversations without informing the participants in the conversation.
This could be accomplished by surreptitiously converting a two-person chat into a group chat, or by covertly adding a key to a user’s list of “devices,” depending on how the system is designed. Once that key was inserted, the participants’ devices would begin encrypting the conversation to the government’s ghost key as well, giving law enforcement direct access to the decrypted messages.
In their proposal, Levy and Robinson assert that the ghost keys approach would not “touch” encryption. That claim is simply not true by any normal definition of “encryption.” While the proposed method might not always involve changing the fundamental encryption algorithms (though, in reality, it would require exactly that for services based on the Signal Protocol, such as Signal and WhatsApp, which use substantially different algorithms for two-party chats and group chats), it would require “touching” and modifying the encryption keys. The processes of key distribution and authentication, and the keys themselves, are integral pieces of the entire encryption system. Weakening them has a similar impact on security as undermining the algorithm itself.
From a purely technical point of view, a ghost keys requirement would most likely require messaging services to take two independent actions, both of which would harm the security of the overall system. A service provider could fulfill a ghost keys requirement with any of several designs, but the following two-step process covers the most likely arrangements, and the alternatives do not differ substantively in their security impact.
- First, every service would need to make changes to the software distributed to users’ devices. The software would have to be modified to enable the provider to suppress the usual warnings about parties added to a chat or the existence of a new device that the app would otherwise have displayed when the government’s keys were added. The software also would need to be modified to suppress all of the user interface indications that normally distinguish one-to-one chats from group chats where appropriate (i.e. a “group chat” between two people and a ghost should look like a one-on-one chat, but an existing group chat that has a ghost added should not).
- The second large-scale technical change that services would have to undertake would be in their own server software. Providers would need to modify the software for their services to create a mechanism that they could activate on particular devices in response to government demands. Specifically, they would need to create a mechanism that would enable them to instruct the apps belonging to the government’s targets to silently add a key upon demand.
The Security Threats of Ghost Keys
Forcing providers to modify their software to implement a ghost key system would seriously damage the overall trust that people have in end-to-end encrypted communications. Most of these providers have advertised and differentiated themselves specifically as providing secure communications services, explicitly noting that the companies do not have access to the content of messages thanks to end-to-end encryption. That differentiation, particularly in the increasingly privacy-focused marketplace of 2019, is exceedingly valuable and would be significantly undermined by subjecting providers to a ghost keys-style order.
This kind of functionality also presents serious security risks. Any new code can include unanticipated vulnerabilities, and the required software changes create the risk that an attacker who gained access to the mechanism could manipulate the service and silently insert keys into anyone’s conversation, rendering the encryption moot. Deliberately introducing potential vulnerabilities like this can only make software less secure.
There are other practical problems involved with the distribution of such a modified app. If people using end-to-end encrypted software to coordinate criminal or terrorist activity (the government’s actual targets) know that service providers have been forced to implement a ghost keys solution as of a certain date, it will be relatively easy for them to simply refuse to upgrade their version of the app beyond that date. By doing so, they guarantee that their devices are not susceptible to the ghost key attack. Ironically, in such a situation, it will be only the rest of the well-meaning users of the service who would be subjected to the reduced digital security caused by the ghost key modification.
As explained further below, there is also a risk that general users would lose confidence in the security offered by service providers and refuse to install updates, so they would fail to benefit from future bug fixes and other security updates that are actually designed to improve cybersecurity.
Governments could attempt to force all devices to upgrade to the latest version of apps. One method would be to demand that the app store vendors (e.g. Apple, Google, and Amazon, among others) force upgrades of all end-to-end services’ apps. Alternatively, the government could compel the end-to-end messaging providers to modify their server software so that it would refuse to work with prior, uncompromised, versions of their own apps. Both of those solutions are heavy-handed, however, and would seriously undermine user trust.
More specifically, no matter how the government sought to implement a ghost proposal, it would cause considerable damage to the overall trust of users in the system of software updates. There is already considerable resistance by some people to updating software regularly, for a variety of reasons ranging from inconvenience to dislike of changes to interfaces. This resistance leads directly to global cybersecurity challenges as users continue to run versions of software that have known vulnerabilities. One example is the 2017 “WannaCry” ransomware attack, which led to large-scale shutdowns of businesses and government entities such as the U.K.’s National Health Service.
Installing updates is critical to patching vulnerabilities and to protecting overall cybersecurity and the health of the global internet. Further resistance to updating software as a result of what inevitably would be seen as government intrusion would be particularly unfortunate when the cybersecurity community has finally been making progress in encouraging individuals and enterprises to take updating seriously.
The second step of implementing a ghost key system — in which providers would need to create a mechanism that they could activate on particular devices in response to government demands — would create additional security threats. Each service provider would have to create their own mechanisms to make sure that all devices only comply with authentic commands to silently add keys, in addition to the verification that the app normally undertakes.
These new mechanisms, both in the providers’ servers and in the on-device applications, would immediately become targets of intense interest to hackers. These bad actors could range from criminals collecting and selling people’s information, to nation-state hacking groups conducting espionage, to malicious insiders spying on exes. And to the extent that politicians, political parties, and business leaders have taken to using these services, they would be ripe targets for the sort of hacking-based election interference that has become prevalent in recent years, as well as for the abuse of eavesdropping mechanisms that we have seen in the past.
Public key cryptography is ubiquitous today, forming the backbone of end-to-end encrypted chat programs and also protecting online commerce, social media, and cybersecurity. It is figuratively the glue that holds the digital world together.
While everyone can now trust that the apps they install on their phones will deliver their communications securely, authenticating the identity of the person you’re talking to is still an area that is challenging to get right, and one which is vitally important to the safety of a wide range of at-risk individuals. The “ghost keys” proposal, while it would not directly tamper with the encryption algorithms themselves, would still undermine those authentication mechanisms. Although technically different from a mandate that tech companies create actual encryption back doors, the ghost key proposal poses very similar threats to digital security and individual rights.