This week, Twitter lobbed the latest volley in what has been both a fascinating and encouraging repositioning of technology companies vis-à-vis the U.S. government—a pivot that began last summer, in the wake of the initial startling revelations about the National Security Agency’s vast surveillance apparatus. On October 7, the company sued the government in federal court, arguing that the First Amendment prohibits the broad gag orders that, the Department of Justice contends, restrict what the company can say publicly about the national-security requests it receives—as well as those it doesn’t receive. As Twitter V.P. Ben Lee put it, “It’s our belief that we are entitled under the First Amendment to respond to our users’ concerns and to the statements of U.S. government officials by providing information about the scope of U.S. government surveillance—including what types of legal process have not been received. We should be free to do this in a meaningful way, rather than in broad, inexact ranges.”
The lawsuit has been praised by civil-liberties groups, like the American Civil Liberties Union and the Electronic Frontier Foundation, which for years have challenged the government’s use of gag orders to silence the recipients of surveillance requests in national-security investigations. (The ACLU won rulings against the “national security letter” gag-order provisions in 2005 and 2008 in the Second Circuit, and Wednesday the Ninth Circuit heard EFF’s challenge to the same provisions.) In one sense, then, Twitter’s new suit is the latest in a line of cases challenging national-security-related gag orders under the First Amendment—a suit plainly special because of the plaintiff (a high-profile technology company, rather than an anonymous recipient of a surveillance request), but not entirely original as a species of litigation.
In another sense, though—and through a close reading of its complaint—Twitter’s suit is seeking to establish something quite different from the NSL cases: a constitutional right to truthfully inform its customers and the broader public that it has not received particular types of surveillance requests. In other words, Twitter is seeking judicial endorsement of its right to publish a “warrant canary.” What’s a warrant canary? As EFF explains, a warrant canary “is a colloquial term for a regularly published statement that a service provider has not received legal process that it would be prohibited from saying it had received. Once a service provider does receive legal process, the speech prohibition goes into place, and the canary statement is removed,” thereby informing the public that the process has been received.
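The canary mechanism EFF describes can be sketched in a few lines of code. The function below is purely illustrative (no provider is known to use this exact logic, and the name and freshness threshold are my own assumptions); the point it captures is that the signal is the canary’s absence or staleness, not its presence.

```python
from datetime import date

def canary_status(last_published, today, max_age_days=190):
    """Interpret a warrant canary's publication history (illustrative).

    A canary that stops being republished is the signal: once a provider
    receives legal process, the gag order prevents it from saying so,
    so it simply stops reissuing the canary statement.
    """
    if last_published is None:
        return "triggered"  # canary removed: process may have been received
    if (today - last_published).days > max_age_days:
        return "stale"      # overdue for republication: treat with suspicion
    return "alive"          # recently republished: no process received

# A canary last republished five months ago is still "alive";
# one that vanishes entirely is "triggered".
status = canary_status(date(2014, 5, 1), date(2014, 10, 1))
```

A watching client says nothing when the canary is alive; it is the transition to “triggered” that conveys the information the provider itself may no longer speak.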
As I explain in this post, litigation surrounding the constitutionality of warrant canaries was inevitable once companies began to issue them—but Twitter’s suit has upended the posture of judicial review over the First Amendment issues in play in a very interesting way. In the expected warrant-canary case, a court would be faced with the question of whether the government can compel a lie—whether it can force a company to continue providing the public and its customers with information that has become factually incorrect (in order to, say, protect a particular ongoing national-security investigation). But Twitter’s suit presents a different question: whether a company can truthfully disclose to the public that it has not received a particular kind of request that, when served at some point in the future, would be accompanied by a gag order.
Before I dig further into the significance of Twitter’s latest move, a bit of background is in order.
1. Background to Warrant Canaries and Twitter’s Dilemma
In one of the more damning slides published by The Guardian and The Washington Post in their reporting about the NSA’s PRISM program in early June 2013, the world learned that over a roughly five-year period, America’s largest technology companies—including Microsoft, Yahoo, Google, Facebook, AOL, and Apple (but not Twitter)—had become participants in a vast surveillance program operating under the FISA Amendments Act (“FAA”) that gave the NSA access (however “incidentally”) to the contents of Americans’ and others’ private data housed on those services. Some of those companies tried to push back almost immediately: Mark Zuckerberg posted to his personal Facebook page that he hadn’t even heard of PRISM until he read the name of the program in the newspaper; Google co-founder Larry Page said the same thing. Those claims turned out to be half-truths—the companies do participate in PRISM; their executives simply hadn’t known the program by that name—but their animating sentiment has had staying power. The companies’ public rejection of NSA spying grew more confident over time. When the companies learned that the NSA was not just relying on their cooperation but was actively pilfering their data overseas, the companies decided to get tough. (As one Google engineer memorably wrote on his social-media page last fall, “Fuck these guys.”)
Thus began a dance that continues to this day. Caught red-handed—if not in reality, at least in perception—as willing handmaidens to the emerging and alarming U.S. surveillance state, and under threats to their bottom lines from the worldwide revulsion over their cooperation with the NSA’s activities, many Silicon Valley firms have begun to take their users’ privacy seriously. They have jointly lobbied for surveillance reforms on Capitol Hill. Google implemented encryption protocols that protect emails in transit, both internally and when sent to users of other email services—and it blogged about it. Yahoo followed suit. Microsoft challenged an order issued under the Stored Communications Act for a foreign user’s emails stored in Ireland—and General Counsel Brad Smith took to The Wall Street Journal’s Op-Ed page to rally support for and attention to its efforts to check government surveillance on behalf of its customers. Yahoo recently pushed for declassification of its previously secret 2008 challenge, in the Foreign Intelligence Surveillance Court and Foreign Intelligence Surveillance Court of Review, to a government directive issued under the Protect America Act (the FAA’s predecessor statute). The Sunnyvale, California–based company also published 1,500 pages of litigation documents along with a promise to its users that it would “continue to contest requests and laws that we consider unlawful, unclear, or overbroad.” Apple closed a major security loophole in its mobile-phone software—with much fanfare (and concomitant apoplexy from law-enforcement officials). And in announcing its First Amendment suit, Twitter couldn’t resist hashtagging its own efforts, titling the blog post in which it announced the suit “Taking the fight for #transparency to court.”
This—technology companies’ efforts to compete on privacy by enthusiastically courting the public with demonstrations of their willingness to stand up to the government and inform the citizenry about the government’s actions—is undoubtedly a positive development. As my former ACLU colleague Ben Wizner recently told Guernica:
[O]ne of the great contributions that Snowden has made is to make some very powerful tech companies adverse to governments. When these companies and government work hand in glove, in secret, that is a major threat to liberty. But these tech companies, which are amassing some of the biggest fortunes in the history of the world, are among the few entities that have the power and the clout and the standing to really take on the security state.
One of the important ways in which technology companies have begun to assert themselves in this regard has been through so-called “transparency reports.” These reports provide the public with limited information about the kinds of law-enforcement and national-security requests the companies receive in a certain time period. The basic structure of the reports emerged from a series of lawsuits filed by several major companies (including Google, Yahoo, and Facebook—but not Twitter) in the Foreign Intelligence Surveillance Court in the fall of 2013. Prior to the lawsuits, the government had authorized the companies to publicize only the aggregate totals of surveillance requests received in a given reporting period—whether from state and local law enforcement for run-of-the-mill crimes, or from the FBI in national-security investigations. The companies sued, claiming that the government’s unwillingness to permit publication of more specific numbers amounted to an unconstitutional prior restraint under the First Amendment.
In January 2014, the companies and the government came to an agreement that permitted the companies to report the number of national-security requests they received in greater detail. Still, there is reason to question how useful the new reporting structure actually is. The companies can report the number of national-security requests only in bands of 1,000 (0–999, 1,000–1,999, and so on). They can issue reports only every six months, and with a six-month publication delay. Finally, companies can say nothing at all for two years about any order that is the first of its kind “served on a company for a platform, product, or service (whether developed or acquired)”—so-called “New Capability Orders.”
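The banded scheme is easy to state precisely. The sketch below is my own illustration of its structure (the function name is hypothetical; the fixed band width of 1,000 is taken from the agreement described above):

```python
def report_band(count, width=1000):
    """Map an exact request count to the only range a provider may
    publish under the banded-reporting scheme (illustrative sketch)."""
    low = (count // width) * width
    return f"{low}-{low + width - 1}"

# The crux of the objection discussed below: a provider with zero
# requests and one with 999 must report the identical band, so
# "we received none" cannot be said.
band_zero = report_band(0)    # "0-999"
band_many = report_band(999)  # "0-999"
```

The collision at the bottom of the first band is exactly what makes the scheme lossy in the way that matters most to a low-request provider.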
As mentioned, Twitter did not sign the January 2014 agreement (the “DAG Letter”)—and its newly filed complaint gives us a good idea why. Twitter repeatedly emphasizes that its central objection to the reporting framework to which its competitors agreed is that, “since the permitted ranges begin with zero, service providers who have never received an NSL or FISA order apparently are prohibited from reporting that fact.” Complaint ¶ 27; see id. ¶¶ 5 (“In fact, the U.S. government has taken the position that service providers like Twitter are even prohibited from saying that they have received zero national security requests, or zero of a particular type of national security request.”), 6 (“Twitter is entitled under the First Amendment to respond to its users’ concerns and to the statements of U.S. government officials by providing more complete information about the limited scope of U.S. government surveillance of Twitter user accounts—including what types of legal process have not been received by Twitter—and the DAG Letter is not a lawful means by which Defendants can seek to enforce their unconstitutional speech restrictions.”); see also id. ¶¶ 30; 39; 43; 47; 49. The company’s blog post likewise highlights the “zero” element.
This suggests that while Twitter is certainly arguing that the government’s restriction of surveillance-request reporting to large bands of 1,000 cannot be sustained under First Amendment strict scrutiny, the company’s primary objective is to win the right to say what sorts of orders it has not received. Why would Twitter care so much about “zero”? One likely answer is that because almost all Twitter posts are public (unlike the contents of email services such as Gmail and Yahoo Mail), the government has little national-security interest in the data stored on the company’s servers. Given the paucity of surveillance requests it receives relative to its Silicon Valley brethren, Twitter would like to distinguish itself by informing its users that it stands alone at “zero,” not merely alongside other companies that have received “0–999” requests.
2. Twitter’s Legal Strategy
Accordingly, in the lawsuit, Twitter is emphasizing its right to tell the public that it has not received any surveillance requests of a given type in a given period. This is, on one view, quite curious. If Twitter has not received a certain kind of request, surely it is not under any legal obligation to keep that fact secret in the first place. Indeed, this is Twitter’s position in the litigation. See Complaint ¶ 4 (“Defendants provided no authority for their ability to establish the preapproved disclosure formats or to impose those speech restrictions on other service providers that were not party to the lawsuit or settlement.”). On the “zero” question, Twitter’s suit seems like a slam dunk. So why not just publish these facts? Twitter’s complaint explains that the reason it did not simply release its transparency report to the public, zeros and all, is that the government, in private discussions with Twitter’s lawyers, has maintained that even though Twitter did not sign the DAG Letter, it is bound by its terms as a “similarly situated” company. Yes, you read that correctly—the government has taken the position that Twitter is gagged by a legal settlement negotiated with distinct parties, in a lawsuit in which Twitter did not participate, concerning types of legal process that Twitter has never received. Interesting theory, to say the least—and a plainly unlawful prior restraint.
Notably, other companies did not take Twitter’s approach. As far back as 2010, the cloud-storage service rsync.net published a warrant canary stating that it had not received any warrants of any kind for user data, without first asking for the government’s permission. Even after the Snowden disclosures and the DAG Letter settlement, companies like SpiderOak and Tumblr posted warrant canaries on their own sites. Apple may or may not have posted (and then removed) a warrant canary as well. These canaries were published in the spirit of the very first warrant canaries: signs posted by librarians informing the public that the FBI had not come knocking—and urging patrons to watch closely for the removal of the signs.
Judging from its complaint, Twitter is not exactly thrilled about these smaller companies publishing warrant canaries on their own initiatives while it played by the Justice Department’s rules over the past nine months. In one of the document’s more revealing passages, Twitter writes: “Notwithstanding the fact that the DAG Letter purportedly prohibits a provider from disclosing that it has received ‘zero’ NSLs or FISA orders, or ‘zero’ of a certain kind of FISA order, subsequent to January 27, 2014, certain communications providers have publicly disclosed either that they have never received any FISA orders or NSLs, or any of a certain kind of FISA order.” Complaint ¶ 30. Twitter’s suit can be seen as a way for it to assure itself that it has the right to say, “Us, too.”
But here’s the really interesting thing about Twitter’s decision: Most warrant-canary observers, myself included, had anticipated that litigation over canaries would be defensive in nature, and would involve the First Amendment question of whether the government can compel a lie. That is: (1) a company publishes a canary for a particular type of surveillance request; (2) the government serves that type of surveillance request on the company; (3) the government seeks to prohibit the removal of the canary from the company’s site; (4) the company sues on First Amendment grounds, arguing that the government cannot compel it to lie to the public (i.e., to claim that it has not received a type of request when, in fact, it has). By submitting its canary to the government for approval, and suing upon the government’s rejection (but before publication), Twitter has effectively turned the tables on the government by engaging in offensive litigation on its own terms.
That decision has some disadvantages. The First Amendment question of whether, under strict scrutiny, the government can compel a lie is a fascinating one, and it is distinct from the question of whether, under the same standard, the government can require silence. Compelled speech has only rarely been upheld in our constitutional history, but compelled lies are anathema. As Judge Robert Sack of the Second Circuit Court of Appeals has written, “it is possible that in some circumstances not before us today, government compulsion to speak (or indeed to act) may well be more strictly limited than government compulsion not to speak (or act).” Jackler v. Byrne, 658 F.3d 225, 246 (2d Cir. 2011) (Sack, J., concurring). Indeed, Judge Sack invoked both the Soviet purge trials of the 1930s and the trial of Galileo Galilei to accentuate the valence of the compelled-lies quandary:
The Soviet purge trials of the 1930’s remain notorious in large measure because they were marked by confessions made under pressure of intensive torture and intimidation. And it seems unlikely that Galileo’s dispute with Church authorities about Copernican theory . . . would be as infamous had he been forbidden to assert—as he apparently believed—that the earth moves about the sun, rather than forced to state publicly and contrary to his conviction that the sun revolves around the earth.
See id. at 246 (quotation marks and alterations removed) (citations omitted). By taking the approach it has, Twitter has seemingly forfeited what might have been an extraordinarily effective arrow in its quiver. (Then again, perhaps not: Twitter may be able to persuasively argue that by requiring it to say it has received “0–999” requests rather than “0,” the government is forcing Twitter to, if not lie, then significantly mislead the public about a truthful matter of exceptional public concern. After all, it seems clear that the public would understand there to be a vast difference between a company that has never received an order of a certain type and a company that has received 999 of them.)
At the same time, though, Twitter’s move is fairly clever as a litigation strategy. The biggest problem for any company that outright published a canary and later faced government pressure to keep it in place would be litigating blind: the government—which would likely force litigation over the issue only when its factual case for secrecy was relatively strong—would invoke the risks to ongoing investigations that removal of the canary might pose. Not knowing the underlying facts would, as it often has in national-security–gag-order litigation, put litigants at a crucial disadvantage—and perhaps make some bad law in the process. (Despite that dynamic, litigants have had some success in convincing courts to place First Amendment–based limits on such gag orders.)
Because Twitter’s suit does not present the paradigmatic warrant-canary question under the First Amendment—“Can the government force a company to continue publishing a false statement?”—it won’t ultimately resolve it. But warrant-canary watchers should keep a close eye on the Twitter litigation, for the central question the suit does present—“Can a company like Twitter truthfully disclose to the public that it has never received a particular kind of request?”—will undoubtedly loom large if (and let’s be honest—when) the government ever tries to bring a warrant canary back from the dead.