Years ago, at a conference of national security types, I asked a pointed question of two former senior intelligence officials about encryption, and whether it was good for the technology and national security communities to be so at odds over it.

One lamented the perceived hostility of tech toward government, arguing for constitutional duties we all share. The second was circumspect, capturing my attention and my ambivalence. “Every day I struggle with this balance, and every day I feel differently about it,” he said.

I’ve been revisiting this moment these last few weeks, watching as Google employees debate whether and how the Defense Department ought to use what is becoming the century’s most potent technology: artificial intelligence (AI). Now, as a former national security official myself, I feel compelled to offer a few words, not only to the authors of the Google letter protesting the Pentagon’s program, but also to the technologists and policy wonks who are watching this debate unfold with the same weary concern.

*****

For several years, two communities fundamental to America’s strength, science and technology experts in one corner and our civil servants in another, have been mired in a slow-moving crisis. Beginning with revelations about the stealthy expansion of drone strikes against terrorists, and then the Snowden disclosures, Americans, newly troubled that breathtaking innovation may have darker implications, are increasingly asking big questions about ethics and technology.

Facebook is the most recent actor to be engulfed in controversy, but it is only the latest example of this continuing crisis, now fanned by fears that not only can privacy be invaded, but our political preferences can be skewed by “computational propaganda” or other Russian active measures. Yet, amidst Washington’s focus on Facebook, a much less noticed event, involving Google’s work with the Defense Department (DoD), may say more about where this debate is headed.

Weeks ago, while Facebook CEO Mark Zuckerberg was preparing for congressional testimony, 3,100 Google employees penned a letter to the CEO of their parent company, Alphabet. The letter urged Alphabet to cease involvement in an obscure but publicly acknowledged Pentagon AI program called Project Maven.

Maven, its opaque name aside, is largely happening in the open, reportedly focusing on unlocking AI’s capabilities to manage backlogs of DoD intelligence. The employees’ letter seizes on Google’s reported involvement in the project, and specifically the analysis of “Wide Area Motion Imagery,” and the possibility that AI will be used for “military surveillance,” which could lead to “potential lethal outcomes.” The letter concludes by calling for the program’s cancellation.

It was a striking letter for a few reasons. That more than 3,000 employees of any organization would gather to express such a sharp view testifies to how strongly this issue tugs at personal ethics and their relationship to security. Enlisting a computer to aid in decisions for which humans bear some moral responsibility is an ethical conundrum all of us will increasingly face.

It also shows how fiercely Google employees guard their freedom to speak up, even as the company’s corporate culture evolves from the playful axiom, “don’t be evil,” to something a little more conservative. It was also a refreshing departure from what is increasingly seen as an amoral streak within Silicon Valley.

The security stakes are also high. Though I’m far from an expert on AI, others, like Paul Scharre of the Center for a New American Security, have written extensively on the possibility that as AI transforms society, it will also change the character of war, possibly in some very scary ways. To me, these stakes alone oblige us to consider many views, especially from those driving this unprecedented technological change.

As I read the letter, however, I also felt compelled to offer some observations in defense of this project, and how an ethical person might see their way to accepting it. I don’t offer these views as an expert on AI, but instead as someone who has worked in the security and intelligence space in government, who has valued technology’s power to transform how government serves society, but who also has come of age amidst government actions that were either misguided and tragic, like the invasion of Iraq or the use of torture after 9/11; or controversial, and perhaps excessive, such as the expansion of surveillance. (Full disclosure: Though I was never involved directly with Project Maven, I used to work at and consult with the Office of the Secretary of Defense, which oversees it.)

Intelligence remains a tricky, subjective business: even the selection of information in support of analysis can introduce bias. However imprecise, though, it is not going away. If that’s the case, and if AI could correct, or at least reduce, human error, are we not obliged to use it? Imagine how correcting a human error might expose a drone strike’s potential for blowback and recruitment, or simply reduce civilian casualties, as AI expert Larry Lewis and others have asked. Having once helped steer U.S. approaches toward counter-radicalization, I might have pushed for this. Imagine as well how human analysis of expected Soviet actions during the Cuban Missile Crisis might have failed. By reducing human error or making up for human limits, AI can help avoid unwanted consequences, or even near-global catastrophes.

I’d also ask the authors of the letter to consider the question of government efficiency. Since 9/11, the national security community within government has ballooned substantially. Its budget demands have also continued to grow, and they will keep growing unless technology introduces dramatic increases in efficiency. So the same arguments being made in finance and manufacturing about the efficiency gains of AI hold doubly true for intelligence and national security, because we could reinvest those savings in other areas where they are sorely needed.

Take the cybersecurity mission, a military function where DoD is spending hundreds of millions, if not billions, of dollars while online insecurity remains rampant. At the RSA cybersecurity conference in San Francisco last week, a DoD official estimated that most of the intrusions it faces exploit known vulnerabilities, a startling admission of human failure to upgrade software. If AI can identify and close these gaps, why shouldn’t we use it, instead of spending hundreds of millions more? Indeed, there is already a program piloted through DARPA that could do exactly that. I’d also go a step further. As AI supplants critical but exhaustingly routine defense functions, we could seize the moment and redirect the time and resources saved toward other priorities critical to security and prosperity, like diplomacy, education, and social services.

Finally, when it comes to AI or any other emerging technology, it is ultimately leaders and their decisions that will shape how the government uses it. Advocates for technology’s ethical use should focus their attention here, as the letter’s authors have done, but the agenda needs to widen, and perhaps become more flexible. On the issue that appears to have most motivated the Google letter’s authors, terrorism, we simply have not created any kind of broad-based coalition to force serious questions about the authorities given to the President, and, by extension, the military following September 11, 2001. Only recently, with bipartisan legislation that aims to revise and limit those powers, are we grappling with the possibility that these long wars might need to be wound down, and some believe the new bill doesn’t go nearly far enough.

We will likely continue to use the U.S. military all over the world, whether it is supported by AI-processed intelligence or not. A more flexible answer may be not to ban the government’s use of AI entirely, but to press for a robust debate about the ongoing efficacy of today’s wars. The imperative of addressing the issue at its source will hold for other ethical dilemmas involving AI as well. More broadly, however, war itself is not going anywhere. We desperately need to bring technologists not only into the debate, but also into the process of creating the policy and law that will shape this future with expertise and ethics.

Something that gives me mild hope is that ours is the only government robustly pursuing AI in which we can openly debate its ethics and security implications. Already, Google is responding to this letter. Pentagon leaders are insisting that a human be kept in the loop of all AI-related actions.

Meanwhile, Russia, though supposedly lagging behind the U.S., is barreling forward with AI-related investments in its military, including lethal uses. It has shown no principled or democratic leadership in other security affairs, and AI will be no different. China, which some analysts fear is surpassing the United States in AI-related research (fueled partly by Chinese companies poaching experts from the American private sector), has similarly troubling use cases in mind, one of which could be its new citizen scoring system (like a credit rating, but more nefarious). Is current Chinese leadership equipped to hear and address concerns about its use of AI in these areas? I’m skeptical. In the United States, we have a much greater ability to shape these uses, provided engagement is sustained and we remain open to the compromise that democracy sometimes requires. At the same time, the actions of these other governments should not blind us to the impact of a future with AI that could take shape without our active involvement.

One thing is for sure: We might be noodling on Facebook for the time being, but our future will likely be marked by more discussions about Google, AI, and whether we are prepared for the transformations that may yet ensue.

Image: Justin Sullivan/Getty