
AI’s Hidden National Security Cost

AI tools are reshaping how Americans learn, work, and solve problems. These tools exist on a spectrum, from the machine learning and natural language processing that powers Siri in iPhones, to major defense initiatives like Project Maven that rapidly generates lists of targets by synthesizing intelligence and a myriad of inputs. But a certain subset of these tools – generative AI, like chatbots and large language models (LLMs), advertised as research and writing supports—come with a particular risk that matters deeply for the U.S. national security workforce whose decisions can carry life-and-death consequences.  Regular use of generative AI programs may erode many of the very cognitive skills U.S. security depends on. Unless policymakers act, the same tools marketed as efficiency boosters could undermine national security professionals’ ability to think critically, respond rapidly, and outmaneuver adversaries. Considering how risk averse the national security space tends to be, this is a consequence worth acknowledging and addressing.

When I served as a U.S. Deputy Assistant Secretary of Defense (DASD) from 2021 to 2024, a typical day had me in as many as thirteen meetings between 8 am and 4 pm. Often, the only time I had to review the materials was as I walked to the meetings themselves, which covered everything from allocating millions of dollars of taxpayer money for training capabilities to drafting policy on professional military education. My effectiveness rested on my ability to rapidly apply my up-to-the-minute knowledge of my office’s priorities and our shifting relationships across the Department of Defense (DoD) to almost hopelessly large quantities of information in order to make sound recommendations. Put simply, critical and analytical thinking were the load-bearing walls of my job.

Like any tool, AI’s value is tied to how well it executes a given task, and in many cases AI is supposed to make tasks easier. From the teacher in the classroom to the doctor in the clinic to the pilot in the fighter jet, AI is sold as an enhancement for human efforts. But everything comes with a cost, and AI is no different. Some of its costs have already been estimated, like AI’s implications for the job market as well as the climate and water supply, but there is growing evidence that generative AI (GenAI) levies a silent cost on our own cognitive skills. At scale, this will have grave consequences for the U.S. national security workforce, which depends on (and which we collectively require to have) razor-sharp critical thinking skills.

Skills America Cannot Afford to Lose 

Educators have noticed a change in their students after GenAI entered the classroom. As one professor put it, “That moment, when you start to understand the power of clear thinking, is crucial. The trouble with generative AI is that it short-circuits that process entirely.” Said another, education teaches you “how to do something difficult, and maybe, along the way, you come to appreciate the process of learning. But the arrival of AI means that you can now bypass the process, and the difficulty, altogether.” As a result, America’s educators are on the fence about whether this technology, as it is used today, helps or hurts students.

Research validates this concern. A significant study from earlier this year found that GenAI use shifts the brain’s focus from gathering information, solving problems, and analyzing to verifying AI-generated information, integrating its responses, and stewarding the AI as it performs the task. This study further found that GenAI tools make it more challenging for “knowledge workers” to recognize when critical thinking is needed, “especially when the tasks are perceived to be less important, and when users trust and rely on GenAI tools.” The researchers argue that, while it may seem reasonable to offload critical thinking in “low-stakes” situations, this can translate to a lack of practice that, over time, degrades cognitive abilities. This creates “risks if high-stakes scenarios are the only opportunities available for exercising such abilities.” Another study (more controversial largely because it has yet to be peer reviewed) found that when people wrote with AI assistance, their brains were less connected across key regions than when they wrote on their own, suggesting the tool reduced natural creativity and idea generation. And one niche but nonetheless startling study found that “continuous exposure to AI” might “deskill” endoscopists.

These findings cast current requirements of the national security workforce in a new, more vulnerable light. One of the most important skills for intelligence analysts and operators, State Department officials, Pentagon employees, and other national security professionals is critical thinking. Analytical thinking and research skills cut across almost every job description in national security, be it in foreign affairs, intelligence, or logistics. These are also among the skills most frequently offloaded to GenAI, and therefore the skills that studies show are weakening from its use. Meanwhile, the Pentagon started incorporating various kinds of AI in 2018 and published an adoption strategy in 2023, and earlier this year OpenAI launched an initiative to incorporate ChatGPT and other tools across the federal government.

Those in the private sector are already familiar with these tools, as use of GenAI has become increasingly ubiquitous, doubling in the past two years. It is important to remember, as one essay points out, that “what matters is how many people are actually using [these tools], how long they are using them, and what they are using them for.” Almost 30% of American white-collar workers (those most likely to end up in public service roles) use AI either daily or weekly, and while the majority report using AI to generate ideas, only 26% say it has actually made them more creative and innovative. Some 44% of white-collar organizations have integrated AI in one way or another. Meanwhile, children and young adults will encounter AI at ever earlier stages of life, including in their childhood toys. AI will be with them on their education journey, too; an Executive Order in April of this year paved the way for AI incorporation in public school classrooms from kindergarten through 12th grade. Indeed, today’s rising high school seniors are the last class of students who will remember education before ChatGPT. By the time these students reach the workforce, AI use, with or without adequate guardrails, may be so thoroughly part of everyday life that it will be challenging to avoid, let alone do without. At AI’s rate of inclusion across U.S. education and the economy, tomorrow’s national security workforce will have encountered and used GenAI tools at every stage of life, from kindergarten through their first job.

There Is No Substitute for Brainpower in National Security Work

When I raised my right hand and swore in as DASD, I came to that role with about 25 years of experience to lean on. According to some predictions, that may be approximately the same amount of time it will take AI to achieve artificial general intelligence (AGI), loosely defined as a system capable of replicating a human brain’s ability to execute cognitive tasks and learn as it goes. Some argue AGI will be achieved even sooner. Given this reality, how do we want national security professionals to make decisions in the future? Do we want them simply to verify and steward the outputs of an AI-enabled machine? And how certain are we that shepherding an AI through prompt engineering will be sufficient for the problems we think we will face? The American people expect their government to keep them safe, and the national security workforce currently faces converging threats that include decoding China’s nuclear signals, managing Russian incursions into U.S. airspace, addressing the effects of climate change on the spread of infectious diseases, and securing the critical minerals that everything from defense equipment to consumer electronics to solar panels relies on. Undoubtedly, we will need public servants to maximize the advantages that AI offers in support of national defense. But will we have successfully structured and guided their use to minimize the downstream cognitive cost?

The task before us is to intentionally shape society’s engagement with AI, especially GenAI, from the schoolhouse to the workplace. Balancing the positive aspects of GenAI (faster knowledge retrieval, assisted drafting, training support, and always-available expertise) with the risks will be challenging. That balance should begin with a sober exploration and articulation of what those risks are and where they manifest. Generation Z is ahead of the curve when it comes to the risks GenAI poses to their education. They see AI as a tool that carries risks: Gen Z knows AI can affect how they think, and they want their schools to help them figure out how to use it well. As critical as it is to develop AI tools responsibly, it is equally important to seize the opportunity to proactively develop those who would use them, too. Skilling the workforce to ensure their use of AI meets current and future job requirements is sound strategic planning.

Guidelines for thoughtfully integrating AI tools, from the classroom to the boardroom, should support human needs without undermining human cognition. The first step is to provide a standardized AI literacy curriculum for the classroom, one that covers the terminology and history of AI as well as how to use it. The U.S. government seems to be moving in this general direction; however, the goal of AI literacy should not be, as the Trump administration defines it, securing America’s “global dominance in this technological revolution” so much as ensuring every American’s cognitive prowess while using these tools.

The second step, the other side of the literacy coin, is to better articulate AI’s value proposition so we can more quickly decide where AI tools should, and even more importantly shouldn’t, play a role. AI should not be seen as the technological equivalent of a flavor enhancer, appropriate for giving anything a little boost. A tool this powerful should be applied judiciously and with intent.

With use cases in hand, the third step may be the hardest one of all: identifying which skills we might be able to do without. In some respects, this happens naturally. The skills required of a census enumerator in 1950 didn’t include computer skills; jobs prior to the COVID-19 pandemic didn’t require much familiarity with Zoom. But when technology’s rapid advances force a shift, some skills fall by the wayside, for good and for ill. Just ask a recent computer science graduate, who is having a harder time finding a job than an art history major. This essay argues that critical thinking and the like are enduringly important to national security and must be protected, but there will be skills that are ancillary and therefore safe to offload to AI and other technological helpmeets. Proactively identifying which those are will help fit AI to purpose.

Fourth and finally, the capstone step rests on policymakers to develop AI governance that shepherds tool development and diffusion responsibly across sectors.

Marine Corps General James Mattis famously said, “The most important six inches on a battlefield is between your ears.” We can’t ever be sure what crises will pop up in the future, but it’s a good bet that critical thinking will never cease to be important. How we sharpen that skill while simultaneously providing advanced technological tools will do more to strengthen our national security than those tools alone.

Regardless of how advanced it becomes, technology will remain a tool. Let’s not forget the importance of also strengthening the hands—and minds—that wield it.
