Emerging on September 13 from the first in a planned series of closed-door listening sessions on AI, dubbed the AI Insight Forum, Senate Majority Leader Chuck Schumer reported that every single person in attendance, mostly CEOs of tech companies along with some civil society representatives, raised their hands when he asked whether “government is needed to play a role in regulating AI.” Lawmakers and witnesses also expressed support for AI regulation during hearings held this past week by the Senate Judiciary and Commerce committees. But the shape of such regulation remains elusive, with well-trodden themes and tensions on display.
A central thesis again on display at these hearings—often pushed by the leaders of the companies that have developed AI—is that excessive regulation will stifle innovation, a fear compounded by the perceived threat of China’s technological progress. The representative of the trade group Information Technology Industry Council warned that “overly broad and prescriptive regulation […] could undermine [the U.S.] leadership position and cede it to U.S. competitors, including authoritarian nations.” Microsoft Vice Chair and President Brad Smith told the committee that maintaining U.S. leadership in this field requires ensuring that “individual academics and entrepreneurs with a good idea can move forward and innovate and deploy models without huge barriers.” Smith and NVIDIA’s chief scientist assured lawmakers that their companies were working to identify and address risks as they deployed AI more broadly.
Lawmakers, too, worry about quashing technological advances, a concern that has inspired several bills and frameworks aimed at promoting AI innovation. But some senators were wary of taking a reactive approach to AI, with Senator Josh Hawley [R-MO] describing Congress’s failure to address the harms of social media as “nearly an unmitigated disaster.” AI is “fundamentally different” from social media, the tech executives claimed, because companies like Microsoft “not only have the capacity, but we have the will and we are applying that will to fix things in hours and days.” Hawley, at least, seemed unconvinced, noting that this approach merely corrects harms “after the fact” and essentially asks lawmakers to trust AI companies to correct their errors. Senator Richard Blumenthal [D-CT] similarly urged his colleagues “to learn from our experience with social media that if we let this horse get out of the barn…it will be even more difficult to contain.”
Boston University’s Professor Woodrow Hartzog, an expert on surveillance technology and AI, encouraged lawmakers to go beyond “half measures,” such as “post-deployment controls” that would not fully protect against the harms of AI. While addressing discrete issues such as bias is important, Hartzog advocated for establishing a “duty of loyalty” on the part of AI companies. He favored “creating strong bright-line rules for the development and deployment of AI systems.” For the “most dangerous designs and deployments”—such as emotion recognition, biometric surveillance in public spaces, predictive policing, and social scoring—Hartzog argued for outright prohibitions (similar constraints have been proposed in the European Parliament’s draft AI legislation).
During both hearings, lawmakers discussed the known risks of AI, including bias, privacy violations, scams, fraud, cyber-attacks, discrimination, and misinformation, and potential approaches to addressing and mitigating them. The National Institute of Standards and Technology (NIST) AI Risk Management Framework—voluntary guidance aimed at increasing the trustworthiness of AI technologies and fostering the responsible design, development, implementation, and evaluation of AI systems—drew extensive attention at the Commerce committee hearing. While some companies have adopted the framework, Victoria Espinel, CEO of the software industry trade group BSA, argued that requiring companies to adopt key practices from the framework, such as impact assessments and risk mitigation, is “essential” to bringing “clarity and predictability” to AI systems and ensuring responsible use. According to Senator Amy Klobuchar [D-MN], she and Senator John Thune [R-SD] are planning to introduce a bill to do just that, with the Commerce Department tasked with oversight.
Both witnesses and lawmakers placed a great deal of emphasis on transparency as a means of building trust in AI systems, as well as on the need to consider international standards. But Hartzog, while agreeing on the need for “meaningful notice and transparency,” argued that transparency by itself would not be sufficient to prevent or mitigate harms.
Lawmakers also considered how the United States can address the harms of manipulated media, including mis- and disinformation, deepfakes, and other AI-generated deceptions. Disclosure requirements, watermarks, and prohibitions on certain content were all discussed, with Senator Klobuchar highlighting a bill she subsequently introduced that would prohibit manipulated media of candidates in federal elections. Several senators noted, however, that such bans could run afoul of the First Amendment, for example by limiting the use of satire and parody. Sam Gregory, of the human rights and technology non-profit WITNESS, recommended a “privacy centered” approach to combating these harms, arguing that those using generative AI tools “should not be required to forfeit their right to privacy to adopt these emerging technologies.” He proposed technical solutions that would allow for the identification of AI-generated content through metadata without government tracking of the individuals creating content.
Issues of who should be regulated generated further discussion. BSA’s Espinel noted that risk mitigation requirements should be tailored to a company’s role as an AI developer or deployer, because the “two types of companies will have access to different types of information and will be able to take different actions to mitigate risks.” Microsoft’s Smith cited aviation as an illustrative example: if Boeing builds an airplane and sells it to United Airlines for commercial use, both Boeing and United must possess certain licenses, abide by specific regulations, and acquire requisite certifications.
The Commerce committee hearing highlighted the close linkage between data privacy and AI regulation. The committee’s chair, Senator Maria Cantwell [D-WA], noted that privacy regulation goes “hand in hand” with combating many harms caused by the collection or use of personal data by AI tools. As Senator John Hickenlooper [D-CO] explained, “AI trains on publicly available data, and this data can be collected from everyday consumers, everywhere, in all parts of their lives.” He argued that comprehensive data privacy rules would address “open questions about what rights people have to their own data and how it’s used” and would “empower consumers [and] creators” and thus “grow our modern AI-enabled economy.” Congress has been working on comprehensive privacy legislation to establish baseline privacy rights for consumers and limit companies’ collection, transfer, and processing of consumer data. Cantwell introduced the Consumer Online Privacy Rights Act in 2019, and the American Data Privacy and Protection Act (ADPPA) was introduced in 2022, but neither bill has been reintroduced yet this Congress.
Finally, many observers noted that tech CEOs dominated the invite list at the Senate AI Insight Forum. As Maya Wiley of The Leadership Conference on Civil and Human Rights, who participated in the forum, stated, there was a real “power differential in the room between those of us focused on people and companies focused on competition.” A similar imbalance has been on display in congressional AI hearings more broadly, fueling concerns about whether Congress is being too deferential to corporate power in deciding how to address AI.