Editor’s Note: This article was cross-posted with Tech Policy Press.
Last Wednesday, Senate Majority Leader Chuck Schumer (D-NY) unveiled his SAFE Innovation Framework, a set of policy objectives for an “all-hands-on-deck effort” to contend with artificial intelligence (AI). He called this a “moment of revolution” that will lead to “profound, and dramatic change,” and invoked experts who “predict that in just a few years the world could be wholly unrecognizable from the one we live in today.”
With all the hype following the release of generative AI systems such as OpenAI’s ChatGPT, it’s no surprise that U.S. policymakers are keen to generate shiny new legislative proposals. To his credit, Sen. Schumer is calling for a serious, systematic approach to set priorities and develop legislation that preserves what he calls “our north star – innovation.” But while the “Insight Forums” he proposes to convene this fall will no doubt be interesting, the reality is that most of what Congress needs to do is fairly basic – and it can take these steps today.
In a new report from New York University’s Stern Center for Business and Human Rights, “Safeguarding AI: Addressing the Risks of Generative Artificial Intelligence,” we argue that the U.S. government should address AI first by applying existing consumer protection, competition, and privacy laws to AI businesses. AI doesn’t deserve a pass on compliance with laws that are already on the books just because it is new and “revolutionary.” What’s more, many of the most important legislative interventions are already on the table.
Privacy is a great example of a topic that, in his zeal to respond to the media hype, Sen. Schumer largely passes over in his new framework. The word “privacy” doesn’t appear in the framework itself, though it is slated to be the subject of the final “Insight Forum,” which, according to Sen. Schumer’s remarks, will focus on “privacy and liability.” But many of the worst abuses of AI technology – from biases in algorithms to the delivery of highly personalized disinformation – are exacerbated by a lack of protection for personal data. There is already a significant proposal for federal privacy legislation – the American Data Privacy and Protection Act (ADPPA) – but last year, Sen. Schumer reportedly refused to bring it up for a floor vote.
The Federal Trade Commission (FTC) already has the relevant authority to tackle many of the potential harms of AI systems. But the agency is underfunded and understaffed, particularly when it comes to technically adept personnel. Despite this reality, the agency is doing its best to get out ahead of AI, issuing blog posts warning about potential harms and alerting companies that it will investigate abuses. It has a significant opportunity to address the role of cloud computing infrastructure in shaping AI through its just-closed public Request for Information (RFI) on the subject, and to take up related competition questions through its Bureau of Competition. Bolstering these efforts with more funding and a sharper mandate may not be a new idea, but it would likely have tremendous impact.
In his framework, Sen. Schumer does appear to acknowledge that the way to lead on innovation in AI is not just through technology, but also through “security, transparency and accountability.” Mandating transparency in particular is a key priority when it comes to AI, but here again there are already legislative proposals that provide a roadmap for how to do it. In the Senate, the bipartisan Platform Accountability and Transparency Act, and in the House, the Democrat-sponsored Digital Services Oversight and Safety Act, offer models that would help independent researchers evaluate technology platforms while protecting user privacy and trade secrets. These bills were written largely with the problems of social media in mind; they should be revised where necessary to address concerns specific to AI.
There are dozens of other existing proposals relevant to AI. Anna Lenhart, a Knight Policy Fellow at the Institute for Data, Democracy & Politics at the George Washington University, recently compiled a list of federal legislative proposals that would “govern the processing of data, including the generative AI tools currently capturing the nation’s imagination,” and address other concerns such as market power, discrimination, and the proliferation of harmful content. Legislation related to AI may see more bipartisan compromise than bills focused solely on social media did. But few of these proposals made any progress in the last Congress, and Sen. Schumer’s initiative is set to kick off against the inhospitable calendar of a presidential election year.
Finally, in his bid to host these high-profile “Insight Forums” on Capitol Hill, Sen. Schumer should be careful about the mix of experts he invites into the room. While industry leaders who draw the media spotlight – including Sam Altman, the CEO of OpenAI – publicly call for AI regulation, behind the scenes they often oppose the particulars. We’ve seen this movie before: Mark Zuckerberg welcomed the regulation of social media in his appearances before lawmakers, even as his army of lobbyists moved to quash it. Sen. Schumer should be especially wary of advice from big tech companies: if their executives get too much say in making the rules, those rules could end up favoring incumbents over new entrants.
While Sen. Schumer has already set an industry-friendly “north star” for his effort, he would do well to remember that the oath he and his colleagues took says nothing about protecting corporate interests. Let the CEOs hype the technology; the Senate should remain a place to address the fundamentals of regulation.