The Department of Homeland Security (DHS) is likely the single largest collector and consumer in the U.S. government of detailed, often intimate, information about Americans and foreigners alike. The department stores and analyzes this information in vast data systems to determine who can enter the country and who is subjected to intrusive inspections, parsing travel records, social media data, non-immigrant visa applications, and other information to detect patterns of behavior that the department has deemed worthy of scrutiny. As we explain in a new Brennan Center report, these systems and the data that powers them operate behind a veil of secrecy, with little meaningful documentation about how they work, and are too often deployed in discriminatory ways that violate Americans’ constitutional rights and civil liberties.

The risk assessment process begins with Customs and Border Protection (CBP) and the Transportation Security Administration (TSA), which use law enforcement data, classified intelligence, and “patterns of suspicious activity” to formulate “rules” capturing patterns of behavior that putatively indicate that someone presents a higher risk of committing a crime, from importing contraband agricultural products to terrorism. These processes are driven by the Automated Targeting System (ATS), an algorithmically powered analytical database owned and operated by CBP. ATS mines the oceans of data it contains, including airline records, data obtained from border crossings, motor vehicle registration records, and more, to detect traces of information that match these rules. ATS also compares travelers’ information against the federal government’s watch lists of known or suspected terrorists, as well as law enforcement databases. Travelers who match one of ATS’s rules, an identity on a watch list, or a law enforcement record are subjected to increased scrutiny, whether by analysts who conduct additional vetting against databases or by agents who inspect travelers and their belongings at the airport and after they enter the United States. CBP also uses historical data in ATS to conduct predictive threat modeling in response to “more generalized threats.”
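DHS has not published ATS’s actual rules or matching logic, so any concrete rendering is necessarily guesswork. Still, the general technique the public documentation describes, checking traveler records against analyst-written rules and watch list entries and flagging any match for added scrutiny, can be illustrated with a deliberately simplified sketch. Every field name, rule, and list entry below is invented for illustration:

```python
# Hypothetical illustration only: ATS's real rules, data fields, and
# matching logic are not public. This sketch shows the general shape of
# rule-based screening: each record is tested against analyst-defined
# predicates and a watch list, and any hit flags the traveler.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TravelerRecord:
    name: str
    route: tuple[str, str]      # (origin, destination) airport codes
    paid_cash: bool
    booked_days_before: int     # days between booking and departure

@dataclass
class Rule:
    name: str
    predicate: Callable[[TravelerRecord], bool]

# Invented example rule: a last-minute booking paid in cash.
RULES = [
    Rule("last_minute_cash",
         lambda r: r.paid_cash and r.booked_days_before <= 2),
]

WATCH_LIST = {"DOE, JANE"}  # placeholder entry, not real data

def screen(record: TravelerRecord) -> list[str]:
    """Return the reasons, if any, that a record was flagged."""
    reasons = [rule.name for rule in RULES if rule.predicate(record)]
    if record.name.upper() in WATCH_LIST:
        reasons.append("watch_list_match")
    return reasons

print(screen(TravelerRecord("Doe, Jane", ("IST", "JFK"), True, 1)))
# -> ['last_minute_cash', 'watch_list_match']
```

Even this toy version surfaces the questions the report raises: who writes the rules, what data feeds them, and how a traveler would ever learn, let alone contest, the reason for a flag. A real deployment layers scoring, fuzzy identity matching, and far more data sources on top, which is precisely why the absence of public documentation matters.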

As set out below, ATS’s operation is in significant tension with the White House’s October 2022 Blueprint for an AI Bill of Rights, which is intended to guide the “development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.” The blueprint sets out five overarching principles and related commentary to guide the development of automated systems, like ATS and its associated systems, to ensure they are safe, effective, equitable, transparent, and fair. While these principles are not binding on the federal government, and currently exclude law enforcement and national security systems, implementing them would significantly improve the automated systems that DHS already uses.

First, a core principle of the blueprint is that automated systems should be accompanied by explanations that are “technically valid, meaningful, and useful” to both system operators and affected persons. DHS does not come close to meeting this standard. While the department has published a variety of reports on ATS, they offer little meaningful information about how ATS carries out its predictive functions, and almost none about how the predictive threat modeling process works, what the resulting models look like, or how the capability is operationalized.

Second, the blueprint calls for automated systems to be tested and designed to ensure that they are safe and function effectively. Even after two decades of operation, however, there is little objective evidence that ATS, or the programs it helps facilitate, measurably contribute to the country’s safety. DHS has not conducted a public empirical evaluation demonstrating that ATS is effective, despite multiple reports from the Government Accountability Office and DHS’s own inspector general urging the department to measure whether ATS and its associated systems achieve their intended purpose and to ensure the quality of the data that powers them. The effectiveness of any system cannot be taken for granted, much less when that system is operated by DHS, which has a history of implementing ill-conceived programs without setting benchmarks to determine whether they are useful.
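The kind of evaluation the GAO and the inspector general have urged is not technically demanding. As a purely illustrative sketch, with every figure invented, the core of it is counting outcomes for flagged and unflagged travelers and reporting a few standard screening metrics:

```python
# Illustrative only: every number here is invented. The point is that
# basic effectiveness metrics for a screening program require nothing
# more than counting outcomes for flagged and unflagged travelers.
def screening_metrics(true_pos: int, false_pos: int,
                      false_neg: int, true_neg: int) -> dict[str, float]:
    flagged = true_pos + false_pos
    total = flagged + false_neg + true_neg
    return {
        # Of everyone flagged, how many turned out to warrant it?
        "precision": true_pos / flagged,
        # Of all genuine cases, how many did the system catch?
        "recall": true_pos / (true_pos + false_neg),
        # How often is an innocent traveler burdened by a flag?
        "false_positive_rate": false_pos / (false_pos + true_neg),
        "flag_rate": flagged / total,
    }

# Hypothetical scale: 40 confirmed hits among 10,000 flags, with 60
# missed cases among roughly 5 million travelers who were not flagged.
print(screening_metrics(true_pos=40, false_pos=9_960,
                        false_neg=60, true_neg=4_999_940))
```

Publishing numbers like these, alongside audits of the underlying data quality, would let Congress and the public weigh the system’s burden on travelers against what it actually catches.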

Third, the data that undergirds ATS’s risk predictions raises concerns. The White House AI blueprint recommends that automated systems limit their data collection to the information that is “strictly necessary” for their proper functioning. ATS’s vast sweep virtually ensures that it contains data that is not clearly useful for determining whether someone presents a risk or that is susceptible to misuse. For instance, CBP, a sub-agency of DHS, purchases social media data from commercial aggregators, information that the government has repeatedly determined is of questionable or “no value” to multiple national security screening processes.

Additionally, once ATS flags a traveler as high risk, CBP officers may conduct intrusive inspections that can capture information protected by the First Amendment and funnel their findings into ATS to inform future vetting. Officers may search travelers’ electronic devices or extract data from those devices and retain it in ATS for analysis. This can reveal intimate details about individuals’ beliefs, political activities, and associations, implicating the blueprint’s caution against surveillance systems that may undermine constitutionally protected rights and other democratic values.

Fourth, some of the data in ATS is likely to reflect bias against communities of color, another risk highlighted by the AI blueprint. For instance, one of ATS’s key data sources is the federal government’s terrorist watch lists, which have weak standards for inclusion and a long history of sweeping in individuals who pose no evident terrorist threat. Their subsidiary lists, the No Fly and Selectee Lists, are composed almost entirely of Muslim individuals, according to a statistical analysis conducted by the Council on American-Islamic Relations.

It is past time for DHS to stop improvising how it designs and implements its automated systems, with inadequate mechanisms for evaluation and oversight, weak standards, and disproportionate impacts on marginalized communities and individuals. DHS must disclose additional information about its systems, including the policies that govern their operations and reports explaining how they are used. An independent body should undertake a rigorous investigation of DHS’s automated systems, evaluating whether they are useful and accurate, assessing how they function, and determining whether they contain sufficient safeguards to protect privacy, civil rights, and civil liberties. In addition, the White House should take steps to give the AI blueprint teeth by making it applicable to federal agencies, including national security and law enforcement agencies such as DHS, as civil society groups and elected officials have urged. These steps, taken together, would ensure that DHS’s risk assessment functions are an effective use of resources, preserve civil rights and civil liberties, and reflect best practices for the use of automated systems.
