Despite years of tireless advocacy by a coalition of civil society and academics (including the author), the European Union’s new law regulating artificial intelligence falls short on protecting the most vulnerable. Late in the night on Friday, Dec. 8, the European Parliament reached a landmark deal on its long-awaited Act to Govern Artificial Intelligence (AI Act). After years of meetings, lobbying, and hearings, the EU member states, Commission, and Parliament agreed on the provisions of the act, which now awaits technical meetings and formal approval before the final text of the legislation is released to the public. A so-called “global first” that races ahead of the United States, the EU’s bill is the first-ever regional attempt to create omnibus AI legislation. Unfortunately, the bill once again fails to sufficiently recognize the vast human rights risks of border technologies and should go much further in protecting the rights of people on the move.

From surveillance drones patrolling the Mediterranean to vast databases collecting sensitive biometric information to experimental projects like robo-dogs and AI lie detectors, every step of a person’s migration journey is now impacted by risky and unregulated border technology projects. These technologies are fraught with privacy infringements and discriminatory decision-making, and they even affect the life, liberty, and security of people seeking asylum. They also impact procedural rights, muddying responsibility for opaque and discretionary decisions and leaving unclear the mechanisms of redress when something goes wrong.

The EU’s AI Act could have been a landmark global standard for the protection of the rights of the most vulnerable. But once again, it does not provide the necessary safeguards around border technologies. For example, while the Act recognizes that some border technologies could fall under the high-risk category, it is not yet clear which border tech projects, if any, will ultimately be designated high-risk and thus subject to transparency obligations, human rights impact assessments, and greater scrutiny. The Act also contains various carveouts and exemptions, for example for matters of national security, which can encompass technologies used in migration and border enforcement. And crucial discussions around bans on high-risk technologies in migration never made it into the Parliament’s final deal terms at all. Even the bans that have been announced, for example on emotion recognition, apply only in the workplace and in education, not at the border. Moreover, what exactly is banned remains to be seen. Outstanding questions for the final text include the parameters around predictive policing, as well as the exceptions to the ban on real-time biometric surveillance, which remains allowed in instances of a “threat of terrorism,” a targeted search for victims, or the prosecution of serious crimes.

It is also particularly troubling that the AI Act explicitly leaves room for the very technologies that Frontex, the EU’s border force, is eager to deploy. Frontex released its AI strategy on Nov. 9, signaling an appetite for predictive tools and situational analysis technology. When used without safeguards, these tools can facilitate illegal border interdiction operations, including “pushbacks,” practices over which the agency has been investigated. The Protect Not Surveil Coalition has been pressing European policymakers to ban predictive analytics used for the purposes of border enforcement. Unfortunately, no migration tech bans at all appear to have made it into the final Act.

The lack of bans and red lines under the high-risk uses of border technologies in the EU’s position runs counter to years of academic research as well as international guidance, such as that of then-U.N. Special Rapporteur on contemporary forms of racism, E. Tendayi Achiume. For example, a recently released report by the University of Essex and the U.N. Office of the High Commissioner for Human Rights (OHCHR), which I co-authored with Professor Lorna McGregor, argues for a human rights-based approach to digital border technologies, including a moratorium on the most high-risk uses such as border surveillance, which pushes people on the move into dangerous terrain and can even assist with illegal border enforcement operations such as forced interdictions, or “pushbacks.” The EU’s position does not reflect even a fraction of this guidance on border technologies.

While it is promising to see strict regulation of high-risk AI systems such as self-driving cars or medical equipment, why are the risks of unregulated AI technologies at the border allowed to continue unabated? My work over the last six years spans borders from the U.S.-Mexico corridor to the fringes of Europe to East Africa and beyond, and I have witnessed time and again how technological border violence operates in an ecosystem replete with the criminalization of migration, anti-migrant sentiment, overreliance on the private sector in an increasingly lucrative border industrial complex, and deadly practices of border enforcement that have led to thousands of deaths at borders. From vast biometric data collected without consent in refugee camps, to algorithms replacing visa officers and making discriminatory decisions, to AI lie detectors used at borders to single out supposed liars, the rollout of unregulated technologies is ever-growing. The opaque and discretionary world of border enforcement and immigration decision-making is built on societal structures underpinned by intersecting systemic racism and historical discrimination against people who migrate, allowing high-risk technological experimentation to thrive at the border.

The EU’s weak governance of border technologies will allow more and more experimental projects to proliferate, setting a global standard for how governments approach migration technologies. The United States is no exception, and in an upcoming election year in which migration will once again be in the spotlight, there seems to be little incentive to regulate technologies at the border. The Biden administration’s recently released Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence does not offer a regulatory framework for these high-risk technologies, nor does it address the impacts of border technologies on people migrating, let alone take a human rights-based approach to the vast effects of these projects. Unfortunately, the EU often sets a precedent for how other countries govern technology. With the weak protections the EU AI Act offers on border technologies, it is no surprise that the U.S. government feels emboldened to do as little as possible to protect people on the move from harmful technologies.

But real people are already at the center of border technologies. People like Mr. Alvarado, a young husband and father from Latin America in his early 30s, who perished mere kilometers from a major highway in Arizona in search of a better life. I visited his memorial site after hours of trekking through the beautiful yet deadly Sonoran Desert with a search-and-rescue group. For my upcoming book, The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, I was documenting the growing surveillance dragnet of the so-called smart border, which pushes people to take increasingly dangerous routes, leading to a rising loss of life at the U.S.-Mexico border. Border technologies as a deterrent simply do not work. People desperate for safety, and exercising their internationally protected right to asylum, will not stop coming. They will instead take more circuitous routes, and scholars like Geoffrey Boyce and Samuel Chambers have already documented a threefold increase in deaths at the U.S.-Mexico frontier as the so-called smart border expands. In the not-so-distant future, will people like Mr. Alvarado be pursued by the Department of Homeland Security’s recently announced robo-dogs, a military-grade technology that is sometimes armed?

It is no accident that more robust governance of migration technologies is not forthcoming. Border spaces increasingly serve as testing grounds for new technologies, places where regulation is deliberately limited and where an “anything goes” frontier attitude informs the development and deployment of surveillance at the expense of people’s lives. There is also big money to be made in developing and selling high-risk technologies. Why does the private sector get to determine, time and again, what we innovate on and why, through the often problematic public-private partnerships that states are increasingly keen to strike in today’s global AI arms race? For example, whose priorities really matter when we choose to build violent sound cannons or AI-powered lie detectors at the border instead of using AI to identify racist border guards? Technology replicates power structures in society. Unfortunately, the viewpoints of those most affected are routinely excluded from the discussion, particularly around no-go zones and ethically fraught uses of technology.

Seventy-seven border walls, and counting, now cut across the landscape of the world. They are both physical and digital, justifying broader surveillance under the guise of detecting illegal migrants and catching terrorists, and creating suitable enemies we can all rally against. The use of military, or quasi-military, autonomous technology bolsters the connection between immigration and national security. None of these technologies, projects, and sets of decisions are neutral. All technological choices – choices about what to count, who counts, and why – have an inherently political dimension and replicate biases that put certain communities at risk of harm, communities that are already under-resourced, discriminated against, and vulnerable to the sharpening of borders around the world.

As is once again clear from the EU’s AI Act and the direction of U.S. policy on AI so far, the impacts on real people seem to have been forgotten. Kowtowing to industry and making concessions to the private sector in the name of not stifling innovation does not protect people, especially the most marginalized. Human rights standards and norms are the bare minimum in the growing panopticon of border technologies. More robust and enforceable governance mechanisms are needed to regulate the high-risk experiments at borders and in migration management, including, at the very least, a moratorium on violent technologies and red lines under military-grade technologies, polygraph machines, and predictive analytics used for border interdictions. These laws and governance mechanisms must also include efforts at the local, regional, and international levels, as well as global cooperation and a commitment to a human rights-based approach to the development and deployment of border technologies. However, for more robust policymaking on border technologies to actually effect change, people with lived experiences of migration must also be in the driver’s seat when interrogating both the negative impacts of technology and the creative solutions that innovation can bring to the complex stories of human movement.

IMAGE: Passengers use biometric passports at an automated ePassport gate equipped with a facial recognition system at the British border of the Eurostar at the Gare du Nord in Paris. (Photo by PHILIPPE LOPEZ/AFP via Getty Images)