Regulating migration tech: how the EU’s AI Act can better protect people on the move

Kenya-Jade Pinto

As the European Union amends the Artificial Intelligence Act (AI Act), understanding the impact of AI systems on marginalised communities is vital. AI systems are increasingly developed, tested and deployed to judge and control migrants and people on the move in harmful ways. How can the AI Act prevent this?

From AI lie detectors and risk-profiling systems used to assess the likelihood of movement outside the limited scope of regular pathways, to the rapidly expanding tech surveillance at Europe’s borders, AI systems are increasingly a feature of the EU’s approach to migration.

On the ‘sharp edge’ of innovation

While EU institutions promote the uptake of AI as a policy goal, for migrants and people on the move AI technologies fit into a wider system of over-surveillance, discrimination and violence. As Petra Molnar highlights in Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up, AI systems are increasingly used in efforts to restrict migration, affecting millions of people on the move. In this context, ‘innovation’ increasingly means a ‘human laboratory’ of tech experiments, with people in already dangerous, vulnerable situations as the subjects.

How do these systems affect people? AI is used to make predictions, assessments and evaluations about people in the context of their migration claims. Especially worrying is the systematic use of AI to assess whether people who wish to travel to or enter Europe present a ‘risk’ of unlawful activity or security threats. These systems tend to pre-judge people based on factors outside their control, relying on discriminatory assumptions and associations. Together with AI lie detectors, polygraphs and emotion recognition, this shows how AI is being used and developed within a broader framework of racialised suspicion against migrants.

Not only do AI systems inflict these severe harms on individuals; they also form part of a broader – and growing – surveillance ecosystem developed at and within Europe’s borders. Increasingly, racialised people and migrants are over-surveilled, targeted, detained and criminalised through EU and national policies. Technological systems form part of those infrastructures of control.

Specifically, many AI systems are being tested and used to shape the way governments and institutions respond to migration. This includes AI for generalised surveillance at the border, such as ‘heterogeneous robot systems’ in coastal areas, and predictive analytic systems to forecast migration trends. There is significant concern that predictive analytics will be used to facilitate push-backs, pull-backs and other measures that prevent people from exercising their right to seek asylum, pushing them onto more dangerous routes. This concern is especially acute in a climate of ever-increasing criminalisation of migration, and of people who help migrants. While these systems don’t always make decisions directly about individuals, they dramatically shape the experience of borders, shifting the migration process even further toward surveillance, control and violence.

Regulating Migration Technology: What has happened so far?

In April 2021, the European Commission launched its legislative proposal to regulate AI in the European Union. The proposal categorises some uses of AI in the context of migration as ‘high-risk’ – but fails to address how AI systems exacerbate violence and discrimination against people in migration processes and at borders.

Crucially, the proposal does not prohibit some of the sharpest and most harmful uses of AI in migration control, despite the significant power imbalances in which these systems operate. The proposal also includes a carve-out for AI systems that form part of large-scale EU IT systems, such as EURODAC. This is a harmful development: it means the EU itself will largely escape scrutiny for its use of AI in its own migration databases.

In many ways, the minimal technical checks the proposal requires for (a limited set of) high-risk systems in migration control could be seen as enabling these opaque, discriminatory surveillance systems, rather than providing meaningful safeguards for the people subject to them.

The proposal makes no reference to predictive analytic systems in the migration context, or to the generalised surveillance technologies deployed at borders – in particular those that do not make decisions about, or identify, natural persons. Systems that cause harm in more systemic ways in the migration context therefore appear to have been completely overlooked.

In its first steps to amend the proposal, the IMCO-LIBE committee made no specific amendments in the migration field. Major steps remain to be taken to improve the proposal from a fundamental rights perspective.

Amendments: How can the EU AI Act better protect people on the move?

Civil society organisations have been working to develop amendments to the AI Act to better protect against these harms in the migration context. EU institutions still have a long way to go to make the AI Act a vehicle for genuine protection of people’s fundamental rights, especially for people experiencing marginalisation. The AI Act must be updated in three main ways to address AI-related harms in the migration context:

  1. Update the AI Act’s prohibited AI practices (Article 5) to include ‘unacceptable uses’ of AI systems in the context of migration. This should include prohibitions on: AI-based individual risk assessment and profiling systems in the migration context that draw on personal and sensitive data; AI polygraphs in the migration context; predictive analytic systems when used to interdict, curtail and prevent migration; and a full prohibition of remote biometric identification and categorisation in public spaces, including in border and migration control settings.
  2. Include within the ‘high-risk’ use cases those AI systems in migration control that require clear oversight and accountability measures, namely: all other AI-based risk assessments; predictive analytic systems used in migration, asylum and border control management; biometric identification systems; and AI systems used for monitoring and surveillance in border control.
  3. Amend Article 83 to ensure that AI systems forming part of large-scale EU IT databases fall within the scope of the AI Act, and that the necessary safeguards apply to uses of AI in the EU migration context.

The amendment recommendations on AI and migration were developed in coalition, reflecting the broad scope of harms and disciplines this issue covers. Special thanks to Petra Molnar, Access Now, European Digital Rights (EDRi), the Platform for International Cooperation on Undocumented Migrants (PICUM), Statewatch, the Migration and Technology Monitor, the European Disability Forum, Privacy International, Jan Tobias Muehlberg, and the European Center for Not-for-Profit Law (ECNL). The original writing for this blog was kindly provided by Sarah Chander, Senior Policy Advisor at EDRi.