Today’s vote by the European Parliament’s civil liberties and internal market committees on the AI Act overwhelmingly endorsed important protections against harmful uses of AI in migration. PICUM and the Border Violence Monitoring Network (BVMN) – representing close to 180 organisations across Europe – welcome the Parliament’s strong stand on these issues but are concerned about gaps that remain in the legislation.
The AI Act will be the world’s first binding legislation to regulate the use of artificial intelligence, including in the migration field. The text voted for on 11 May bans harmful uses of AI and subjects “high-risk” uses to enhanced safeguards.
Bans on harmful uses of AI
Among the bans that were voted, we welcome the prohibition, under Article 5, of:
- emotion recognition technologies, a prohibition which explicitly extends to the EU’s borders. Emotion recognition technologies claim to detect people’s emotions based on assumptions about how someone acts when feeling a certain way, including to assess their credibility. Immigration authorities in some Member States have already tested these types of systems to inform decisions about who qualifies for protection and who does not.
- biometric categorisation systems that use personal characteristics to classify people and to draw inferences based on those characteristics. This kind of software has been used to determine whether someone’s dialect matches the region they say they are from.
- predictive policing systems, which use preconceived notions about who is risky to make decisions about the policing of certain groups and spaces.
All these technologies are based on unscientific and often biased, discriminatory assumptions, which then inform real-world decision-making with a real and detrimental impact on people’s lives. While we welcome the predictive policing ban, which encompasses some forms of algorithmic profiling used in the migration context, it does not fully capture the many uses and implications that automated risk assessment and profiling systems have in that context. The Committees also missed the chance to prohibit predictive analytics systems that are used to curtail migration movements and can lead to push-backs.
Hope Barker, Senior Policy Analyst for BVMN, says: “We call for all algorithmic profiling systems used in migration to be regulated separately, with a migration-specific ban to address the many uses and practices that have been found to cause direct or indirect discrimination.”
“We are Black and border guards hate us. Their computers hate us too.” – Adissu
(Taken from the Technological Testing Grounds Report)
Enhanced safeguards on “high-risk” uses of AI
The text voted by the Parliament categorises certain uses of AI as “high-risk” and subjects them to enhanced safeguards. For instance, both manufacturers and deployers are required to assess the impact on, and risks of violating, fundamental rights before putting systems into use.
We welcome the categorisation as “high-risk” of:
- forecasting tools that claim to predict people’s movements at borders. Alyna Smith, Deputy Director of PICUM, said: “Forecasting tools are often based on unreliable data, and – even if it is not the original intention – often used to enable operations to push people back from the border, preventing them from reaching safety and putting them in harm’s way. It’s good news that these tools have been categorised as “high-risk”. But forecasting tools that could be used to facilitate pushbacks must be subject to bans without exceptions.”
- surveillance technologies such as drones, thermal imaging cameras, radar sensors, and others that are used to detect and locate transit groups for the sole purpose of subjecting them to violence and pushing them back.
Since 2018, the BVMN and its members have collected 38 testimonies, affecting 1,076 people, that recount the presence of a drone prior to an illegal pushback (see this testimony collected by Collective Aid).
“Drones are not inherently harmful; they can and should be used to identify vulnerable groups and organise search and rescue operations. However, they are currently used to locate groups and illegally push them back, often after subjecting them to high levels of violence,” said Barker. “We welcome the obligations introduced for users of border surveillance technologies and hope that this will result in further safeguards around how they are used to target people on the move.”
- non-remote biometric identification systems, including hand-held devices that scan faces, fingerprints or palms, and voice or iris identification technology used by police to identify a person and check their residence status. Smith said: “It’s incredibly important that these uses are included among high-risk under the Act. Police in a growing number of Member States are being equipped with high-tech handheld devices that can scan faces and fingerprints to identify a person’s migration status on the spot. There’s already evidence that these tools drive more discriminatory profiling by police against anyone who looks like they might be undocumented – citizens and non-citizens alike.”
The EU’s large-scale migration databases
The EU has been collecting millions of records, including biographic and biometric data, about third-country nationals in large-scale databases that are being interconnected for immigration enforcement purposes. This interconnection extends not only across EU Member States but also to third countries, with which the Commission is seeking to entrench and strengthen security-related information-sharing mechanisms. Without sufficient fundamental rights impact assessments, safeguards and guarantees, this practice raises considerable data protection and privacy concerns.
We welcome the European Parliament’s decision to ensure that the AI Act’s protections and safeguards apply to these databases. However, the Parliament’s position should have gone much further: it proposes a four-year period for EU migration databases to comply with the AI Act.
Smith said: “The EU’s migration databases already present risks in terms of data protection and increased discrimination, since they’re built on the assumption that all non-citizens are potential threats. Explicitly including these systems within the scope of the AI Act is certainly positive, but we’re concerned that legislators might use the four-year grace period to dampen safeguards.”
NOTES TO THE EDITORS
- PICUM (Platform for International Cooperation on Undocumented Migrants) is a network of NGOs, primarily based in Europe, that work to advance the rights of undocumented people. The Border Violence Monitoring Network is an independent network of NGOs, associations, and collectives that monitor human rights violations at and within Europe’s borders and advocate to stop structural and actual violence against people on the move.
- Links to resources on:
- the AI Act and AI uses in migration: www.protectnotsurveil.eu
- digital technologies used in policing and immigration enforcement in PICUM’s briefing “Digital technology, policing and migration: what does it mean for undocumented migrants?”
- the EU’s migration databases, their interconnections and impact on people’s lives in PICUM and Statewatch’s report “Data Protection, Immigration Enforcement and Fundamental Rights: What the EU’s Regulations on Interoperability Mean for People with Irregular Status”
- the use of border surveillance technology to facilitate fundamental rights violations in BVMN’s submission to the OHCHR on the role of technology in illegal pushbacks from Croatia to Bosnia-Herzegovina and Serbia and to the Committee on Enforced Disappearances.
- For media enquiries, please contact PICUM’s Communications Officer Gianluca Cesaro on firstname.lastname@example.org or BVMN on email@example.com
Cover image: ryzhi – Adobe Stock