
New research finds that the EU funds digital walls and police dogs at its borders

© Gabriel Tizón (www.utopiaproject.es)

A new joint study by the European refugee and migrant rights networks ECRE and PICUM finds that EU funds for border management are being used to build harmful infrastructure to control the EU’s external borders, leading to human rights violations. Crucial information on the assessment of these programmes, and on how the Commission addressed risks of rights violations, was accessible for this research only through freedom of information requests.

The study focuses on the Border Management and Visa Instrument (BMVI), which has a budget of 6.2 billion euros for 2021 – 2027 to fund equipment, personnel capacity, infrastructure and technology at the EU’s external borders. Overall, member states have so far received 4 billion euros, an increase of 45% compared to the resources received under the Internal Security Fund – Borders & Visa for 2014 – 2020.

Despite the European Commission ruling out the possibility of using these funds to build fences and walls, we find that the BMVI can already support measures that may disproportionately impact the rights of migrants and refugees. For example, some countries are using BMVI funding for border surveillance technology that complements or replaces physical surveillance.

Key examples:

  • Estonia will spend 2 million euros on mobile remote sensing systems to increase border surveillance in areas “where it is not economically feasible to build a permanent infrastructure”.
  • Poland aims to “reduce the physical surveillance of the border” by investing in light-detecting systems on watch towers, alarm systems, portable thermal and night vision devices, and motion-activated security cameras at the borders with Russia, Belarus and Ukraine.
  • Croatia, Lithuania, Poland and Spain have acquired or are planning to acquire sniffer dogs to help border guards patrol borders and chase and apprehend people who have crossed them. In Croatia, dogs have already been used to threaten and bite migrants in order to push them back across the border.
  • Estonia, Greece, Hungary, Italy, Poland, Romania and Slovakia will invest in new off-road-capable vehicles equipped with integrated thermal imaging cameras, satellite communication, and x-ray identification systems. In Lithuania, EU resources will allow the purchase of a stationary search detector for persons hidden in vehicles.
  • Croatia, Lithuania, Slovenia and Spain are planning to purchase vehicles for transporting migrants apprehended at the borders to police outposts, facilitating their expulsion to neighbouring countries. This practice constitutes “internal pushbacks” to other member states through informal procedures, which courts in Italy, Slovenia and Austria have found illegal.
  • Hungary’s strategy includes integrating artificial intelligence into vehicles for ground and air reconnaissance operations, potentially involving the use of drones and other unpiloted vehicles. Greece and Estonia will also use drones and other unpiloted aircraft to expand their aerial surveillance capacity.
  • Despite continuously documented poor conditions in reception centres, Greece is using BMVI resources to run the hotspots on the islands, and Cyprus is using BMVI funding to operate the first reception centre in Pournara (just outside Nicosia). Greece also receives the largest amount of BMVI resources in absolute terms, more than 1 billion euros, despite multiple condemnations by the European Court of Human Rights and continuous reports of persistent degrading, prison-like conditions and restrictions on movement in the state-managed centres. Some of these concerns were confirmed by the European Commission’s decision in January 2023 to launch an infringement procedure over the reception conditions in the hotspots.
  • The BMVI can also finance measures supporting people with vulnerabilities and applicants for international protection. This may include procedures for identifying vulnerable persons and unaccompanied children, providing information to and referring people in need of international protection and victims of human trafficking, and developing integrated child protection systems. However, the research found that only 0.04% of the national programmes’ funding (around 1.3 million euros) is devoted to assistance and protection priorities, and only in Croatia and Finland.

Monitoring and evaluation

Member states monitor the implementation of the BMVI programme through dedicated national monitoring committees, which should include fundamental rights experts such as civil society organisations, national human rights institutions and, potentially, the EU Fundamental Rights Agency. But our research finds that civil society organisations are often underrepresented in monitoring committees and are not given the means to contribute meaningfully.

The Commission plays a key role in assessing the programmes’ compliance with fundamental rights. However, only through requests for access to documents were we able to see that the Commission had questioned several national programmes during the programming phase, including over the lack of access to asylum procedures in Greece, reception and detention conditions in Cyprus and Greece, allegations of pushbacks and discrimination in Poland, and deficiencies in judicial independence in Hungary.

What remains unclear is how the Commission eventually came to approve all programmes following murky exchanges with the member states in question. The European Parliament criticised this lack of transparency in the assessment process and initiated a lawsuit against the Commission over its decision to disburse over 10 billion euros of EU funds to Hungary, including BMVI funding.

Chiara Catelli, Policy Officer at ECRE and PICUM, said: “The European Commission’s refusal to allow financing of walls and fences is a fig leaf to cover other harmful measures that border funds can already support in member states. Our research finds that BMVI funding is used for an increasingly complex and digitalised system of border surveillance, forming an interconnected web of controls which harms people who come to the EU’s borders.”

“The EU and its member states must ensure that they respect the fundamental rights of people at the borders, including their protection from refoulement, inhuman or degrading treatment and right to life, as well as their often neglected right to access legal support and legal remedy.”

Beyond walls and fences: EU funding used for a complex and digitalised border surveillance system

The EU Migration Pact: a dangerous regime of migrant surveillance

© Jürgen Jester

On 10 April 2024, the European Parliament adopted the New Pact on Migration and Asylum, a package of reforms expanding the criminalisation and digital surveillance of migrants. 

Despite civil society organisations’ repeated warnings, the Pact “will normalise the arbitrary use of immigration detention, including for children and families, increase racial profiling, use ‘crisis’ procedures to enable pushbacks, and return individuals to so called ‘safe third countries’ where they are at risk of violence, torture, and arbitrary imprisonment”.

The New Pact on Migration and Asylum ushers in a deadly new era of digital surveillance, expanding the digital infrastructure for an EU border regime based on the criminalisation and punishment of migrants and racialised people. 

This statement outlines how the Migration Pact framework will enable and in some cases mandate the deployment of harmful surveillance technologies and practices against migrants. We also highlight some grey zones where the Pact leaves open the possibility for further harmful developments involving intrusive and violent surveillance and data processing practices in the future. 

Migration Pact enables the digital surveillance of migrants 

As more intrusive technology is deployed at borders and in detention centres, people’s personal data will be collected in bulk and exchanged between police forces across the EU, and biometric identification systems will be used to track people’s movements and increase the policing of undocumented migrants. The New Pact on Migration will mandate a whole range of technological systems to identify, filter, track, assess and control people entering or already in Europe.

These systems will reinforce an already cruel status quo. European policymakers have opted for years to treat the movement of people into Europe mainly as a security issue. The result is very limited safe and regular pathways to come to Europe, the widespread criminalisation of many who make the journey, and systematic exploitation and discrimination against those already living here. Investing in technology to serve this already harmful system will mainly benefit the tech and security firms who reap the financial rewards of this agenda – while pushing people into more dangerous routes and giving more licence for racial profiling at our borders and in our communities. 

Here are the main ways the Migration Pact creates a dangerous system of migrant surveillance:

  • Migrants as suspects: A vast regime of digital monitoring

The Migration Pact expands a wide system of data collection and automatic exchange, leading to a regime of mass surveillance of migrants. The changes in the Eurodac Regulation will mandate the systematic collection of migrants’ biometric data (now also including facial images), which will be retained in massive databases for up to 10 years, exchanged at every step of the migration process and made accessible to police forces across the European Union for tracking and identity-check purposes. The minimum age for data collection was lowered from fourteen to six, with the possibility of using coercion should ‘child-friendly’ methods fail.

Further, newly created screening procedures and border procedures (Screening Regulation) will mandate various security checks and assessments of all people entering Europe irregularly, including to seek asylum, with the potential for automated and AI-based decision-making. These procedures will require the personal and biometric data of every person who enters the EU to be cross-checked against multiple national and European policing and immigration databases, as well as systems operated by Europol and Interpol, increasing the possibility of transnational repression of human rights defenders. People identified as posing a “risk to national security or public order” will be pushed into accelerated border procedures with fewer safeguards for the processing of the asylum application (Asylum Procedures Regulation and Return Border Procedure Regulation). Not only are ‘national security’ and ‘public order’ dangerously vague and undefined terms that leave wide discretion to Member States; they also pave the way for potentially discriminatory screening practices that use nationality as a proxy for race and ethnicity in these assessments. Further, even families with children and unaccompanied children could be held in border procedures, with a high risk of being de facto detained.

In the context of asylum procedures, the Pact will enable intrusive technological practices at various stages of asylum processing. The Asylum Procedures Regulation provides for increased searches of personal items, paving the way for invasive practices like the extraction of mobile phone data, which involves seizing and mining personal electronic devices (such as phones or laptops) to extract data that may be used as evidence to assess the truthfulness of a person’s claims (for instance, in an asylum proceeding) or to check their identity, age or country of origin. Such invasive practices have been successfully challenged in Germany and in the UK but continue to be used in several European countries. Moreover, the Asylum Procedures Regulation also allows for the use of remote interviews and videoconferencing for people in detention and during the appeal procedure. This not only raises privacy and data protection concerns; it also heightens the isolation of people who are already in a vulnerable situation and risks negatively affecting the quality and fairness of the procedures.

  • Technological management of prison facilities for migrants

The newly introduced screening and border procedures will lead to more people, including children and families, being held in prison-like detention facilities modelled on the “Closed Controlled Access Centres” already operating in Greece. These centres are characterised by motion sensors, cameras and fingerprint-controlled access, creating a system of digital management of immigration facilities that relies on high-tech surveillance to monitor and control people. Under the Pact, a minimum of 30,000 people are expected to be in “border procedures” at any one time, likely involving detention or restrictions on movement. Far from treating detention as a “last resort”, the Pact chillingly foresees the expansion of detention across Europe.

  • Tech-enabled racial profiling at the EU’s internal borders 

Alongside the Migration Pact are other legislative changes to EU migration policy. The Schengen Borders Code Reform, set to be adopted on 24 April 2024, will generalise police checks for the purpose of immigration enforcement, facilitating the practice of racial profiling within EU territory.

This new law encourages the increased use of surveillance and monitoring technologies at both internal and external borders. Technologies such as drones, motion sensors and thermal imaging cameras are used to identify people crossing borders prior to arrival and have been shown to facilitate pushbacks.

Opening the door to future expansion of the border surveillance complex 

The Migration Pact sits upon existing frameworks governing the use of digital surveillance in migration. The EU Artificial Intelligence Act introduces a lenient framework for the use of AI by law enforcement, migration control and national security agencies, provides loopholes and even encourages the use of dangerous surveillance systems against the most marginalised in society.

In this framework, combined with the Migration Pact and ongoing developments in surveillance technology, we can expect:

  • Automated profiling and risk assessments for security and vulnerability checks, ostensibly to facilitate decisions related to asylum procedures, security assessments, detention, and deportation of migrants. The Pact alludes to numerous instances in which AI-based decision-making may be used, such as during the screening procedure to assess whether someone represents a “national security risk” or a threat to “public security”, or to assess the level of vulnerability of an asylum applicant. Not only may this lead to numerous violations of data protection obligations and infringements of privacy; such systems by their nature violate the right to non-discrimination insofar as they codify assumptions about the link between personal data and characteristics and particular risks. The introduction of automated assessment in asylum procedures will mean fewer protections and safeguards, and further divergence from a principle of case-by-case, individualised and needs-based assessments in access to international protection.
  • The use of forecasting tools that build on biased statistical data collected on irregular entries and asylum applications to attempt to predict large-scale movements of people, and that can be used to inform actions on the ground to deter or interdict those movements. A similar tool has been tested in the Horizon 2020 project ITFlows.
  • Lie detectors that claim to tell whether someone is being truthful by analysing facial movements. These systems are dangerous and unreliable enough to be banned under the EU’s AI Act – except in the border and policing contexts.
  • Dialect recognition systems and other intrusive technologies used in the context of asylum or visa applications to assess the veracity of applicants’ claims. This technology, in addition to reinforcing a generalised framework of suspicion towards people seeking asylum, is based on unscientific and often biased, discriminatory assumptions that inform real-world decisions with a huge and detrimental impact on people’s lives.
  • Border surveillance technologies such as remote biometric identification in border areas, drones and thermal cameras to prevent border crossings into and within the European Union. While some surveillance technologies are already in use, a wide range of systems are being heavily tested in EU-funded projects like FOLDOUT, ROBORDER, BorderUAS and Nestor. Their use at internal borders is encouraged by the Schengen Borders Code.

What’s next?

In its final version, the Pact further embeds surveillance technologies in the EU, and beyond, as an increasingly key part of the arsenal sustaining Fortress Europe. It thereby furthers the erosion of fundamental rights and normalises digital surveillance at, and within, borders, justified by an approach to migration policy based on repression rather than rights.

As the #ProtectNotSurveil coalition, we will continue to challenge the use of digital technologies at different levels of EU policies and practice and advocate for the ability of people to move and to seek safety and opportunity without risking harm, surveillance or discrimination. The coalition will release a more detailed analysis of the digital impacts of the Migration Pact in due course. 

To learn more about the coalition’s work or join our efforts to challenge digital policing in migration, get in touch: info@protectnotsurveil.eu

The #ProtectNotSurveil coalition

Access Now, Equinox Initiative for Racial Justice, European Digital Rights (EDRi), Platform for International Cooperation on Undocumented Migrants (PICUM), Refugee Law Lab, AlgorithmWatch, Amnesty International, Border Violence Monitoring Network (BVMN), EuroMed Rights, European Center for Not-for-Profit Law (ECNL), European Network Against Racism (ENAR), Homo Digitalis, Privacy International, Statewatch, Dr Derya Ozkul, Dr Jan Tobias Muehlberg, and Dr Niovi Vavoula.

A dangerous precedent: how the EU AI Act fails migrants and people on the move

© Alexander

On 13 March 2024, the EU Artificial Intelligence Act (AI Act) was adopted by the European Parliament. Whilst the legislation is widely celebrated as a world first, the EU AI Act falls short in the vital area of migration, failing to prevent harm or provide protection for people on the move.

In its final version, the EU AI Act sets a dangerous precedent. The legislation develops a separate legal framework for the use of AI by law enforcement, migration control and national security authorities, provides unjustified loopholes and even encourages the use of dangerous systems for discriminatory surveillance on the most marginalised in society. This statement outlines the main gaps in protection with respect to AI in migration.

Why the EU AI Act fails migration

The EU AI Act seeks to provide a regulatory framework for the development and use of the most ‘risky’ AI within the European Union. The legislation outlines prohibitions for ‘unacceptable’ uses of AI, and sets out a framework of technical, oversight and accountability requirements for ‘high-risk’ AI when deployed or placed on the EU market.

Whilst the AI Act takes positive steps in other areas, the legislation is weak and even enables dangerous systems in the migration sphere.

  • Prohibitions on AI systems do not extend to the migration context. The legislation introduces (limited) prohibitions on harmful AI uses, but EU lawmakers refused to ban harmful systems such as discriminatory risk assessment systems in migration and predictive analytics used to facilitate pushbacks. Further, the prohibition on emotion recognition does not apply in the migration context, leaving documented cases of AI lie detectors at borders outside the ban.
  • The list of high-risk systems fails to capture many AI systems used in the migration context, which will therefore not be subject to the obligations of the Regulation. The list excludes dangerous systems such as biometric identification systems, fingerprint scanners, and forecasting tools used to predict, interdict and curtail migration.
  • AI used as part of EU large-scale databases in migration, such as Eurodac, the Schengen Information System, and ETIAS will not have to be compliant with the Regulation until 2030.
  • Export of harmful surveillance technology: the AI Act does not address how AI systems developed by EU-based companies affect people outside the EU, despite existing evidence of human rights violations facilitated in third countries (e.g., China, the Occupied Palestinian Territories) by surveillance technologies developed in the EU. It will therefore not be prohibited to export a system banned within Europe to countries outside the EU.

A dangerous precedent: enabling harmful surveillance by police and migration authorities

Perhaps the most harmful aspect of the EU AI Act is the creation of a parallel legal framework when AI is deployed by law enforcement, migration and national security authorities. As a result of pressure exerted by Member States, law enforcement and security industry lobbies, these authorities are explicitly exempted from the most important rules and safeguards within the AI Act:

  • Exemptions to transparency and oversight safeguards for law enforcement authorities. The Act introduces transparency safeguards requiring public authorities using high-risk AI systems to register information about the system onto a publicly accessible database. The AI Act introduces an exemption to this requirement for law enforcement and migration authorities, instilling secrecy for some of the most harmful AI uses. This will make it impossible for affected people, civil society and journalists to know where AI systems are deployed.
  • The national security exemption will allow member states to exempt themselves from the rules for any activity they deem relevant to “national security” – in essence a blanket exemption from the rules of the AI Act that could in theory be invoked in any matter of migration, policing or security.

These exemptions effectively codify impunity for the unfettered use of surveillance technology, setting a dangerous precedent for the use of surveillance technology in the future. In effect, AI Act lawmakers have vastly limited crucial scrutiny of law enforcement authorities and have enabled more and more use of racialised and discriminatory surveillance. First and foremost, these loopholes will harm migrants, racialised and other marginalised communities who already bear the brunt of targeting and over-surveillance by authorities.

Fundamental rights, surveillance tech and migration: what’s next?

The EU AI Act will take between two and five years to fully enter into force. In the meantime, harmful AI systems will continue to be tested, developed and deployed in many areas of public life. Furthermore, the EU AI Act is only one legal context in which the EU is enabling surveillance technology. From the Screening Regulation and Eurodac to many others, we see an expanding legal framework that surveils, discriminates against and criminalises migrants.

The #ProtectNotSurveil coalition started in February 2023 to advocate for the AI Act to protect people on the move and racialised people from harms emanating from the use of AI systems. This coalition will continue to monitor, advocate and organise against harmful uses of surveillance technology. Crucial next steps will be:

  • For EU and national level bodies to document and respond to harms stemming from the use of AI in migration and policing contexts, ensuring protection against the violation of peoples’ rights.
  • For civil society to contest further expansion of the surveillance framework, reversing and refusing trends that criminalise, target and discriminate against migrants, racialised and marginalised groups.
  • For all to re-evaluate the investment of resources in technologies that punish, target and harm people as opposed to affirming rights and providing protection.

The #ProtectNotSurveil coalition

Access Now, European Digital Rights (EDRi), Platform for International Cooperation on Undocumented Migrants (PICUM), Equinox Initiative for Racial Justice, Refugee Law Lab, AlgorithmWatch, Amnesty International, Border Violence Monitoring Network (BVMN), Digitalcourage, EuroMed Rights, European Center for Not-for-Profit Law (ECNL), European Network Against Racism (ENAR), Homo Digitalis, Privacy International, Statewatch, Dr Derya Ozkul, Dr Jan Tobias Muehlberg, and Dr Niovi Vavoula.

PICUM’s submission to the European Commission’s call for evidence on the EU Anti-Racism Action Plan (implementation)

AI Act: European Parliament endorses protections against AI in migration

Today’s vote by the European Parliament’s civil liberties and internal market committees on the AI Act overwhelmingly endorsed important protections against harmful uses of AI in migration. PICUM and the Border Violence Monitoring Network (BVMN) – representing close to 180 organisations across Europe – welcome the Parliament’s strong stand on these issues but are concerned about gaps that remain in the legislation.

The AI Act will be the world’s first binding legislation to regulate the use of artificial intelligence, including in the migration field. The text voted for on 11 May bans harmful uses of AI and subjects “high-risk” uses to enhanced safeguards.

Bans on harmful uses of AI

Among the bans that were voted, we welcome the prohibition, under Article 5, of:

  • emotion recognition technologies, a prohibition which explicitly extends to the EU’s borders. Emotion recognition technologies claim to detect people’s emotions based on assumptions about how someone acts when feeling a certain way, including to assess their credibility. Immigration authorities in some Member States have already tested these types of systems to inform decisions about who qualifies for protection and who does not.
  • biometric categorisation systems that use personal characteristics to classify people and to inform inferences based on those characteristics. This kind of software has been used, for example, to determine whether someone’s dialect matches the region they say they are from.
  • predictive policing systems, which use preconceived notions about who is risky to make decisions about the policing of certain groups and spaces.

All these technologies are based on unscientific and often biased, discriminatory assumptions, which then inform real-world decision-making that has a real and detrimental impact on people’s lives. While we welcome the predictive policing ban, which encompasses some forms of algorithmic profiling used in the migration context, it does not fully capture the many uses and implications that automated risk assessment and profiling systems have in that context. The Committees also missed the chance to prohibit predictive analytics systems that are used to curtail migration movements and can lead to pushbacks.

Hope Barker, Senior Policy Analyst for BVMN, says: “We call for all algorithmic profiling systems used in migration to be regulated separately, with a migration-specific ban to address the many uses and practices that have been found to cause direct or indirect discrimination.”

“We are Black and border guards hate us. Their computers hate us too.” – Adissu

(Taken from the Technological Testing Grounds Report)

Enhanced safeguards on “high-risk” uses of AI

The text voted by the Parliament categorises certain uses of AI as “high-risk” and subjects them to enhanced safeguards – for instance, the requirement that both manufacturers and deployers assess the impact on, and risks of violating, fundamental rights before putting systems into use.

We welcome the categorisation as “high-risk” of:

  • forecasting tools that claim to predict people’s movements at borders. Alyna Smith, Deputy Director of PICUM, said: “Forecasting tools are often based on unreliable data, and – even if it is not the original intention – often used to enable operations to push people back from the border, preventing them from reaching safety and putting them in harm’s way. It’s good news that these tools have been categorised as “high-risk”. But forecasting tools that could be used to facilitate pushbacks must be subject to bans without exceptions.”
  • surveillance technologies such as drones, thermal imaging cameras, radar sensors, and others that are used to detect and locate transit groups for the sole purpose of subjecting them to violence and pushing them back.

Since 2018, the BVMN and its members have collected 38 testimonies, affecting 1,076 people, that recount the presence of a drone prior to their illegal pushback (see this testimony collected by Collective Aid).

“Drones are not inherently harmful; they can and should be used to identify vulnerable groups and organise search and rescue operations. However, they are currently used to locate groups and illegally push them back, often after subjecting them to high levels of violence,” said Barker. “We welcome the obligations introduced for users of border surveillance technologies and hope that this will result in further safeguards around how they are used to target people on the move.”

  • non-remote biometric identification systems, which include hand-held devices that scan faces, fingerprints or palms, and voice or iris identification technology used by police to identify a person and check their residence status. Smith said: “It’s incredibly important that these uses are included among high-risk under the Act. Police in a growing number of member states are being equipped with high-tech handheld devices that can scan faces and fingerprints to identify on the spot a person’s migration status. There’s already evidence these tools drive more discriminatory profiling by police against anyone who looks like they might be undocumented – citizens and non-citizens alike.”

The EU’s large-scale migration databases

The EU has been collecting millions of records, including biographic and biometric data, about third-country nationals in large-scale databases that are being interconnected for immigration enforcement purposes. This interconnection extends not only to EU Member States but also to third countries, with which the Commission is seeking to entrench and strengthen security-related information-sharing mechanisms. Without sufficient fundamental rights impact assessments, safeguards and guarantees, this practice raises considerable data protection and privacy concerns.

We welcome the European Parliament’s decision to ensure that the AI Act’s protections and safeguards apply to these databases. However, the Parliament’s position should have gone much further: it proposes a four-year period for EU migration databases to comply with the AI Act.

Smith said: “The EU’s migration databases already present risks in terms of data protection and increased discrimination, since they’re built on the assumption that all non-citizens are potential threats. Explicitly including these systems within the scope of the AI Act is certainly positive, but we’re concerned that legislators might use the four-year grace period to dampen safeguards.”

NOTES TO THE EDITORS

  • PICUM (Platform for International Cooperation on Undocumented Migrants) is a network of NGOs primarily based in Europe that work to advance the rights of undocumented people. The Border Violence Monitoring Network is an independent network of NGOs, associations and collectives that monitors human rights violations at and within Europe’s borders and advocates to stop structural and actual violence against people on the move.

Cover image: ryzhi – Adobe Stock

Data encryption: why it’s vital for migrants and their defenders

Michael Traitov – Adobe Stock

Data encryption is a process that protects the integrity and privacy of our stored digital information (on a phone or in the cloud) and our data in transit (such as chat messages). While crucial to the safety and privacy of our communications, and to a democratic society overall, data encryption has been increasingly challenged in the name of fighting crime.
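As a concrete illustration (ours, not drawn from any system discussed in this article), the minimal Python sketch below uses the widely used cryptography library’s Fernet recipe: once data is encrypted, the stored or transmitted token is unreadable to anyone who does not hold the key.

```python
# Minimal sketch of symmetric encryption with the Python "cryptography"
# library (Fernet recipe). Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # must be kept secret and shared securely
cipher = Fernet(key)

token = cipher.encrypt(b"draft testimony for my lawyer")
print(token)                  # ciphertext: unreadable without the key
print(cipher.decrypt(token))  # plaintext, recoverable only with the key
```

End-to-end encrypted messengers apply the same principle to data in transit: only the communicating devices hold the keys, so intermediaries – and anyone compelling them – cannot read the messages.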

In the aftermath of the 2015 and 2016 terrorist attacks in Belgium and France, national law enforcement agencies and Europol notably pushed the EU and its member states to create so-called ‘backdoors’ to encryption that would grant access to protected devices and messages.

More recently, the European Commission’s proposal for a regulation to prevent and combat child sexual abuse has introduced a series of encryption-weakening measures aimed at stopping the dissemination of child sexual abuse material (CSAM) online. As it stands, the measures likely to be put in place if the proposal becomes law include ‘client-side scanning’, a technology that allows digital platforms and law enforcement to check messages against a database for ‘censurable’ content before they are encrypted and sent.
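To see why client-side scanning undermines encryption, consider this deliberately simplified, hypothetical sketch (real proposals typically match perceptual hashes of images rather than exact hashes of text, and this is not any vendor’s actual implementation): the content check runs on the sender’s device before encryption, so whatever the scanner matches and reports was never protected in the first place.

```python
# Hypothetical sketch of client-side scanning. Simplified for illustration.
import hashlib
from cryptography.fernet import Fernet

# Hypothetical database of hashes of flagged content.
FLAGGED = {hashlib.sha256(b"example flagged content").hexdigest()}

def report_match(digest: str) -> None:
    # Hypothetical reporting hook: a real system would alert the platform
    # or authorities, outside the encrypted channel.
    print(f"reported: {digest}")

def send(plaintext: bytes, cipher: Fernet) -> bytes:
    # The scan happens in the clear, BEFORE encryption is applied.
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in FLAGGED:
        report_match(digest)
    return cipher.encrypt(plaintext)

cipher = Fernet(Fernet.generate_key())
ciphertext = send(b"hello", cipher)
```

The design point is structural: because the check precedes encryption, the scanning component sees exactly the content that end-to-end encryption is meant to hide, which is why critics describe it as a ‘backdoor’.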

‘Backdoors’ such as ‘client-side scanning’ make it possible for state and non-state actors – inside and outside the EU’s borders – to access our data en masse. As security technologist and Harvard Lecturer Bruce Schneier has noted, data encryption is a tool “uniquely suited to protect against bulk surveillance”. The prospect of potential mass surveillance of our online activities is especially worrying for marginalised and racialised groups, who are already disproportionately targeted by data-driven policing and surveillance in Europe, and for whom it is more difficult to claim the data protection rights they are entitled to. In many situations, these groups depend on encrypted digital communications services to keep themselves and others virtually as well as physically safe.

It is crucial that child protection is addressed at the European level. However, the Commission’s proposed measures on online CSAM raise several issues. They may cause unintended harm to survivors of child abuse who rely on confidential communications to find help and report their abuse. These legislative changes may furthermore undermine encryption, and consequently online safety, in Europe and beyond, with devastating consequences for the work of human and migrant rights defenders.

Advocates have also challenged the idea that the intended measures will work at all, given the likelihood that perpetrators will switch to more ‘obscure’ platforms and the high error rates associated with detection measures. Those error rates mean law enforcement authorities investigating online child sexual abuse will have to devote limited resources to sifting through mountains of potentially incorrectly flagged messages.

Encrypted chat services can keep asylum seekers’ sensitive communications with their lawyers private, and keep migrants’ contact with family and friends abroad, who may be living under repressive regimes, safe. Customised apps allow survivors of gender-based violence, LGBTQI+ people and undocumented people to access remote support services and (mental) health care without facing stigma or risking detention and deportation. Encrypted devices and clouds can protect migrants’ data when their phones are seized by border police upon entering the EU. They also ensure the integrity of digital evidence of abuse, if it is ever used in court by survivors of violence.

Those who defend the rights of vulnerable individuals, and journalists who expose the injustices they face, also depend on encryption technology to do their work. As humanitarian assistance to migrants and acts of solidarity across Europe are increasingly met with suspicion and criminalisation by European governments, human rights defenders use encryption to keep evidence of rights abuses safely stored, and communications with victims private. Activists with large social media followings also employ encrypted authentication to protect their accounts from being compromised by attacks. Journalists rely on encryption to safely connect with and protect their sources.

Online safety is crucial for a democratic society. It is also vital for marginalised people and those, like journalists and human rights defenders, working in spaces where the ability to denounce rights violations is already curtailed. Data encryption is a fundamental element of a safe online environment and should not be sacrificed in favour of “tough-on-crime” responses to societal issues that require holistic, victim-centred approaches.

Regulating migration tech: how the EU’s AI Act can better protect people on the move

Kenya-Jade Pinto

As the European Union amends the Artificial Intelligence Act (AI Act), understanding the impact of AI systems on marginalised communities is vital. AI systems are increasingly developed, tested and deployed to judge and control migrants and people on the move in harmful ways. How can the AI Act prevent this?

From AI lie detectors and AI risk-profiling systems used to assess the likelihood of movement outside the limited scope of regular pathways, to the rapidly expanding surveillance technology at Europe’s borders, AI systems are increasingly a feature of the EU’s approach to migration.

On the ‘sharp-edge’ of innovation

While the uptake of AI is promoted as a policy goal by EU institutions, for migrants and people on the move AI technologies fit into a wider system of over-surveillance, discrimination and violence. As highlighted by Petra Molnar in Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up, AI systems are increasingly used in efforts to restrict migration, affecting millions of people on the move. In this context, ‘innovation’ increasingly means a ‘human laboratory’ of tech experiments, with people in already dangerous, vulnerable situations as the subjects.

How do these systems affect people? AI is used to make predictions, assessments and evaluations about people in the context of their migration claims. Especially worrying is the systematic use of AI to assess whether people who want to come to or enter Europe present a ‘risk’ of unlawful activity or security threats. These systems tend to pre-judge people based on factors outside of their control, relying on discriminatory assumptions and associations. Along with AI lie detectors, polygraphs and emotion recognition, we see how AI is being used and developed within a broader framework of racialised suspicion against migrants.

Not only do AI systems create these severe harms against individuals; they are also part of a broader – and growing – surveillance ecosystem developed at and within Europe’s borders. Increasingly, racialised people and migrants are over-surveilled, targeted, detained and criminalised through EU and national policies. Technological systems form part of those infrastructures of control.

Specifically, many AI systems are being tested and used to shape the way governments and institutions respond to migration. This includes AI for generalised surveillance at the border, such as ‘heterogenous robot systems’ in coastal areas, and predictive analytic systems to forecast migration trends. There is a significant concern that predictive analytics will be used to facilitate push-backs, pull-backs and other ways of preventing people from exercising their right to seek asylum, pushing them onto more dangerous routes. This concern is especially acute in a climate of ever-increasing criminalisation of migration, and of people helping migrants. While these systems don’t always make decisions directly about people, they dramatically affect the experience of borders and the migration process, shifting it even further toward surveillance, control and violence.

Regulating Migration Technology: What has happened so far?

In April 2021, the European Commission launched its legislative proposal to regulate AI in the European Union. The proposal categorises some uses of AI in the context of migration as ‘high-risk’ – but fails to address how AI systems exacerbate violence and discrimination against people in migration processes and at borders.

Crucially, the proposal does not prohibit some of the sharpest and most harmful uses of AI in migration control, despite the context of significant power imbalances in which these systems operate. The proposal also includes a carve-out for AI systems that form part of large-scale EU IT systems, such as EURODAC. This is a harmful development, and means that the EU itself will largely not be scrutinised for its use of AI in the context of its migration databases.

In many ways, the minimal technical checks that the proposal requires for (a limited set of) high-risk systems in migration control could be seen as enabling these opaque, discriminatory surveillance systems rather than providing meaningful safeguards for the people subject to them.

The proposal does not include any reference to predictive analytic systems in the migration context, or to the generalised surveillance technologies at borders, in particular those that do not make decisions about, or identify, natural persons. Therefore, systems that pose harm in the migration context in more systemic ways seem to have been completely overlooked.

In its first steps to amend the proposal, the IMCO-LIBE committee did not make any specific amendments in the migration field. There are major steps to be taken to improve this from a fundamental rights perspective.

Amendments: How can the EU AI Act better protect people on the move?

Civil society organisations have been working to develop amendments to the AI Act to better protect against these harms in the migration context. EU institutions still have a long way to go to make the AI Act a vehicle for genuine protection of peoples’ fundamental rights, especially for people experiencing marginalisation. The AI Act must be updated in three main ways to address AI-related harms in the migration context:

  1. Update the AI Act’s prohibited AI practices (Article 5) to include ‘unacceptable uses’ of AI systems in the context of migration. This should include prohibitions on: AI-based individual risk assessment and profiling systems in the migration context that draw on personal and sensitive data; AI polygraphs in the migration context; predictive analytic systems when used to interdict, curtail and prevent migration; and a full prohibition of remote biometric identification and categorisation in public spaces, including in border and migration control settings.
  2. Include within the ‘high-risk’ use cases AI systems in migration control that require clear oversight and accountability measures, namely: all other AI-based risk assessments; predictive analytic systems used in migration, asylum and border control management; biometric identification systems; and AI systems used for monitoring and surveillance in border control.
  3. Amend Article 83 to ensure that AI systems forming part of large-scale EU IT databases fall within the scope of the AI Act and that the necessary safeguards apply to uses of AI in the EU migration context.

The amendment recommendations on AI and migration were developed in coalition, reflecting the broad scope of harms and disciplines this issue covers. Special thanks to Petra Molnar, Access Now, European Digital Rights (EDRi), the Platform for International Cooperation on Undocumented Migrants (PICUM), Statewatch, Migration and Technology Monitor, the European Disability Forum, Privacy International, Jan Tobias Muehlberg, and the European Center for Not-for-Profit Law (ECNL). The original writing for this blog was kindly provided by Sarah Chander, Senior Policy Advisor at EDRi.

Digital technology, policing and migration – What does it mean for undocumented migrants?

SOS RACISMO GIPUZKOA

Data Protection, Immigration Enforcement and Fundamental Rights – StateWatch – Executive Summary

Infographic: EU Regulations on Interoperability Systems and Discriminatory Policy