Dismantling the Use of Big Data to Deport

The growing use of digital technology and the large-scale processing of personal data for immigration enforcement reflects a broader tendency to blur the line between immigration control and security goals. The result is increased surveillance of, and discrimination against, people of colour and religious and ethnic minorities.

As the UN’s human rights expert on racism has pointed out, in most national contexts it is harder for non-citizens to protect themselves from abuses by the state. At the same time, governments exercise powers for border management and immigration enforcement that are not subject to the usual procedural safeguards guaranteed to citizens. The result is the development and deployment of digital technologies for migration control in ways that are “uniquely experimental, dangerous and discriminatory”.


For instance, since 2013, Frontex (the EU’s border and coast guard agency) has run Eurosur, a framework for information exchange and cooperation between member states and Frontex aimed at preventing irregular migration and cross-border crime through the use of drones, vessels, manned aircraft, helicopters and satellites equipped with radar systems, thermal cameras and high-tech sensors. Eurosur was inspired by Spain’s surveillance system, the Sistema Integrado de Vigilancia Exterior (SIVE), in place since 2000 to monitor Spain’s southern coastal borders using radar technology, high-tech cameras, vessel automatic identification systems, and border guards at the Spain-Morocco border and in Spanish territorial waters. In Hungary and Greece, an EU-funded pilot project introduced AI-powered “lie detectors” at airport border checkpoints, intended to monitor people’s faces for signs of lying and to flag individuals for further screening by a human officer. And in Germany, the Federal Office for Migration and Refugees (BAMF) has been using automated text and speech recognition systems in asylum proceedings.

The trend of using digital technology and big data for immigration enforcement in ways that blur the line between immigration control and security goals is further embodied in the EU’s regulations on interoperable migration databases. Adopted just one year after the EU’s General Data Protection Regulation (GDPR) came into force, these two regulations create a basis for interconnecting multiple migration databases (together with data on criminal records) to pursue goals related to immigration enforcement and addressing serious crimes. The interoperability framework targets exclusively non-EU nationals for purposes that commingle immigration enforcement and the targeting of “serious crimes” like terrorism, implying a false link between criminality and immigration. This creates a deeply complex system of multiple interconnecting databases that increases the likelihood of errors and makes it extremely difficult to inform people about how their data is used, how they can rectify their data, and how they can obtain effective remedies where there are errors or abuses.


In April 2021, the European Commission proposed a new regulation on artificial intelligence that recognises certain uses of AI in the migration and asylum context as presenting a “high risk” to fundamental rights and safety. But the proposed Act offers limited, mostly technical responses to this risk, paying little attention to the impact of these technologies on the welfare and rights of the people most affected. Worryingly, the interoperable migration databases are explicitly exempted from even the minimal rules the Act sets.

The processing of undocumented people’s personal data for immigration enforcement purposes happens within Europe’s borders too. Personal data is often shared when undocumented people report crime or mistreatment to the police, exposing them to detention and deportation and discouraging them from seeking help. Personal data is also used to “police” undocumented people who access health care, social services, and education. In addition to their harmful effects on people’s health and safety, these practices can lead to racial profiling and discrimination. To safeguard fundamental rights, “firewalls” are needed so that immigration enforcement is always kept separate from access to critical services and support.


The use of technology can make things worse, by increasing the potential for discrimination and harm in ways that are often less visible. At the same time, it’s important to keep sight of the fact that the fundamental problem is the deployment of these technologies to advance an already harmful agenda.

Our Report:

Data Protection, Immigration Enforcement and Fundamental Rights

PICUM’s Recommendations

For European Union officials:


  • Adopt an Artificial Intelligence (AI) Act that recognises and addresses the fundamental rights implications of some uses of artificial intelligence. This means, among other things:
    • ensuring that the AI Act applies to uses of AI in the context of the EU’s migration databases;
    • including in the AI Act robust and consistent update mechanisms for “unacceptable” and “limited” risk AI systems, as well as obligations on users of high-risk AI to conduct a fundamental rights impact assessment;
    • creating individual rights in the AI Act as a basis for judicial remedies, as well as a right to an effective remedy where those rights have been infringed, and a mechanism for public interest organisations to lodge complaints with national supervisory authorities.
  • Ensure democratic oversight and systems of accountability of uses of digital technology and large-scale processing of personal data. Given the well-recognised asymmetries of information and of power between those who develop and deploy digital technology and those who are subject to it, the EU must integrate mechanisms for genuine oversight and consultation, including with civil society organisations and communities most likely to experience the harmful effects of these systems. There must also be accessible systems of accountability in place to permit redress for rights violations linked to the use of these systems and technologies. This requires empowering equality bodies, data protection authorities, and other relevant public bodies to ensure accountability for the implications of digital technology and data processing for human rights and discrimination.
  • Live up to commitments under the EU Anti-Racism Action Plan by mainstreaming a racial equality perspective across all policy areas, including migration. Scholarship has made clear that the politics of race and the politics of migration are deeply intertwined. The European Commission should ensure that the Equality Task Force takes meaningful steps to identify and address the racial equality dimensions of the EU’s migration policies, with meaningful engagement from racial justice and migrants’ rights advocates in the process.


For advocates working on the national/local levels:


  • Connect with digital rights and racial justice advocates in your national context to explore and understand together the policies and practices that affect undocumented people in your country – at the border (e.g., remote surveillance, screening procedures), in their interactions with immigration authorities in asylum and visa procedures, and in their interactions with the police.
  • Inform undocumented people of how digital technology affects them – and about how to defend their rights. The lack of transparency around the use of digital technology, and its complexity, make it very hard to understand where and how it is being used, much less to monitor and hold public and private actors accountable for its use (and misuse). What is clear is that there will be practical consequences for migrants, and they should be aware of how they will be affected – and how they can challenge errors and violations of their rights. We all need to do what we can to support them in this. While EU law is problematic in many ways, it provides strong protections for the right to privacy and data protection, regardless of a person’s residence or migration status. National data protection authorities and human rights bodies can also be an important source of information and provide avenues for accountability.
  • Press for policies that ensure that immigration enforcement is kept separate from the delivery of key services. Advocate for the creation of “firewalls” to ensure that undocumented people who try to get health care, go to school, access services, report crime or seek redress for labour rights violations do not face immigration consequences.
  • Document the impact of digital technology on undocumented people. Where you see uses of digital technology that are discriminatory or harmful, or barriers to accountability, document it. This will provide evidence that can support advocacy as well as possible legal challenges to change problematic laws and practices.
  • Engage and advocate. Digital rights and migrants’ rights can seem like distant spheres with little in common. But the world is changing: the overlap between digital rights and migrants’ rights is greater than ever, and the stakes are higher than ever. It is critical that organisations and advocates step out of their silos and take the time, and have the courage, to educate themselves and to speak up, together, on the issues at the intersection of our work. Advocates must push back against the framing of digital rights issues and solutions as mainly technical matters for computer scientists and “experts”, and ensure, through collaboration and joint advocacy, that the broader human rights and racial justice context is brought to bear.
Our Briefings: