


Regulating Migration Tech: How the EU's AI Act can better protect people on the move.
As the European Union amends the Artificial Intelligence Act (AI Act), exploring the impact of AI systems on marginalised communities is vital. AI systems are increasingly developed, tested and deployed to judge and control migrants and people on the move in harmful ways. How can the AI Act prevent this?
From AI lie-detectors and AI risk-profiling systems used to assess the likelihood of 'illegal' movement, to the rapidly expanding tech-surveillance complex at Europe's borders, AI systems are increasingly a feature of migration management in the EU.
On the ‘sharp-edge’ of innovation
Whilst the uptake of AI is promoted as a policy goal by EU institutions, for marginalised communities, and in particular for migrants and people on the move, AI technologies fit into a wider system of over-surveillance, discrimination and violence. As Petra Molnar highlights in Technological Testing Grounds: Migration Management Experiments and Reflections from the Ground Up, AI systems increasingly control migration and affect millions of people on the move. More and more, 'innovation' means a 'human laboratory' of tech experiments, with people in already dangerous, vulnerable situations as its subjects.
How do these systems affect people? In migration management, AI is used to make predictions, assessments and evaluations about people in the context of their migration claims. Of particular concern is the use of AI to assess whether people on the move present a ‘risk’ of illegal activity or security threats. AI systems in this space are inherently discriminatory, pre-judging people on the basis of factors outside of their control. Along with AI lie detectors, polygraphs and emotion recognition, we see how AI is being used and developed within a broader framework of racialised suspicion against migrants.
Not only can AI systems present these severe harms to people on the move in individual ways, they form part of a broader surveillance eco-system increasingly developed at and within Europe’s borders. Increasingly, racialised people and migrants are over-surveilled, targeted, detained and criminalised through EU and national policies. Technological systems form part of those infrastructures of control.
Specifically, many AI systems are being tested and deployed on a structural level to shape the way governments and institutions respond to migration. This includes AI for generalised surveillance at the border, including 'heterogenous robot systems' at coastal areas, and predictive analytics systems to forecast migration trends. There is a significant concern that predictive analytics will be used to facilitate push-backs, pull-backs and other ways of preventing people from exercising their right to seek asylum. This concern is especially valid in a climate of ever-increasing criminalisation of migration, and of the human rights defenders helping migrants. Whilst these systems don't always make decisions directly about people, they vastly affect the experience of borders and the migration process, shifting it even further toward surveillance, control, and violence throughout the journey.
Regulating migration technology: What has happened so far?
In April 2021, the European Commission launched its legislative proposal to regulate AI in the European Union. The proposal, whilst categorising some uses of AI in migration control as ‘high-risk’, fails to address how AI systems exacerbate violence and discrimination against people on the move in migration processes and at borders.
Crucially, the proposal does not prohibit some of the sharpest and most harmful uses of AI in migration control, despite the significant power imbalance that these systems exacerbate. The proposal also includes a carve-out for AI systems that form part of large scale EU IT systems, such as EURODAC. This is a harmful development meaning that the EU itself will largely not be scrutinised for its use of AI in the context of its migration databases.
Full article: https://edri.org/our-work/regulating-migration-tech-how-the-eus-ai-act-can-better-protect-people-on-the-move/
Technology is the new border enforcer, and it discriminates
Tech solutions have not made border control more objective or humane, but rather more dangerous.
Across the globe, an unprecedented number of people are on the move due to conflict, instability, environmental disasters, and poverty. As a result, many countries have started exploring technological solutions for border enforcement, decision-making, and data collection and processing.
From drones patrolling the Mediterranean Sea to Big Data projects predicting people’s movement to automated decision-making in immigration applications, governments are justifying these innovations as necessary to maintain border security. However, what they often omit to acknowledge is that these high-risk technological experiments exacerbate systemic racism and discrimination.

https://r3d.mx/publicaciones/ PDF.
10 threats to migrants and refugees
Over the last two decades we have seen an array of digital technologies deployed in the context of border controls and immigration enforcement, with surveillance practices and data-driven immigration policies routinely leading to discriminatory treatment of people and undermining their dignity.
And yet this is happening with little public scrutiny, often in a regulatory or legal void, and without understanding of, or consideration for, the impact on migrant communities at the border and beyond.
These practices mean that migrants bear the burden of the new systems and lose agency in their migration experience, particularly when their fate is placed in the hands of systems driven by data processing and so-called tech innovations. There is a need to demand a more humane approach to immigration based on the principles of fairness, accessibility, and respect for human rights.
In this article we present some of these widely used tools and techniques, looking at the situation worldwide with a particular focus on the UK.
1. Data Sharing: turning public officials into border guards
2. Mobile Phone Extraction: your phone is fair game
3. Social Media Intelligence: what does a Facebook like say about you?
4. Predictive Policing: a feedback loop that reinforces racial bias
5. Lie Detectors: security on scientifically dubious grounds
6. Border Externalisation: outsourcing border controls and surveillance
7. Biometrics Processing: a feast of databases
8. Facial Recognition: making surveillance frictionless
9. Artificial Intelligence: your fate in the hands of the system
10. Private Companies: when the border is good business
https://privacyinternational.org/long-read/4000/10-threats-migrants-and-refugees
We are happy to share our partners' guide here to help climate and migrant justice advocates, journalists, and policymakers speak about the intersections of climate change and migration. https://truthonborders.com/our-statement/

Financing Border Wars: The border industry, its financiers and human rights. PDF.
This report seeks to explore and highlight the extent of today’s global border security industry, by focusing on the most important geographical markets—Australia, Europe, USA—listing the human rights violations and risks involved in each sector of the industry, profiling important corporate players and putting a spotlight on the key investors in each company. https://www.tni.org/en/publication/financing-border-wars

The EU’s proposed Artificial Intelligence (AI) Act aims to address the risks of certain uses of AI and to establish a legal framework for its trustworthy deployment, thus stimulating a market for the production, sale and export of various AI tools and technologies. However, certain technologies or uses of technology are insufficiently covered by or even excluded altogether from the scope of the AI Act, placing migrants and refugees – people often in an already-vulnerable position – at even greater risk of having their rights violated.
This briefing has been produced as a complementary document to proposed amendments to the AI Act drafted by a coalition of human rights organisations (including European Digital Rights, Access Now, Migration and Technology Monitor, PICUM and Statewatch).
It begins with key points and recommendations, which largely correspond with those in the proposed amendments. A short introduction follows, before an explanation of what the AI Act is, how it deals with migration, and the associated concerns of civil society over its “risk-based approach”.
It goes on to examine the current development and deployment of AI systems by EU institutions and member states for asylum, border and migration control purposes, outlining key use cases, the risks these pose to fundamental rights, and how these would be regulated (or not) by the proposed AI Act.
The briefing then provides a snapshot of the extensive public funding that the EU has provided for the research and development of ‘border AI’, before giving an overview of the key actors and institutions involved in negotiations on the AI Act as it passes through EU institutions. PDF. https://www.statewatch.org/publications/reports-and-books/a-clear-and-present-danger-missing-safeguards-on-migration-and-asylum-in-the-eu-s-ai-act/
EU has spent over €340 million on border AI technology that new law fails to regulate.
The EU has spent €341 million on research into artificial intelligence technologies for asylum, immigration and border control purposes since 2007, yet the proposed AI Act currently being debated in EU institutions fails to provide meaningful safeguards against harmful uses of those technologies, says a report published by Statewatch (12 May 2022).
The report, A clear and present danger: Missing safeguards on migration and asylum in the EU’s AI Act (pdf), identifies a total of 51 projects looking into diverse potential uses of AI technologies, including autonomous border control robots, biometric identification and verification devices, and automated data-gathering and analysis systems.
Private companies have received more of the funding (€163 million) than any other type of institution, with transnational military and security companies such as Indra, Leonardo, Israel Aerospace Industries and GMV Aerospace and Defence amongst the primary recipients.
The funds have come from the EU's research and development programmes, of which the current iteration, Horizon Europe, is worth a total of €93 billion and runs from 2021 to 2027. €1.4 billion of that total is devoted to "civil security", and the first work programme makes €55 million available for projects on "border management". [1]
The report is published in the same week that a coalition of human rights organisations, including Statewatch, published proposals for amendments to the AI Act [2] that would ensure the law provides fundamental rights protections for people subjected to AI systems in asylum, immigration and border proceedings.
'A clear and present danger' demonstrates how numerous existing uses of advanced technology – remote biometric identification systems, automated assessment and verification tools, profiling technologies embedded in large-scale EU databases, border surveillance and predictive analytics systems – are insufficiently covered by or even excluded altogether from the scope of the AI Act, placing people in already-vulnerable positions at even greater risk of having their rights violated.
These demands stand in stark contrast to those put forward by advocates of a more laissez-faire approach to the deployment of advanced technologies – as a report produced by the consultancy firm RAND Europe for EU border agency Frontex put it: “Legislations and regulations appear to be the barriers that technology developers will need to overcome to ensure the use of their AI-based solution.” [3] https://www.statewatch.org/news/2022/may/eu-has-spent-over-340-million-on-border-ai-technology-that-new-law-fails-to-regulate/

Border Management and Human Rights. https://www.osce.org/odihr/499777
Below are images from our visits to borders around the world. They can also be found on our Instagram @migration.tech
These photographs offer a visual representation of the increasingly securitised and politicised context of migration management, and deliberately do not show people's faces, as our project's policy is not to present any photograph without the informed consent and ongoing participation of those who appear in it.
Unfortunately, visual representations in the field of migration frequently fall victim to harmful tropes based on racist and one-sided depictions of people in crisis. In this project we are determined not to perpetuate certain kinds of harmful images of refugees, asylum seekers or migrants, which reduce these people's complex stories to clickbait or to stereotyped portrayals of bodies of particular ethnic origins that do not respect individuals' stories. https://es.migrationtechmonitor.com/snapshots

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU's Borders. November 23, 2021.
The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.
Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostat machines with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making, to social media screening, and a pilot AI-driven “lie detector.”
In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report entitled “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are not only being used at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practised “border externalization,” where it shifts its border control operations ever-further away from its physical territory, partly through contracting non-Member States to try to prevent migration. New technologies are increasingly instrumental in these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modelling.
Further, borders are increasingly “located” on our smartphones and in enormous databases as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.
Ignoring human experience and impacts
But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.”
The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.
In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed to be “expert knowledge” by policymakers in Brussels, but it is vital to expose the human realities and counter the sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reports that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.”
People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, difference in treatment is baked into these technological systems, as they enable and exacerbate discriminatory inferences along racialized lines.
As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, some algorithmic “risk assessments” of migrants have been argued to represent racial profiling.
Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.” https://chrgj.org/2021/11/23/pilots-pushbacks-and-the-panopticon-digital-technologies-at-the-eus-borders/
https://algorithmwatch.org/en/ AlgorithmWatch is a human rights organization based in Berlin and Zurich. We fight for a world where algorithms and Artificial Intelligence (AI) do not weaken justice, democracy, and sustainability, but strengthen them.