The Digital Hostile Environment

The digitalisation of the ‘Hostile Environment’ in the UK is growing. Migrants are being used as a testing ground for the State’s digital surveillance plans. In recent years, the Government has increased technological surveillance, yet the lack of transparency around these systems makes effective scrutiny almost impossible.
What do we know?
Digitalisation of the border
The UK is transitioning to a fully digital border and immigration system. So far, this has largely focused on replacing physical documents with digital ones in the form of eVisas and Electronic Travel Authorisation (ETA).
However, we believe this is just the beginning of wider digitalisation. In 2022, the previous Government set out plans to begin testing technologies to enable ‘contactless’ border crossings with a pilot due to be rolled out in 2024. The details of this are still unclear and we are committed to investigating and raising awareness of these changes.
What we do know is that biometric tools, including facial recognition software, are becoming increasingly common in the UK’s immigration system. The increased use of technology in immigration enforcement has, and will have, a disproportionate impact on People of Colour and people from migrant backgrounds.
Surveillance
Across Fortress Europe and the U.S., borders are becoming more militarised. Alongside armed border guards and barbed-wire fences, technologies like drones and AI surveillance towers, together with the mass collection of personal data, are quickly becoming the norm.
In the UK, a ‘techno-border’ is being rolled out to monitor small boat crossings. Alongside border surveillance, the Home Office is currently developing a ‘Biometric Matcher’ or Strategic Mapping Platform, which will allow police and immigration officers to conduct biometric searches across fingerprints and facial scans held on a vast joint database. This platform could enable the monitoring and tracking of migrant communities on an enormous scale.
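To make ‘biometric search’ concrete, here is a minimal, purely illustrative Python sketch of matching a probe faceprint against a database of stored templates using cosine similarity. Nothing here reflects the Home Office’s undisclosed design: the function names, the 128-dimensional embeddings and the 0.7 threshold are all hypothetical choices of our own.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Score how alike two biometric templates are (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(probe: np.ndarray, database: dict, threshold: float = 0.7) -> list:
    """Return every stored identity whose template is 'close enough' to the probe."""
    hits = [(person_id, cosine_similarity(probe, template))
            for person_id, template in database.items()]
    return sorted((h for h in hits if h[1] >= threshold),
                  key=lambda h: h[1], reverse=True)

# Toy data: random 128-dimensional vectors standing in for face templates.
rng = np.random.default_rng(0)
db = {f"record-{i}": rng.normal(size=128) for i in range(1000)}
probe = db["record-42"] + rng.normal(scale=0.1, size=128)  # a noisy re-capture
print(search(probe, db)[:3])  # 'record-42' should rank first
```

The point of the sketch is that a single probe image can be scored against every record in the database at once, which is what makes a joint biometric database of this kind a mass-surveillance capability rather than a one-to-one identity check.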
Facial recognition and policing
Police and private companies have been quietly rolling out live facial recognition surveillance cameras across the UK, which take ‘faceprints’ of people, often without their knowledge. Surveillance and predictive policing mechanisms like live facial recognition are presented to the public as neutral tools to ‘fight crime’, but in fact these systems are underpinned by racist and classist assumptions that certain communities are more of a threat or more likely to commit crimes. As migration is increasingly presented as a national security – and counter-terrorism – issue, the State uses this framing to ‘justify’ implementing more surveillance and checks on migrants.
While this raises concerns around privacy, transparency and accountability for everyone, migrants are particularly vulnerable. Migrant communities are often used as a testing ground for new biometric technologies, with little power to challenge or consent. At the border, for example, migrants are often given the ‘choice’ of handing over their biometrics to immigration enforcement or being refused entry into the country.
We are part of the Safety Not Surveillance coalition. This is a coalition of grassroots and civil society organisations working at the intersections of racial justice, migrant justice, criminal legal system accountability and tech. It was established in June 2024, in response to the expansion of novel technologies being used in policing and at the border, without clear legal grounds. Together, we are seeking a shift away from surveillance and control towards accountability, redress and community safety.
Artificial intelligence (AI) in the immigration system
The use of AI in the asylum system and the wider immigration system could increase discrimination and negatively impact migrants, including refugees. We are becoming increasingly concerned about the potential implementation of AI without transparency or scrutiny, and with no regard for its impact on migrants and racialised communities.
Alongside our anti-surveillance campaigning, we have been investigating how AI is being used in the asylum system. From 2015, the Home Office used a ‘streaming algorithm’ to sort visa applications: applicants of specific nationalities were assigned a higher risk rating and received near-immediate refusals as a result. Although that algorithm was scrapped, there is evidence that new streaming tools like IPIC have since been implemented in a broader and more complex system with no scrutiny or transparency. People in the asylum system and refugees in our Network have told us about instances where they believe AI has been used in the processing of their asylum cases. We are working in partnership with them to investigate if and how AI is being used.
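For illustration only, the Python sketch below shows how a streaming rule keyed on nationality produces divergent outcomes before any individual evidence is examined. The tier names, placeholder nationalities and refusal logic are entirely hypothetical; the Home Office has never published its actual code.

```python
# Hypothetical placeholder list - the real criteria were never disclosed.
HIGH_RISK_NATIONALITIES = {"CountryA", "CountryB"}

def stream_application(nationality: str) -> str:
    """Assign a risk tier using nationality alone - the core objection to streaming."""
    return "red" if nationality in HIGH_RISK_NATIONALITIES else "green"

def process(application: dict) -> str:
    tier = stream_application(application["nationality"])
    # 'Red' cases are routed to heightened scrutiny and, in practice, near-automatic
    # refusal; 'green' cases are fast-tracked. The outcomes diverge on nationality
    # before a caseworker looks at any individual evidence.
    return "flag for refusal-track review" if tier == "red" else "fast-track"

print(process({"nationality": "CountryA"}))  # -> flag for refusal-track review
print(process({"nationality": "CountryC"}))  # -> fast-track
```

Even in this toy form, the discriminatory feedback loop is visible: refusals generated by the rule can be fed back as evidence that a nationality is ‘high risk’, entrenching the original bias.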
We want to gain a full picture of how AI is being used in the asylum system and expand our focus to the wider immigration system. Despite the Government’s attempts to shield the Digital Hostile Environment from criticism, we’re not giving up. We’re committed to exposing and challenging the creep of surveillance in migrants’ lives.
In this project:
- AI Under Watch: Scrutinising the asylum system by those most affected
- The UK’s AI Borders: Anduril’s Autonomous Surveillance Towers
Updates:
- AI in the asylum system: workshops
- The Hostile Environment goes digital
- Digitalisation of the UK border: Electronic Travel Authorisation (ETA)
- The Data Protection and Digital Information Bill harms migrants’ rights
- Data-Sharing and Immigration Enforcement
- Digitalisation of the UK border: eVisas
- Cross-border surveillance and racial profiling: The EU Migration Pact and the UK
- Confusion and anxiety around eVisa rollout
- The eVisa scheme was set up to fail