
AI Under Watch: Scrutinising the asylum system by those most affected

New report and resource: AI Under Watch: Scrutinising the asylum system by those most affected (Insights for people in the asylum system and refugee support services)

To view this page and the resource in Arabic, click here.

To view this page and the resource in Pashto, click here.

To view this page and the resource in Farsi, click here.

To view this page and the resource in Dari, click here.

“My destiny should not be left to AI.”

AI has gradually been introduced into the UK immigration and asylum system over a number of years, but exactly how it is being used remains unclear. As part of our Hostile Office campaign, we set out to understand its potential use and how this could impact people seeking asylum.

In August 2024, we received funding from the Public Voices in AI Fund to investigate the potential role and impact of AI on people seeking asylum in London and the South of England. Through creative workshops and interviews, we set out to inform, and be informed by, people seeking asylum about AI, and to gather evidence of its impact on their daily lives. The workshops aimed to enable people seeking asylum to identify and challenge AI misuse in their cases through deliberative dialogue sessions and interviews, and to co-produce this resource to help inform and support others who navigate digitised systems and borders.

We’ve collated these insights into a resource co-produced with people seeking asylum and AI experts to help inform migrants, including refugees and people seeking asylum, as well as support services.

What is AI?

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. Generally, AI works by looking for patterns in data to make predictions. AI can be trained on data such as text, which would then allow it to sort through and categorise applications and make recommendations about whether an application should be approved. It can also be trained on data such as images, which is why it is used to develop other technologies like facial recognition, where a system attempts to match a face in a video or photo to another image of a face.
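For support workers or readers with a technical background, the toy sketch below gives a very rough sense of what “looking for patterns in data” means in practice. It is purely illustrative: the texts, labels, and categories are invented, and it does not describe any system the Home Office actually uses.

```python
# Purely illustrative toy example of pattern-based classification.
# The data and categories are invented; this is NOT any real Home Office system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training examples: short texts paired with human-assigned labels.
texts = [
    "application complete, all documents supplied",
    "missing identity documents, dates inconsistent",
    "application complete, supporting letter attached",
    "no supporting evidence provided",
]
labels = ["straightforward", "needs review", "straightforward", "needs review"]

# The model learns statistical patterns in the words it was shown, nothing more.
vectoriser = TfidfVectorizer()
features = vectoriser.fit_transform(texts)
model = LogisticRegression().fit(features, labels)

# A new text is sorted into whichever category its wording most resembles.
new_text = ["all documents supplied, supporting letter attached"]
print(model.predict(vectoriser.transform(new_text)))  # likely 'straightforward'
```

The point of the sketch is that any “recommendation” the system makes only reflects the patterns and labels in its training data; if that data is incomplete or biased, the predictions will be too.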

AI in the asylum system

The Home Office uses streaming algorithms, which sort data into categories or ‘streams’ to inform decisions. One streaming algorithm, introduced in 2015, was used to stream visa applications and treated nationality as a key factor. This algorithm was successfully challenged in 2020, because applicants of certain nationalities were being assigned higher ‘risk’ ratings and receiving immediate refusals. While this decision-making algorithm was scrapped, new streaming algorithms have been implemented in a broader and more complex system with no scrutiny or transparency.
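To make the idea of a streaming algorithm more concrete, here is a deliberately simplified, hypothetical sketch. It is not the Home Office’s actual code or criteria; the country names and streams are invented. It simply shows how building nationality into the sorting rule means two otherwise identical applications can be treated differently from the outset.

```python
# Hypothetical sketch of a 'streaming' rule. This is NOT the Home Office's
# actual 2015 algorithm; the countries and streams below are invented.
HIGHER_RISK_NATIONALITIES = {"Country A", "Country B"}  # made-up list

def assign_stream(application: dict) -> str:
    """Sort an application into a 'stream' that shapes how it is handled."""
    if application.get("nationality") in HIGHER_RISK_NATIONALITIES:
        return "red"    # routed to extra scrutiny, more likely to be refused
    return "green"      # routed to a lighter-touch review

# Two otherwise identical applications land in different streams
# purely because of the applicant's nationality.
print(assign_stream({"nationality": "Country A", "documents": "complete"}))  # red
print(assign_stream({"nationality": "Country C", "documents": "complete"}))  # green
```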

A prominent, confirmed use of AI in the asylum system is Identify and Prioritise Immigration Cases (IPIC), which recommends outcomes for people with immigration reporting conditions, such as deportation, detention, or other kinds of monitoring. Like the visa application streaming algorithm, IPIC includes nationality as one of its categories, as well as at least one other protected characteristic.

People in the asylum system and refugees in our Network have spoken to us about instances where they believe AI has been used in the processing of their asylum cases. This includes transcribing (writing out) both screening and substantive interviews, and compiling the country information used to inform decisions.

What do people seeking asylum think about AI in the asylum system?

People seeking asylum who worked with us on this project had a range of views and suggestions on the use of AI in the asylum process. Most told us their thoughts about how and where AI could be used, while a minority were completely opposed to its use at any stage of the process. The majority of participants were content with AI being used to standardise decision-making or make the system more efficient, but the consistent feedback was that the final decision should not be made by AI or a “machine”. This was due to concerns about AI’s ability to interpret body language and linguistic or cultural nuances, such as dialect, if it were used in place of a human caseworker.

However, one of the most pertinent findings of our research was the prevalence of errors in the asylum process. The research also highlighted wider systemic issues with the asylum system itself, most notably the immense anguish that waiting times cause people seeking asylum and the lack of transparency around how and when decisions are made.

Specific findings included:

  • The widespread prevalence of errors relating to names, dates of birth, and other inaccuracies in asylum interview transcripts. Participants stated that there needs to be greater transparency about how transcripts are compiled and evaluated, alongside guidance on how to challenge or correct inaccurate transcripts
  • The emotional toll of waiting times and the inefficiency of the asylum system as a whole was evident amongst all participants. Participants stated that AI could be used to help with the overall efficiency of the system, particularly in reducing waiting times, although they did not elaborate on how it could specifically be used to speed up the process
    • However, a minority of participants did raise concerns about the potential for AI to make errors that may not be adequately picked up or addressed
  • Concerns were consistently raised around the limitations of AI in its ability to understand human behaviour and emotions
    • For example, not only would AI be unable to read body language, but it might also have difficulty differentiating or appreciating the complexities of how mannerisms differ from culture to culture
  • Some of the participants stated they had concerns about how AI would impact disabled people, specifically neurodivergent people
  • Participants stated they were wary of AI due to the lack of transparency around how it would be used and what datasets it would rely on

Both the immigration system and the implementation of technology within it are opaque, and for migrants it is not always clear how to navigate them. Throughout the project, participants said they wanted more information on the UK immigration system generally and on how to protect their rights, as well as expressing interest in more workshops in the future.

In our co-produced resource, we have compiled a few different ways for people seeking asylum to better understand their cases and push for greater transparency around them. This includes how to make a subject access request (SAR), how to request a copy of a substantive interview, and how to make a Freedom of Information (FOI) request.

This resource marks only the beginning of a greater investigation into the implementation of AI in the asylum and wider immigration system. We have faced difficulties obtaining information via FOI requests to the Home Office about AI and the wider digitalisation of the immigration system, owing to its claimed concerns that information about how these systems work could impact the operation of immigration controls. Despite this, we remain committed to our campaigning against the Digital Hostile Environment.

We hope that by amplifying the voices of those affected, we can challenge the unchecked use of AI in asylum decision-making, raise awareness among other migrants, and move towards more ethical, transparent, accountable, and reliable systems.

The Resource

Download the English resource here.

If you have any feedback or questions about this resource and research, please email [email protected] 
