AI in global recruitment and HR: overcoming legal problems and built-in bias
Is AI an efficient and unbiased tool to aid in recruitment? Or is it a legal minefield with its own built-in bias? While organisations are beginning to use AI in their talent search process, there are legitimate concerns around transparency and accountability, as Marianne Curphey heard at the CIPD’s annual conference.
![image-of-robot-and-man-at-table](/media/images/istock-1371325596_25005_page_2F63DB6CD6D170D8EA1EA1A320DD73BF.jpg)
This article is taken from the Leadership Supplement from Relocate Think Global People.

In a panel discussion at the CIPD conference, legal and government experts debated the benefits and pitfalls of AI in recruitment, exploring the global acceleration of AI's integration into HR and recruitment processes.

Organisations are currently debating and testing three concerns: how to take full advantage of powerful AI systems; where AI is best placed to provide the most value; and how to manage its use while staying mindful of the legal and ethical dilemmas it presents. This is particularly difficult to do without clear guidance around ethical standards, something governments themselves are grappling with. What's more, while you may think your current hiring process is AI-free, this may not be the case: many recruitment tools already use an element of AI in the job matching or screening process.

So, could AI be a positive force or a serious legal danger within the world of work? How can we deal with concerns around data privacy and ethics? Do AI algorithms perpetuate bias and discrimination? How could AI support efficiency and innovation at work? Here we look at some of the positive impacts of using AI strategically in the recruitment process, and the legal pitfalls.
AI in recruitment – can it deliver high-volume efficiency?
Nikki Sun, research and programme manager at the Oxford Martin AI Governance Initiative, has a decade of experience working at the intersection of journalism, public policy and emerging technologies. The Oxford Martin School is a research and policy unit based in the social sciences division of the University of Oxford, and its research is used to help decision-makers from industry, government and civil society mitigate AI's challenges and realise its benefits.

Her previous research has studied the impact of AI on employment and labour in China, and how AI is shifting the balance of power between workers and employers. In her research on the Chinese workplace, she found that AI is often preferred for mass recruitment into roles with standard job descriptions and large volumes of applicants, such as assembly line workers and delivery drivers.

"When dealing with thousands of applications, for example assembly line workers in factories and delivery drivers, AI is definitely preferable there because usually the job description is very standard and they just check whether applicants meet certain standards and then hire them," she explained.

"In terms of senior roles or in the field of highly skilled work, people tend to use human HR in dealing with the applicants. Although companies are trying to do more with AI in more senior roles as well, we haven't really seen much progress. In terms of the decision-making process in evaluation and promotion, or pay rises, companies are more comfortable to have human HR as a final reviewer and decision maker."

The legal landscape: varying regulatory approaches
The legal regulation of AI in HR and recruitment varies by region. The EU sets rigorous standards under the EU AI Act, which categorises recruitment and employee management systems as "high-risk". This classification imposes strict obligations on companies that use AI for hiring, promotion and performance monitoring: for instance, companies must ensure transparency, monitor the data fed into AI systems and notify workers when such systems are in place. Public sector employers are subject to more stringent rules still.

The UK, in contrast, has adopted a more flexible "principles-based approach", as outlined in its White Paper, regulating AI by relying on existing legal frameworks such as the Equality Act and the GDPR. These existing laws, which address discrimination, data protection and consent, already apply in employment contexts and cover areas where AI's role is expanding. Companies operating in both regions must ensure compliance with these regulations to avoid liabilities.

Furat Ashraf is a partner at Bird & Bird in the international HR services group in London, and an employment lawyer supporting clients on the full spectrum of contentious and non-contentious employment law issues across EMEA and APAC. Acting for a broad range of clients, from global financial institutions to multinational corporates, her experience includes drafting and negotiating settlement agreements and employment contracts, reviewing and updating employment policies and handbooks, and advising on the employment aspects of large cross-border corporate transactions.

She explained to the panel, chaired by Hayfa Mohdzaini, senior policy and practice adviser – technology at the CIPD, that AI legislation is still being debated and that there was "quite a spectrum" between the approaches the EU, UK and US are taking. "This covers recruitment and selection, decisions affecting work relationships, promotion, termination, and any kind of monitoring of performance and behaviour," she said.

The EU AI Act will have extraterritorial effect, meaning that a UK business selling products into the EU will be bound by its provisions. The Act also specifies the need for a degree of human oversight, strict obligations in relation to data input into certain AI systems, and an obligation to notify workers' representatives and affected workers.

"When you look at the UK position, it really ties back to the White Paper produced by the previous government, which was very focused on using a principles-based approach," she said.

"One of the things that people often overlook is that AI is just another technology, and we have a lot of existing laws, particularly in the employment context, that HR practitioners will be familiar with and which regulate how you use technology when it comes to the workforce.

"The Equality Act is a good example of a law where you do need to be mindful when it comes to bias and discrimination, because you have groups that are protected under the Equality Act from indirect discrimination."

GDPR obligations will also apply when it comes to putting candidate data through your systems or using employee data to generate performance evaluations, she said.
They also cover decisions taken on the basis of the data collected, and the requirement to give employees information about how the data they provide will be used.

"When you look outside the UK, there are obligations around works councils and employee representative bodies, and often those existing obligations may even go further than something like the EU AI Act," she said. "In the EU AI Act, there's an obligation to notify workers' representatives. There will be countries where you need to consult them and you need their agreement, for example in Germany."

It was therefore important, she said, not to look at the legal landscape around AI in a vacuum. "While we do not have new legislation in the UK, we have a lot of existing legislation which will affect it," she told the panel.

Bias and unfairness: how to recognise and mitigate damage
One of the most pressing challenges for AI in HR is inbuilt bias, which can manifest in various ways. AI algorithms have historically shown unintended discrimination because of biased training data: if the data on which a model is trained is biased, the model will produce biased results. For example, some AI recruitment tools have inadvertently marginalised women or certain ethnic groups because the datasets used to train them reflected societal biases.

"Bias is something that has long been an issue when it comes to rolling out tech," said Furat Ashraf. "Thinking about the adverse impact on certain demographics of the population – those with protected characteristics in the context of the Equality Act – is particularly important. There were a few very prominent cases some time ago, although technology has improved since then. Some of the big tech giants that were early adopters found that AI as a recruitment tool was deselecting women because the training data wasn't programmed properly."

She said it was important to take the legal perspective into account when looking at bias and discrimination, because discrimination, together with whistleblowing, is an area where damages are uncapped.

"Every HR practitioner in the room will say that if you've met a manager, you know that human bias is real," she said. "You can't remove your difficult manager."

In theory, she pointed out, if you can find the point of bias within an AI system, change the algorithm and train it on additional data, the issue should be fixed; something that is arguably much harder to do when dealing with humans.

She also discussed how disability fits into discrimination and bias, and how video technology used in recruitment has shortcomings when it comes to picking up neurodiverse conditions or cultural differences. "That's another area where I think we have to be cautious," she said. "In that scenario, what are the emotional behaviours it is reading and what dataset is the AI trained on?"
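One practical way to recognise this kind of damage is to audit a screening tool's outcomes by demographic group. The minimal Python sketch below is purely illustrative: the data, group labels and the 0.8 threshold (the "four-fifths rule" used as a rule of thumb in US employment guidance) are assumptions, not part of any tool discussed by the panel. It shows the shape of a basic adverse-impact check.

```python
from collections import Counter

# Hypothetical screening outcomes from a recruitment tool's logs:
# (demographic group, was the candidate shortlisted?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
shortlisted = Counter(group for group, ok in outcomes if ok)

# Selection rate per group.
rates = {g: shortlisted[g] / applied[g] for g in applied}

# Adverse impact ratio: each group's rate against the most favoured group.
# A ratio below 0.8 is the conventional "four-fifths rule" red flag.
best_rate = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best_rate
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

Anything flagged by a check like this is a prompt for the human review the panellists recommend, not proof of discrimination in itself.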
The legal implications of AI bias in recruitment and HR

Emily Campbell-Ratcliffe is head of AI assurance at the UK's Department for Science, Innovation and Technology (DSIT). She leads DSIT's efforts to support the growth of an ethical, trustworthy and effective AI assurance ecosystem in the UK, a key pillar of the UK's AI governance framework. She represents the UK at the OECD's working party on AI governance and is part of the OECD.AI network of experts, contributing to its expert groups on AI risk and accountability, and on compute and climate.

She sounded a warning note about managing bias in AI from a legal point of view, and about the perception that an algorithm can be made fairer than a human if you can simply remove the bias from the AI programme. "Bias mitigation is an extremely difficult legal issue," she said. "It's a very big grey area. The Equality and Human Rights Commission (EHRC) are struggling to understand where the lines are with bias mitigation algorithms, and we are working with them on this, but it is slightly controversial and a legal grey area."

Another pitfall is organisations assuming that an AI recruitment tool is less biased than a human, even though it has been trained on their own internal data. "They're just reinforcing existing biases within the organisation, and people are expecting results they just fundamentally won't get," she said. "So it's about how you can enhance your training data through synthetic data. Otherwise, organisations are trying to bring in tools to make different decisions, but they end up fundamentally reinforcing existing hiring practices." (A simplified sketch of this rebalancing idea appears at the end of this section.)

Nikki Sun explained that AI recruitment video interviews also have inbuilt flaws: sometimes the AI picks up random features that are not actually relevant to the job, such as the interview background, the lighting or the candidate's facial expressions. "It is an area we need to be very careful about," she said. "For example, if there is a bias, we don't know what bias the machine has."

Furat Ashraf noted that one such system is prohibited outright under the EU AI Act. "One specific system is the emotional recognition tools, which are often used in recruitment to tell how engaged and enthusiastic someone is," she said. "That has been marked in the EU AI Act, and also in the TUC's draft bill, as something that people are particularly concerned about and on the red list."

Other potential pitfalls that might introduce bias into a system are special rules around sick leave, maternity leave or periods when people are out of the office.

"Does your system take account of those, and is there sufficient human augmentation to make sure that you're not just going off the AI's end decision?" she asked. "We have redundancy selection tools, and it might seem as though you have mitigated all the risk, but have you looked at that tool and considered whether there is somebody in there who has made a complaint of discrimination?"

Whistleblowing is on the rise and constitutes a protected disclosure, but it is also something an algorithm will not know about, she said.
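To illustrate Emily Campbell-Ratcliffe's point about enhancing training data, the sketch below rebalances a skewed set of historical hiring records by oversampling the under-represented group before any model is trained. It is a deliberate simplification using assumed data: duplicating records is the crudest form of rebalancing, and genuine synthetic-data approaches (such as SMOTE or generative models) create new, perturbed examples rather than copies.

```python
import random

random.seed(42)

# Hypothetical historical hiring records, heavily skewed towards one group.
# A model trained on these as-is would mostly learn from "group_a" examples.
records = (
    [{"group": "group_a", "score": random.gauss(0.6, 0.1)} for _ in range(90)]
    + [{"group": "group_b", "score": random.gauss(0.6, 0.1)} for _ in range(10)]
)

# Group the records, then oversample each minority group until all groups
# match the size of the largest one.
by_group: dict[str, list[dict]] = {}
for record in records:
    by_group.setdefault(record["group"], []).append(record)

target = max(len(rows) for rows in by_group.values())
balanced = []
for group, rows in by_group.items():
    balanced.extend(rows)
    balanced.extend(random.choices(rows, k=target - len(rows)))  # duplicates

for group in sorted(by_group):
    count = sum(1 for r in balanced if r["group"] == group)
    print(f"{group}: {count} training examples")
```

Even perfectly balanced counts do not guarantee fair outcomes if the labels themselves encode past hiring decisions, which is exactly the reinforcement problem she describes.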
Steps to mitigate bias in AI and recruitment

Emily Campbell-Ratcliffe suggested that there should always be a human checking the outcomes, reading through CVs and personal statements and ensuring that the AI is not making mistakes. "It is very easy to game the system nowadays," she said. "If you're just using a machine to read CVs, it's very easy. So, from a business perspective, having a human there is very important."

Furat Ashraf suggested that all organisations should audit their recruitment processes and tools to find out what they are already using. "The starting point for clients is auditing what you already use, because people say they don't want to use an AI tool and then we find most of the recruitment tools already use AI," she said.

"When you use them, you never want to be the case example of something that wasn't fully tried and tested. I'm always going to talk to clients about what you agree contractually with your provider and how you mitigate some risk that way."
The importance of transparency
Emily Campbell-Ratcliffe acknowledged that, at the negotiation stage, it is very hard to agree who picks up the liability for an inaccurate or biased output.

"For HR, it is all very well your commercial or legal colleagues saying the provider will pick up the liability if there is a discriminatory output. As an employer, the claim is going to be brought against you and you are going to be the one in the tribunal explaining the decision. It doesn't then matter that you have a contractual right to make a claim against the provider of the tool."

She said companies that have good early engagement strategies, and that involve employee representative bodies and bring them on board, can reduce some of the suspicion around what a tool is doing. "When it comes to litigation, the US goes first, and we are already seeing in the US a whole new work stream of litigation around class actions against providers that deliver AI screening."

She cited the example of a Black man with health conditions who was rejected for jobs by 100 different companies. All of the companies were using the same AI tool, and that tool was deselecting him. He is now pursuing a case against the provider rather than the companies that rejected him.

She called for meaningful transparency in organisations, with the emphasis on the deployer of the system, not the developer. "If you are the deployer, there are certain questions you should be asking developers, and they should be explaining those in a way that you understand and in plain English. Even if you don't have the technical expertise internally, you can feel confident that a system is going to do what it is intended to do, and in the right way."

Looking forward – using AI safely
As organisations embrace AI, building trust and ensuring accountability are as critical as technological sophistication. By focusing on transparency, human oversight and working through potential risks and legal liabilities, companies can leverage AI's benefits while minimising legal or reputational damage. This balanced approach, creating a transparent, responsible AI system, will be essential for any organisation looking to use AI as a tool for sustainable and ethical growth.