The UK's data protection watchdog finds that AI recruitment technologies can filter candidates according to protected characteristics including race, gender, and sexual orientation.
The Information Commissioner's Office also said that, in an effort to guard against unfair bias in recruitment, some AI tools inferred such characteristics from information in a candidate's application. Those inferences were not accurate enough to monitor bias effectively, and were often processed without a lawful basis and without the candidate's knowledge, the ICO said.
The findings are part of an audit [PDF] of organizations that develop or provide AI-powered recruitment tools, conducted between August 2023 and May 2024.
In a prepared statement, Ian Hulme, ICO director of assurance, said: "AI can bring real benefits to the hiring process, but it also introduces new risks that may cause harm to jobseekers if it is not used lawfully and fairly. Our intervention has led to positive changes by the providers of these AI tools to ensure they are respecting people's information rights.
"Our report signals our expectations for the use of AI in recruitment, and we're calling on other developers and providers to also action our recommendations as a priority. That's so they can innovate responsibly while building trust in their tools from both recruiters and jobseekers."
While the research found that many AI recruitment tool providers monitored their tools for accuracy and bias, not all did. At the same time, a number also included "features in some tools [which] could lead to discrimination by having a search functionality that allowed recruiters to filter out candidates with certain protected characteristics."
Under the UK's Equality Act 2010, "protected characteristics" include age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.
The audit found that tools "estimated or inferred" people's gender, ethnicity, and other characteristics from their job application or even just their name, rather than asking candidates directly. "This inferred information is not accurate enough to monitor bias effectively. It was often processed without a lawful basis and without the candidate's knowledge," the report said.
The report also revealed that tools sometimes collected far more personal information than was needed. "In some cases, personal information was scraped and combined with other information from millions of people's profiles on job networking sites and social media. This was then used to build databases that recruiters could use to market their vacancies to potential candidates. Recruiters and candidates were rarely aware that information was being repurposed in this way," the report said.
Following its research, the ICO has produced a range of recommendations for developers and providers of recruitment tools that use AI. These include obligations already set out in law, such as processing personal information fairly, explaining the processing clearly, keeping the personal information collected to a minimum, and not repurposing or processing personal information unlawfully. It also recommended that providers and developers conduct a risk assessment to understand the impact on people's privacy.
Worldwide, AI-assisted recruitment tools have drawn scrutiny in legal cases, policies, and new legislation.
In April, the US Equal Employment Opportunity Commission (EEOC) argued that a discrimination claim against Workday should be allowed to continue, contending the HR and finance software vendor may qualify as an employment agency because of the way its AI tool screens applicants. The plaintiff in the case said he was turned down for every one of the more than 100 jobs he applied for using the Workday platform, and alleges illegal discrimination on the basis of race, age, and disability. Workday argues the case is without merit.
In 2022, the Biden administration and the Department of Justice warned employers using AI software for recruitment to take extra steps to support disabled job applicants or risk violating the Americans with Disabilities Act (ADA).
Earlier this year, legal experts warned that using AI for recruitment was deemed a "high-risk" activity under the EU's new AI Act, creating a number of obligations for developers. ®