Objectivity is the goal in recruitment. It’s natural to assume that AI would be best placed to overcome the biases inherent in human-led recruitment, but guess again.
When an algorithm was tasked with assigning judicial sentences in the US, in the hope that it could remove bias, many were disappointed by the results: it apportioned harsher sentences to African-American defendants.
To train the algorithm, developers fed it existing judicial decisions, which, it appears, were tainted by the very biases they were hoping to breed out of the judicial system.
Mirroring human biases
“AI algorithms are mirrors,” wrote Dr Dana McKay, senior lecturer in innovative interactive technologies at RMIT. Artificial intelligence (AI) reflects the data it is fed and, because that data reflects human behaviour and social biases, AI algorithms naturally present biased results.
More concerning still, the decision-making process in AI is often opaque, meaning it can be difficult to ascertain whether, and how, bias has contributed to an outcome.
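To make the mirror metaphor concrete, here is a minimal sketch, using entirely synthetic data and illustrative variable names (none of which come from the systems described in this article), of how a screening model trained on historically biased decisions reproduces that bias:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a genuine skill score plus a protected
# attribute (0 or 1) that should be irrelevant to the role.
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# Historical hiring decisions: skill matters, but past recruiters
# also penalised group 1 -- exactly the bias we hope to avoid.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The trained model mirrors the penalty: the coefficient on the
# protected attribute comes out strongly negative.
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
```

Nothing in the code asks the model to discriminate; the penalty on the protected attribute is simply learned from the history it was shown.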
The story of the sentencing algorithm is particularly evocative, but by no means an exception. Take, for example, the finding that Google’s algorithm advertised high-paid jobs to men more frequently than to women because of the statistical likelihood of men currently holding such roles.
Meanwhile, research from the University of Melbourne found that AI also discriminates against parents when it comes to hiring: on account of assumed leave requirements, ChatGPT rated parents as less qualified than non-parents for “every single occupation”.
Examples such as these show that algorithms can not only be tainted by bias but also perpetuate it. Bias in AI “can have significant negative consequences, especially when it comes to recruitment”, said Dr McKay.
“With a lot of employers now starting to use some form of AI in recruiting and hiring, there’s a question around whether candidates are being rejected solely because they did not fit the bias.”
Regulating the bias
In the United States, Dr McKay explained, the law is beginning to catch up with the problem of biased AI-powered recruiting. A recent landmark case in California suggests that companies can now be held legally responsible for using biased recruiting software regardless of who made the software or how frequently it is used.
According to Dr McKay, there’s a case to be made for the same or similar to apply in Australia: “Similar laws could apply here in Australia, with, for example, the Victorian ‘positive duty’ law that requires employers to eliminate discrimination.”
Up to a third of Australian companies rely on AI to assist their recruiting practices, yet Australia currently has no specific laws governing the use of AI in recruitment. That said, The Guardian explained, the Department of Industry, Science and Resources has published an ethical AI framework to guide businesses and governments in the responsible use of AI technologies, including in recruitment.
“Employers don’t have any bad intent; they want to do the right things, but they have no idea what they should be doing. There are no internal oversight mechanisms set up, no independent auditing systems to ensure there is no bias,” said lawyer Natalie Sheard.
While public awareness of the discriminatory capacity of AI-powered recruitment is growing, the Diversity Council of Australia (DCA) found that many employers nonetheless believe in the usefulness of AI; its research divides employers into the “cautious” and the “converted”.
Importantly, said Dr McKay, AI itself should not be conflated with the discriminatory outcomes to date. Those outcomes say less about the technology and more about the underlying human biases being fed into the algorithms: “Ultimately, we always need to remember that AI algorithms are only as good as the data they are based on.”
Much of the bias making its way into the algorithms, said the DCA, stems from the current lack of diversity in the tech and AI workforce.
“Without the decades of progress it would take to properly diversify the tech industry, which would likely be hindered by biased AI tools anyway, DCA’s research suggests a more immediate solution is to examine how we deploy this technology,” said SmartCompany.
Though it will likely be a difficult, ongoing process, the benefits of getting it right should outweigh any potential costs. For example, the use of AI in recruitment can save recruiters up to 70 per cent of their annual costs, reduce time to hire by 90 per cent, and screen résumés 70 per cent faster than humans.
Mitigating bias in AI-powered recruitment will take the following, said Professor Didar Zowghi, principal research scientist at CSIRO’s Data61:
1. Understanding the datasets.
2. Boosting transparency in AI decision-making processes.
3. Increasing human oversight.
4. Enforcing ongoing audits (one such audit check is sketched below).
“Keeping humans as the central pillar in the AI ecosystem means AI-enabled technologies will serve as a partner in progress, amplifying human potential rather than replacing it,” said Professor Zowghi.
“With the right approach, AI has the potential to not only find the best talent but also to help ensure the decision-making process is equitable and fair for all job seekers.”
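As one illustration of what an ongoing audit might look for, here is a hedged sketch of the “four-fifths” rule used in US disparate-impact analysis. The function names and sample figures are illustrative assumptions, not drawn from any tool or employer named in this article:

```python
# Selection rates per group, from (group, was_shortlisted) pairs.
def selection_rates(decisions):
    totals, shortlisted = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        shortlisted[group] = shortlisted.get(group, 0) + int(ok)
    return {g: shortlisted[g] / totals[g] for g in totals}

# Four-fifths rule: flag any group shortlisted at under 80 per cent
# of the most-selected group's rate.
def four_fifths_check(decisions):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Illustrative audit data: 100 applicants per group.
decisions = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 35 + [("B", False)] * 65
)
print(selection_rates(decisions))    # {'A': 0.6, 'B': 0.35}
print(four_fifths_check(decisions))  # {'A': True, 'B': False}
```

A check like this does not prove or disprove discrimination on its own, but it is the kind of simple, repeatable signal that human overseers and independent auditors can act on.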
Nick Wilson is a journalist with HR Leader. With a background in environmental law and communications consultancy, Nick has a passion for language and fact-driven storytelling.