In the race to attract and retain talent, HR functions are experiencing a bout of shiny object syndrome. HR tech products promise people nirvana with everything from AI-powered platforms to intelligent workflow automation, advanced machine learning and neural networks, and everything in between.
But as with all new tech, is there a downside for early adopters? And what are the ethical implications of AI and tech platforms that automate how a business engages its people?
Many HR tech vendors are also promoting their Ethical AI credentials, but what does it all actually mean? And are companies and HR teams adequately equipped to evaluate the ethical and moral impacts of introducing AI-powered tools and technology into their business?
Picture this: a new, AI-powered video interview tool is deployed, promising speed, accuracy and cost savings that would pay for the tool itself. Sounds ideal. Fast forward 12 months, however, and it’s found to have discriminated against several minority groups, undermining the organisation’s diversity efforts. An employee leaks these issues to the media, and the company now has a string of negative coverage across several major news publishers to deal with as well.
This is not fiction. Something similar occurred at Amazon in 2018. The company had deployed an AI-based recruitment tool to help automate the hiring process. However, the tool was discovered to be biased against women, penalising resumes that contained words such as ‘women's’ or ‘female’. The tool was scrapped, but it highlighted the risk that AI can perpetuate bias. And the damage to Amazon’s employer brand was done.
What do we mean by AI Ethics?
AI ethics seeks to prevent or minimise the negative effects of AI on people, communities and the environment. It sets out the guidelines and principles that aim to ensure AI is used morally and that decisions made with it are responsible. These guidelines address critical business issues and consider questions such as:
- Will AI promote justice and fairness?
- Will it promote respect for others?
- Will AI do no harm?
- Will it be equitable and avoid discrimination?
So, what can people and culture (P&C) leaders start doing now to address the ethical risks of AI in HR tech?
Boundary setting: determine where your organisation draws its ‘line in the sand’ on the ethical use of AI in your people and culture practices.
Some of the basics will be covered by regulatory compliance, e.g. GDPR for data privacy and statutory anti-discrimination legislation. However, is doing the ‘minimum’ enough to satisfy an ethical lens? For example, many HR tech vendors apply an 80% rule for bias mitigation: a guideline for judging whether a hiring process is fair to everyone. It states that if a certain group, such as a minority group, is selected less than 80% as often as the group with the highest selection rate, the process may be considered unfair or discriminatory (a quick version of the check is sketched below). Your organisation needs to question whether 80% is good enough from an ethical standpoint, or whether you should push further.
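To make the 80% rule concrete, here is a minimal sketch of the check. The group names and selection counts are hypothetical, invented purely for illustration; they are not drawn from any real vendor or dataset.

```python
# Hypothetical four-fifths (80%) rule check.
# Group names and counts below are illustrative, not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants who were selected."""
    return selected / applicants

# (selected, applicants) per group
groups = {
    "group_a": (48, 100),  # selection rate 0.48 (the benchmark)
    "group_b": (30, 100),  # selection rate 0.30
}

rates = {name: selection_rate(*counts) for name, counts in groups.items()}
benchmark = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / benchmark
    # An impact ratio below 0.8 is the conventional red flag.
    verdict = "potential adverse impact" if impact_ratio < 0.8 else "passes the 80% rule"
    print(f"{name}: rate={rate:.2f}, impact ratio={impact_ratio:.2f} -> {verdict}")
```

In this invented example, group_b’s impact ratio is 0.30 / 0.48 ≈ 0.63, well under the 0.8 threshold, so the process would be flagged even though it may clear other compliance basics.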
Self-Assessment: determine whether you can fulfil these ethical standards and identify the gaps that may exist.
Firstly, does your organisation have an AI ethics audit committee to create and maintain ethical compliance-by-design practices for new or existing AI-powered tech? (If the answer is no, a good place to start is the not-for-profit organisation ForHumanity.)
Secondly, when considering ethical bias mitigation in any AI-for-HR use case, we must go back a few steps before ‘it was the data set that was biased’ and ask where the bias first crept in. For example, could bias in an AI-powered tool be traced to your organisation's culture of hiring and promoting more of the same, under the guise of ‘culture fit’? The old adage ‘garbage in, garbage out’ is at play here: the AI is only as ethical as the data you train it on, as the sketch below illustrates.
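As a toy illustration of how historical bias flows straight into a model, consider this deliberately simplified keyword scorer ‘trained’ on hypothetical past hiring decisions. The keywords and outcomes are invented for the example; real screening models are far more complex, but the mechanism is the same.

```python
from collections import Counter

# Hypothetical history of (resume keywords, hired?) decisions,
# where past 'culture fit' hiring favoured one profile.
history = [
    (["rugby", "captain", "engineering"], True),
    (["rugby", "engineering"], True),
    (["womens", "chess", "engineering"], False),
    (["womens", "engineering"], False),
    (["captain", "engineering"], True),
]

# 'Training': score each keyword by how often it appears in
# successful versus unsuccessful applications.
hired, rejected = Counter(), Counter()
for keywords, was_hired in history:
    (hired if was_hired else rejected).update(keywords)

def score(keywords):
    # The bias in the history surfaces directly: 'womens' only ever
    # co-occurred with rejections, so it now carries a penalty.
    return sum(hired[kw] - rejected[kw] for kw in keywords)

print(score(["womens", "engineering", "captain"]))  # scores 1: penalised
print(score(["rugby", "engineering"]))              # scores 3: favoured
```

No malicious design is needed; the model simply reproduces the pattern it was shown, which is essentially what was reported in the Amazon case.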
Access to skills: do you have the right people having the right conversations?
Does your organisation have the internal knowledge and skills to interrogate a vendor's AI model and determine whether it meets the organisation's ethical standards? Do you know what questions to ask during your vendor discovery and evaluation process?
Should organisations start engaging philosophy-trained ethicists to develop and participate in these conversations? If so, where do you access these skills? How do you give them a voice in your organisation?
An example of this is Microsoft’s ‘AI Ethics Champions’: internal staff members charged with raising awareness of AI ethical risks throughout the organisation.
TQSolutions predicts corporate ethicists will be an emerging role in the years to come: initially deployed by tech businesses, and offered as a service by consultants and freelancers, before moving in-house across broader industries as AI adoption increases.
Interestingly, during research for this article the authors searched LinkedIn Talent Insights, and the findings paint a sobering picture:
- We found no ethicists in people and culture, and very little crossover of functional skills
- AI-specific ethics skills predominantly exist in legal, tech and global consulting firms
- While there is some representation, Australia is lacking in these skills (we cut the data several ways to check this)
So what do we do about all this?
How organisations respond to these ethical risks will define and shape their future reputation and success. They need to rapidly develop ethical frameworks and guidelines, and involve new skills and diverse voices in these conversations. An ethics professional who questions the moral impact of proposed AI tech and tools will become essential business as usual (BAU). The frameworks and guidelines they help organisations create will set clear ethical boundaries and be used to monitor and identify gaps in ethical compliance.
By adopting a framework for responsible decision making on the use of AI, organisations can better develop their business and HR strategies. They will deploy AI with less risk and greater confidence, ultimately ensuring they are doing the right thing. And that only seems like the right thing to do.
By TQSolutions managing director Gareth Flynn and principal consultant Emma Gibbons.