
Misconfigured AI hiring tool gets entire HR team fired

By Kace O'Neill | 4 minute read

An AI implementation mistake of epic proportions has led to an entire HR team getting axed, highlighting the dangers of over-relying on the evolving technology.

As recently reported by The Economic Times, an entire HR team was fired after leaning too heavily on a misconfigured AI-powered CV screening tool in its recruitment practices. This glaring example of AI over-reliance came to light after the team used the tool to evaluate candidates for an IT role.

The screening tool’s algorithm – mistakenly configured – failed to identify a single qualified candidate from a large pool, filtering out every viable applicant. Leadership then discovered the failure and deemed the blunder severe enough for the organisation to terminate the entire HR department.
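The report does not say exactly how the tool was misconfigured, but a minimal, purely hypothetical sketch shows how a single configuration slip, such as a match-score threshold entered an order of magnitude too high, can silently reject every candidate in an otherwise strong pool. All names, scores, and thresholds below are illustrative assumptions, not details from the incident.

```python
# Purely hypothetical illustration: a screening threshold set above the
# model's maximum possible score silently rejects every candidate.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    match_score: float  # 0.0-1.0, produced upstream by the screening model

# Intended cut-off was presumably something like 0.65; a configuration slip
# (0.65 entered as 6.5) puts it above any achievable score.
MIN_MATCH_SCORE = 6.5  # should have been 0.65

def shortlist(candidates: list[Candidate]) -> list[Candidate]:
    """Keep only candidates whose score clears the configured threshold."""
    return [c for c in candidates if c.match_score >= MIN_MATCH_SCORE]

pool = [
    Candidate("Applicant A", 0.91),
    Candidate("Applicant B", 0.78),
    Candidate("Applicant C", 0.70),
]

print(shortlist(pool))  # []: a strong pool, yet nobody is shortlisted
```

Because the tool reports an empty shortlist rather than an error, the failure looks like "no qualified candidates" instead of "broken configuration", which is precisely why human oversight and sanity checks matter.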


Fears over AI stealing workers’ jobs are one thing, but when over-reliance on the technology results in a department-wide dismissal, it serves as a stark reminder of the inaccuracies and flaws still intertwined with these screening tools and algorithms.

HR Leader spoke to John Burgess, co-founder and chief executive of Sydney-based AI consultancy JoltAI, about how an over-reliance on AI can really come back to bite workers.

“Such examples of over-reliance on AI clearly demonstrate the need for oversight by humans at important stages of the AI system lifecycle,” said Burgess.

“While AI can process vast amounts of data and identify patterns with remarkable efficiency, it lacks the context, judgement, and ethical reasoning that humans bring to decision making. Human oversight ensures that AI outputs align with organisational values, regulatory standards, and societal expectations.”

With AI continuing to grow and its integration into the workplace accelerating, workers and employers must ensure that the correct level of oversight is in place upon implementation.

A recent Deloitte Global Boardroom Program report stated that “while AI is not new, its scaled use in the enterprise and by employees brings the question of governance and oversight of AI and GenAI into sharp focus”.

Burgess said: “Just how meaningful human oversight can be achieved within the human resources field, without stifling the desired benefits of efficiency and innovation, will depend on the context and use case.”

After examining the various standards and frameworks that are already available, Burgess explained that some common themes emerge around oversight.

  1. Establish clear accountability – specific team members should be given formal responsibility for ensuring proper oversight of systems.

  2. Understand AI capabilities and limitations – staff should be given training on how their AI systems work, their algorithms, and the data they process.

  3. Perform scenario testing – conduct regular tests to identify how the AI systems perform under various scenarios and ensure this aligns with company policy and legal requirements (a minimal version of such a check is sketched after this list).

  4. Mandate transparency and explainability – use AI systems that provide transparent and explainable outputs.

  5. Set boundaries for automation – require human oversight for sensitive decisions such as hiring, terminations, and promotions. AI should act as a support tool, not the final decision-maker.
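The scenario testing in the third theme can be made concrete with a small pre-deployment check. The sketch below is hypothetical and assumes a threshold-based screener like the one illustrated earlier: it runs the screener against a benchmark pool of CVs already known to be qualified and fails loudly if most of them are rejected, a pattern that usually points to a broken configuration rather than a weak pool. Consistent with the fifth theme, the shortlist it produces is only a recommendation for a human reviewer, not a final decision.

```python
# Hypothetical scenario test for a threshold-based CV screener: a benchmark
# pool of known-qualified candidates should mostly pass. If it does not, the
# configuration is flagged before the tool ever touches a live applicant pool.

def screen(candidates: list[dict], threshold: float) -> list[dict]:
    """Stand-in for the AI screening step: keep candidates above the threshold."""
    return [c for c in candidates if c["match_score"] >= threshold]

def scenario_test(threshold: float, benchmark_pool: list[dict],
                  min_pass_rate: float = 0.5) -> list[dict]:
    """Raise if a known-qualified benchmark pool is mostly rejected."""
    shortlisted = screen(benchmark_pool, threshold)
    pass_rate = len(shortlisted) / len(benchmark_pool)
    if pass_rate < min_pass_rate:
        raise RuntimeError(
            f"Screening config looks broken: only {pass_rate:.0%} of a "
            f"known-qualified benchmark pool passed (threshold={threshold})."
        )
    return shortlisted  # a recommendation for human review, not a decision

benchmark = [
    {"name": "Benchmark A", "match_score": 0.92},
    {"name": "Benchmark B", "match_score": 0.81},
    {"name": "Benchmark C", "match_score": 0.74},
]

try:
    scenario_test(threshold=6.5, benchmark_pool=benchmark)   # misconfigured value
except RuntimeError as err:
    print(f"Blocked before go-live: {err}")

scenario_test(threshold=0.65, benchmark_pool=benchmark)      # sensible value passes
```

A check like this would have caught the incident described above before a single real applicant was screened out, while leaving the hiring judgement itself with people.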

Kace O'Neill

Kace O'Neill is a Graduate Journalist for HR Leader. Kace studied Media Communications and Māori studies at the University of Otago and has a passion for sports and storytelling.