Artificial intelligence (AI) has become widespread and readily available so quickly that proper precautions can be overlooked. Workplaces should understand that safety measures are still needed when operating this technology.
XpertHR’s report, Safeguards for using ChatGPT and other bots for HR, outlined some of the best ways to ensure safety is practised when using ChatGPT and other AI-powered bots in HR.
According to the report, there are four key areas that need to be recognised and safeguarded when using AI:
1. Bias
AI chatbots like ChatGPT draw on information from all across the internet. That means even the less favourable corners can shape a response.
This is why caution needs to be exercised to ensure that no bias is perpetuated through the use of AI in the workplace. If not handled correctly, employers can land in hot water through issues with sexism, racism, and other forms of discrimination.
Some safeguards for dealing with AI bias, as listed by XpertHR, are:
- Ensure that all employees understand what bias is and how to identify it.
- Independently audit chatbot responses for bias.
- Implement anti-bias standards for tasks that are known pain points for bias, like job descriptions, performance reviews and pay decisions.
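The independent audit step above can be sketched in code. The following is a minimal, hypothetical Python example — the word lists and the `audit_job_description` helper are illustrative, not taken from the XpertHR report — that scans a chatbot-drafted job description for gender-coded terms; a real audit would use a vetted, research-backed lexicon:

```python
import re

# Illustrative word lists only; a production audit would rely on a
# vetted gender-decoder lexicon rather than this small sample.
MASCULINE_CODED = {"aggressive", "dominant", "rockstar", "ninja", "fearless"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "gentle"}

def audit_job_description(text):
    """Return any gender-coded terms found in a chatbot-drafted job ad."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

draft = "We need an aggressive, fearless rockstar to join our supportive team."
print(audit_job_description(draft))
```

Flagged drafts would then go to a human reviewer rather than being published as-is, in line with the safeguards above.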
2. Inaccuracy
Not everything you read on the internet is true. And similar to the last point, this can affect the information presented by AI systems.
Therefore, it’s crucial that employers don’t take the output of whatever AI system they’re using at face value and understand that there is room for inaccuracy.
Safeguards for dealing with inaccuracies, as listed by XpertHR, are:
- Thoroughly research the chatbot’s capabilities and best uses.
- Set clear parameters for what types of tasks chatbots can be used for.
- Ensure that chatbots are maintained through employee monitoring.
- Require that chatbot outputs be independently verified.
- Prohibit the use of chatbots for advanced research and compliance questions.
3. Cyber security and privacy
Data breaches have been a concern surrounding ChatGPT since the beginning. Companies like Apple and Amazon have already put measures in place to ensure that employees don’t accidentally leak info through the use of chatbots.
XpertHR noted that ChatGPT’s privacy policy states that the platform collects personal information.
That’s why caution must be taken when using this tech. To assist, XpertHR listed some safeguards:
- Consult with the company IT team to ensure leading practices are followed.
- Thoroughly research chatbots before choosing one to ensure it is reputable and uses high-quality data.
- Do not provide chatbots with personally identifiable information or personal health information.
- Implement encryption, authentication and other security systems to prevent the chatbot from being misused.
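The third safeguard — keeping personally identifiable information out of chatbot prompts — can be supported with an automated pre-submission check. The sketch below is a hypothetical Python example (the patterns and `redact_pii` helper are illustrative assumptions, not from the report); real deployments would use a dedicated PII-detection tool with far broader coverage:

```python
import re

# Hypothetical patterns for a pre-submission check; a real system
# would detect names, addresses, ID numbers and more.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),
}

def redact_pii(prompt):
    """Mask common PII patterns before a prompt is sent to a chatbot."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_pii("Contact jane.doe@example.com or call 02 1234 5678."))
```

Running the redaction at the point where prompts leave the company network means employees cannot accidentally leak contact details even if they paste them in.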
4. User error
It’s reported that “human error” accounts for 82 per cent of data breaches. This is why it’s crucial that staff are up to date with training when using systems such as AI.
Upskilling is necessary when any form of AI is used, as it is a relatively new and unfamiliar way of working. XpertHR said employers should:
- Train employees in how chatbots work, AI ethics and relevant policies.
- Establish a gradual adoption plan that gives employees time to become familiar with the new tools.
Jack Campbell
Jack is the editor at HR Leader.