Chinese company DeepSeek launched its new AI assistant last month, polarising the tech sector and raising privacy concerns.
DeepSeek has put the tech sector into overdrive with its new AI assistant and large language model (LLM). Developed in China, DeepSeek now positions itself as a direct competitor to OpenAI's well-known generative AI, ChatGPT.
Despite the significance of its launch, however, many have expressed concerns about using the platform, including over its biases and its collection of private data.
Just recently, the NSW Department of Customer Service banned the use of DeepSeek. As reported by HR Leader’s sister brand, Cyber Daily, the department blocked access to the AI after assessing whether it complied with its policies.
“The Department of Customer Service regularly reviews technology for compliance with its policies and procedures and has taken a precautionary approach to restrict corporate access to DeepSeek AI, consistent with the approach taken for many new and emerging applications, systems and services,” a department spokesperson said.
HR Leader reached out to Armin Chitizadeh, a lecturer at the School of Computer Science, University of Sydney, and Uri Gal, professor of business information systems at the University of Sydney Business School, about DeepSeek’s application to the workplace – and whether employees should be using it.
Gal first touched on the Department of Customer Service’s decision to ban the AI assistant.
“Government agencies typically manage highly sensitive information, and there are worries that DeepSeek’s extensive collection of data – such as device details, usage metrics, and personal identifiers – could expose confidential information to vulnerabilities if accessed or stored outside Australian borders,” Gal said.
“Although the open-source nature of the model offers transparency regarding its code, it does not guarantee that user data is handled solely within Australia or according to local privacy standards. This risk of cross-border data access is a key factor behind the ban.”
Once DeepSeek was launched, several Australian ministers were quick to warn people against sharing personal information with the new AI model. While these fears have focused primarily on DeepSeek, Chitizadeh argued that the same cautious approach should apply to all AI models – not just DeepSeek.
“We should always act the same way that we are acting towards DeepSeek, towards all AI tools. The first thing that people are saying [is] that we cannot trust the news that we get from DeepSeek, or any information we get from DeepSeek, because [it is] censored,” Chitizadeh said.
“The fact is, I don’t think we should ever blindly trust any news or data we get from these tools. They can either intentionally censor or change some of their answers, or unintentionally, like a hallucination.
“[These models] are so good at explaining things that are wrong and telling us in a way that we think is true. So, we should never ever blindly trust any AI-generated content.”
Gal believes the risks associated with DeepSeek are too unpredictable for organisations to adopt it as a legitimate tool.
“Beyond government applications, generative AIs like DeepSeek pose additional risks to the public. These include the potential spread of misinformation, unintentional biases in outputs, and the risk of privacy breaches if personal data is inadvertently exposed or misused,” Gal said.
“Moreover, the scale and automation of such systems can lead to accountability challenges, which could complicate efforts to trace and rectify erroneous or harmful content.”
In terms of employees using DeepSeek in the workplace, Chitizadeh was reluctant to advise workers to use these tools, proposing instead that organisations invest in their own AI systems, which can offer better protection.
“When you use these tools, you are giving away your information to a company, and that company is not charging you anything. So people are not really self-aware of what data they’re using, nor are they self-conscious about what data they’re providing,” Chitizadeh said.
“The best way to move forward from this current state is for companies to provide more reliable AI tools. I’m seeing this as an opportunity in the future that there might be some companies who can prove that they are reliable, and they can be trusted, and they have their servers and resources and that the companies can easily use them. I think that would be a really, really reasonable path to go.”
“There should be more control, and there should be more investment to make it safer for their own employees. My advice is [that] it’s too risky to be trusted.”
Kace O'Neill
Kace O'Neill is a Graduate Journalist for HR Leader. Kace studied Media Communications and Maori studies at the University of Otago and has a passion for sports and storytelling.