The sudden boom in artificial intelligence (AI) that ChatGPT headlined has taken the world by storm. With access to the program as easy as typing “ChatGPT” into Google, it’s no surprise that plenty of people are taking advantage of it.
The technology is so popular that it became the fastest-growing consumer application ever, taking just two months to reach 100 million monthly active users. By comparison, Reuters reported that TikTok took nine months to reach this milestone, and Instagram took two and a half years after its launch.
“When it launched, the user-friendly AI large language model went from zero to 100 million users in only 60 days. Since then, Microsoft has invested US$10 billion in start-up OpenAI and embedded GPT-4 into Bing,” said Associate Professor Rob Nicholls, School of Management and Governance, UNSW Business School.
The technology is extremely useful and easy to use. With a simple prompt, users can save time on research and writing, as ChatGPT does it all for them in an instant.
“Part of the user experience of OpenAI’s ChatGPT is that AI tools can generate really useful text from a specific prompt, making for possible opportunities to save time in the day-to-day at work with machine learning — for example, when writing emails,” said Professor Nicholls.
“You can even use the job description provided in an advertisement to get ChatGPT to write the ‘perfect’ cover letter. Or the staff member from people and culture can generate the ‘perfect’ job advertisement for LinkedIn from the scratchy brief provided by the business unit.”
With easy accessibility and worthwhile benefits, it’s no wonder the program has taken off. It can create issues, however: in the workplace, users run the risk of revealing sensitive information, said Professor Nicholls.
“As a starting point, let’s look at the example of the job advert. If the role is one which is commonly advertised, then little is lost by sharing the form of the job description with a couple of hundred million other users of ChatGPT,” he explained.
“But if the job description includes information which could be used by a competitor to identify your business, then the risks are significantly higher (especially if recruitment is an important part of insider business strategy in your workplace).”
Professor Nicholls continued: “With companies like Samsung having recently been stung by staff members inadvertently giving away material via ChatGPT, it is important to consider the risks carefully before using it at work.”
Clearly, people are aware of the risks posed by AI tools like ChatGPT. Italy went as far as to ban the chatbot, becoming the first country to do so. The move was driven by the privacy concerns the program posed, following a reported data breach.
There “appears to be no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies,” said the Italian data regulator, the Garante.
With this in mind, it’s important to be conscious of the information you give to ChatGPT or similar AI chatbots. Sensitive information revealed to these tools can easily be lost in the system and could land employees and their organisations in trouble, as the recent data breaches have shown.
Jack Campbell
Jack is the editor at HR Leader.