
Is the business world blindly in love with AI?

By Kace O'Neill | 6 minute read

Dependence on artificial intelligence has become an obsession within the Australian workplace and across the world. As implementation continues to grow, has the fixation become so great that obvious flaws are being ignored?

Trailblazing through the Australian workplace, AI has been ushered into the upper echelon of business tools, with many proclaiming it the beacon of a new era.

As its prominence grows, rudimentary understandings of the technology spread, and with them comes ignorance of its flaws. Contrary to popular belief, AI is far from perfect: the technology is still prone to basic errors that can affect business outcomes and worker safety, and those errors can lead to harsh financial repercussions.


HR Leader recently spoke to Associate Professor Niusha Shafiabady about some of the advantages and disadvantages of using AI.

“AI is becoming part of our everyday lives, and despite what many people think, this technology is not perfect. It has many flaws, and if we are to integrate it into our lives, we must know about these flaws and what kinds of solutions we need to fix it,” Shafiabady said.

“When AI is applied to real-world problems, it is never 100 per cent accurate. Even though we can test it vigorously, there might be cases where it makes an unexpected decision.”

Bias was pinpointed by Shafiabady as a legitimate threat to businesses and processes that deploy this technology.

For example, algorithmic bias, to which AI is prone, can result in discriminatory hiring practices based on gender, race, and colour, which is what Derek Mobley alleges in his lawsuit.

“Usually, bias happens because we are not training the AI with proper data. We do not have enough of one sample, and we have many of another sample. This is something that could create bias,” Shafiabady said.
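The mechanism Shafiabady describes can be sketched with a toy example. The data and the naive per-group model below are hypothetical illustrations, not anything from the article: when one group is barely represented in the training sample, a model can harden that gap into a blanket rule against the under-represented group.

```python
from collections import Counter

# Hypothetical toy hiring dataset: each record is (group, hired_label).
# Group "A" dominates the sample; group "B" is badly under-represented.
train = [("A", 1)] * 60 + [("A", 0)] * 35 + [("B", 0)] * 5

def fit(records):
    """A naive model that learns the majority outcome per group,
    falling back to the overall majority label for unseen groups."""
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, []).append(label)
    overall = Counter(label for _, label in records).most_common(1)[0][0]
    per_group = {g: Counter(ls).most_common(1)[0][0] for g, ls in by_group.items()}
    return per_group, overall

def predict(per_group, fallback, group):
    return per_group.get(group, fallback)

model, fallback = fit(train)

# With only 5 "B" examples, all of them negative, the model learns to
# reject every "B" applicant regardless of merit: sampling imbalance
# has become an apparent rule.
print(predict(model, fallback, "A"))  # 1 (hired)
print(predict(model, fallback, "B"))  # 0 (rejected)
```

Real hiring systems are far more complex, but the failure mode is the same: too few samples of one group means the model's behaviour for that group is dictated by noise in a handful of records.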

Shafiabady explained that the education processes that students go through to understand the inner workings of AI are not sufficient. In fact, the lack of accurate teaching is partly to blame for the influx of algorithmic bias.

“The universities and the higher education system are not really doing a great job. We have graduates that don’t really have the proper knowledge of AI. And then they will get into the job market, and they will create some kind of algorithms. They use algorithms for predicting different things and different phenomena,” she said.

“Because they don’t have the proper knowledge, most of them wouldn’t be able to look after these kinds of issues. And that is because a chief executive in an organisation is not usually an AI expert, so they would not be able to manage that. They rely on the engineers and the middle management, and then they will fall in that trap of relying on the people who are not real experts. Then the issue of bias will go on and on.”

AI systems can also be incredibly inaccurate if they are not regularly updated, leading to financial and business disruptions that can prove to be very costly for an organisation or industry.

“Depending on what we are using the AI systems for, the inaccuracies could be very expensive for an organisation,” Shafiabady said.

“[For example] if you are trading on the stock market and you want to predict the financial markets using AI. If it is not accurate or properly updated, they could fall in the trap of mispredictions and miscalculations, and they could really lose a lot of money, and this could affect everyone.”

When integrating AI, chief executives and upper management must learn to ask questions and ensure that their engineers are on the ball and not susceptible to issues such as algorithmic bias and outdated systems.

“As a policymaker or as the chief executive of an organisation, they should be aware that AI has bias. If an engineering team provides a solution, they should ask how they have addressed the bias in that specific use of the AI system or the program,” Shafiabady said.

“They can just ask the questions and see if the people who are the engineers have really addressed those matters. Or when they have collected the data or trained the systems, have they consulted with the proper people who have enough knowledge of the system to collect proper information with respect to what they are predicting?”

Being alert to some of the inaccuracies that can arise when using AI is crucial for organisations. As this technology continues to be adopted across the Australian workplace, ensuring that you have processes in place that can prevent those issues from occurring could be the difference between optimising AI and having it harm your business practices.

Kace O'Neill

Kace O'Neill is a Graduate Journalist for HR Leader. Kace studied Media Communications and Maori studies at the University of Otago, and he has a passion for sports and storytelling.