
How to tame the AI menace

By Dr Gleb Tsipursky | 7 minute read

Concerns about AI technology such as ChatGPT have HR circles talking. Here’s what you can do to mitigate the challenges it poses.

Recent Monmouth University polling showed 55 per cent of Americans expressed concern about AI’s impact on humanity’s future. In an age of rapid technological progress, maintaining control over artificial intelligence (AI) development is crucial. As AI-driven chatbots like ChatGPT increasingly pervade everyday life, addressing their potential legal and ethical ramifications is essential, and it is an issue every HR professional should engage with.

Some have taken action, with Elon Musk (OpenAI co-founder), Steve Wozniak (Apple co-founder), and over 1,000 AI experts and funders signing a letter requesting a six-month halt on training new models. Additionally, Time published an article by AI alignment pioneer Eliezer Yudkowsky advocating a stricter measure: a permanent global ban and international sanctions against any nation pursuing AI research.

However, these suggestions would require coordinating myriad stakeholders across diverse organisations and government entities, as well as overcoming a host of mental blind spots. I present a more feasible proposal, aligned with our existing methods for controlling potentially threatening developments: legal liability.

Legal liability can effectively slow AI development and keep it aligned with our values and ethics. It pushes AI companies to prioritise safety and to innovate in ways that minimise threats to society. This supports the ethical and effective development and use of AI tools, as I explore extensively in my new book, ChatGPT for Thought Leaders and Content Creators: Unlocking the Potential of Generative AI for Innovative and Effective Content Creation.

Legal liability: A crucial mechanism for regulating AI development

Section 230 of the Communications Decency Act has long protected online platforms from liability for user-generated content. As AI technology advances, the distinction between content creators and content hosts becomes unclear, raising questions about whether AI-driven platforms like ChatGPT should be held responsible for the content they generate.

Introducing legal liability for AI developers pushes companies to prioritise ethical concerns, ensuring AI products adhere to societal norms and legal regulations. Companies would need to internalise negative externalities: the adverse side effects of products or business activities that affect other parties, much as a nightclub’s loud music disturbs its neighbours. Legal liability for negative externalities effectively decelerates AI development, providing time for reflection and for establishing robust governance frameworks.

Holding developers and companies responsible for the consequences of their creations is crucial to reining in unchecked AI development. Legal liability promotes transparency and accountability, urging developers to focus on refining AI algorithms, reducing the risk of harmful output, and ensuring compliance with regulatory standards.

For instance, an AI chatbot spreading hate speech or misinformation could cause significant societal harm, and a more advanced AI tasked with boosting a company’s stock price might, absent ethical constraints, sabotage competitors. Imposing legal liability on developers and companies creates a powerful incentive to invest in refining the technology to avoid such outcomes.
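To make that incentive concrete, consider the kind of safeguard a liable operator might invest in: screening every draft chatbot response before it reaches a user. The sketch below is a minimal illustration only, assuming the OpenAI Python SDK and its moderation endpoint; the model name and fallback message are assumptions, not recommendations drawn from this article.

```python
# Illustrative sketch: a pre-publication safety check a chatbot operator
# facing legal liability might add. Assumes the OpenAI Python SDK; the
# moderation model name and fallback wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def safe_reply(draft_reply: str) -> str:
    """Return the draft reply only if it passes a moderation check."""
    moderation = client.moderations.create(
        model="omni-moderation-latest",
        input=draft_reply,
    )
    if moderation.results[0].flagged:
        # Withhold flagged content rather than publish it and bear the liability.
        return "I'm not able to share that response."
    return draft_reply
```

A check like this does not eliminate risk, but it is exactly the sort of investment that liability exposure makes worthwhile.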

Legal liability is also more achievable than a six-month or permanent pause, and it aligns with America’s approach to business regulation: permitting innovation while penalising the harmful consequences of business activity.

The advantages of decelerating AI development

Securing ethical AI: By slowing AI development, ethical principles can be thoughtfully integrated into AI system design and implementation, reducing the risk of bias, discrimination, and other ethical issues with severe societal consequences.

Preventing technological unemployment: Rapid AI development may disrupt labour markets, causing widespread unemployment. Decelerating AI progress provides time for labour markets to adjust and decreases the risk of technological unemployment.

Bolstering regulations: AI regulation is a complex undertaking requiring a thorough understanding of the technology and its implications. Slowing AI development enables the formation of robust regulatory frameworks addressing AI-related challenges effectively.

Building public trust: Introducing legal liability in AI development can foster public trust in these technologies. By demonstrating a commitment to transparency, accountability, and ethical considerations, companies can cultivate positive public relationships, laying the foundation for a responsible and sustainable AI-driven future.

Practical steps to implement legal liability in AI development

Clarify Section 230: Section 230 does not appear to cover AI-generated content. The law defines an “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

What counts as “development” of content “in part” remains somewhat ambiguous, but courts have found that platforms cannot rely on Section 230 protection if they provide “pre-populated answers” and are “much more than a passive transmitter of information provided by others.” It is therefore highly probable that courts would determine AI-generated content is not covered by Section 230.

Those who advocate slowing AI development should initiate legal cases that allow courts to clarify this issue. By establishing that AI-generated content is not exempt from liability, we create a powerful incentive for developers to exercise caution and ensure their creations meet ethical and legal standards.

Create AI governance bodies: Meanwhile, governments and private entities should collaborate to form AI governance bodies that develop guidelines, regulations, and best practices for AI developers. These bodies can help monitor AI development and ensure compliance with established standards, managing legal liability and promoting innovation within ethical boundaries.

Promote collaboration: Encouraging collaboration among AI developers, regulators, and ethicists is crucial for the creation of comprehensive regulatory frameworks. By working together, stakeholders can develop guidelines that balance innovation and responsible AI development.

Educate the public: Public awareness and understanding of AI technology are critical for effective regulation. By educating the public about AI’s benefits and risks, we can encourage informed debates and discussions that drive the development of balanced and effective regulatory frameworks.

Develop liability insurance for AI developers: Insurance companies should offer liability insurance for AI developers, incentivising them to adopt best practices and adhere to established guidelines. This approach will help reduce the financial risks associated with potential legal liabilities and promote responsible AI development.

Conclusion

The growing prominence of AI technologies like ChatGPT underscores the pressing need to address the ethical and legal ramifications of AI development.

By employing legal liability as a tool to slow down AI development, we can create an environment that fosters responsible innovation, prioritises ethical considerations, and minimises the risks associated with these emerging technologies.

It is vital that developers, companies, regulators, and the public join forces to navigate a responsible path for AI development that safeguards humanity’s best interests and fosters a sustainable, equitable future.

Dr Gleb Tsipursky helps leaders use hybrid work to improve retention and productivity while cutting costs. He serves as the chief executive of the future-of-work consultancy Disaster Avoidance Experts.