Companies Look to Human Moderators to Keep AI Apps in Line

  • Source: WSJ
  • 10/24/2023
Businesses weighing the risks and benefits of generative artificial intelligence are running up against a challenge social-media platforms have long wrestled with: preventing technology from being hijacked for malicious ends.

Taking a page from those platforms, business technology leaders are turning to a mixture of software-based “guardrails” and human moderators to keep the technology’s use within prescribed bounds.

AI models like OpenAI’s GPT-4 are trained on vast amounts of internet content. Given the right prompts, a large language model can generate reams of toxic content inspired by the Web’s darkest corners. That means content moderation needs to happen at the source—when AI models are trained—and on the outputs they churn out.
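For output-side moderation, a common pattern is to screen a model’s answer with a moderation classifier before it reaches the user. The sketch below is a minimal illustration in Python using OpenAI’s hosted moderation endpoint; the model name and fallback message are assumptions for the example, and nothing here reflects any particular company’s implementation.

```python
# A minimal sketch of output-side moderation, assuming the OpenAI Python
# client (>=1.0) and its hosted moderation endpoint. Model choice and
# fallback wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def moderated_reply(prompt: str) -> str:
    # Generate a candidate answer with a chat model.
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # Screen the output before it reaches the user; the moderation
    # endpoint flags categories such as hate, self-harm, and violence.
    verdict = client.moderations.create(input=answer).results[0]
    if verdict.flagged:
        # In production, a flagged response could be routed to a
        # human moderator rather than shown to the user.
        return "This response was withheld pending review."
    return answer
```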

Intuit, the Mountain View, Calif.-based maker of TurboTax software, recently released a generative AI-based assistant that offers customers financial recommendations. Intuit Assist, which is currently available to a limited number of users, relies on large language models trained on internet data and models fine-tuned with Intuit’s own data.  

The company is now planning to hire a staff of eight full-time moderators to review what goes in and out of the large language model-powered system, in part to prevent employees from leaking sensitive company data, said Atticus Tysen, the company’s chief information security officer.
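Input-side screening of the kind Tysen describes can be sketched as a simple pre-filter that flags prompts containing sensitive data before they ever reach the model. The patterns and helper below are hypothetical illustrations, not Intuit’s actual controls.

```python
# A hypothetical sketch of input-side screening: block or redact prompts
# containing sensitive data before they are sent to a hosted model.
# The patterns and policy here are illustrative only.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US Social Security numbers
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # likely payment card numbers
]

def screen_prompt(prompt: str) -> tuple[bool, str]:
    """Return (flagged, redacted_prompt); flagged prompts could be
    escalated to a human moderator instead of being sent onward."""
    redacted = prompt
    flagged = False
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(redacted):
            flagged = True
            redacted = pattern.sub("[REDACTED]", redacted)
    return flagged, redacted
```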
