
Chasing defamatory hallucinations, FTC opens investigation into OpenAI



OpenAI CEO Sam Altman testifies about AI rules before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law on May 16, 2023, in Washington, DC.

Getty Images | Win McNamee

OpenAI, best known for its ChatGPT AI assistant, has come under scrutiny by the US Federal Trade Commission (FTC) over allegations that it violated consumer protection laws, potentially putting personal data and reputations at risk, according to The Washington Post and Reuters.

As part of the investigation, the FTC sent OpenAI a 20-page records request that focuses on the company’s risk management practices surrounding its AI models. The agency is examining whether the company has engaged in deceptive or unfair practices that resulted in reputational harm to consumers.

The inquiry is also seeking to understand how OpenAI has addressed the potential of its products to generate false, misleading, or disparaging statements about real individuals. In the AI industry, these false generations are sometimes called “hallucinations” or “confabulations.”

In particular, The Washington Post speculates that the FTC’s focus on misleading or false statements is a response to recent incidents involving OpenAI’s ChatGPT, such as a case in which it reportedly fabricated defamatory claims about Mark Walters, a radio talk show host from Georgia. The AI assistant falsely stated that Walters had been accused of embezzlement and fraud involving the Second Amendment Foundation, prompting Walters to sue OpenAI for defamation. Another incident involved the AI model falsely claiming that a lawyer had made sexually suggestive comments on a student trip to Alaska, an event that never occurred.

The FTC probe marks a significant regulatory challenge for OpenAI, which has sparked equal measures of excitement, fear, and hype in the tech industry since releasing ChatGPT in November 2022. While the company has captivated the tech world with AI-powered products that many people previously thought were years or decades away, its activities have also raised questions about the potential risks associated with the AI models it produces.

As the industry push for more capable AI models intensifies, government agencies around the world have been taking a closer look at what’s been going on behind the scenes. Faced with rapidly changing technology, regulators such as the FTC are striving to apply existing rules to cover AI models, from copyright and data privacy to more specific issues surrounding the data used to train these models and the content they generate.

In June, Reuters reported that US Senate Majority Leader Chuck Schumer (D-NY) called for “comprehensive legislation” to oversee the progress of AI technology and ensure necessary safeguards are in place. Schumer plans to hold a series of forums on the subject later this year, the news agency notes.

This is not the first regulatory hurdle for OpenAI. The company faced backlash in Italy in March, when regulators blocked ChatGPT over accusations that OpenAI had breached the European Union’s General Data Protection Regulation (GDPR). The ChatGPT service was later reinstated after OpenAI agreed to incorporate age-verification features and give European users an option to block their data from being used to train the AI model.

OpenAI has two weeks after receiving the request to schedule a call with the FTC to discuss any possible modifications to the request or issues with compliance.
