
200 AI researchers urge OpenAI, Google, Meta to allow safety checks



More than 200 of the world’s leading researchers in artificial intelligence (AI) have signed an open letter calling on major AI players like OpenAI, Meta, and Google to allow outside experts to independently evaluate and test the safety of their AI models and systems.

The letter argues that strict rules put in place by tech firms to prevent abuse or misuse of their AI tools are having the unintended consequence of stifling critical independent research aimed at auditing these systems for potential risks and vulnerabilities.

Prominent signatories include Stanford University’s Percy Liang, Pulitzer-winning journalist Julia Angwin, Renée DiResta from the Stanford Internet Observatory, AI ethics researcher Deb Raji, and former government advisor Suresh Venkatasubramanian.

What are the AI researchers concerned about?

The researchers say AI company policies that ban certain types of testing and that prohibit copyright violations, the generation of misleading content, and other abuses are being applied in an overly broad manner. This has created a “chilling effect” in which auditors fear having their accounts banned, or facing legal repercussions, if they push the boundaries to stress-test AI models without explicit approval.

“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter states.

The letter lands amid growing tensions: OpenAI has claimed that The New York Times’ efforts to probe ChatGPT for copyright issues amounted to “hacking,” while Meta has updated its terms to threaten revoking access if its latest language model is used to infringe intellectual property.

Researchers argue companies should provide a “safe harbor” allowing responsible auditing, as well as direct channels to responsibly report potential vulnerabilities found during testing, rather than having to resort to “gotcha” moments on social media.

“We have a broken oversight ecosystem,” said Borhane Blili-Hamelin of the AI Risk and Vulnerability Alliance. “Sure, people find problems. But the only channel to have an impact is these ‘gotcha’ moments where you have caught the company with its pants down.”

The letter and accompanying policy proposal aim to foster a more collaborative environment for external researchers to evaluate the safety and potential risks of AI systems impacting millions of consumers.

Featured image: Ideogram

Sam Shedden

Managing Editor

Sam Shedden is an experienced journalist and editor with over a decade of experience in online news.

A seasoned technology writer and content strategist, he has contributed to many UK regional and national publications including The Scotsman, inews.co.uk, nationalworld.com, Edinburgh Evening News, The Daily Record and more.

Sam has written and edited content for audiences whose interests include media, technology, AI, start-ups and innovation. He has also produced and set up email newsletters on numerous specialist topics in previous roles, and his work on newsletters saw him nominated as Newsletter Hero of the Year at the UK’s Publisher Newsletter Awards 2023.

He has worked in roles focused on growing reader revenue and loyalty at National World plc, one of the UK’s leading news publishers, which builds quality, profitable news sites. He has also given industry talks and presentations to international audiences, sharing his experience of growing digital readerships.

Now a Managing Editor at ReadWrite.com, Sam is involved in all aspects of the site’s news operation, including commissioning, fact-checking, editing and content planning.


