ChatGPT goes rogue with incorrect responses and gibberish interactions


ChatGPT appeared to go rogue on Tuesday evening, with users reporting that the AI chatbot was responding with incorrect answers and talking gibberish, according to 404 Media.

Members of a ChatGPT subreddit shared screenshots of extraordinary exchanges with the technology, with responses that either made no sense or answered questions in ways that were way off the mark.

One screenshot showed this response to a question: “This is the work of service and any medical today to the data field.” It continued: “The 12th to the degree and the pool to the land to the top of the seam, with trade and feature, can spend the large and the all before it’s under the care.”

Meanwhile, another user explained that they asked ChatGPT for “a synonym for overgrown” and got the response: “A synonym for “overgrown” is “overgrown” is “overgrown” is “overgrown”…”

Other users claimed it gave totally incorrect answers to basic questions, such as responding with “Tokyo, Japan” when asked to name the biggest city on Earth that begins with an ‘A.’

OpenAI, the creator of ChatGPT, confirmed in a message on its status page that it has fixed the issue. Still, it’s another reminder that while we’re in the middle of an AI boom, the technology is not yet immune to going rogue or, quite simply, going wrong.

AI models like ChatGPT have a long way to go

This is just another example of AI technology proving it’s not yet capable of earning complete trust from its users, despite fears that artificial intelligence has the potential to replace humans in a variety of day-to-day tasks, both at home and in the workplace.

There have already been several instances of lawyers getting into trouble for citing fictitious cases generated by AI. Just last month, Reuters reported that a New York lawyer was facing disciplinary action after they used ChatGPT for research in a medical malpractice lawsuit and failed to confirm that the case cited was valid.

Featured Image: Photo by Jonathan Kemper on Unsplash

James Jones

Freelance Journalist

James Jones is a highly experienced journalist, podcaster and digital publishing specialist, who has been creating content in a variety of forms for online publications in the sports and tech industry for over 10 years.

He has worked at some of the leading online publishers in the country, most recently as the Content Lead for Snack Media’s expansive portfolio of websites, including FootballFanCast.com, FootballLeagueWorld.co.uk and GiveMeSport.com. James has also appeared on several national and global media outlets, including BBC News, talkSPORT, LBC Radio, 5 Live Radio, TNT Sports, GB News and BBC’s Match of the Day 2.

James has a degree in Journalism and previously held the position of Editor-in-Chief at FootballFanCast.com. Now, he co-hosts the popular We Are West Ham Podcast, writes a weekly column for BBC Sport and covers the latest news in the industry for ReadWrite.com.
