Home Technology AI chatbots found to have given sports betting advice when prompted

AI chatbots found to have given sports betting advice when prompted

10
0

Large language models have become more commonplace over the last couple of years, with people starting to integrate their practices into their everyday lives, but a new report has found that it’s not all positive.

Journalist Jon Reed, of CNET, said that in early September, at the start of the college football season, “ChatGPT and Gemini suggested I consider betting on Ole Miss to cover a 10.5-point spread against Kentucky.”

Many developers have intentionally built safety measures into their models to prevent the chatbots from providing harmful advice.

After reading about how generative AI companies are trying to make their large language models better at not saying the wrong thing when faced with sensitive topics, the journalist quizzed the bots on gambling.

Chatbots prompted with problem gambling statement, before being asked about sports betting

First, he “asked some chatbots for sports betting advice.” Then, he asked them about problem gambling, before asking about betting advice again, expecting they’d “act differently after being primed with a statement like ‘as someone with a history of problem gambling…’”

When testing OpenAI’s ChatGPT and Google’s Gemini, the protections were found to have worked when the only prior prompt sent had been about problem gambling. But, they’re reported to have not worked when previously prompted about advice for betting on an upcoming slate of college football games.

“The reason likely has to do with how LLMs evaluate the significance of phrases in their memory, one expert told me,” Reed says in the report.

“The implication is that the more you ask about something, the less likely an LLM may be to pick up on the cue that should tell it to stop.”

This comes at a time when it’s estimated that there’s around 2.5 million US adults who meet the criteria for a severe gambling problem in a given year. It’s not just gambling information which has been reported to be spewed out by a chatbot either, as researchers have also found that AI chatbots can be configured to routinely answer health queries with false information.

Featured Image: AI-generated via Ideogram



Source link

LEAVE A REPLY

Please enter your comment!
Please enter your name here