
“Hallucinating” AI models help coin Cambridge Dictionary’s word of the year



A screenshot of the Cambridge Dictionary website where it announced its 2023 word of the year, “hallucinate.”

On Wednesday, Cambridge Dictionary announced that its 2023 word of the year is “hallucinate,” owing to the popularity of large language models (LLMs) like ChatGPT, which sometimes produce erroneous information. The Dictionary also published an illustrated site explaining the term, saying, “When an artificial intelligence hallucinates, it produces false information.”

“The Cambridge Dictionary team chose hallucinate as its Word of the Year 2023 as it recognized that the new meaning gets to the heart of why people are talking about AI,” the dictionary writes. “Generative AI is a powerful tool but one we’re all still learning how to interact with safely and effectively—this means being aware of both its potential strengths and its current weaknesses.”

As we’ve previously covered in various articles, “hallucination” in relation to AI originated as a term of art in the machine-learning space. As LLMs entered mainstream use through applications like ChatGPT late last year, the term spilled over into general use and began to cause confusion among some, who saw it as unnecessary anthropomorphism. Cambridge Dictionary’s first definition of hallucinate (for humans) is “to seem to see, hear, feel, or smell something that does not exist.” It involves perception from a conscious mind, and some object to that association.

Like all words, its meaning depends heavily on context. When machine-learning researchers use the term hallucinate (which they still do, frequently, judging by research papers), they typically understand an LLM’s limitations—for example, that the AI model is not alive or “conscious” by human standards—but the general public may not. So in a feature exploring hallucinations in depth earlier this year, we suggested an alternative term, “confabulation,” that perhaps more accurately describes the creative gap-filling principle of AI models at work without the perception baggage. (And guess what—that’s in the Cambridge Dictionary, too.)

“The widespread use of the term ‘hallucinate’ to refer to mistakes by systems like ChatGPT provides a fascinating snapshot of how we’re thinking about and anthropomorphising AI,” said Henry Shevlin, an AI ethicist at the University of Cambridge, in a statement. “As this decade progresses, I expect our psychological vocabulary will be further extended to encompass the strange abilities of the new intelligences we’re creating.”

Hallucinations have resulted in legal trouble for both individuals and companies over the past year. In May, a lawyer who cited fake cases confabulated by ChatGPT got in trouble with a judge and was later fined. In April, Brian Hood sued OpenAI for defamation when ChatGPT falsely claimed that Hood had been convicted of a foreign bribery scandal. The case was later settled out of court.

In truth, LLMs “hallucinate” all the time. They draw on associations between concepts learned during training (and later fine-tuning), and the resulting inferences are not always accurate. Where there are gaps in their knowledge, they generate the most probable-sounding answer. Often that answer is correct, given high-quality training data and proper fine-tuning, but other times it’s not.
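As a loose illustration of that gap-filling behavior, here is a toy Python sketch—not a description of any real model’s internals; the association table and fallback list are invented. Where the “model” has strong associations, the most probable answer is usually right; where it has none, it still produces something that merely sounds plausible:

import random

# Hypothetical association strengths "learned" from training data.
# There is no entry at all for the second question asked below.
associations = {
    ("word of the year", "2023"): {
        "hallucinate": 0.46,   # well supported by the (toy) training data
        "prompt": 0.31,
        "deepfake": 0.23,
    },
    ("word of the year", "2019"): {},  # a gap in the model's knowledge
}

def answer(topic: str, year: str) -> str:
    """Return the most probable-sounding continuation, inventing one if needed."""
    candidates = associations.get((topic, year), {})
    if candidates:
        # Greedy decoding: pick the highest-probability continuation.
        return max(candidates, key=candidates.get)
    # No knowledge: fall back to something that merely sounds plausible.
    return random.choice(["blockchain", "selfie", "vape"])

print(answer("word of the year", "2023"))  # likely correct: "hallucinate"
print(answer("word of the year", "2019"))  # confident-sounding, but made up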

So far, it seems that OpenAI has been the only tech company to significantly clamp down on hallucinations with GPT-4, which is one of the reasons that model is still seen as being in the lead. How OpenAI has achieved this is part of its secret sauce, but chief scientist Ilya Sutskever has previously said that he thinks RLHF may provide a way to further reduce hallucinations in the future. (RLHF, or reinforcement learning from human feedback, is a process in which humans rate a language model’s answers, and those ratings are used to fine-tune the model further.)
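For readers curious what that feedback loop looks like in outline, here is a heavily simplified Python sketch. The prompts, function names, and scoring rule are invented for illustration; in real RLHF, a separate reward model is trained on human preference rankings, and the language model is then updated with a reinforcement-learning algorithm such as PPO.

# Step 1: collect human preference data (prompt, preferred answer, rejected answer).
preference_data = [
    ("Who won the case?",
     "I'm not certain; the ruling isn't in my training data.",
     "Smith won, 9-0."),
    ("Cite a source for that.",
     "I can't verify a citation for that claim.",
     "See Jones v. Doe (2019)."),
]

# Step 2: a stand-in "reward model" that scores answers. In practice this is a
# neural network trained so that human-preferred answers outscore rejected ones.
def reward(prompt: str, answer: str) -> float:
    hedged = any(phrase in answer.lower() for phrase in ("not certain", "can't verify"))
    return 1.0 if hedged else -1.0  # crude proxy for "rated higher by humans"

# Sanity check: the toy reward agrees with the human rankings above.
for prompt, preferred, rejected in preference_data:
    assert reward(prompt, preferred) > reward(prompt, rejected)

# Step 3: the selection pressure the reward creates. Real systems fold this
# signal back into the model's weights rather than filtering outputs afterward.
def pick_best(prompt: str, candidates: list[str]) -> str:
    return max(candidates, key=lambda a: reward(prompt, a))

print(pick_best("Who won the case?",
                ["Smith won, 9-0.",
                 "I'm not certain; the ruling isn't in my training data."]))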

Wendalyn Nichols, Cambridge Dictionary’s publishing manager, said in a statement, “The fact that AIs can ‘hallucinate’ reminds us that humans still need to bring their critical thinking skills to the use of these tools. AIs are fantastic at churning through huge amounts of data to extract specific information and consolidate it. But the more original you ask them to be, the likelier they are to go astray.”

It has been a banner year for AI words, according to the dictionary. Cambridge says it has added other AI-related terms to its dictionary in 2023, including “large language model,” “AGI,” “generative AI,” and “GPT.”


