OpenAI has issued a warning to its users after revealing that some had started developing feelings for its GPT-4o chatbot.
In a “system card” blog post, the AI company highlighted the risks associated with “anthropomorphization and emotional reliance.” Anthropomorphization involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models.
OpenAI stated that the risk may be heightened by GPT-4o’s more advanced audio capabilities, which make the model sound more realistic. According to the tech firm, early testing revealed that some users were using language suggesting they were forming a bond with the model, such as expressions of shared connection like “This is our last day together.”
We’re sharing the GPT-4o System Card, an end-to-end safety assessment that outlines what we’ve done to track and address safety challenges, including frontier model risks in accordance with our Preparedness Framework. https://t.co/xohhlUquEr
— OpenAI (@OpenAI) August 8, 2024
The phenomenon might have broader social implications. “Human-like socialization with an AI model may produce externalities impacting human-to-human interactions,” OpenAI continued. “For instance, users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships.”
Omni models like GPT-4o can remember key details across a conversation, but this capability may also lead to over-reliance on technological interactions.
OpenAI added that it would further study the potential for emotional reliance, and how deeper integration of its models’ and systems’ features with the audio modality might drive this behavior and lead people to form bonds with the chatbot.
That said, the company claims that the models are “deferential,” allowing users to interrupt and “take the mic” at any time.
Concerningly, OpenAI also noted that GPT-4o can sometimes “unintentionally generate an output emulating the user’s voice.” This means it could potentially be used to impersonate someone, a capability that could be exploited for nefarious purposes by anyone from criminals to malicious ex-partners.
Life imitating art as GPT-4o is compared to “Her”
Some users have taken to X to comment on the strange development, with one calling it “creepy” and similar to the plot of the 2013 movie “Her.” Another likened the impact to the science-fiction series “Black Mirror.”
This is creepy…
From the OpenAI GPT-4o system card – https://t.co/2pGz84dMUk
"During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice" pic.twitter.com/xMyfywkatC
— Axel Hunter (@axelphunter) August 10, 2024
OpenAI's latest safety report reads like the plot to the 2013 movie Her:
"users might form social relationships with AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships."https://t.co/MGmnClBBKt pic.twitter.com/iNGLg8Geok
— Neuroscience TV (@NeuroscienceTV) August 9, 2024
OpenAI just leaked the plot of Black Mirror's next season. https://t.co/kfOQ4jLkwa pic.twitter.com/ZG2SoZA7yY
— Max Woolf (@minimaxir) August 8, 2024
Notably, OpenAI previously removed one of the voices used by GPT-4o after it was likened to the actress Scarlett Johansson and the character she voiced in “Her.”
Featured image: Canva / Ideogram