A series of explicit AI deepfake images of Taylor Swift doing the rounds on social media has caused outrage amongst fans and lawmakers alike, as reported by VentureBeat.
The images show the singer, Time's 2023 Person of the Year, engaging in explicit sexual activity with fans of the NFL team the Kansas City Chiefs, the team her boyfriend Travis Kelce plays for.
An army of Swift fans has rushed to her defense on social media with the hashtag #ProtectTaylorSwift, while X battled to block the content due to new accounts regularly reposting the images. Meanwhile, US lawmakers are now under renewed pressure to crack down on the rapidly evolving generative AI marketplace.
It’s not clear what AI image-generation tools were used to create these specific deepfakes of Taylor Swift. Many services, including Midjourney and OpenAI’s DALL-E 3, prohibit the creation of sexually explicit or suggestive content.
However, 404 Media, which says it tracked the images down to a group on Telegram, claims the images were created using Microsoft’s AI tools, which are powered by DALL-E 3.
X account @Zvbear has admitted to creating some of the images, according to Newsweek, and has since made the account private.
What can lawmakers do to crack down on deepfake content creation?
As the Daily Mail reports on Taylor Swift’s fury over these specific images being spread on social media, US lawmakers are under pressure to regulate the technology behind them.
Tom Kean Jr., a Republican Congressman from New Jersey, released a statement to the press this week urging Congress to take up and pass two bills he has introduced to help regulate AI.
We are living in a highly advanced technological world that is ever-changing, and proper oversight is necessary.
Let’s not wait for the next victim to realize the importance of AI regulations. https://t.co/Aw5bP1StNB
— Congressman Tom Kean (@CongressmanKean) January 25, 2024
In the statement, Kean says: “Whether the victim is Taylor Swift or any young person across our country, we need to establish safeguards to combat this alarming trend.
“My bill, the AI Labeling Act, would be a very significant step forward.”
The AI Labeling Act would require companies offering AI multimedia generators to add a “clear and conspicuous notice” to their generated works identifying them as “AI-generated content.” It’s not clear, though, how that would prevent the creation of such images in the first place.
Meta is already doing something similar for images generated using its AI image-generation tool, while OpenAI recently promised to implement AI image credentials.
Featured Image: Photo by Rosa Rafael on Unsplash