Sexually explicit AI-generated images of Taylor Swift flooded Twitter/X this week, infuriating fans and reigniting the conversation about AI regulation.
According to The Verge, one of the images on X received more than 45 million views, 24,000 reports, and hundreds of thousands of likes and bookmarks. The post was live for about 17 hours before the user who posted it was suspended.
In response, fans flooded the platform with posts using the phrase “Protect Taylor Swift,” saturating search results for the fake images with photos of the singer at concerts and performances.
The AI-generated images may have originated from a Telegram group dedicated to creating explicit images of women using AI tools such as Microsoft Designer.
A day after the incident, X posted a statement saying that non-consensual nude images are prohibited on the platform and that it has a “zero-tolerance policy towards such content.”
Swift’s fans criticized Twitter/X for allowing the post to remain live for nearly a day.
A Daily Mail report said that Swift was furious about the images and was considering legal action against the deepfake porn site that published them.
Many users suggested the incident could mark the beginning of serious discussions about regulating AI.
The truth is that some of these AI tools have built-in safeguards that block nude, pornographic, or photorealistic images of celebrities, but others don’t. Regulation could prevent this from happening to other celebrities, or even to people you know.