Sexually explicit AI-generated images of Taylor Swift have proliferated on social media, sparking controversy and a backlash from fans. One report suggests the images may have originated in a Telegram group whose members share explicit AI-generated content made with Microsoft's AI services and tools, the latest flashpoint in the ongoing storm over generative AI.

Members of that Telegram group allegedly found amusement in the viral spread of the images on X (formerly Twitter). Notably, while X's policies explicitly prohibit synthetic and manipulated media as well as nonconsensual nudity, enforcement has been difficult since Elon Musk laid off most of the platform's moderation staff.

The offending AI images of Taylor Swift surfaced on X within the past 24 hours, highlighting the growing challenge of combating the spread of fake imagery created by artificial intelligence. One particularly prominent post garnered over 45 million views, thousands of reposts, and hundreds of thousands of likes before the account that posted it was suspended.

Despite the removal of the initial post, discussions about the images persisted, leading to widespread reposts across various accounts. The term "Taylor Swift AI" trended in some regions, amplifying the visibility of these explicit images.

According to reports, the viral NSFW AI-generated images of Taylor Swift may have originated in a Telegram group. Swift's fans, known as Swifties, quickly responded by reporting the images and flooding the platform with photos of her concerts under the hashtags #TaylorSwiftAI and #ProtectTaylorSwift.