Nonconsensual deepfake porn of Taylor Swift went viral on X this week, with one post garnering more than 45 million views, 24,000 reposts and hundreds of thousands of likes before it was removed.
The pop star has one of the world's most dedicated, extremely online, and incomprehensibly massive fanbases. Now, the Swifties are out for blood.
When mega-fandoms get organized, they're capable of immense things, like when K-pop fans reserved hundreds of tickets to a Donald Trump rally in an attempt to tank attendance numbers. As the 2024 U.S. presidential election approaches, some pundits have even theorized about the power of Swifties as a voting bloc.
But today isn't election day, and Swifties are focused on something more immediate: making the musician's nonconsensual deepfakes as difficult to find as possible. If you search terms like "taylor swift ai" or "taylor swift deepfake" on X, you'll now find thousands of posts from fans attempting to bury the AI-generated content. The phrase "PROTECT TAYLOR SWIFT" has been trending on X, with over 36,000 posts.
Sometimes, these fandom-driven campaigns can cross a line. While some fans are encouraging one another to dox the X users who circulated the deepfakes, others worry about fighting harassment with more harassment, especially when the suspected perpetrator has a relatively common name and the Swifties could be going after the wrong guy. With so many thousands of fans participating in the cause, it's inevitable that not every Swiftie will be part of the same unified front, and some are more in touch with the "Reputation" era than others.
With the rise of accessible generative AI tools, this harassment tactic has become so widespread that last year, the FBI and international law enforcement agencies issued a joint statement about the threat of sextortion. According to research from cybersecurity firm Deeptrace, about 96% of deepfakes are pornographic, and they almost always feature women.
"Deepfake pornography is a phenomenon that exclusively targets and harms women," the report reads. This abuse has even seeped into schools, where underage girls have been targeted by their classmates with explicit, nonconsensual deepfakes. So, for some Taylor Swift fans, this isn't only a matter of protecting the star. They realize that these attacks can happen to anyone, not just celebrities, and that they must fight to set the precedent that this behavior will not be tolerated.
"She is taking the hit for us right now, y'all," said a TikTok user named LeAnn in a video urging users to defend Swift. "In protecting her, you're going to be protecting yourself, and your daughters."
According to 404 Media, the images originated in a Telegram chat dedicated to creating nonconsensual, explicit images of women using generative AI. The group directs its users to generate AI deepfakes with Microsoft's Designer; though this kind of content violates Microsoft policy, its AI is still capable of making it, and users have created easy workarounds to bypass basic safety tools.
Microsoft and X did not respond to requests for comment before publication.
Congress is making some legislative headway toward criminalizing nonconsensual deepfakes. Virginia has banned deepfake revenge porn, and Representative Yvette Clarke (D-NY) recently reintroduced the DEEPFAKES Accountability Act, which she first proposed in 2019. While critics worry about the difficulty of legislating the dark corners of the web, some say the bill could at least establish a legal precedent of protection from this abuse.

This isn't the first time Swift's fans have flexed their collective muscle: they also called attention to the failures of Ticketmaster, the ticketing giant that merged with Live Nation in 2010. In a particularly memorable statement, FTC chair Lina Khan said last year that the disastrous experience of buying tickets for Swift's Eras tour "ended up converting more gen Z-ers into anti-monopolists overnight than anything I could have done."
This abuse campaign is emblematic of the problems with AI's steep ascent: companies are building too fast to properly assess the risks of the products they're shipping. Perhaps Taylor Swift fans will take up the fight for thoughtful regulation of fast-developing AI products. But if it takes a mass harassment campaign against a celebrity for undertested AI models to face any kind of scrutiny, then that's a whole other problem.