Spurred by the growing threat of deepfakes, the FTC is seeking to modify an existing rule that bans the impersonation of businesses or government agencies so that it covers all consumers.
The revised rule, depending on the final language and the public comments the FTC receives, may also make it illegal for a GenAI platform to provide goods or services that it knows, or has reason to know, are being used to harm consumers through impersonation.
“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale,” FTC chair Lina Khan said in a press release. “With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever. Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”
It’s not just celebrities like Taylor Swift who have reason to worry about deepfakes. Online romance scams involving deepfakes are on the rise, and scammers are impersonating executives to extract money from companies.
In a recent poll from YouGov, 85% of Americans said they were very or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults think AI tools will increase the spread of false and misleading information during the 2024 U.S. election cycle.
Last week, my colleague Devin Coldewey covered the FCC’s move to make AI-voiced robocalls illegal by reinterpreting an existing rule that prohibits artificial and pre-recorded message spam. Timely in light of a phone campaign that used a deepfaked President Biden to discourage New Hampshire residents from voting, the rule change, along with the FTC’s step today, marks the current extent of the federal government’s fight against deepfakes and deepfaking technology.
No federal law squarely bans deepfakes. High-profile victims like celebrities can in theory turn to more traditional legal remedies to fight back, including copyright law, likeness rights and torts (e.g. invasion of privacy, intentional infliction of emotional distress). But litigating under these patchwork laws is time-consuming and laborious.
In the absence of congressional action, 10 states across the country have enacted statutes criminalizing deepfakes, albeit mostly nonconsensual porn. No doubt we’ll see those laws amended to encompass a wider array of deepfakes, and more state-level laws passed, as deepfake-generating tools grow increasingly sophisticated. (Case in point: Minnesota’s law already targets deepfakes used in political campaigning.)