Hundreds of AI luminaries sign letter calling for anti-deepfake laws

Hundreds in the artificial intelligence community have signed an open letter calling for strict regulation of AI-generated impersonations, or deepfakes. While that is unlikely to spur real legislation (despite the House’s recent task force), it does act as a bellwether for how experts lean on this controversial issue.

The letter, signed by over 500 people in and adjacent to the AI field at time of publishing, declares that “Deepfakes are a growing threat to society, and governments must impose obligations throughout the supply chain to stop the proliferation of deepfakes.”

They call for full criminalization of deepfake child sexual abuse materials (CSAM, AKA child pornography) regardless of whether the figures depicted are real or fictional. Criminal penalties are called for in any case where someone creates or spreads harmful deepfakes. And developers are called on to prevent harmful deepfakes from being made using their products in the first place, with penalties if their preventative measures are inadequate.

Among the more prominent signatories of the letter are:

  • Jaron Lanier
  • Frances Haugen
  • Stuart Russell
  • Andrew Yang
  • Marietje Schaake
  • Steven Pinker
  • Gary Marcus
  • Oren Etzioni
  • Genevieve Smith
  • Yoshua Bengio
  • Dan Hendrycks
  • Tim Wu

Also present are hundreds of academics from across the globe and many disciplines. In case you’re curious, one person from OpenAI signed, a couple from Google DeepMind, and none at press time from Anthropic, Amazon, Apple, or Microsoft (except Lanier, whose position there is non-standard). Interestingly, they’re sorted in the letter by “Notability.”

This is far from the first call for such measures; in fact, they were debated in the EU for years before being formally proposed earlier this month. Perhaps it’s the EU’s willingness to deliberate and follow through that prompted these researchers, creators, and executives to speak out.

Or perhaps it’s the slow march of KOSA towards acceptance — and its lack of protections for this kind of abuse.

Or perhaps it’s the specter of (as we have already seen) AI-generated scam calls that could sway the election or bilk naive folks out of their money.

Or perhaps it’s yesterday’s task force being announced with no particular agenda beyond possibly writing a report on what some AI-based threats might be and how they might be legislatively restricted.

As you can see, there is no shortage of reasons for those in the AI community to be out here waving their arms around and saying “maybe we should, you know, do something?!”

Whether anyone will take notice of this letter is anyone’s guess; nobody really paid attention to the infamous one calling for everyone to “pause” AI development, though of course this letter is a bit more practical. If legislators decide to take up the issue, an unlikely event given it’s an election year with a sharply divided Congress, they will have this list to draw from in taking the temperature of AI’s worldwide academic and development community.