Your AI clone could target your loved ones, but there's a simple defense

The warning extends beyond voice scams. The FBI announcement details how criminals also use AI models to generate convincing profile photos, identification documents, and chatbots embedded in fraudulent websites. These tools automate the creation of deceptive content while removing previously obvious signs of the humans behind the scams, such as poor grammar or obviously fake photos.

Much as we warned in 2022 in a piece about life-wrecking deepfakes based on publicly available photos, the FBI also recommends limiting public access to recordings of your voice and images of yourself online. The bureau suggests making social media accounts private and restricting followers to known contacts.

Origin of the secret word in AI

To our knowledge, the first appearance of the secret word idea in the context of modern AI voice synthesis and deepfakes traces back to an AI developer named Asara Near, who first announced the concept on Twitter on March 27, 2023.

“(I)t may also be useful to establish a ‘proof of humanity’ word, which your trusted contacts can ask you for,” Near wrote. “(I)n case they get a weird and urgent voice or video call from you, this can help assure them they are actually speaking with you, and not a deepfaked/deepcloned version of you.”

Since then, the concept has spread widely. In February, Rachel Metz covered the subject for Bloomberg, writing, “The concept is becoming common within the AI research community, one founder told me. It’s also easy and free.”

In fact, passwords have been used since ancient times to verify someone's identity, and it seems likely that some science fiction story has dealt with the problem of passwords and robot clones in the past. It's notable that, in this new age of high-tech AI identity fraud, this ancient invention, a special word or phrase known to only a few people, can still prove so useful.
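
Stripped down to its essentials, the advice amounts to a shared-secret check. The sketch below is purely illustrative, not anything the FBI or Near proposed, and the function names and the sample phrase are hypothetical; it simply shows the underlying idea of agreeing on a secret in advance and later checking a claimed version of it, using a salted hash so the word itself never has to be stored in plain text.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch of a shared-secret check (not an FBI-recommended tool).
# In practice the "verification" happens verbally during a suspicious call;
# this just makes the concept concrete in code.

def store_secret(word: str) -> tuple[bytes, bytes]:
    """Derive a salted hash of the agreed-upon word for later comparison."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", word.encode(), salt, 100_000)
    return salt, digest

def verify_secret(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Check a candidate word against the stored hash in constant time."""
    candidate_digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(candidate_digest, digest)

if __name__ == "__main__":
    # "duck flannel" is a made-up example phrase, not a recommendation.
    salt, digest = store_secret("duck flannel")
    print(verify_secret("duck flannel", salt, digest))  # True
    print(verify_secret("wrong guess", salt, digest))   # False
```

The point of the low-tech version is the same as the code's: the secret is agreed upon out of band, it is never posted publicly, and the person being tested either knows it or doesn't.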