OpenAI revealed in a blog post today that it identified and disrupted five covert influence operations over the last three months that were attempting to manipulate public opinion or influence political outcomes.
The influence campaigns disrupted by OpenAI are claimed to have come from Russia, China, Iran and, surprisingly, Israel, although the Israeli campaign was linked to a company rather than the government. The campaigns spanned multiple tasks, including generating content in various languages, creating fake social media profiles and conducting research.
The key to an influence campaign is generating content that seeks to manipulate public opinion or political outcomes through various means of communication. But the variety of ways the influence campaigns were using OpenAI tools such as ChatGPT is interesting.
In the first campaign detailed by OpenAI, an operation from Russia dubbed “Bad Grammar,” those behind it used OpenAI models to debug code for running a Telegram bot and ChatGPT to create short, political comments in Russian and English that were subsequently posted to Telegram.
A second Russian operation, dubbed Doppelganger, used OpenAI models to generate comments in English, French, German, Italian and Polish that were posted on X and, bizarrely, the joke site 9GAG. It’s not clear what influence campaign you can undertake on a site that posts funny memes, but seemingly the Russians saw value in it for reasons unknown.
The third campaign, allegedly from China and dubbed Spamouflage, used OpenAI models to research public social media activity and then generate text in Chinese, English, Japanese and Korean. That text was then posted on X, Medium and Blogspot. The campaign also used OpenAI tools to debug code and manage databases and websites.
The fourth campaign, an Iranian operation known as the “International Union of Virtual Media,” used OpenAI models to generate and translate long-form articles, headlines and website tags that were then published on a website.
The final campaign was traced to an Israeli company called STOIC, which, according to its website, provides ongoing, real-time monitoring, data analysis and management services for political parties, ministries and municipalities. The company used OpenAI tools to generate articles and comments that were posted on Instagram, Facebook, X and websites related to the company.
The content posted by these various operations spanned a wide range of topics, but unsurprisingly, many were related to geopolitical issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections and politics in Europe and the U.S.
“Detecting and disrupting multi-platform abuses such as covert influence operations can be difficult because we don’t always know how content generated by our products is distributed,” OpenAI noted in the blog post. “But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.”
Photo: Focal Foto/Flickr