The European Union has launched a consultation on draft election security mitigations aimed toward larger online platforms, resembling Facebook, Google, TikTok and X (Twitter), that features a set of recommendations it hopes will shrink democratic risks from generative AI and deepfakes — along with covering off more well-trodden ground resembling content moderation resourcing and repair integrity, political ads transparency, and media literacy. The general goal for the guidance is to make sure tech giants take due care and a spotlight to a full sweep of election-related risks that may bubble up on their platforms, including consequently of easier access to powerful AI tools.
The EU is aiming the election security guidelines on the nearly two dozen platform giants and search engines like google and yahoo which can be currently designated under its rebooted e-commerce rules, aka the Digital Services Act (DSA).
Concerns that advanced AI systems like large language models (LLMs) — that are able to outputting highly plausible sounding text and/or realistic imagery, audio or video — have been riding high since last yr’s viral boom in generative AI, which saw tools like OpenAI’s AI chatbot, ChatGPT, becoming household names. Since then, scores of generative AIs have been launched, including a spread of models and tools developed by long-established tech giants like Meta and Google, whose platforms and services routinely reach billions of web users.
“Recent technological developments in generative AI have enabled the creation and widespread use of artificial intelligence able to generating text, images, videos, or other synthetic content. While such developments may bring many recent opportunities, they could result in specific risks within the context of elections,” text the EU is consulting on warns. “[G]enerative AI can notably be used to mislead voters or to govern electoral processes by creating and disseminating inauthentic, misleading synthetic content regarding political actors, false depiction of events, election polls, contexts or narratives. Generative AI systems may also produce incorrect, incoherent, or fabricated information, so called ‘hallucinations,’ that misrepresent the truth, and which may potentially mislead voters.”
In fact, it doesn’t take a staggering amount of compute power and cutting-edge AI systems to mislead voters. Some politicians are experts in producing “fake news” just using their very own vocal cords, in spite of everything. And even on the tech tool front, malicious agents don’t need fancy GenAIs to execute a crudely suggestive edit of a video (or manipulate digital media in other, much more basic ways) with the intention to create potentially misleading political messaging that may quickly be tossed onto the outrage fire of social media to be fanned by willingly triggered users (and/or amplified by bots) until the divisive flames begin to self-spread (driving whatever political agenda lurks behind the fake).
See, for a recent example, a (critical) decision by Meta’s Oversight Board of how the social media giant handled an edited video of U.S. president Joe Biden, which called on the parent company to rewrite “incoherent” rules around fake videos since, currently, such content could also be treated in a different way by Meta’s moderators — depending on whether it’s been AI generated or edited in a more basic way.
Notably, but unsurprisingly, then, the EU’s guidance on election security doesn’t limit itself to AI-generated fakes either.
While, on GenAI, the bloc is putting a wise emphasis on the necessity for platforms to tackle dissemination (not only creation) risks too.
Best practices
One suggestion the EU is consulting on within the draft guidelines is that the labeling of GenAI, deepfakes and/or other “media manipulations” by in-scope platforms ought to be each clear (“distinguished” and “efficient”) and protracted (i.e., travels with content if/when it’s reshared) — where the content in query “appreciably resemble[s] existing individuals, objects, places, entities, events, or depict[s] events as real that didn’t occur or misrepresent them,” because it puts it.
There’s also an additional advice platforms provide users with accessible tools so that they can add labels to AI-generated content.
The draft guidance goes on to suggest “best practices” to tell risk mitigation measures could also be drawn from the EU’s (recently agreed legislative proposal) AI Act and its companion (but non-legally binding) AI Pact, adding: “Particularly relevant on this context are the obligations envisaged within the AI Act for providers of general-purpose AI models, including generative AI, requirements for labelling of ‘deep fakes’ and for providers of generative AI systems to make use of technical state-of-the-art solutions to be certain that content created by generative AI is marked as such, which can enable its detection by providers of [in-scope platforms].”
The draft election security guidelines, that are under public consultation within the EU until March 7, include the overarching advice that tech giants put in place “reasonable, proportionate, and effective” mitigation measures tailored to risks related to (each) the creation and “potential large-scale dissemination” of AI-generated fakes.
The usage of watermarking, including via metadata, to differentiate AI-generated content is specifically really helpful — so that such content is “clearly distinguishable” for users. However the draft says “other kinds of synthetic and manipulated media” should get the identical treatment too.
“This is especially vital for any generative AI content involving candidates, politicians, or political parties,” the consultation observes. “Watermarks might also apply to content that is predicated on real footage (resembling videos, images or audio) that has been altered through using generative AI.”
Platforms are urged to adapt their content moderation systems and processes so that they’re capable of detect watermarks and other “content provenance indicators,” per the draft text, which also suggests they “cooperate with providers of generative AI systems and follow leading cutting-edge measures to be certain that such watermarks and indicators are detected in a reliable and effective manner”; and asks them to “support recent technology innovations to enhance the effectiveness and interoperability of such tools.”
The majority of the DSA, the EU’s content moderation and governance regulation, applies to a broad sweep of digital businesses from later this month — but already (for the reason that end of August) the regime applies for nearly two dozen (larger) platforms, with 45 million+ monthly lively users within the region. Greater than 20 so-called very large online platforms (VLOPs) and really large online search engines like google and yahoo (VLOSEs) have been designated under the DSA to this point, including the likes of Facebook, Instagram, Google Search, TikTok and YouTube.
Extra obligations these larger platforms face (i.e., in comparison with non-VLOPs/VLOSEs) include requirements to mitigate systemic risks arising from how they operate their platforms and algorithms in areas resembling democratic processes. So which means that, for instance, Meta could, within the near future, be forced into adopting a less incoherent position on what to do about political fakes on Facebook and Instagram — or, well, not less than within the EU, where the DSA applies to its business. (NB: Penalties for breaching the regime can scale as much as 6% of world annual turnover.)
Other draft recommendations aimed toward DSA platform giants vis-à-vis election security include a suggestion they make “reasonable efforts” to make sure information provided using generative AI “relies to the extent possible on reliable sources within the electoral context, resembling official information on the electoral process from relevant electoral authorities,” as the present text has it, and that “any quotes or references made by the system to external sources are accurate and don’t misrepresent the cited content” — which the bloc anticipates will work to “limit . . . the consequences of ‘hallucinations.’”
Users also needs to be warned by in-scope platforms of potential errors in content created by GenAI and ought to be pointed toward authoritative sources of knowledge, while the tech giants also needs to put in place “safeguards” to stop the creation of “false content which will have a robust potential to influence user behaviour,” per the draft.
Amongst the security techniques platforms may very well be urged to adopt is “red teaming” — or the practice of proactively trying to find and testing potential security issues. “Conduct and document red-teaming exercises with a specific give attention to electoral processes, with each internal teams and external experts, before releasing generative AI systems to the general public and follow a staggered release approach when doing so to raised control unintended consequences,” it currently suggests.
GenAI deployers in scope of the DSA’s requirement to mitigate system risk also needs to set “appropriate performance metrics” in areas like safety and factual accuracy of answers given to questions on electoral content, per the present text, and “continually monitor the performance of generative AI systems, and take appropriate actions when needed.”
Safety features that seek to stop the misuse of the generative AI systems “for illegal, manipulative and disinformation purposes within the context of electoral processes” also needs to be integrated into AI systems, per the draft — which provides examples resembling prompt classifiers, content moderation and other kinds of filters — to ensure that platforms to proactively detect and forestall prompts that go against their terms of service related to elections.
On AI-generated text, the present advice is for VLOPs/VLOSEs to “indicate, where possible, within the outputs generated the concrete sources of the knowledge used as input data to enable users to confirm the reliability and further contextualise the knowledge” — suggesting the EU is leaning toward a preference for footnote-style indicators (resembling what AI search engine You.com typically displays) for accompanying generative AI responses in dangerous contexts like elections.
Support for external researchers is one other key plank of the draft recommendations — and, indeed, of the DSA generally, which puts obligations on platform and search giants to enable researchers’ data access for the study of systemic risk. (Which has been an early area of focus for the Commission’s oversight of platforms.)
“As AI generated content bears specific risks, it ought to be specifically scrutinised, also through the event of ad hoc tools to perform research aimed toward identifying and understanding specific risks related to electoral processes,” the draft guidance suggests. “Providers of online platforms and search engines like google and yahoo are encouraged to think about organising dedicated tools for researchers to get access to and specifically discover and analyse AI generated content that’s often known as such, according to the duty under Article 40.12 for providers of VLOPs and VLOSEs within the DSA.”
The present draft also touches on using generative AI in ads, suggesting platforms adapt their ad systems to think about potential risks here too — resembling by providing advertisers with ways to obviously label GenAI content that’s been utilized in ads or promoted posts and to require of their ad policies that the label be used when the commercial includes generative AI content.
The precise steerage the EU will push on platform and search giants in relation to election integrity can have to attend for the ultimate guidelines to be produced in the approaching months. But the present draft suggests the bloc intends to supply a comprehensive set of recommendations and best practices.
Platforms will have the ability to decide on to not follow the rules but they are going to must comply with the legally binding DSA — so any deviations from the recommendations could encourage added scrutiny of different selections (Hi, Elon Musk!). And platforms will should be prepared to defend their approaches to the Commission, which is each producing guidelines and enforcing the DSA rulebook.
The EU confirmed today that the election security guidelines are the primary set within the works under the VLOPs/VLOSEs-focused Article 35 (“Mitigation of risks”) provision, saying the aim is to supply platforms with “best practices and possible measures to mitigate systemic risks on their platforms which will threaten the integrity of democratic electoral processes.”
Elections are clearly front of mind for the bloc, with a once-in-five-year vote to elect a brand new European Parliament set to happen in early June. And there the draft guidelines even include targeted recommendations related to the European Parliament elections — setting an expectation platforms put in place “robust preparations” for what’s couched within the text as “an important test case for the resilience of our democratic processes.” So we will assume the ultimate guidelines will probably be made available long before the summer.
Commenting in an announcement, Thierry Breton, the EU’s commissioner for internal market, added:
With the Digital Services Act, Europe is the primary continent with a law to deal with systemic risks on online platforms that may have real-world negative effects on our democratic societies. 2024 is a big yr for elections. That’s the reason we’re making full use of all of the tools offered by the DSA to make sure platforms comply with their obligations and usually are not misused to govern our elections, while safeguarding freedom of expression.