The European Union published draft election security guidelines on Tuesday aimed at the roughly two dozen larger platforms with more than 45 million regional monthly active users that are regulated under the Digital Services Act (DSA) and, consequently, have a legal duty to mitigate systemic risks such as political deepfakes while safeguarding fundamental rights like freedom of expression and privacy.
In-scope platforms include the likes of Facebook, Google Search, Instagram, LinkedIn, TikTok, YouTube and X.
The Commission has named elections as one of a handful of priority areas for its enforcement of the DSA on very large online platforms (VLOPs) and very large online search engines (VLOSEs). This subset of DSA-regulated companies is required to identify and mitigate systemic risks, such as information manipulation targeting democratic processes in the region, in addition to complying with the full online governance regime.
Per the EU’s election security guidance, the bloc expects regulated tech giants to up their game on protecting democratic votes and deploy capable content moderation resources in the multiple official languages spoken across the bloc, ensuring they have enough staff on hand to respond effectively to risks arising from the flow of information on their platforms and to act on reports by third-party fact-checkers, with the risk of big fines for dropping the ball.
This will require platforms to pull off a precision balancing act on political content moderation: not lagging on their ability to distinguish between, for example, political satire, which should remain online as protected free speech, and malicious political disinformation, whose creators could be hoping to influence voters and skew elections.
In the latter case, the content falls under the DSA categorization of systemic risk that platforms are expected to swiftly spot and mitigate. The EU standard here requires that they put in place “reasonable, proportionate, and effective” mitigation measures for risks related to electoral processes, as well as respecting other relevant provisions of the wide-ranging content moderation and governance regulation.
The Commission has been working on the election guidelines at pace, launching a consultation on a draft version just last month. The sense of urgency in Brussels flows from the upcoming European Parliament elections in June. Officials have said they will stress-test platforms’ preparedness next month. So the EU doesn’t appear ready to leave platforms’ compliance to chance, even with a hard law in place that means tech giants risk big fines if they fail to meet Commission expectations this time around.
User controls for algorithmic feeds
Key among the EU’s election guidance aimed at mainstream social media firms and other major platforms is that they should give their users a meaningful choice over algorithmic and AI-powered recommender systems, so they are able to exert some control over the kind of content they see.
“Recommender systems can play a significant role in shaping the information landscape and public opinion,” the guidance notes. “To mitigate the risk that such systems may pose in relation to electoral processes, [platform] providers … should consider: (i) Ensuring that recommender systems are designed and adjusted in a way that gives users meaningful choices and controls over their feeds, with due regard to media diversity and pluralism.”
Platforms’ recommender systems should also have measures in place to downrank disinformation targeted at elections, based on what the guidance couches as “clear and transparent methods,” such as deceptive content that has been fact-checked as false and/or posts coming from accounts repeatedly found to spread disinformation.
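The guidance doesn’t prescribe an implementation, but a toy sketch can make the idea concrete. Below is a purely illustrative Python example, with hypothetical field names and penalty weights (none of them from the EU text), of how a platform might apply transparent, rule-based penalties on top of a recommender model’s base score, downranking fact-checked-false posts and repeat offenders:

```python
# Illustrative sketch only: hypothetical downranking logic layered on top of
# a recommender model's base relevance score. All names and weights are
# invented for this example, not taken from the EU guidance.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float          # relevance score from the recommender model
    fact_checked_false: bool   # flagged as false by an independent fact-checker
    author_strikes: int        # prior confirmed disinformation incidents

# Hypothetical penalty weights; a real system would tune and document these
# as part of the "clear and transparent methods" the guidance calls for.
FACT_CHECK_PENALTY = 0.5   # halve the score of fact-checked-false content
STRIKE_PENALTY = 0.1       # penalty per prior strike against the author
MAX_STRIKE_PENALTY = 0.4   # cap the cumulative repeat-offender penalty

def rank_score(post: Post) -> float:
    """Apply rule-based penalties to the model's base score."""
    score = post.base_score
    if post.fact_checked_false:
        score *= FACT_CHECK_PENALTY
    # Repeat offenders get a capped multiplicative penalty.
    strike_factor = 1.0 - min(post.author_strikes * STRIKE_PENALTY,
                              MAX_STRIKE_PENALTY)
    return score * strike_factor

feed = [
    Post("a", base_score=0.90, fact_checked_false=False, author_strikes=0),
    Post("b", base_score=0.95, fact_checked_false=True, author_strikes=2),
]
for post in sorted(feed, key=rank_score, reverse=True):
    print(post.post_id, round(rank_score(post), 3))
```

The point of keeping the penalties as simple, documented rules rather than burying them inside the model is that the downranking method stays auditable, which is what the “clear and transparent methods” wording appears to be driving at.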
Platforms should also deploy mitigations to avoid the risk of their recommender systems spreading generative AI-based disinformation (aka political deepfakes). They should also proactively assess their recommender engines for risks related to electoral processes and roll out updates to shrink risks. The EU also recommends transparency around the design and functioning of AI-driven feeds, and urges platforms to engage in adversarial testing, red-teaming and the like to amp up their ability to spot and quash risks.
On GenAI, the EU’s advice also urges watermarking of synthetic media, while noting the limits of technical feasibility here.
Recommended mitigating measures and best practices for larger platforms in the 25 pages of draft guidance published today also set out an expectation that platforms will dial up internal resourcing to focus on specific election threats, such as around upcoming election events, and put processes in place for sharing relevant information and risk analysis.
Resourcing must have local expertise
The guidance emphasizes the need for analysis of “local context-specific risks,” along with member state-specific/national and regional information gathering, to feed the work of entities responsible for the design and calibration of risk mitigation measures. It also calls for “adequate content moderation resources,” with local language capability and knowledge of the national and/or regional contexts and specificities, a long-running gripe of the EU when it comes to platforms’ efforts to shrink disinformation risks.
Another recommendation is for platforms to reinforce their internal processes and resources around each election event by setting up “a dedicated, clearly identifiable internal team” ahead of the electoral period, with resourcing proportionate to the risks identified for the election in question.
The EU guidance also explicitly recommends hiring staffers with local expertise, including language knowledge. Platforms have often sought to repurpose a centralized resource, without always seeking out dedicated local expertise.
“The team should cover all relevant expertise including in areas such as content moderation, fact-checking, threat disruption, hybrid threats, cybersecurity, disinformation and FIMI [foreign information manipulation and interference], fundamental rights and public participation and cooperate with relevant external experts, for example with the European Digital Media Observatory (EDMO) hubs and independent factchecking organisations,” the EU also writes.
The guidance allows for platforms to ramp up resourcing around particular election events and demobilize teams once a vote is over.
It notes that the periods when extra risk mitigation measures may be needed are likely to vary, depending on the level of risks and any specific EU member state rules around elections (which can vary). But the Commission recommends that platforms have mitigations deployed and up and running at least one to six months before an electoral period, and continue for at least one month after the elections.
Unsurprisingly, the greatest intensity of mitigations is expected in the period prior to the election date, to address risks like disinformation targeting voting procedures.
Hate speech in the frame
The EU is generally advising platforms to draw on other existing guidelines, including the Code of Practice on Disinformation and the Code of Conduct on Countering Hate Speech, to identify best practices for mitigation measures. But it stipulates they should ensure users are provided with access to official information on electoral processes, such as banners, links and pop-ups designed to steer users to authoritative information sources for elections.
“When mitigating systemic risks for electoral integrity, the Commission recommends that due regard also be given to the impact of measures to tackle illegal content such as public incitement to violence and hatred to the extent that such illegal content may inhibit or silence voices in the democratic debate, in particular those representing vulnerable groups or minorities,” the Commission writes.
“For example, forms of racism, or gendered disinformation and gender-based violence online including in the context of violent extremist or terrorist ideology or FIMI targeting the LGBTIQ+ community can undermine open, democratic dialogue and debate, and further increase social division and polarization. In this respect, the Code of conduct on countering illegal hate speech online can be used as inspiration when considering appropriate action.”
It also recommends they run media literacy campaigns and deploy measures aimed at providing users with more contextual information, such as fact-checking labels; prompts and nudges; clear indications of official accounts; clear and non-deceptive labeling of accounts run by member states, third countries and entities controlled or financed by third countries; tools and information to help users assess the trustworthiness of information sources; tools to assess provenance; and processes to counter misuse of any of these procedures and tools. This reads like a list of the stuff Elon Musk has dismantled since taking over Twitter (now X).
Notably, Musk has also been accused of letting hate speech flourish on the platform on his watch. And at the time of writing, X remains under investigation by the EU for a range of suspected DSA breaches, including in relation to content moderation requirements.
Transparency to amp up accountability
On political advertising, the guidance points platforms to incoming transparency rules in this area, advising them to prepare for the legally binding regulation by taking steps to align themselves with the requirements now. (For example, by clearly labeling political ads, providing information on the sponsor behind these paid political messages, maintaining a public repository of political ads, and having systems in place to verify the identity of political advertisers.)
Elsewhere, the guidance also sets out how to deal with election risks related to influencers.
Platforms should also have systems in place enabling them to demonetize disinformation, per the guidance, and are urged to provide “stable and reliable” data access to third parties undertaking scrutiny and research of election risks. Data access for studying election risks should also be provided for free, the advice stipulates.
More generally, the guidance encourages platforms to cooperate with oversight bodies, civil society experts and each other when it comes to sharing information about election security risks, urging them to establish comms channels for tips and risk reporting during elections.
For handling high-risk incidents, the advice recommends platforms establish an internal incident response mechanism that involves senior leadership and maps other relevant stakeholders within the organization, to drive accountability around their election event responses and avoid the risk of buck passing.
Post-election, the EU suggests platforms conduct and publish a review of how they fared, factoring in third-party assessments (i.e., rather than just seeking to mark their own homework, as they have historically preferred, trying to put a PR gloss atop ongoing platform manipulation risks).
The election security guidelines aren’t mandatory, as such, but if platforms opt for an approach other than the one being recommended for tackling threats in this area, they must be able to demonstrate that their alternative meets the bloc’s standard, per the Commission.
If they fail to do that, they risk being found in breach of the DSA, which allows for penalties of up to 6% of global annual turnover for confirmed violations. So there’s an incentive for platforms to get with the bloc’s program on ramping up resources to tackle political disinformation and other information risks to elections as a way to shrink their regulatory risk. But they will still need to execute on the advice.
Further specific recommendations for the upcoming European Parliament elections, which will run June 6–9, are also set out in the EU guidance.
On a technical note, the election security guidelines remain in draft at this stage. But the Commission said formal adoption is expected in April, once all language versions of the guidance are available.