TikTok to open in-app Election Centers for EU users to tackle disinformation risks


TikTok will launch localized election resources in its app next month to reach users in each of the European Union’s 27 Member States and direct them towards “trusted information”, as part of its preparations to tackle disinformation risks related to regional elections this year.

“Next month, we’ll launch a local language Election Centre in-app for each of the 27 individual EU Member States to ensure people can easily separate fact from fiction. Working with local electoral commissions and civil society organisations, these Election Centres will be a place where our community can find trusted and authoritative information,” TikTok wrote today.

“Videos related to the European elections will be labelled to direct people to the relevant Election Centre. As part of our broader election integrity efforts, we will also add reminders to hashtags to encourage people to follow our rules, verify facts, and report content they believe violates our Community Guidelines,” it added in a blog post discussing its preparations for the 2024 European elections.

The blog post also discusses what it’s doing about targeted risks that take the form of influence operations seeking to use its tools to covertly deceive and manipulate opinions in a bid to skew elections, such as by establishing networks of fake accounts and using them to spread and boost inauthentic content. Here it has committed to introduce “dedicated covert influence operations reports”, which it claims will “further increase transparency, accountability, and cross-industry sharing” around covert influence operations.

The new covert influence operations reports will launch “in the coming months”, per TikTok, presumably hosted within its existing Transparency Center.

TikTok is also announcing the upcoming launch of nine more media literacy campaigns in the region (after launching 18 last year, making a total of 27), so it looks to be plugging the gaps to ensure it has run campaigns across all EU Member States.

It also says it’s looking to expand its network of local fact-checking partners: it currently works with nine organizations, which cover 18 languages. (NB: The EU has 24 “official” languages and a further 16 “recognized” languages, not counting immigrant languages spoken.)

Notably, though, the video sharing giant isn’t announcing any new measures related to election security risks linked to AI generated deepfakes.

In recent months, the EU has been dialling up its attention on generative AI and political deepfakes, calling for platforms to put safeguards in place against this type of disinformation.

TikTok’s blog post, which is attributed to Kevin Morgan, TikTok’s head of safety &amp; integrity for EMEA, does warn that generative AI tech brings “new challenges around misinformation”. It also specifies the platform doesn’t allow “manipulated content that could be misleading”, including AI generated content of public figures “if it depicts them endorsing a political view”. However, Morgan offers no detail of how successful (or otherwise) it currently is at detecting (and removing) political deepfakes where users choose to ignore the ban and upload politically misleading AI generated content anyway.

Instead, he writes that TikTok requires creators to label any realistic AI generated content, and flags the recent launch of a tool to help users apply manual labels to deepfakes. But the post offers no details about TikTok’s enforcement of this deepfake labelling rule, nor any further detail on how it’s tackling deepfake risks more generally, including in relation to election threats.

“As the technology evolves, we’ll continue to strengthen our efforts, including by working with industry through content provenance partnerships,” is the only other tidbit TikTok has to offer here.

We’ve reached out to the company with a series of questions seeking more detail about the steps it’s taking to prepare for the European elections, including asking where in the EU its efforts are being focused and whether any gaps remain (such as in language, fact-checking and media literacy coverage), and we’ll update this post with any response.

New EU requirement to act on disinformation

Elections for a new European Parliament are due to take place in early June and the bloc has been cranking up the pressure on social media platforms, especially, to prepare. Since last August, the EU has had new legal tools to compel action from around two dozen larger platforms that have been designated as subject to the strictest requirements of its rebooted online governance rulebook.

Until now the bloc has relied on self-regulation, aka the Code of Practice Against Disinformation, to try to drive industry action to combat disinformation. But the EU has also been complaining, for years, that signatories of this voluntary initiative, which include TikTok and most other major social media firms (but not X/Twitter, which removed itself from the list last year), are not doing enough to tackle rising information threats, including to regional elections.

The EU Disinformation Code launched back in 2018 as a limited set of voluntary standards, with a handful of signatories pledging some broad-brush responses to disinformation risks. It was then beefed up in 2022, with more (and “more granular”) commitments and measures, plus a longer list of signatories, including a broader range of players whose tech tools or services may play a role in the disinformation ecosystem.

While the strengthened Code remains non-legally binding, the Commission, the EU’s executive and online rulebook enforcer for larger digital platforms, has said it will consider adherence to the Code when assessing compliance with relevant elements of the (legally binding) Digital Services Act (DSA), which requires major platforms, including TikTok, to take steps to identify and mitigate systemic risks arising from use of their tech tools, such as election interference.

The Commission’s regular reviews of Code signatories’ performance typically involve long, public lectures by commissioners warning that platforms need to ramp up their efforts to deliver more consistent moderation and investment in fact-checking, especially in smaller EU Member States and languages. Platforms’ go-to response to the EU’s negative PR is to make fresh claims to be taking action or doing more. And then the same pantomime typically plays out six months or a year later.

This ‘disinformation must do better’ loop may be set to change, though, as the bloc finally has a law in place to force action in this area: the DSA, which began applying to larger platforms last August. Hence why the Commission is currently consulting on detailed guidance for election security. The guidelines will be aimed at the nearly two dozen firms designated as very large online platforms (VLOPs) or very large online search engines (VLOSEs) under the regulation, which thus have a legal duty to mitigate disinformation risks.

The risk for in-scope platforms, if they fail to move the needle on disinformation threats, is being found in breach of the DSA, where penalties for violators can scale up to 6% of global annual turnover. The EU will be hoping the regulation finally focuses tech giants’ minds on robustly addressing a societally corrosive problem, one that adtech platforms, with their commercial incentives to grow usage and engagement, have generally opted to dally over and dance around for years.

The Commission itself is responsible for enforcing the DSA on VLOPs/VLOSEs. And it will, ultimately, be the judge of whether TikTok (and the other in-scope platforms) have done enough to tackle disinformation risks or not.

In light of today’s announcements, TikTok looks to be stepping up its approach to regional information-based and election security risks to try to make it more comprehensive, which may address one common Commission complaint, although the ongoing lack of fact-checking resources covering all of the EU’s official languages is notable. (Though the company is reliant on finding partners to provide those resources.)

The incoming Election Centers, which TikTok says will be localized to the official language of each of the 27 EU Member States, could end up being significant in combating election interference risks, assuming they prove effective at nudging users to respond more critically to questionable political content they’re exposed to in the app, such as by encouraging them to verify its veracity by following links to authoritative sources of information. But a lot will depend on how these interventions are presented and designed.

The expansion of media literacy campaigns to cover all EU Member States is also notable, addressing another frequent Commission complaint. But it’s not clear whether all these campaigns will run before the June European elections (we’ve asked).

Elsewhere, TikTok’s actions look closer to treading water. For instance, the platform’s last Disinformation Code report to the Commission, last fall, flagged how it had expanded its synthetic media policy to cover AI generated or AI-modified content. But it also said then that it wanted to further strengthen its enforcement of that policy over the following six months. Yet there’s no fresh detail on its enforcement capabilities in today’s announcement.

Its earlier report to the Commission also noted that it wanted to explore “new products and initiatives to help enhance our detection and enforcement capabilities” around synthetic media, including in the area of user education. Again, it’s not clear whether TikTok has made much of a foray here, although the broader issue is the lack of robust methods (technologies or techniques) for detecting deepfakes, even as platforms like TikTok make it super easy for users to spread AI generated fakes far and wide.

That asymmetry may ultimately demand other forms of policy intervention to deal effectively with AI-related risks.

As regards TikTok’s claimed focus on user education, it hasn’t specified whether the additional regional media literacy campaigns it will run over 2024 will aim to help users identify AI generated risks. Again, we’ve asked for more detail there.

The platform originally signed itself up to the EU’s Disinformation Code back in June 2020. But as security concerns related to its China-based parent company have stepped up, it has found itself facing rising mistrust and scrutiny in the region. On top of that, with the DSA coming into application last summer, and a huge election year looming for the EU, TikTok and others look set to be squarely in the Commission’s crosshairs over disinformation risks for the foreseeable future.

Although it’s Elon Musk-owned X that has the dubious honor of being the first to be formally investigated over DSA risk management requirements, along with a raft of other obligations the Commission is concerned it may be breaching.

