The European Union and the United States put out a joint statement Friday affirming a desire to deepen cooperation on artificial intelligence. The agreement covers AI safety and governance but also signals, more broadly, an intent to collaborate across a range of other tech issues, such as developing digital identity standards and applying pressure on platforms to defend human rights.
As we reported Wednesday, this is the fruit of the sixth (and possibly final) meeting of the EU-U.S. Trade and Technology Council (TTC). The TTC has been meeting since 2021 in a bid to rebuild transatlantic relations battered by the Trump presidency.
Given the potential for Donald Trump to return to the White House in the U.S. presidential election taking place later this year, it's not clear how much EU-U.S. cooperation on AI, or any other strategic tech area, will actually happen in the near future.
But, under the current political makeup on either side of the Atlantic, the desire to push for closer alignment across a range of tech issues has gained in strength. There is also a mutual desire to get this message heard (hence today's joint statement), which is itself, perhaps, a wider appeal aimed at each side's voters to opt for a collaborative program, rather than a destructive opposite, come election time.
An AI dialogue
In a section of the joint statement focused on AI, filed under the heading "Advancing Transatlantic Leadership on Critical and Emerging Technologies", the pair write that they "reaffirm our commitment to a risk-based approach to artificial intelligence… and to advancing safe, secure, and trustworthy AI technologies."
"We encourage advanced AI developers in the United States and Europe to further the application of the Hiroshima Process International Code of Conduct for Organisations Developing Advanced AI Systems which complements our respective governance and regulatory systems," the statement also reads, referencing a set of risk-based recommendations that came out of G7 discussions on AI last year.
The main development out of the sixth TTC meeting appears to be a commitment from EU and U.S. AI oversight bodies, the European AI Office and the U.S. AI Safety Institute, to establish what's couched as "a Dialogue." The aim is deeper collaboration between the AI institutions, with a particular focus on encouraging the sharing of scientific information among their respective AI research ecosystems.
Topics highlighted here include benchmarks, potential risks and future technological trends.
"This cooperation will contribute to making progress with the implementation of the Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management, which is crucial to minimise divergence as appropriate in our respective emerging AI governance and regulatory systems, and to cooperate on interoperable and international standards," the two sides go on to suggest.
The statement also flags an updated version of a list of key AI terms with "mutually accepted joint definitions" as another outcome of ongoing stakeholder talks flowing from the TTC.
Agreement on definitions might be a key piece of the puzzle to support work toward AI standardization.
A third element of what's been agreed by the EU and U.S. on AI shoots for collaboration to drive research aimed at applying machine learning technologies to beneficial use cases, such as advancing healthcare outcomes, boosting agriculture and tackling climate change, with a particular focus on sustainable development. In a briefing with journalists earlier this week, a senior Commission official suggested this element of the joint work will focus on bringing AI advancements to developing countries and the global south.
"We are advancing on the promise of AI for sustainable development in our bilateral relationship through joint research cooperation as part of the Administrative Arrangement on Artificial Intelligence and computing to address global challenges for the public good," the joint statement reads. "Working groups jointly staffed by United States science agencies and European Commission departments and agencies have achieved substantial progress by defining critical milestones for deliverables in the areas of extreme weather, energy, emergency response, and reconstruction. We are also making good progress in health and agriculture."
In addition, an overview document on the collaboration around AI for the public good was published Friday. Per the document, multidisciplinary teams from the EU and U.S. have spent more than 100 hours in scientific meetings over the past half-year "discussing how to advance applications of AI in on-going projects and workstreams".
"The collaboration is making positive strides in a number of areas in relation to challenges like energy optimisation, emergency response, urban reconstruction, and extreme weather and climate forecasting," it continues, adding: "In the coming months, scientific experts and ecosystems in the EU and the United States intend to continue to advance their collaboration and present innovative research worldwide. This will unlock the power of AI to address global challenges."
According to the joint statement, there is a desire to expand collaboration efforts in this area by adding more global partners.
"We will continue to explore opportunities with our partners in the UK, Canada, and Germany in the AI for Development Donor Partnership to accelerate and align our foreign assistance in Africa to support educators, entrepreneurs, and ordinary citizens to harness the promise of AI," the EU and U.S. note.
On platforms, an area where the EU is enforcing recently passed, wide-ranging legislation, including laws such as the Digital Services Act (DSA) and the Digital Markets Act, the two sides are united in calling for Big Tech to take protecting "information integrity" seriously.
The joint statement refers to 2024 as "a Pivotal Year for Democratic Resilience", due to the number of elections being held around the world. It includes an explicit warning about threats posed by AI-generated disinformation, saying the two sides "share the concern that malign use of AI applications, such as the creation of harmful 'deepfakes,' poses new risks, including to further the spread and targeting of foreign information manipulation and interference".
It goes on to discuss a number of areas of ongoing EU-U.S. cooperation on platform governance and includes a joint call for platforms to do more to support researchers' access to data, especially for the study of societal risks (something the EU's DSA makes a legal requirement for larger platforms).
On e-identity, the statement refers to ongoing collaboration on standards work, adding: "The next phase of this project will focus on identifying potential use cases for transatlantic interoperability and cooperation with a view toward enabling the cross-border use of digital identities and wallets."
Other areas of cooperation the statement covers include clean energy, quantum and 6G.