The European Union has taken the wraps off the structure of the new AI Office, the ecosystem-building and oversight body being established under the bloc’s AI Act. The risk-based regulatory framework for artificial intelligence is expected to enter into force by the end of July, following the regulation’s final approval by EU lawmakers last week. The AI Office itself will start operating on June 16.
The AI Office reflects the bloc’s larger ambitions in AI. It will play a key role in shaping the European AI ecosystem over the coming years, with a dual remit of helping to manage AI risks and fostering uptake and innovation. But the bloc also hopes the AI Office can exert wider influence on the global stage, as many countries and jurisdictions are looking to understand how to approach AI governance. In all, it will be made up of five units.
Here’s a breakdown of what each of the five units of the EU’s AI Office will focus on:
One unit will tackle “regulation and compliance”, including liaising with EU Member States to support harmonized application and enforcement of the AI Act. “The unit will contribute to investigations and possible infringements, administering sanctions,” per the Commission, which intends the Office to play a supporting role to the EU country-level governance bodies the law will also establish to enforce the broad sweep of the regime.
Another unit will deal with “AI Safety”. The Commission said it will focus on “the identification of systemic risks of very capable general-purpose models, possible mitigation measures as well as evaluation and testing approaches”, with general-purpose AI models (GPAIs) referring to the recent wave of generative AI technologies such as the foundational models that underpin tools like ChatGPT. The EU said the unit will be most concerned with GPAIs posing so-called “systemic risk”, which the law defines as models trained above a certain compute threshold.
The AI Office will have responsibility for directly enforcing the AI Act’s rules for GPAIs, so the relevant units are expected to conduct testing and evaluation of GPAIs, as well as using powers to request information from AI giants to enable that oversight.
The compliance unit’s work will also include producing templates that GPAIs will be expected to use, such as for summarizing any copyrighted material used to train their models.
While having a dedicated AI Safety unit seems necessary to give full effect to the law’s rules for GPAIs, it also looks intended to respond to international developments in AI governance since the EU’s law was drafted, such as the UK and US announcing their own respective AI Safety Institutes last fall. The big difference, though, is that the EU’s AI Safety unit is armed with legal powers.
A third unit of the AI Office will dedicate itself to what the Commission dubs “Excellence in AI and Robotics”, including supporting and funding AI R&D. The Commission said this unit will coordinate with its previously announced “GenAI4EU” initiative, which aims to stimulate the development and uptake of generative AI models, including by upgrading Europe’s network of supercomputers to support model training.
A fourth unit is focused on “AI for Societal Good”. The Commission said it will “design and implement” the Office’s international engagement on large projects where AI could have a positive societal impact, such as in areas like weather modelling, cancer diagnoses and digital twins for artistic reconstruction.
Back in April, the EU announced that a planned AI collaboration with the US, on AI safety and risk research, would also include a focus on joint work on uses of AI for the public good. So this component of the AI Office was already sketched out.
Finally, a fifth unit will tackle “AI Innovation and Policy Coordination”. The Commission said its role will be to ensure the execution of the bloc’s AI strategy, including “monitoring trends and investment, stimulating the uptake of AI through a network of European Digital Innovation Hubs and the establishment of AI Factories, and fostering an innovative ecosystem by supporting regulatory sandboxes and real-world testing”.
Having three of the five units of the EU AI Office working, broadly speaking, on AI uptake, investment and ecosystem building, while just two are concerned with regulatory compliance and safety, looks intended to offer further reassurance to industry that the EU’s speed in producing a rulebook for AI isn’t anti-innovation, as some homegrown AI developers have complained. The bloc also argues trustworthiness will foster adoption of AI.
The Commission has already appointed the heads of several of the AI Office units, and the overall head of the Office itself, but the AI Safety unit’s chief has yet to be named. A lead scientific advisor role is also vacant. Confirmed appointments are: Lucilla Sioli, head of the AI Office; Kilian Gross, head of the Regulation & Compliance unit; Cecile Huet, Excellence in AI and Robotics Unit; Martin Bailey, AI for Societal Good Unit; and Malgorzata Nikowska, AI Innovation and Policy Coordination Unit.
The AI Office was established by a Commission decision back in January and began preparatory work, such as deciding the structure, in late February. It sits within the EU’s digital department, DG Connect, which is (currently) headed by internal market commissioner Thierry Breton.
The AI Office will eventually have a headcount of more than 140 people, including technical staff, lawyers, political scientists and economists. On Wednesday the EU said some 60 staff have been put in place so far. It plans to ramp up hiring over the next couple of years as the law is implemented and becomes fully operational. The AI Act takes a phased approach to its rules, with some provisions set to apply six months after the law comes into force, while others get a longer lead-in of a year or more.
One key upcoming role for the AI Office will be drawing up Codes of Practice and best practices for AI developers, which the EU wants to play a stop-gap role while the legal rulebook is phased in.
A Commission official said the Code is expected to launch soon, once the AI Act enters into force later this summer.
Other work for the AI Office includes liaising with a range of other fora and expert bodies the AI Act will establish to knit together the EU’s governance and ecosystem-building approach, including the European Artificial Intelligence Board, a body which will be made up of representatives from Member States; a scientific panel of independent experts; and a broader advisory forum composed of stakeholders including industry, startups and SMEs, academia, think tanks and civil society.
“The first meeting of the AI Board should take place by the end of June,” the Commission noted in a press release, adding: “The AI Office is preparing guidelines on the AI system definition and on the prohibitions, both due six months after the entry into force of the AI Act. The Office is also getting ready to coordinate the drawing up of codes of practice for the obligations for general-purpose AI models, due 9 months after entry into force.”
This report was updated with the names of confirmed appointments after the Commission provided the information.