Big Tech AI infrastructure tie-ups set for deeper scrutiny, says EU antitrust chief

The impact of AI must be front of mind for enforcers of merger control policy, the European Union’s antitrust chief and digital EVP, Margrethe Vestager, said yesterday, warning that “wide-reaching” digital markets can result in unexpected economic effects. Speaking during a seminar discussing how to prevent tech giants like Microsoft, Google and Meta from monopolizing AI, she gave a verbal shot across the bows of Big Tech to expect more — and deeper — scrutiny of their operations.

“We now have to look carefully at vertical integration and at ecosystems. We now have to take account of the impact of AI in how we assess mergers. We even need to take into consideration how AI might result in new forms of algorithmic collusion,” she said.

Her remarks suggest the bloc is likely to be a lot more active in its assessments of tech M&A going forward — and, indeed, of cosy AI partnerships.

Last month the EU said it would look into whether Microsoft’s investment in generative AI giant OpenAI is reviewable under the bloc’s merger regulations.

Vestager’s address was also notable for clearly expressing that competition challenges are inherent to how leading edge AI is developed, with the Commission EVP flagging “barriers to entry everywhere”.

“Large Language Models [LLMs] rely on huge amounts of data, they rely on cloud space, and they rely on chips. There are barriers to entry everywhere. Add to this the fact that the tech giants have the resources to amass the best and brightest talent,” she said. “We’re not going to see disruption driven by a handful of college drop-outs who somehow manage to outperform Microsoft’s partner OpenAI or Google’s DeepMind. The disruption from AI will come from within the nest of existing tech ecosystems.”

The blistering rise of generative AI over the past 12 months+ has shone a spotlight on how developments are dominated by a handful of firms with either close ties to familiar Big Tech platforms or who are tech giants themselves. Examples include ChatGPT maker OpenAI’s close partnership with hyperscaler Microsoft; Google and Amazon ploughing investment into OpenAI rival Anthropic; and Facebook’s parent Meta mining its social media data mountain to develop its own series of foundational models (aka LLaMA).

How European AI startups can hope to compete without equivalent access to key AI infrastructure was a running thread in the seminar discussions.

Challenges and uncertainties

“We’ve seen LLaMa 2 being open sourced. Will LLaMa 3 also be open sourced?” wondered Tobias Haar, general counsel of the German foundational model AI startup Aleph Alpha, speaking during a panel discussion that followed Vestager’s address. “Will there be companies that depend on open source Large Language Models that suddenly, at least not in the next iterative stage, are no longer available as open source?”

Haar emphasized that uncertainty over access to key AI inputs is why the startup took the decision to invest in building and training its own foundational models in its own data center — “in order to keep and maintain this independence”. At the same time, he flagged the challenge inherent for a European startup in attempting to compete with US hyperscalers and the dedicated compute resources they can roll out for training AIs with their chosen partners.

Aleph Alpha’s own data center runs 512 A100 Nvidia GPUs — the “largest commercial AI cluster” in Europe, per Haar. But he emphasized this pales in comparison to Big Tech’s infrastructure for training — pointing to Microsoft’s announcement last year that it would be installing circa 10,000 GPUs in the UK, as part of a £2.5BN investment over three years (which will actually fund more than 20,000 GPUs by 2026).

“In order to put it into perspective — and perspective is also what’s relevant in the competition law assessment of what’s happening in this field — we run 512 A100 GPUs by Nvidia,” he said. “That is a lot because it makes us somewhat independent but it’s still nothing compared to the sheer computing power there is for other organisations to train and to fine tune their LLMs on. And I know that OpenAI has been training the LLMs — but I understand that Microsoft is fine tuning them also to their needs. So that is already [not a level playing field].”

In her address, Vestager didn’t offer any concrete plan for how the bloc might move to level the playing field for homegrown generative AI startups — nor even entirely commit to the need for the bloc to intervene. (But tackling digital market concentration, which was built up, in part, under her watch, remains a difficult subject for the EU — which has increasingly been accused of regulating everything but changing nothing when it comes to Big Tech’s market power.)

Nonetheless, her address suggests the EU is preparing to get a lot tougher and more comprehensive in scrutinizing tech deals, as a consequence of recent developments in AI.

Only a handful of years ago Vestager cleared Google’s controversial acquisition of fitness wearable maker Fitbit, accepting commitments from the tech giant that it wouldn’t use Fitbit’s data for ads for a period of ten years — but leaving it free to mine users’ data for other purposes, including AI. (To wit: Last year Google added a generative AI chatbot to the Fitbit app.)

But the days of Big Tech getting to cherry-pick acquisition targets, and grab juicy-looking AI training data, may be winding down in Europe.

Vestager also implied the bloc will seek to make full use of existing competition tools, including the Digital Markets Act (DMA) — an ex ante competition reform which starts applying to six tech giants (including Microsoft, Google and Meta) early next month — as part of its playbook to shape how the AI market develops, suggesting the EU’s competition policy must work hand-in-glove with digital regulations to keep pace with risks and harms.

There have been doubts over how — and even whether — the DMA applies to generative AI, given no cloud services have so far been designated under the regulation as so-called “core platform services”. So there are worries the bloc has, once again, missed the boat when it comes to putting meaningful market controls on the next wave of disruptive tech.

In her address, Vestager rejected the idea that it’s already too late for the EU to stop Big Tech sewing up AI markets — tentatively suggesting “we can make an impact” — but she also warned the “window of opportunity” for enforcers and lawmakers to shape outcomes that are “truly beneficial to our economy, to our citizens and to our democracies”, as she put it, will only be briefly open.

Still, her speech raised a lot more questions over how enforcers and policymakers should respond to the layered challenges thrown up by AI — including democratic integrity, intellectual property and the ethical application of such systems, to name a few — than she had actual solutions. She also sounded a little hesitant when it came to how to weigh competition considerations against the broader sweep of societal harms AI use may entail. So her message — and resolve — seemed a little conflicted.

“There are still big questions around how intellectual property rights are respected. About how ethical AI is deployed. About areas where AI should never be deployed. In each of these decisions, there is a competition policy dimension that needs to be considered. Conversely, how AI regulation is enforced will affect the openness and accessibility of the markets it impacts,” she said, implying there may be trade-offs between regulating AI risks and creating a vibrant AI ecosystem.

“There are questions around input neutrality and the influence such systems could have on our democracies. A Large Language Model is only as good as the inputs it receives, and for this there must always be a discretionary element. Do we really want our opinion-making to be reliant on AI systems that are under the control not of the European people — but of tech oligarchs and their shareholders?” she also wondered, suggesting the bloc may have to think about drafting even more laws to manage AI risks.

Clearly, coming up with more laws now isn’t a recipe for fast action on AI — yet her speech literally called for “acting swiftly” (and “thinking ahead” and “cooperating”) to maximize the benefits of AI while minimizing the risks.

Overall, despite the promise of more intelligent merger scrutiny, the tone she struck veered toward ‘managing expectations’. And her call to action appealed to a broader collective of international enforcers, regulators and policymakers to join forces to fix this one — rather than the EU sticking its head above the parapet.

While Vestager avoided easy answers for derailing Big Tech’s well-funded dash to monopolize AI, other panellists offered a few.

Solutions

The fieriest ideas came from Barry Lynn of the Washington-based Open Markets Institute, a non-profit whose stated mission starts with stopping monopolies. “Let’s break off cloud,” he suggested. “Let’s turn cloud into a utility. It’s pretty easy to do. This is actually one of the easiest solutions we can embrace right now — and it would take away a huge amount of their leverage.”

He also called for a blanket non-discrimination regime (i.e. “common carrier” type rules for platforms to ban price discrimination and information manipulation); and for a requisitioning of aggregated “public data” tech giants have amassed by tracking web users. “Why does Google own the data? That’s our data,” he argued. “It’s public data… It doesn’t belong to Google — it doesn’t belong to any of these folks. It’s our data. Let’s exert ownership over it.”

Microsoft’s director of competition, Carel Maske, who had — awkwardly enough — been seated right next to Lynn on the panel, all but broke into a sweat when the moderator offered him the chance to respond to that. “I think there’s a lot to discuss,” he hedged, before doing his best to brush aside Lynn’s case for immediate structural separation of hyperscalers.

“I’m not sure you’re addressing, really, the needs of the investments that are needed in cloud and infrastructure,” he got out, dangling a skeletal argument against being broken up (i.e. that structural separation of Big Tech from core AI infrastructure would undermine the investment needed to drive innovation forward), before hurrying to route the chat back to more comfortable topics (like “how to make competition tools work” or “what the right regulatory framework is”), which Microsoft evidently feels won’t prevent Big Tech business as usual.

Talking of whether existing competition tools are capable of doing the job of bringing tech giants’ scramble for AI to heel, another panellist, Andreas Mundt — president of the German competition authority, the Federal Cartel Office (FCO) — had a negative perspective to recount, drawn from recent experience.

Existing merger processes have already failed, domestically, to tackle Microsoft’s cosy relationship with OpenAI, he pointed out. The FCO took an early look at whether the partnership should be subject to merger control — before deciding, last November, that the arrangement didn’t “currently” meet the bar.

During the panel, Mundt said he would have liked a very different outcome. He argued tech giants have — very evidently — changed tack from the earlier “killer acquisition” strategy they deployed to slay emergent competition — to a softer partnership model that allows these close engagements to fly under enforcers’ radar.

“All we see are very soft cooperations,” he noted. “This is why we looked at this Microsoft OpenAI issue — and what did we find? Well, we weren’t very pleased about it but from a formal perspective, we couldn’t say this was a merger.

“What we found — and this shouldn’t be underestimated — in 2019 when Microsoft invested more than €1 billion into OpenAI we saw the creation of a substantial competitive influence of Microsoft into OpenAI. And that was long before Sam Altman was fired and rehired again. So there is this influence, as we see it, and this is why merger control is so important.

“But we couldn’t prohibit that as a merger, by the way, because by that time, OpenAI had no impact in Germany — they weren’t active on German markets — which is why it was not a merger from our perspective. But what remains, and it is very, very important, there is this substantial, competitive influence — and we must look at that.”

Asked what he would have liked to be able to do about Microsoft OpenAI, the FCO’s Mundt said he wanted to look at the core question: “Was it a merger? And was it a merger that perhaps must go to phase two — that we should assess and perhaps block?”

Striking a more positive note, the FCO president professed himself “very pleased” the European Commission subsequently decided — last month — to open its own proceeding to check whether Microsoft and OpenAI’s partnership falls under the bloc’s merger rules. He also highlighted the UK competition authority’s move here, in December, when it said it would look at whether the tie-up amounts to a “relevant merger” situation.

Those proceedings are ongoing.

“I can promise you, we’ll look at all these cooperations very carefully — and if we see, if it only gets close to a merger, we’ll try to get it in [to merger rules],” Mundt added, factoring fellow enforcers’ actions into his calculation of what success looks like here.

A whole army of competition and digital rule enforcers working together — even in parallel — to attack the knotty problems thrown up by Big Tech + AI was also named by Vestager as a critical piece for cracking this puzzle. (And on this front, she encouraged responses to an open consultation on generative AI and virtual worlds that the competition unit is running until March 11.)

“For me, the very first lesson from our experience so far is that our impact will always be biggest when we work together, communicate clearly, and act early on,” she emphasized, adding: “I will continue to engage with my counterparts in the United States and elsewhere, to align our approach as much as possible.”
