Tech giants form industry group to help develop next-gen AI chip components


Intel, Google, Microsoft, Meta and other tech heavyweights are establishing a new industry group, the Ultra Accelerator Link (UALink) Promoter Group, to guide the development of the components that link together AI accelerator chips in data centers.

Announced Thursday, the UALink Promoter Group — which also counts AMD (but not Arm just yet), Hewlett Packard Enterprise, Broadcom and Cisco among its members — is proposing a new industry standard to connect the AI accelerator chips found inside a growing number of servers. Broadly defined, AI accelerators are chips ranging from GPUs to custom-designed solutions that speed up the training, fine-tuning and running of AI models.

“The industry needs an open standard that can be moved forward very quickly, in an open [format] that allows multiple companies to add value to the overall ecosystem,” Forrest Norrod, AMD’s GM of data center solutions, told reporters in a briefing Wednesday. “The industry needs a standard that allows innovation to proceed at a rapid clip unfettered by any single company.”

Version one of the proposed standard, UALink 1.0, will connect up to 1,024 AI accelerators — GPUs only — across a single computing “pod.” (The group defines a pod as one or several racks in a server.) UALink 1.0, based on “open standards” including AMD’s Infinity Fabric, will allow for direct loads and stores between the memory attached to AI accelerators, and generally boost speed while lowering data transfer latency compared to existing interconnect specs, according to the UALink Promoter Group.

Image Credits: UALink Promoter Group

The group says it will create a consortium, the UALink Consortium, in Q3 to oversee development of the UALink spec going forward. UALink 1.0 will be made available around the same time to companies that join the consortium, with a higher-bandwidth updated spec, UALink 1.1, set to arrive in Q4 2024.

The first UALink products will launch “in the next couple of years,” Norrod said.

Glaringly absent from the list of the group’s members is Nvidia, which is by far the largest producer of AI accelerators, with an estimated 80% to 95% of the market. Nvidia declined to comment for this story. But it isn’t tough to see why the chipmaker isn’t enthusiastically throwing its weight behind UALink.

For one, Nvidia offers its own proprietary interconnect tech for linking GPUs within a data center server. The company is likely none too keen to support a spec based on rival technologies.

Then there’s the fact that Nvidia is operating from a position of enormous strength and influence.

In Nvidia’s most recent fiscal quarter (Q1 2025), the company’s data center sales, which include sales of its AI chips, rose more than 400% from the year-ago quarter. If Nvidia continues on its current trajectory, it’s set to surpass Apple as the world’s second-most valuable company sometime this year.

So, simply put, Nvidia doesn’t have to play ball if it doesn’t want to.

As for Amazon Web Services (AWS), the lone public cloud giant not contributing to UALink, it might be in “wait and see” mode as it chips (no pun intended) away at its various in-house accelerator hardware efforts. It may also be that AWS, with a stranglehold on the cloud services market, doesn’t see much of a strategic point in opposing Nvidia, which supplies many of the GPUs it serves to customers.

AWS didn’t reply to TechCrunch’s request for comment.

Indeed, the biggest beneficiaries of UALink — besides AMD and Intel — appear to be Microsoft, Meta and Google, which combined have spent billions of dollars on Nvidia GPUs to power their clouds and train their ever-growing AI models. All are looking to wean themselves off a vendor they see as worrisomely dominant in the AI hardware ecosystem.

In a recent report, Gartner estimates that the value of AI accelerators used in servers will total $21 billion this year, increasing to $33 billion by 2028. Revenue from AI chips, meanwhile, will hit $33.4 billion by 2025, Gartner projects.

Google has custom chips for training and running AI models: TPUs and Axion. Amazon has several AI chip families under its belt. Microsoft jumped into the fray last year with Maia and Cobalt. And Meta is refining its own lineup of accelerators.

Elsewhere, Microsoft and its close collaborator, OpenAI, reportedly plan to spend at least $100 billion on a supercomputer for training AI models that will be outfitted with future versions of Cobalt and Maia chips. Those chips will need something to link them — and perhaps it’ll be UALink.
