Intel, Google and other tech giants team up to develop UALink AI chip interconnect


Eight of the tech industry’s largest players are teaming up to launch the UALink Promoter Group, a new artificial intelligence hardware initiative detailed today.

The project focuses on developing an industry-standard approach to linking together AI chips such as graphics processing units. The goal, the initiative’s backers detailed, is to ease the task of assembling AI clusters that contain numerous chips. Improved infrastructure scalability is another goal.

The UALink Promoter Group is backed by chipmakers Intel Corp., Advanced Micro Devices Inc. and Broadcom Inc. Two of the industry’s three largest cloud providers, Microsoft Corp. and Google LLC, are participating as well, along with Meta Platforms Inc., Cisco Systems Inc. and Hewlett Packard Enterprise Co. The effort is seen as a counter to Nvidia Corp.’s dominance in GPUs and the systems built around those chips.

The group plans to incorporate an official industry consortium in the third quarter to oversee the development effort. The UALink Consortium, as the body is set to be called, will release the first iteration of its AI interconnect technology later in the same quarter. The specification will be accessible to the companies that take part in the initiative.

Advanced AI models are typically trained using not one but multiple processors. Each processor runs a separate copy of the neural network being developed and trains it on a subset of the training dataset. To complete the development process, the chips synchronize their copies of the neural network, which necessitates a channel through which those chips can exchange data with one another.
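The synchronization step described above can be sketched in plain Python. This is a minimal toy illustration of data-parallel training, not any real framework's API: each simulated "accelerator" computes a gradient on its own data shard, and the replicas then average their gradients (an all-reduce) before applying the same weight update.

```python
# Toy sketch of data-parallel training (illustrative only; names are
# hypothetical). Model: y = w * x with a single scalar weight.

def local_gradient(w, shard):
    # Gradient of mean squared error on this accelerator's data shard.
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(values):
    # The synchronization step: replicas exchange and average gradients.
    return sum(values) / len(values)

def train_step(w, shards, lr):
    grads = [local_gradient(w, s) for s in shards]  # each chip works alone
    g = all_reduce_mean(grads)                      # chips sync over the interconnect
    return w - lr * g                               # identical update everywhere

# Four "accelerators", each holding a shard of data generated from y = 3x.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
w = 0.0
for _ in range(200):
    w = train_step(w, shards, lr=0.02)
print(round(w, 3))  # converges toward 3.0
```

The all-reduce here is a single function call; in a real cluster it is the traffic that an interconnect such as UALink carries, and its bandwidth and latency bound how quickly each training step can complete.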

That’s the requirement the UALink Consortium’s planned interconnect is meant to address. According to the group, the technology will make it possible to link together up to 1,024 AI accelerators in a single cluster. Moreover, UALink will be capable of connecting such clusters to network switches that can help optimize the flow of data traffic between the individual processors.

The consortium detailed that one of the features in the works is a capability for facilitating “direct loads and stores between the memory attached to accelerators.” Facilitating direct access to AI chips’ memory is a way of speeding up machine learning applications. Nvidia also provides an implementation of this approach in the form of GPUDirect, a technology available for its data center graphics cards.

Normally, a piece of data that travels from one GPU to another has to make several pit stops before reaching its destination. Specifically, the data must travel through the central processing units of the servers that host the graphics cards. Nvidia’s GPUDirect technology makes it possible to bypass the CPU, which allows data to reach its destination faster and thereby speeds up processing.
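The difference between the two paths can be made concrete with a small sketch. This is purely illustrative (the hop names are hypothetical, not a real driver API): it models the conventional route, which stages the buffer in each host's CPU memory, against a direct peer-to-peer route.

```python
# Illustrative model of GPU-to-GPU transfer paths (hypothetical hop names).

def transfer_path(direct: bool) -> list:
    if direct:
        # GPUDirect-style transfer: data crosses the fabric without
        # being staged in host (CPU) memory on either side.
        return ["gpu0", "fabric", "gpu1"]
    # Conventional route: the buffer bounces through each host's CPU memory.
    return ["gpu0", "cpu0_memory", "fabric", "cpu1_memory", "gpu1"]

conventional = transfer_path(direct=False)
direct = transfer_path(direct=True)
print(len(conventional) - len(direct))  # prints 2: the CPU staging hops skipped
```

Each skipped hop is a copy into and out of host memory, which is where the latency and bandwidth savings of direct memory access come from.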

The UALink Consortium is at least the third industry group focused on AI chips to have launched in the past five years.

AI clusters include not only machine learning accelerators but also CPUs that perform various supporting tasks. In 2019, Intel released an interconnect called CXL for linking AI accelerators to CPUs. It also established an industry consortium to promote the development and adoption of the standard.

CXL is a customized version of the widely used PCIe interconnect for linking together server components. Intel modified the latter technology with several AI-specific optimizations. One of those optimizations allows the interconnected CPUs and GPUs in an AI cluster to share memory with one another, which lets them exchange data more efficiently.

Last year, Intel teamed up with Arm Holdings plc and several other chipmakers to launch an AI software consortium called the UXL Foundation. The group’s goal is to ease the task of developing AI applications that can run on multiple kinds of machine learning accelerators. The technology the UXL Foundation is developing to that end is based on oneAPI, a toolkit for building multiprocessor software that was originally developed by Intel.

Image: Unsplash

