Multichip inference cloud startup Gimlet Labs raises $80M to solve one of AI's biggest bottlenecks

Gimlet Labs Inc. said today it has raised $80 million in early-stage funding to solve a bottleneck that's holding back artificial intelligence inference.

The startup, which has raised $92 million in total, has created what's said to be the world's first and only "multi-silicon inference cloud." It differs from standard inference clouds in that it enables AI workloads to run concurrently across various kinds of chips. For example, an AI application's work might be split across traditional central processing units, high-performance graphics processing units and other kinds of processors. Inference is the process of using a trained machine learning model to make predictions or decisions on new, unseen data, turning AI into action.

Menlo Ventures partner Tim Tully discussed in a blog post why multi-silicon inference is so useful. He explained that when an autonomous AI agent is assigned a task, it may "chain together dozens of model calls, retrieval steps and tool invocations across non-linear branching logic." Each step in this chain is best performed by different hardware. For example, prefill is compute-bound, decode is memory-bound and tool calls are network-bound.

"No single chip can handle all three efficiently. Instead, the answer is heterogeneous," Tully said.

Compute-intensive batch inference is best done using GPUs, while latency-sensitive workloads can benefit from running on specialized static random-access memory-heavy processors such as those from Groq, Cerebras and d-Matrix, since these deliver exceptional speed. Tasks such as orchestration and tool use, on the other hand, generally run better on CPUs. "The multi-silicon fleet is ready – it's just missing the software layer to make it work," Tully said.
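The routing idea Tully describes can be sketched in a few lines of Python. This is a toy illustration of the concept only, not Gimlet Labs' actual software; the phase names, bottleneck labels and chip mappings below are assumptions drawn from the breakdown above.

```python
# Each phase of an agent's request chain is dispatched to the hardware
# class whose strength matches that phase's bottleneck.

# Which resource each inference phase is bound by, per Tully's breakdown.
PHASE_BOUND = {
    "prefill": "compute",        # compute-bound: large batched matrix multiplies
    "decode": "memory",          # memory-bound: streaming weights per output token
    "tool_call": "network",      # network-bound: waiting on external services
    "orchestration": "control",  # branchy control flow between steps
}

# A plausible mapping from bottleneck type to chip class.
BOUND_TO_CHIP = {
    "compute": "GPU",
    "memory": "SRAM-heavy accelerator",  # Groq/Cerebras/d-Matrix-style parts
    "network": "CPU",
    "control": "CPU",
}

def route(chain):
    """Assign each step in an agent's call chain to a chip class."""
    return [(step, BOUND_TO_CHIP[PHASE_BOUND[step]]) for step in chain]

if __name__ == "__main__":
    chain = ["orchestration", "prefill", "decode", "tool_call", "decode"]
    for step, chip in route(chain):
        print(f"{step:>14} -> {chip}")
```

The missing "software layer" Tully refers to is, in effect, a production-grade version of `route`: a scheduler that knows each step's bottleneck and each chip's strengths, plus the machinery to move work and state between them.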

By splitting up AI tasks across multiple processors in this fashion, Gimlet Labs says, it can dramatically improve efficiency and reduce the time chips spend sitting idle, waiting for instructions. It reckons it can speed up inference workloads by anywhere from three to 10 times for the same cost and power. It can even slice up AI models themselves, so that different parts of them run on different chips.
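Slicing a model across chips amounts to partitioning its layers into contiguous stages and passing activations between them, a technique generally known as pipeline parallelism. The sketch below simulates that idea in plain Python; it is a hypothetical illustration under that assumption, not Gimlet Labs' partitioner, and the device names are made up.

```python
# Split a model's layers into contiguous stages, one per device, then run
# input through the stages in order, as a pipeline-parallel system would.

def partition(layers, devices):
    """Split layers into len(devices) contiguous stages, one per device."""
    n, k = len(layers), len(devices)
    size = -(-n // k)  # ceiling division so every layer lands in a stage
    return [(devices[i], layers[i * size:(i + 1) * size]) for i in range(k)]

def run(x, stages):
    """Feed the input through each device's stage in order."""
    for device, stage in stages:
        for layer in stage:
            x = layer(x)  # in a real system this hop crosses chip boundaries
    return x

if __name__ == "__main__":
    layers = [lambda v, i=i: v + i for i in range(6)]  # stand-in "layers"
    stages = partition(layers, ["GPU-0", "accelerator-1", "CPU-2"])
    print(run(0, stages))  # sums 0..5, so prints 15
```

In production the interesting work is deciding where to cut: each boundary adds an inter-chip transfer, so the partitioner has to balance per-stage compute against the cost of moving activations between devices.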

Gimlet Labs founder and Chief Executive Zain Asgar told TechCrunch in an interview that existing hardware generally runs at only between 15% and 30% efficiency. "You're wasting hundreds of billions of dollars because you're just leaving idle resources," he said. "Our goal was basically to try to figure out how you can get AI workloads to be 10 times more efficient than ever, today."

The startup's software isn't aimed at regular rank-and-file developers. Instead, Gimlet Labs is going after the big players that run the largest AI model labs and the most expansive data centers. Its partners include some of the biggest chipmakers, including Nvidia Corp., Advanced Micro Devices Inc., Intel Corp., Arm Holdings Plc and Cerebras Systems Inc. Asgar told TechCrunch that the company is already generating eight-figure revenue despite only launching its platform in October. In the last four months it has doubled its customer base, with clients including a major model maker and an "extremely large" cloud computing company, he said.

The Series A round was led by Menlo Ventures and saw participation from Factory, which led the company's seed round, as well as Eclipse, Prosperity7 and Triamtomic.

Today's funding round is all about giving Gimlet Labs the resources it needs to scale and ensure high-speed, efficient multichip inference becomes the norm. With that in mind, the startup plans to expand its team and grow its inference cloud to meet the rapidly growing demand for faster inference.

Photo: Gimlet Labs


About SiliconANGLE Media

SiliconANGLE Media is a recognized leader in digital media innovation, uniting breakthrough technology, strategic insights and real-time audience engagement. As the parent company of SiliconANGLE, theCUBE Network, theCUBE Research, CUBE365, theCUBE AI and theCUBE SuperStudios — with flagship locations in Silicon Valley and the New York Stock Exchange — SiliconANGLE Media operates at the intersection of media, technology and AI.

Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging the theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.
