A Wall Street Journal report on Friday said Nvidia insiders had expressed doubts about the deal and that Huang had privately criticized what he described as a lack of discipline in OpenAI’s business approach. The Journal also reported that Huang had expressed concern about the competition OpenAI faces from Google and Anthropic. Huang called those claims “nonsense.”
Nvidia shares fell about 1.1 percent on Monday following the reports. Sarah Kunst, managing director at Cleo Capital, told CNBC that the back-and-forth was unusual. “One of the things I did notice about Jensen Huang is that there wasn’t a strong ‘It will be $100 billion.’ It was, ‘It will be big. It will be our biggest investment ever.’ And so I do think there are some question marks there.”
In September, Bryn Talkington, managing partner at Requisite Capital Management, noted the circular nature of such investments to CNBC. “Nvidia invests $100 billion in OpenAI, which then OpenAI turns around and gives it back to Nvidia,” Talkington said. “I feel like that is going to be very virtuous for Jensen.”
Tech critic Ed Zitron has long been critical of Nvidia’s circular investments, which touch dozens of tech companies, from major players to startups. They are also all Nvidia customers.
“NVIDIA seeds companies and gives them the guaranteed contracts necessary to raise debt to buy GPUs from NVIDIA,” Zitron wrote on Bluesky last September, “even though these companies are horribly unprofitable and will eventually die from a lack of any real demand.”
Chips from elsewhere
Beyond sourcing GPUs from Nvidia, OpenAI has reportedly discussed working with startups Cerebras and Groq, both of which build chips designed to reduce inference latency. But in December, Nvidia struck a $20 billion licensing deal with Groq, which Reuters sources say ended OpenAI’s talks with the startup. Nvidia hired Groq’s founder and CEO Jonathan Ross, along with other senior leaders, as part of the arrangement.
In January, OpenAI announced a $10 billion deal with Cerebras instead, adding 750 megawatts of computing capacity for faster inference through 2028. Sachin Katti, who joined OpenAI from Intel in November to lead compute infrastructure, said the partnership adds “a dedicated low-latency inference solution” to OpenAI’s platform.
But OpenAI has clearly been hedging its bets. Beyond the Cerebras deal, the company struck an agreement with AMD in October for six gigawatts of GPUs and announced plans with Broadcom to develop a custom AI chip to reduce its dependence on Nvidia. When those chips will be ready, however, is currently unknown.

