OpenAI’s Project Strawberry Said to Be Building AI That Reasons and Does ‘Deep Research’


Despite their uncanny language skills, today’s leading AI chatbots still struggle with reasoning. A secretive recent project from OpenAI could reportedly be on the verge of fixing that.

While today’s large language models can already perform a wide range of useful tasks, they’re still a long way from replicating the kind of problem-solving capabilities humans have. In particular, they struggle with challenges that require multiple steps of reasoning to reach an answer.

Imbuing AI with those sorts of skills would greatly increase its utility and has been a major focus for many of the leading research labs. According to recent reports, OpenAI may be near a breakthrough in this area.

An article from Reuters this week claimed its journalists had been shown an internal document from the company discussing a project code-named Strawberry that is building models capable of planning, navigating the web autonomously, and carrying out what OpenAI refers to as “deep research.”

A separate story from Bloomberg said the company had demoed research at a recent all-hands meeting that gave its GPT-4 model skills described as similar to human reasoning abilities. It’s unclear whether the demo was part of Project Strawberry.

According to the Reuters report, Project Strawberry is an extension of the Q* project that was revealed last year just before OpenAI CEO Sam Altman was ousted by the board. The model in question was supposedly capable of solving grade-school math problems.

That may sound innocuous, but some inside the company believed it signaled a breakthrough in problem-solving capabilities that could speed up progress towards artificial general intelligence, or AGI. Math has long been an Achilles’ heel for large language models, and capabilities in this area are seen as a proxy for reasoning skills.

A source told Reuters that OpenAI has tested a model internally that achieved a 90 percent score on a challenging test of AI math skills, though the outlet again couldn’t confirm whether this was related to Project Strawberry. And another two sources reported seeing demos from the Q* project in which models solved math and science questions that would be beyond today’s leading commercial AIs.

Exactly how OpenAI has achieved these enhanced capabilities is unclear at present. The Reuters report notes that Strawberry involves fine-tuning OpenAI’s existing large language models, which have already been trained on reams of data. The approach, according to the article, is similar to one detailed in a 2022 paper from Stanford researchers called Self-Taught Reasoner, or STaR.

That method builds on an idea known as “chain-of-thought” prompting, in which a large language model is asked to explain the reasoning steps behind its answer to a question. In the STaR paper, the authors showed an AI model a handful of these chain-of-thought rationales as examples and then asked it to come up with answers and rationales for a large number of questions.
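To make the prompting idea concrete, here is a minimal sketch of a few-shot chain-of-thought prompt. The example questions, numbers, and wording are illustrative inventions, not taken from the STaR paper; the point is simply that each worked example includes its reasoning steps, not just the final answer.

```python
# Each few-shot example pairs a question with a rationale and an answer.
examples = [
    ("Roger has 5 balls and buys 2 cans of 3 more. How many balls does he have?",
     "Roger starts with 5 balls. 2 cans of 3 is 6 balls. 5 + 6 = 11.",
     "11"),
]

# The new question the model is asked to answer in the same style.
question = "A baker has 12 rolls and sells 5. How many rolls are left?"

# Build the prompt: worked examples first, then the unanswered question.
prompt = ""
for q, rationale, answer in examples:
    prompt += f"Q: {q}\nA: {rationale} The answer is {answer}.\n\n"
prompt += f"Q: {question}\nA:"

print(prompt)
```

Because the examples demonstrate step-by-step reasoning, the model is nudged to produce a rationale before its answer rather than guessing directly.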

If it got a question wrong, the researchers would show the model the correct answer and then ask it to come up with a new rationale. The model was then fine-tuned on all the rationales that led to a correct answer, and the process was repeated. This led to significantly improved performance on multiple datasets, and the researchers note that the approach effectively allowed the model to self-improve by training on reasoning data it had produced itself.
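The loop described above can be sketched in a few lines. This is a toy illustration of the STaR iteration structure only: the `generate` function below is a hypothetical stand-in for sampling a large language model with a chain-of-thought prompt, and the dict-based “model” and questions are invented for the example.

```python
def generate(model, question, hint=None):
    """Stand-in for an LLM call. With a hint, it 'rationalizes':
    it is shown the correct answer and produces a rationale for it."""
    if hint is not None:
        return f"reasoning that leads to {hint}", hint
    return model.get(question, ("no rationale", None))

def star_iteration(model, dataset):
    """One STaR iteration: collect only rationales ending in a correct answer."""
    finetune_set = []
    for question, gold in dataset:
        rationale, answer = generate(model, question)
        if answer != gold:
            # The model was wrong: show the correct answer and
            # ask for a fresh rationale (rationalization).
            rationale, answer = generate(model, question, hint=gold)
        finetune_set.append((question, rationale, answer))
    return finetune_set  # the base model would now be fine-tuned on this

dataset = [("2+2", "4"), ("3*3", "9")]
model = {"2+2": ("2 plus 2 makes 4", "4")}  # answers one question correctly
finetune_set = star_iteration(model, dataset)
print(len(finetune_set))  # 2 examples, all paired with correct answers
```

In the real method, fine-tuning on `finetune_set` produces a stronger model, and the whole loop is run again with that model in place of the old one.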

How closely Strawberry mimics this approach is unclear, but if it relies on self-generated data, that could be significant. The holy grail for many AI researchers is “recursive self-improvement,” in which a weak AI can enhance its own capabilities to bootstrap itself to higher orders of intelligence.

Nevertheless, it’s important to take vague leaks from commercial AI research labs with a pinch of salt. These companies are highly motivated to give the appearance of rapid progress behind the scenes.

The fact that Project Strawberry appears to be little more than a rebranding of Q*, which was first reported over six months ago, should give pause. As far as concrete results go, publicly demonstrated progress has been fairly incremental, with the most recent AI releases from OpenAI, Google, and Anthropic providing modest improvements over previous versions.

At the same time, it would be unwise to discount the possibility of a significant breakthrough. Leading AI companies have been pouring billions of dollars into making the next great leap in performance, and reasoning is an obvious bottleneck on which to focus resources. If OpenAI has genuinely made a significant advance, it probably won’t be long until we find out.
