Why most AI benchmarks tell us so little

On Tuesday, startup Anthropic released a family of generative AI models that it claims achieve best-in-class performance. Just a few days later, rival Inflection AI unveiled a model that it asserts comes close to matching some of the most capable models on the market, including OpenAI’s GPT-4, in quality.

Anthropic and Inflection are far from the first AI firms to contend that their models meet or beat the competition by some objective measure. Google argued the same of its Gemini models at their release, and OpenAI said it of GPT-4 and its predecessors, GPT-3, GPT-2 and GPT-1. The list goes on.

But what metrics are they talking about? When a vendor says a model achieves state-of-the-art performance or quality, what does that mean, exactly? Perhaps more to the point: Will a model that technically “performs” better than another model actually feel improved in a tangible way?

On that last question, likely not.

The reason, or rather the problem, lies with the benchmarks AI companies use to quantify a model’s strengths and weaknesses.

Esoteric measures

The most commonly used benchmarks today for AI models, specifically chatbot-powering models like OpenAI’s ChatGPT and Anthropic’s Claude, do a poor job of capturing how the average person interacts with the models being tested. For example, one benchmark cited by Anthropic in its recent announcement, GPQA (“A Graduate-Level Google-Proof Q&A Benchmark”), contains hundreds of Ph.D.-level biology, physics and chemistry questions, yet most people use chatbots for tasks like responding to emails, writing cover letters and talking about their feelings.

Jesse Dodge, a scientist at the Allen Institute for AI, the AI research nonprofit, says that the industry has reached an “evaluation crisis.”

“Benchmarks are typically static and narrowly focused on evaluating a single capability, like a model’s factuality in a single domain, or its ability to solve mathematical reasoning multiple-choice questions,” Dodge told TechCrunch in an interview. “Many benchmarks used for evaluation are three-plus years old, from when AI systems were mostly just used for research and didn’t have many real users. In addition, people use generative AI in many ways; they’re very creative.”
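To see why that matters, consider what a static, multiple-choice benchmark actually does. In rough terms, the evaluation boils down to a loop like the sketch below, where the question set and the `query_model` call are hypothetical placeholders rather than any real benchmark’s code:

```python
# A minimal sketch of how a static multiple-choice benchmark scores a model.
# `query_model` and QUESTIONS are hypothetical stand-ins, not any vendor's
# or benchmark's actual evaluation code.

def query_model(prompt: str) -> str:
    """Placeholder for a call to the model under test."""
    raise NotImplementedError


QUESTIONS = [
    # Each item is frozen when the benchmark is built: prompt text,
    # fixed answer choices and a single keyed answer.
    {
        "prompt": "Which planet has the shortest orbital period?",
        "choices": ["A) Mercury", "B) Venus", "C) Mars", "D) Jupiter"],
        "answer": "A",
    },
]


def evaluate(questions: list[dict]) -> float:
    correct = 0
    for q in questions:
        prompt = q["prompt"] + "\n" + "\n".join(q["choices"]) + "\nAnswer:"
        # The score only records which letter the model picks; how helpful
        # the response would feel to a real user never enters into it.
        prediction = query_model(prompt).strip()[:1].upper()
        correct += prediction == q["answer"]
    return correct / len(questions)
```

The narrowness is the point: a single number over a fixed question set, measuring one capability at one moment in time.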

The wrong metrics

It’s not that the most-used benchmarks are totally useless. Someone’s undoubtedly asking ChatGPT Ph.D.-level math questions. But as generative AI models are increasingly positioned as mass-market, “do-it-all” systems, old benchmarks are becoming less applicable.

David Widder, a postdoctoral researcher at Cornell studying AI and ethics, notes that many of the skills common benchmarks test, from solving grade school-level math problems to identifying whether a sentence contains an anachronism, will never be relevant to the majority of users.

“Older AI systems were often built to solve a particular problem in a context (e.g. medical AI expert systems), making a deeply contextual understanding of what constitutes good performance in that particular context more possible,” Widder told TechCrunch. “As systems are increasingly seen as ‘general purpose,’ that is less possible, so we increasingly see a focus on testing models on a wide range of benchmarks across different fields.”

Errors and other flaws

Misalignment with real use cases aside, there are questions as to whether some benchmarks even properly measure what they purport to measure.

An analysis of HellaSwag, a test designed to evaluate commonsense reasoning in models, found that more than a third of the test questions contained typos and “nonsensical” writing. Elsewhere, MMLU (short for “Massive Multitask Language Understanding”), a benchmark that’s been pointed to by vendors including Google, OpenAI and Anthropic as evidence their models can reason through logic problems, asks questions that can be solved through rote memorization.

Test questions from the HellaSwag benchmark.

“[Benchmarks like MMLU are] more about memorizing and associating two keywords together,” Widder said. “I can find [a relevant] article fairly quickly and answer the question, but that doesn’t mean I understand the causal mechanism, or could use an understanding of this causal mechanism to actually reason through and solve new and complex problems in unforeseen contexts. A model can’t either.”

Fixing what’s broken

So benchmarks are broken. But can they be fixed?

Dodge thinks so — with more human involvement.

“The right path forward here is a combination of evaluation benchmarks with human evaluation,” Dodge said, “prompting a model with a real user query and then hiring a person to rate how good the response is.”
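In rough code terms, the hybrid approach Dodge describes might look something like the sketch below; `query_model` and `ask_human_rater` are hypothetical placeholders standing in for a model API call and a human-annotation step, not any existing tool:

```python
# A minimal sketch of the hybrid setup Dodge describes: run the model on real
# user queries, then have a person rate each response. Both helper functions
# are hypothetical placeholders, not an actual evaluation framework.

from statistics import mean


def query_model(prompt: str) -> str:
    raise NotImplementedError  # placeholder: call the model under test


def ask_human_rater(query: str, response: str) -> int:
    raise NotImplementedError  # placeholder: a person assigns a 1-5 quality score


def human_eval(user_queries: list[str]) -> float:
    scores = []
    for query in user_queries:
        response = query_model(query)
        # A person, not a static answer key, judges how good the response is.
        scores.append(ask_human_rater(query, response))
    return mean(scores)  # average human rating across real queries
```

The trade-off is cost: human raters are slow and expensive compared with re-running a fixed question set, which is part of why static benchmarks persist.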

As for Widder, he’s less optimistic that today’s benchmarks, even with fixes for the more obvious errors like typos, can be improved to the point where they’d be informative for the vast majority of generative AI model users. Instead, he thinks that tests of models should focus on the downstream impacts of those models and whether those impacts, good or bad, are perceived as desirable by those impacted.

“I’d ask which specific contextual goals we want AI models to be able to be used for and evaluate whether they’d be, or are, successful in such contexts,” he said. “And hopefully, too, that process involves evaluating whether we should be using AI in such contexts.”