Data is at the center of today’s advanced AI systems, but it’s costing more and more, putting it out of reach for all but the wealthiest tech companies.
Last year, James Betker, a researcher at OpenAI, penned a post on his personal blog about the nature of generative AI models and the datasets on which they’re trained. In it, Betker claimed that training data, not a model’s design, architecture or any other characteristic, was the key to increasingly sophisticated, capable AI systems.
“Trained on the same dataset for long enough, pretty much every model converges to the same point,” Betker wrote.
Is Betker right? Is training data the biggest determiner of what a model can do, whether that’s answering a question, drawing human hands or generating a realistic cityscape?
It’s definitely plausible.
Statistical machines
Generative AI systems are basically probabilistic models: a huge pile of statistics. They guess, based on vast numbers of examples, which data makes the most “sense” to place where (e.g., the word “go” before “to the market” in the sentence “I go to the market”). It seems intuitive, then, that the more examples a model has to go on, the better the performance of models trained on those examples.
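To make that statistical guessing concrete, here’s a minimal sketch in Python: a toy bigram counter over a made-up corpus, which is nothing like a production-scale model but illustrates the same principle of predicting the likeliest next word from observed examples.

```python
from collections import Counter, defaultdict

# Toy corpus; real models ingest trillions of words, not a dozen.
corpus = "i go to the market . i go to the park . i walk to the market".split()

# Count which word follows which (a "bigram" table of statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("go"))   # -> "to": the likeliest continuation seen in training
print(predict_next("the"))  # -> "market": seen twice, versus "park" once
```

More examples sharpen those counts, which is the intuition behind “more data, better model.”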
“It does seem like the performance gains are coming from data,” Kyle Lo, a senior applied research scientist at the Allen Institute for AI (AI2), an AI research nonprofit, told TechCrunch, “at least once you have a stable training setup.”
Lo gave the example of Meta’s Llama 3, a text-generating model released earlier this year, which outperforms AI2’s own OLMo model despite being architecturally very similar. Llama 3 was trained on significantly more data than OLMo, which Lo believes explains its superiority on many popular AI benchmarks.
(I’ll point out here that the benchmarks in wide use in the AI industry today aren’t necessarily the best gauge of a model’s performance, but outside of qualitative tests like our own, they’re one of the few measures we have to go on.)
That’s not to suggest that training on exponentially larger datasets is a sure-fire path to exponentially better models. Models operate on a “garbage in, garbage out” paradigm, Lo notes, and so data curation and quality matter a great deal, perhaps more than sheer quantity.
“It is possible that a small model with carefully designed data outperforms a large model,” he added. “For example, Falcon 180B, a large model, is ranked 63rd on the LMSYS benchmark, while Llama 2 13B, a much smaller model, is ranked 56th.”
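What “carefully designed data” can mean in practice is filtering and deduplication before training. Here’s a minimal, hypothetical sketch; the thresholds are invented for illustration, and real curation pipelines use far more elaborate heuristics.

```python
def keep(doc: str, seen: set) -> bool:
    """Hypothetical quality filter: drop short, repetitive or duplicate docs."""
    words = doc.split()
    if len(words) < 50:                      # too short to carry much signal
        return False
    if len(set(words)) / len(words) < 0.3:   # highly repetitive (spam-like) text
        return False
    fingerprint = hash(doc.strip().lower())
    if fingerprint in seen:                  # exact duplicate of an earlier doc
        return False
    seen.add(fingerprint)
    return True

seen = set()
raw_corpus = ["...web documents go here..."]  # placeholder input
cleaned = [doc for doc in raw_corpus if keep(doc, seen)]
```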
In an interview with TechCrunch last October, OpenAI researcher Gabriel Goh said that higher-quality annotations contributed enormously to the improved image quality in DALL-E 3, OpenAI’s text-to-image model, over its predecessor DALL-E 2. “I think this is the main source of the improvements,” he said. “The text annotations are a lot better than they were [with DALL-E 2]; it’s not even comparable.”
Many AI models, including DALL-E 3 and DALL-E 2, are trained by having human annotators label data so that a model can learn to associate those labels with other, observed characteristics of that data. For example, a model that’s fed lots of cat pictures with annotations for each breed will eventually “learn” to associate terms like bobtail and shorthair with their distinctive visual traits.
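As a toy illustration of that labeling process, consider the sketch below; the features are invented and a nearest-neighbor classifier stands in for a real vision model, but the principle is the same: annotated examples pair data with human-provided labels, and the model learns the association.

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical features extracted from cat photos: [ear_height, tail_length].
features = [[0.9, 0.2], [0.8, 0.1], [0.3, 0.9], [0.2, 0.8]]
labels = ["bobtail", "bobtail", "shorthair", "shorthair"]  # annotator-provided

# The model learns to associate the labels with the observed traits.
model = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
print(model.predict([[0.85, 0.15]]))  # -> ['bobtail']
```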
Bad behavior
Experts like Lo worry that the growing emphasis on large, high-quality training datasets will centralize AI development into the few players with billion-dollar budgets that can afford to acquire these sets. Major innovation in synthetic data or fundamental architecture could disrupt the status quo, but neither appears to be on the near horizon.
“Overall, entities governing content that’s potentially useful for AI development are incentivized to lock up their materials,” Lo said. “And as access to data closes up, we’re basically blessing a few early movers on data acquisition and pulling up the ladder so nobody else can get access to data to catch up.”
Indeed, where the race to scoop up more training data hasn’t led to unethical (and maybe even illegal) behavior like secretly aggregating copyrighted content, it has rewarded tech giants with deep pockets to spend on data licensing.
Generative AI models such as OpenAI’s are trained mostly on images, text, audio, videos and other data, some of it copyrighted, sourced from public web pages (including, problematically, AI-generated ones). The OpenAIs of the world assert that fair use shields them from legal reprisal. Many rights holders disagree, but, at least for now, they can’t do much to prevent this practice.
There are many, many examples of generative AI vendors acquiring massive datasets through questionable means in order to train their models. OpenAI reportedly transcribed more than a million hours of YouTube videos, without YouTube’s blessing or the blessing of creators, to feed its flagship model GPT-4. Google recently broadened its terms of service in part to be able to tap public Google Docs, restaurant reviews on Google Maps and other online material for its AI products. And Meta is said to have considered risking lawsuits to train its models on IP-protected content.
Meanwhile, companies large and small are relying on workers in third-world countries paid only a few dollars per hour to create annotations for training sets. Some of these annotators, employed by mammoth startups like Scale AI, work literal days on end to complete tasks that expose them to graphic depictions of violence and bloodshed without any benefits or guarantees of future gigs.
Growing cost
In other words, even the more aboveboard data deals aren’t exactly fostering an open and equitable generative AI ecosystem.
OpenAI has spent hundreds of millions of dollars licensing content from news publishers, stock media libraries and more to train its AI models, a budget far beyond that of most academic research groups, nonprofits and startups. Meta has gone so far as to weigh acquiring the publisher Simon & Schuster for the rights to e-book excerpts (ultimately, Simon & Schuster sold to private equity firm KKR for $1.62 billion in 2023).
With the market for AI training data expected to grow from roughly $2.5 billion now to close to $30 billion within a decade, data brokers and platforms are rushing to charge top dollar, in some cases over the objections of their user bases.
Stock media library Shutterstock has inked deals with AI vendors ranging from $25 million to $50 million, while Reddit claims to have made hundreds of millions from licensing data to orgs such as Google and OpenAI. Few platforms with abundant data collected organically over time haven’t signed agreements with generative AI developers, it seems, from Photobucket to Tumblr to Q&A site Stack Overflow.
It’s the platforms’ data to sell, at least depending on which legal arguments you believe. But in most cases, users aren’t seeing a dime of the profits. And it’s harming the broader AI research community.
“Smaller players won’t be able to afford these data licenses, and therefore won’t be able to develop or study AI models,” Lo said. “I worry this could lead to a lack of independent scrutiny of AI development practices.”
Independent efforts
If there’s a ray of sunshine through the gloom, it’s the few independent, not-for-profit efforts to create massive datasets anyone can use to train a generative AI model.
EleutherAI, a grassroots nonprofit research group that began as a loose-knit Discord collective in 2020, is working with the University of Toronto, AI2 and independent researchers to create The Pile v2, a set of billions of text passages primarily sourced from the public domain.
In April, AI startup Hugging Face released FineWeb, a filtered version of Common Crawl (the eponymous dataset maintained by the nonprofit Common Crawl, composed of billions upon billions of web pages) that Hugging Face claims improves model performance on many benchmarks.
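FineWeb is openly accessible on the Hugging Face Hub; assuming the public dataset ID “HuggingFaceFW/fineweb”, a minimal sketch of sampling it with the `datasets` library (streamed, so nothing close to the full multi-terabyte corpus is downloaded) might look like this:

```python
from datasets import load_dataset

# Streaming iterates over records on the fly instead of downloading
# the full corpus, which runs to many terabytes.
fineweb = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

for i, record in enumerate(fineweb):
    print(record["text"][:100])  # first 100 characters of a crawled page
    if i >= 2:                   # peek at just three records
        break
```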
A few efforts to release open training datasets, like the group LAION’s image sets, have run up against copyright, data privacy and other, equally serious ethical and legal challenges. But some of the more dedicated data curators have pledged to do better. The Pile v2, for example, removes problematic copyrighted material found in its progenitor dataset, The Pile.
The question is whether any of these open efforts can hope to maintain pace with Big Tech. As long as data collection and curation remains a matter of resources, the answer is likely no, at least not until some research breakthrough levels the playing field.