For all the talk about artificial intelligence upending the world, its economic effects remain uncertain. There is huge investment in AI but little clarity about what it will produce.
Examining AI has become a major part of Nobel-winning economist Daron Acemoglu’s work. An Institute Professor at MIT, Acemoglu has long studied the impact of technology on society, from modeling the large-scale adoption of innovations to conducting empirical studies about the impact of robots on jobs.
In October, Acemoglu also shared the 2024 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel with two collaborators, Simon Johnson PhD ’89 of the MIT Sloan School of Management and James Robinson of the University of Chicago, for research on the connection between political institutions and economic growth. Their work shows that democracies with robust rights sustain better growth over time than other forms of government do.
Since much of growth comes from technological innovation, the way societies use AI is of keen interest to Acemoglu, who has published a number of papers about the economics of the technology in recent months.
“Where will the new tasks for humans with generative AI come from?” asks Acemoglu. “I don’t think we know those yet, and that’s what the issue is. What are the apps that are really going to change how we do things?”
What are the measurable effects of AI?
Since 1947, U.S. GDP growth has averaged about 3 percent annually, with productivity growth at about 2 percent annually. Some predictions have claimed AI will double growth, or at least create a higher growth trajectory than usual. By contrast, in one paper, “The Simple Macroeconomics of AI,” published in the August issue of Economic Policy, Acemoglu estimates that over the next decade AI will produce only a “modest increase” in GDP of between 1.1 and 1.6 percent, with a roughly 0.05 percent annual gain in productivity.
Acemoglu’s assessment is based on recent estimates of how many jobs are affected by AI, including a 2023 study by researchers at OpenAI, OpenResearch, and the University of Pennsylvania, which finds that about 20 percent of U.S. job tasks may be exposed to AI capabilities. A 2024 study by researchers from MIT FutureTech, along with the Productivity Institute and IBM, finds that about 23 percent of computer vision tasks that can be ultimately automated could be profitably done so within the next 10 years. Still other research suggests the average cost savings from AI is about 27 percent.
When it comes to productivity, “I don’t think we should belittle 0.5 percent in 10 years. That’s better than zero,” Acemoglu says. “But it’s just disappointing relative to the promises that people in the industry and in tech journalism are making.”
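As a rough illustration of how these headline figures fit together (a back-of-envelope sketch using the numbers quoted above, not Acemoglu’s actual methodology):

```python
# Back-of-envelope sketch relating the estimates quoted above.
# Illustrative arithmetic only, not Acemoglu's actual calculation.

exposed_share = 0.20     # share of U.S. job tasks exposed to AI (2023 study)
profitable_share = 0.23  # share of automatable tasks profitably done in 10 years (2024 study)

# Multiplying the two shares gives the fraction of tasks where AI
# plausibly both applies and pays off within a decade:
affected = exposed_share * profitable_share
print(f"Tasks profitably automated: {affected:.1%}")   # -> 4.6%

# A roughly 0.05 percent annual productivity gain compounds to
# about 0.5 percent over a decade:
annual_gain = 0.0005
decade_gain = (1 + annual_gain) ** 10 - 1
print(f"Decade productivity gain: {decade_gain:.2%}")  # -> 0.50%
```

Note that the product of the two task shares lands close to the “about 5 percent of the economy” figure Acemoglu cites below for affected office jobs, though the exact mapping from task shares to GDP in the paper is more involved.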
To be sure, this is an estimate, and additional AI applications may emerge: As Acemoglu writes in the paper, his calculation does not include the use of AI to predict the shapes of proteins, for which other scholars subsequently shared a Nobel Prize in October.
Other observers have suggested that “reallocations” of workers displaced by AI will create additional growth and productivity, beyond Acemoglu’s estimate, though he does not think this will matter much. “Reallocations, starting from the actual allocation that we have, typically generate only small benefits,” Acemoglu says. “The direct benefits are the big deal.”
He adds: “I tried to write the paper in a very transparent way, saying what is included and what is not included. People can disagree by saying either the things I have excluded are a big deal or the numbers for the things included are too modest, and that’s completely fine.”
Which jobs?
Conducting such estimates can sharpen our intuitions about AI. Plenty of forecasts about AI have described it as revolutionary; other analyses are more circumspect. Acemoglu’s work helps us grasp the scale of change we might expect.
“Let’s go out to 2030,” Acemoglu says. “How different do you think the U.S. economy is going to be because of AI? You could be a complete AI optimist and think that millions of people would have lost their jobs because of chatbots, or maybe that some people have become super-productive workers because with AI they can do 10 times as many things as they’ve done before. I don’t think so. I think most firms are going to be doing roughly the same things. A few occupations will be impacted, but we’re still going to have journalists, we’re still going to have financial analysts, we’re still going to have HR workers.”
If that holds true, AI most likely applies to a bounded set of white-collar tasks, where large amounts of computational power can process many inputs faster than humans can.
“It’s going to affect a bunch of office jobs that are about data summary, visual matching, pattern recognition, et cetera,” Acemoglu adds. “And those are essentially about 5 percent of the economy.”
While Acemoglu and Johnson have sometimes been considered skeptics of AI, they view themselves as realists.
“I’m trying not to be bearish,” Acemoglu says. “There are things generative AI can do, and I believe that, genuinely.” However, he adds, “I believe there are ways we could use generative AI better and get bigger gains, but I don’t see them as the focus of the industry at the moment.”
Machine usefulness, or worker replacement?
When Acemoglu says we could be using AI better, he has something specific in mind.
One of his main concerns about AI is whether it will take the form of “machine usefulness,” helping workers gain productivity, or whether it will be aimed at mimicking general intelligence in an effort to replace human jobs. It is the difference between, say, providing new information to a biotechnologist versus replacing a customer service worker with automated call-center technology. So far, he believes, firms have been focused on the latter type of case.
“My argument is that we currently have the wrong direction for AI,” Acemoglu says. “We’re using it too much for automation and not enough for providing expertise and information to workers.”
Acemoglu and Johnson delve into this issue in depth in their high-profile 2023 book “Power and Progress” (PublicAffairs), which has a straightforward leading question: Technology creates economic growth, but who captures that economic growth? Is it elites, or do workers share in the gains?
As Acemoglu and Johnson make abundantly clear, they favor technological innovations that increase worker productivity while keeping people employed, which should sustain growth better.
But generative AI, in Acemoglu’s view, focuses on mimicking whole people. This yields something he has for years been calling “so-so technology,” applications that perform at best only a little better than humans, but save firms money. Call-center automation is not always more productive than people; it just costs firms less than workers do. AI applications that complement workers seem generally on the back burner of the big tech players.
“I don’t think complementary uses of AI will miraculously appear by themselves unless the industry devotes significant energy and time to them,” Acemoglu says.
What does history suggest about AI?
The fact that technologies are often designed to replace workers is the focus of another recent paper by Acemoglu and Johnson, “Learning from Ricardo and Thompson: Machinery and Labor in the Early Industrial Revolution — and in the Age of AI,” published in August in the Annual Review of Economics.
The article addresses current debates over AI, especially claims that even if technology replaces workers, the subsequent growth will almost inevitably benefit society widely over time. England during the Industrial Revolution is sometimes cited as a case in point. But Acemoglu and Johnson contend that spreading the benefits of technology does not occur easily. In 19th-century England, they assert, it occurred only after decades of social struggle and worker action.
“Wages are unlikely to rise when workers cannot push for their share of productivity growth,” Acemoglu and Johnson write in the paper. “Today, artificial intelligence may boost average productivity, but it also may replace many workers while degrading job quality for those who remain employed. … The impact of automation on workers today is more complex than an automatic linkage from higher productivity to improved wages.”
The paper’s title refers to the social historian E.P. Thompson and the economist David Ricardo; the latter is often considered the discipline’s second-most influential thinker ever, after Adam Smith. Acemoglu and Johnson assert that Ricardo’s views went through their own evolution on this subject.
“David Ricardo made both his academic work and his political career by arguing that machinery was going to create this amazing set of productivity improvements, and it would be beneficial for society,” Acemoglu says. “And then at some point, he changed his mind, which shows he could be really open-minded. And he started writing about how if machinery replaced labor and didn’t do anything else, it would be bad for workers.”
This intellectual evolution, Acemoglu and Johnson contend, is telling us something meaningful today: There are no forces that inexorably guarantee broad-based benefits from technology, and we should follow the evidence about AI’s impact, one way or another.
What is the best speed for innovation?
If technology helps generate economic growth, then fast-paced innovation might seem ideal, by delivering growth more quickly. But in another paper, “Regulating Transformative Technologies,” from the September issue of American Economic Review: Insights, Acemoglu and MIT doctoral student Todd Lensman suggest an alternative outlook. If some technologies contain both benefits and drawbacks, it is best to adopt them at a more measured tempo, while those problems are being mitigated.
“If social damages are large and proportional to the new technology’s productivity, a higher growth rate paradoxically leads to slower optimal adoption,” the authors write in the paper. Their model suggests that, optimally, adoption should occur more slowly at first and then accelerate over time.
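A toy numerical sketch can convey the flavor of that result. The functional forms below are my own illustrative assumptions, not the model in “Regulating Transformative Technologies”: adoption is chosen to maximize benefits net of damages, where damages scale with the technology’s productivity but shrink as mitigation improves over time.

```python
# Toy illustration of slow-then-accelerating optimal adoption.
# Functional forms are assumptions for illustration only.
import math

def optimal_adoption(t, d0=5.0, mitigation_rate=0.3):
    """Adoption share x maximizing x*A - d(t)*x**2*A, capped at 1.

    Because damages here scale with productivity A, A cancels out of
    the optimum: higher productivity alone does not justify faster
    adoption, echoing the proportional-damages case in the quote above.
    """
    d_t = d0 * math.exp(-mitigation_rate * t)  # damages fall as mitigation improves
    return min(1.0, 1.0 / (2.0 * d_t))

for year in range(0, 11, 2):
    print(year, round(optimal_adoption(year), 3))
```

With these (hypothetical) parameters, the adoption share starts low and each step up is larger than the last, until full adoption: slow at first, then accelerating.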
“Market fundamentalism and technology fundamentalism might claim you should always go at the maximum speed for technology,” Acemoglu says. “I don’t think there’s any rule like that in economics. More deliberate thinking, especially to avoid harms and pitfalls, can be justified.”
Those harms and pitfalls could include damage to the job market, or the rampant spread of misinformation. Or AI might harm consumers, in areas from online advertising to online gaming. Acemoglu examines these scenarios in another paper, “When Big Data Enables Behavioral Manipulation,” forthcoming in American Economic Review: Insights; it is co-authored with Ali Makhdoumi of Duke University, Azarakhsh Malekian of the University of Toronto, and Asu Ozdaglar of MIT.
“If we’re using it as a manipulative tool, or too much for automation and not enough for providing expertise and information to workers, then we would want a course correction,” Acemoglu says.
Certainly, others might claim innovation has less of a downside, or is unpredictable enough that we should not apply any handbrakes to it. And Acemoglu and Lensman, in the September paper, are simply developing a model of innovation adoption.
That model is a response to a trend of the last decade-plus, in which many technologies are hyped as inevitable and celebrated because of their disruption. By contrast, Acemoglu and Lensman are suggesting we can reasonably judge the tradeoffs involved in particular technologies, and aim to spur additional discussion about that.
How do we reach the right speed for AI adoption?
If the idea is to adopt technologies more gradually, how would this occur?
For one thing, Acemoglu says, “government regulation has that role.” However, it is not clear what kinds of long-term guidelines for AI might be adopted in the U.S. or around the world.
For another, he adds, if the cycle of “hype” around AI diminishes, then the push to use it “will naturally slow down.” This may well be more likely than regulation, if AI does not produce profits for firms soon.
“The reason we’re going so fast is the hype from venture capitalists and other investors, because they think we’re going to be closer to artificial general intelligence,” Acemoglu says. “I think that hype is making us invest badly in terms of the technology, and many businesses are being influenced too early, without knowing what to do. We wrote that paper to say, look, the macroeconomics of it will benefit us if we are more deliberate and understanding about what we’re doing with this technology.”
In this sense, Acemoglu emphasizes, hype is a tangible aspect of the economics of AI, since it drives investment in a particular vision of AI, which influences the AI tools we may encounter.
“The faster you go, and the more hype you have, the less likely that course correction becomes,” Acemoglu says. “It’s very difficult, if you’re driving 200 miles an hour, to make a 180-degree turn.”