On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of tech progress and global prosperity in a new personal blog post titled "The Intelligence Age." The essay paints a picture of human advancement accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.
"It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I'm confident we'll get there," he wrote.
OpenAI's current stated goal is to create AGI (artificial general intelligence), a term for hypothetical technology that could match human intelligence in performing many tasks without the need for specific training. By contrast, superintelligence surpasses AGI, and it could be seen as a hypothetical level of machine intelligence that can dramatically outperform humans at any intellectual task, perhaps even to an unfathomable degree.
Superintelligence (sometimes called "ASI" for "artificial superintelligence") is a popular but sometimes fringe topic among the machine-learning community, and it has been for years, especially since controversial philosopher Nick Bostrom authored a book titled Superintelligence: Paths, Dangers, Strategies in 2014. Former OpenAI co-founder and Chief Scientist Ilya Sutskever left OpenAI in June to found a company with the term in its name: Safe Superintelligence. Meanwhile, Altman himself has been talking about developing superintelligence since at least last year.
So, just how long is "a few thousand days"? There's no telling exactly. The likely reason Altman picked a vague number is that he doesn't know exactly when ASI will arrive, but it sounds like he thinks it could happen within a decade. For comparison, 2,000 days is about 5.5 years, 3,000 days is around 8.2 years, and 4,000 days is almost 11 years.
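For anyone who wants to check those conversions, here's a minimal sketch, assuming an average year of 365.25 days (the article's figures are approximate):

```python
# Convert Altman's "a few thousand days" into years,
# assuming a 365.25-day average year.
DAYS_PER_YEAR = 365.25

for days in (2_000, 3_000, 4_000):
    print(f"{days:,} days ≈ {days / DAYS_PER_YEAR:.1f} years")
# Output: 2,000 days ≈ 5.5 years; 3,000 days ≈ 8.2 years; 4,000 days ≈ 11.0 years
```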
It is easy to criticize Altman's vagueness here; no one can truly predict the future, but Altman, as CEO of OpenAI, is likely aware of AI research techniques coming down the pipeline that aren't broadly known to the public. So even when couched in a broad timeframe, the claim comes from a noteworthy source in the AI field, albeit one who is heavily invested in making sure that AI progress doesn't stall.
Not everyone shares Altman's optimism and enthusiasm. Computer scientist and frequent AI critic Grady Booch quoted Altman's "few thousand days" prediction and wrote on X, "I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, garnet [sic] headlines, and distract from the real work going on in computing."
Despite the criticism, it's notable when the CEO of what is likely the defining AI company of the moment makes a broad prediction about future capabilities, even if that means he's perpetually trying to raise money. Building infrastructure to power AI services is foremost on many tech CEOs' minds these days.
"If we want to put AI into the hands of as many people as possible," Altman writes in his essay, "we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people."
Altman’s vision for “The Intelligence Age”
Elsewhere in the essay, Altman frames our current era as the dawn of "The Intelligence Age," the next transformative technology era in human history, following the Stone Age, the Agricultural Age, and the Industrial Age. He credits the success of deep-learning algorithms as the catalyst for this new era, stating simply: "How did we get to the doorstep of the next leap in prosperity? In three words: deep learning worked."
The OpenAI chief envisions AI assistants becoming increasingly capable, eventually forming "personal AI teams" that can help individuals accomplish almost anything they can imagine. He predicts AI will enable breakthroughs in education, health care, software development, and other fields.
While acknowledging potential downsides and labor market disruptions, Altman remains optimistic about AI's overall impact on society. He writes, "Prosperity alone doesn't necessarily make people happy—there are plenty of miserable rich people—but it would meaningfully improve the lives of people around the world."
Even with AI regulation like SB-1047 the hot topic of the day, Altman didn't specifically mention sci-fi dangers from AI. On X, Bloomberg columnist Matthew Yglesias wrote, "Notable that @sama is no longer even paying lip service to existential risk concerns, the only downsides he's contemplating are labor market adjustment issues."
While enthusiastic about AI's potential, Altman also urges caution, albeit vaguely. He writes, "We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us."
Aside from labor market disruptions, Altman doesn't say how the Intelligence Age will not be entirely positive, but he closes with an analogy about an outdated occupation that was lost due to technological change.
"Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter," he wrote. "If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable."