What Is AI Superintelligence? Could It Destroy Humanity? And Is It Really Almost Here?

In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems ("superintelligences" more capable than humans) might one day take over the world and destroy humanity.

A decade later, OpenAI boss Sam Altman says superintelligence may only be "a few thousand days" away. A year ago, Altman's OpenAI cofounder Ilya Sutskever set up a team within the company to focus on "safe superintelligence," but he and his team have since raised a billion dollars to create a startup of their own to pursue this goal.

What exactly are they talking about? Broadly speaking, superintelligence is anything more intelligent than humans. But unpacking what that might mean in practice can get a bit tricky.

Different Kinds of AI

In my opinion, the most useful way to think about different levels and kinds of intelligence in AI was developed by US computer scientist Meredith Ringel Morris and her colleagues at Google.

Their framework lists six levels of AI performance: no AI, emerging, competent, expert, virtuoso, and superhuman. It also makes an important distinction between narrow systems, which can perform a small range of tasks, and more general systems.

A narrow, no-AI system is something like a calculator. It carries out various mathematical tasks according to a set of explicitly programmed rules.

There are already plenty of very successful narrow AI systems. Morris gives the Deep Blue chess program, which famously defeated world champion Garry Kasparov back in 1997, as an example of a virtuoso-level narrow AI system.

Table: Levels of AI performance, adapted from Morris et al. (The Conversation / Datawrapper)

Some narrow systems even have superhuman capabilities. One example is AlphaFold, which uses machine learning to predict the structure of protein molecules, and whose creators won the Nobel Prize in Chemistry this year.

What about general systems? This is software that can tackle a much wider range of tasks, including things like learning new skills.

A general no-AI system might be something like Amazon's Mechanical Turk: It can do a wide range of things, but it does them by asking real people.

Overall, general AI systems are far less advanced than their narrow cousins. According to Morris, the state-of-the-art language models behind chatbots such as ChatGPT are general AI, but they are so far at the "emerging" level (meaning they are "equal to or somewhat better than an unskilled human"), and have yet to reach "competent" (as good as 50 percent of skilled adults).
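To make the framework concrete, here is a toy Python sketch (mine, not from Morris et al.; the placement of each example simply follows the text above) that puts the systems mentioned so far on the narrow/general grid:

```python
from dataclasses import dataclass

# Performance levels from the Morris et al. framework, weakest to strongest.
LEVELS = ["no AI", "emerging", "competent", "expert", "virtuoso", "superhuman"]

@dataclass
class System:
    name: str
    breadth: str  # "narrow" (small range of tasks) or "general" (wide range)
    level: str    # one of LEVELS

# Examples mentioned in the text, placed on the grid.
examples = [
    System("pocket calculator", "narrow", "no AI"),
    System("Deep Blue", "narrow", "virtuoso"),
    System("AlphaFold", "narrow", "superhuman"),
    System("Amazon Mechanical Turk", "general", "no AI"),
    System("ChatGPT-style language models", "general", "emerging"),
]

def is_superintelligence(s: System) -> bool:
    # In the strong sense discussed here, "superintelligence" means
    # superhuman performance across a *general* range of tasks:
    # a cell of the grid no existing system occupies.
    return s.breadth == "general" and s.level == "superhuman"

for s in examples:
    print(f"{s.name}: {s.breadth}, {s.level}, "
          f"superintelligent={is_superintelligence(s)}")
```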

So by this reckoning, we are still some way from general superintelligence.

How Intelligent Is AI Right Now?

As Morris points out, precisely determining where any given system sits would depend on having reliable tests or benchmarks.

Depending on our benchmarks, an image-generating system such as DALL-E might be at the virtuoso level (because it can produce images 99 percent of humans couldn't draw or paint), or it might be emerging (because it produces errors no human would make, such as mutant hands and physically impossible objects).
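The framework's percentile cut-offs make this benchmark-dependence easy to see. A minimal sketch, assuming we can score a system as a percentile relative to skilled adults on some chosen benchmark (the two scores at the end are invented for illustration):

```python
def level_from_percentile(p: float) -> str:
    # Cut-offs follow the Morris et al. framework: competent = at least the
    # 50th percentile of skilled adults, expert = 90th, virtuoso = 99th,
    # superhuman = outperforms all humans.
    if p >= 100: return "superhuman"
    if p >= 99:  return "virtuoso"
    if p >= 90:  return "expert"
    if p >= 50:  return "competent"
    return "emerging"

# The same image generator lands on different levels depending on which
# benchmark we score it against (percentiles below are made up):
print(level_from_percentile(99.5))  # image quality vs. humans -> "virtuoso"
print(level_from_percentile(10.0))  # avoiding obvious errors  -> "emerging"
```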

There is significant debate even about the capabilities of current systems. One notable 2023 paper argued GPT-4 showed "sparks of artificial general intelligence."

OpenAI says its latest language model, o1, can “perform complex reasoning” and “rivals the performance of human experts” on many benchmarks.

However, a recent paper from Apple researchers found that o1 and many other language models have significant trouble solving genuine mathematical reasoning problems. Their experiments show the outputs of these models seem to resemble sophisticated pattern-matching rather than true advanced reasoning. This suggests superintelligence is not as imminent as many have claimed.
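The Apple study (GSM-Symbolic) probes this by rewriting the same word problems with different names and numbers, so the underlying logic is unchanged while the surface text is new. A minimal sketch of that idea follows; the template and value ranges here are illustrative, not the paper's actual benchmark:

```python
import random

# Generate logically identical variants of one word problem by swapping
# names and numbers. A genuine reasoner's accuracy should be stable across
# variants; large variance suggests pattern-matching on the surface text.

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "How many apples does {name} have?")

def make_variant(rng: random.Random) -> tuple[str, int]:
    name = rng.choice(["Sophie", "Liam", "Mia", "Omar"])
    a, b = rng.randint(2, 40), rng.randint(2, 40)
    return TEMPLATE.format(name=name, a=a, b=b), a + b  # question, gold answer

rng = random.Random(0)
for question, answer in (make_variant(rng) for _ in range(5)):
    print(question, "->", answer)
```

The evaluation idea is then to ask the model every variant and compare accuracy across them, since a system that truly reasons should not care which names or numbers appear.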

Will AI Keep Getting Smarter?

Some people think the rapid pace of AI progress over the past few years will continue or even accelerate. Tech companies are investing hundreds of billions of dollars in AI hardware and capabilities, so this doesn't seem impossible.

If this happens, we may indeed see general superintelligence within the "few thousand days" proposed by Sam Altman (that's a decade or so in less sci-fi terms). Sutskever and his team mentioned a similar timeframe in their superalignment article.

Many recent successes in AI have come from the application of a technique called "deep learning," which, in simplistic terms, finds associative patterns in gigantic collections of data. Indeed, this year's Nobel Prize in Physics was awarded to John Hopfield and the "Godfather of AI" Geoffrey Hinton, for their invention of the Hopfield network and Boltzmann machine, which are the foundation of many powerful deep learning models used today.
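For a feel of what the prize citation refers to, here is a minimal Hopfield network: an associative memory that stores a binary pattern with the Hebbian rule and recalls it from a corrupted input. This is a stripped-down sketch of the classic model, not the laureates' code:

```python
import numpy as np

def train(patterns: np.ndarray) -> np.ndarray:
    """Store binary (+1/-1) patterns via the Hebbian rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # Hebbian update: neurons that fire together wire together
    np.fill_diagonal(W, 0)       # no self-connections
    return W / len(patterns)

def recall(W: np.ndarray, state: np.ndarray, steps: int = 10) -> np.ndarray:
    """Iterate updates; the state settles into a stored pattern."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1                  # corrupt two bits
print(recall(W, noisy))          # recovers the stored pattern
```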

General systems such as ChatGPT have relied on data generated by humans, much of it in the form of text from books and websites. Improvements in their capabilities have largely come from increasing the scale of the systems and the amount of data on which they are trained.

However, there may not be enough human-generated data to take this process much further (although efforts to use data more efficiently, generate synthetic data, and improve the transfer of skills between domains may bring improvements). Even if there were enough data, some researchers say language models such as ChatGPT are fundamentally incapable of reaching what Morris would call general competence.
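One way to see why data is the bottleneck is through empirical scaling laws. The sketch below uses the loss formula and fitted constants reported by Hoffmann et al. ("Chinchilla," 2022), purely as an illustration of how loss responds to more parameters versus more data; treat it as a sketch, not a prediction tool:

```python
# Chinchilla-style scaling law: loss as a function of parameter count N and
# training tokens D. Constants are the values fitted in Hoffmann et al. 2022.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(N: float, D: float) -> float:
    return E + A / N**alpha + B / D**beta

# Holding data fixed while scaling parameters hits a floor set by the D term:
print(loss(N=70e9,  D=1.4e12))  # roughly Chinchilla-scale model
print(loss(N=700e9, D=1.4e12))  # 10x parameters, same data: small gain
print(loss(N=70e9,  D=14e12))   # same parameters, 10x data: larger gain
```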

One recent paper has suggested an essential feature of superintelligence would be open-endedness, at least from a human perspective. Such a system would need to be able to continuously generate outputs that a human observer would regard as novel and be able to learn from.

Existing foundation models are not trained in an open-ended way, and existing open-ended systems are quite narrow. The paper also highlights how novelty or learnability alone is not enough. A new kind of open-ended foundation model would be needed to achieve superintelligence.
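Loosely, the paper's definition is relative to an observer: outputs must stay novel (never fully predictable from the history) yet learnable (seeing more history improves prediction). Here is a rough sketch of checking both conditions at once, where `surprise` stands in for a hypothetical observer model's predictive loss; the function name and thresholds are my own, not the paper's:

```python
def is_open_ended(artifacts: list, surprise, eps: float = 1e-3) -> bool:
    """Rough check that a sequence of outputs is both novel and learnable.

    `surprise(history, artifact)` is assumed to return the observer's
    predictive loss on `artifact` given `history` (hypothetical API).
    """
    novel, learnable = True, True
    for t in range(2, len(artifacts)):
        s_full  = surprise(artifacts[:t], artifacts[t])       # full history
        s_short = surprise(artifacts[:t // 2], artifacts[t])  # less history
        novel = novel and s_full > eps               # never fully predictable
        learnable = learnable and s_full <= s_short  # more history helps
    return novel and learnable  # neither condition alone suffices
```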

What Are the Risks?

So what does all this mean for the risks of AI? In the short term, at least, we don't need to worry about superintelligent AI taking over the world.

But that's not to say AI doesn't present risks. Again, Morris and colleagues have thought this through: as AI systems gain greater capability, they may also gain greater autonomy. Different levels of capability and autonomy present different risks.

For example, when AI systems have little autonomy and people use them as a kind of consultant (when we ask ChatGPT to summarize documents, say, or let the YouTube algorithm shape our viewing habits), we might face a risk of over-trusting or over-relying on them.

In the meantime, Morris points out other risks to watch out for as AI systems become more capable, ranging from people forming parasocial relationships with AI systems to mass job displacement and society-wide ennui.
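Morris and colleagues pair these risks with a ladder of autonomy levels. The mapping below is a rough paraphrase of their framework, with the risk annotations condensed from the discussion above rather than quoted from the paper:

```python
# Autonomy ladder from Morris et al., annotated with the kinds of risk
# discussed above. Pairings are condensed for illustration.
autonomy_risks = {
    "AI as tool":         "de-skilling, disruption of established industries",
    "AI as consultant":   "over-trust and over-reliance (e.g., ChatGPT summaries)",
    "AI as collaborator": "anthropomorphization, parasocial relationships",
    "AI as expert":       "mass job displacement, society-wide ennui",
    "AI as agent":        "misalignment, concentration of power",
}

for level, risk in autonomy_risks.items():
    print(f"{level}: {risk}")
```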

What’s Next?

Let's suppose we do one day have superintelligent, fully autonomous AI agents. Would we then face the risk that they might concentrate power or act against human interests?

Not necessarily. Autonomy and control can go hand in hand. A system can be highly automated yet still provide a high level of human control.

Like many in the AI research community, I believe safe superintelligence is feasible. However, building it will be a complex and multidisciplinary task, and researchers will have to tread unbeaten paths to get there.
