Understanding the nuances of human-like intelligence

What can we learn about human intelligence by studying how machines “think”? Can we better understand ourselves if we better understand the artificial intelligence systems that are becoming an ever more significant part of our everyday lives?

These questions may be deeply philosophical, but for Phillip Isola, finding the answers is as much about computation as it is about cogitation.

Isola, a newly tenured associate professor in the Department of Electrical Engineering and Computer Science (EECS), studies the fundamental mechanisms involved in human-like intelligence from a computational perspective.

While understanding intelligence is the overarching goal, his work focuses mainly on computer vision and machine learning. Isola is especially interested in exploring how intelligence emerges in AI models, how these models learn to represent the world around them, and what their “brains” share with the brains of their human creators.

“I see all the different kinds of intelligence as having a lot of commonalities, and I’d like to understand those commonalities. What is it that all animals, humans, and AIs have in common?” says Isola, who is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

To Isola, a better scientific understanding of the intelligence that AI agents possess will help the world integrate them safely and effectively into society, maximizing their potential to benefit humanity.

Asking questions

Isola began pondering scientific questions at a young age.

While growing up in San Francisco, he and his father frequently went hiking along the northern California coastline or camping around Point Reyes and in the hills of Marin County.

He was fascinated by geological processes and often wondered what made the natural world work. In school, Isola was driven by an insatiable curiosity, and while he gravitated toward technical subjects like math and science, there was no limit to what he wanted to learn.

Not entirely sure what to study as an undergraduate at Yale University, Isola dabbled until he came upon cognitive sciences.

“My earlier interest had been with nature — how the world works. But then I realized that the brain was even more interesting, and more complex than even the formation of the planets. Now, I wanted to know what makes us tick,” he says.

As a first-year student, he began working in the lab of his cognitive sciences professor and soon-to-be mentor, Brian Scholl, a member of the Yale Department of Psychology. He remained in that lab throughout his time as an undergraduate.

After spending a gap year working with some childhood friends at an indie video game company, Isola was ready to dive back into the complex world of the human brain. He enrolled in the graduate program in brain and cognitive sciences at MIT.

“Grad school was where I felt like I finally found my place. I had a lot of great experiences at Yale and in other phases of my life, but when I got to MIT, I realized this was the work I really loved and these are the people who think similarly to me,” he says.

Isola credits his PhD advisor, Ted Adelson, the John and Dorothy Wilson Professor of Vision Science, as a significant influence on his future path. He was inspired by Adelson’s focus on understanding fundamental principles, rather than only chasing new engineering benchmarks, which are formalized tests used to measure the performance of a system.

A computational perspective

At MIT, Isola’s research drifted toward computer science and artificial intelligence.

“I still loved all those questions from cognitive sciences, but I felt I could make more progress on some of those questions if I came at it from a purely computational perspective,” he says.

His thesis focused on perceptual grouping, which involves the mechanisms people and machines use to organize discrete parts of an image as a single, coherent object.

If machines can learn perceptual groupings on their own, that could enable AI systems to recognize objects without human intervention. This kind of self-supervised learning has applications in areas such as autonomous vehicles, medical imaging, robotics, and automated language translation.

After graduating from MIT, Isola completed a postdoc at the University of California at Berkeley so he could broaden his perspectives by working in a lab solely focused on computer science.

“That experience helped my work become a lot more impactful because I learned to balance understanding fundamental, abstract principles of intelligence with the pursuit of some more concrete benchmarks,” Isola recalls.

At Berkeley, he developed image-to-image translation frameworks, an early type of generative AI model that could turn a sketch into a photographic image, for instance, or turn a black-and-white photo into a color one.
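
In these paired translation setups, a generator maps an input image to an output image while a discriminator judges whether an (input, output) pair looks real. The snippet below is a minimal, hypothetical PyTorch sketch of that objective, an adversarial loss plus a weighted L1 reconstruction term; the tiny convolutional networks and toy data are illustrative stand-ins, not the actual architectures from that work.

```python
# Minimal sketch of a paired image-to-image translation objective
# (adversarial loss + weighted L1 reconstruction). The small conv nets
# are stand-ins for real generator/discriminator architectures.
import torch
import torch.nn as nn

gen = nn.Sequential(                      # generator: input image -> output image
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
disc = nn.Sequential(                     # discriminator over (input, output) pairs
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),       # per-patch real/fake logits
)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

x = torch.randn(4, 3, 64, 64)             # e.g., sketches (toy data)
y = torch.randn(4, 3, 64, 64)             # paired targets, e.g., photos

y_hat = gen(x)
fake = disc(torch.cat([x, y_hat], dim=1))
# Generator: fool the discriminator while staying close to the paired target.
g_loss = bce(fake, torch.ones_like(fake)) + 100.0 * l1(y_hat, y)
# Discriminator: score real pairs as real, generated pairs as fake.
real = disc(torch.cat([x, y], dim=1))
fake_d = disc(torch.cat([x, y_hat.detach()], dim=1))
d_loss = bce(real, torch.ones_like(real)) + bce(fake_d, torch.zeros_like(fake_d))
```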

He entered the academic job market and accepted a faculty position at MIT, but Isola deferred for a year to work at a then-small startup called OpenAI.

“It was a nonprofit, and I liked the idealistic mission at the time. They were really good at reinforcement learning, and I thought that seemed like an important topic to learn more about,” he says.

He enjoyed working in a lab with so much scientific freedom, but after a year Isola was ready to return to MIT and start his own research group.

Studying human-like intelligence

Running a research lab immediately appealed to him.

“I really love the early stage of an idea. I feel like I’m a sort of startup incubator where I’m constantly able to do new things and learn new things,” he says.

Building on his interest in cognitive sciences and desire to understand the human brain, his group studies the fundamental computations involved in the human-like intelligence that emerges in machines.

One primary focus is representation learning, or the ability of humans and machines to represent and perceive the sensory world around them.

In recent work, he and his collaborators observed that many varied types of machine-learning models, from LLMs to computer vision models to audio models, appear to represent the world in similar ways.

These models are designed to do vastly different tasks, but there are many similarities in their architectures. And as they get larger and are trained on more data, their internal structures become more alike.

This led Isola and his team to introduce the Platonic Representation Hypothesis (drawing its name from the Greek philosopher Plato), which says that the representations all these models learn are converging toward a shared, underlying representation of reality.

“Language, images, sound — all of these are different shadows on the wall from which you can infer that there is some kind of underlying physical process — some kind of causal reality — out there. If you train models on all these different types of data, they should converge on that world model in the end,” Isola says.
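
One way to make “converging representations” concrete is to measure how similarly two models embed the same set of inputs. The sketch below uses linear centered kernel alignment (CKA), a standard representation-similarity metric from the literature, though not necessarily the measure Isola’s team used; the random matrices are stand-ins for real model features.

```python
# Minimal sketch: quantify how similar two models' representations are
# using linear centered kernel alignment (CKA). The matrices below are
# random stand-ins for real embeddings of the same inputs.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between feature matrices of shape (n_samples, n_features)."""
    X = X - X.mean(axis=0)                        # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2    # cross-covariance energy
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 256))                    # "model A" embeddings (toy)
Q, _ = np.linalg.qr(rng.normal(size=(256, 256)))    # a random rotation
print(linear_cka(X, X @ Q))                         # ~1.0: same geometry, new axes
print(linear_cka(X, rng.normal(size=(1024, 256))))  # much lower: unrelated features
```

CKA is invariant to rotations of the feature axes, which matches the intuition behind the hypothesis: two models can “agree” about the world even if their individual neurons differ.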

A related area his team studies is self-supervised learning. This involves the ways in which AI models learn to group related pixels in an image or words in a sentence without having labeled examples to learn from.

Because data are expensive and labels are limited, using only labeled data to train models could hold back the capabilities of AI systems. With self-supervised learning, the goal is to develop models that can come up with an accurate internal representation of the world on their own.

“If you can come up with a good representation of the world, that should make subsequent problem solving easier,” he explains.
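
One common self-supervised recipe, contrastive learning, turns this idea into a trainable objective: embeddings of two augmented views of the same input are pulled together, while views of different inputs are pushed apart. The sketch below is a toy PyTorch version of an InfoNCE-style loss; the linear encoder and additive-noise “augmentations” stand in for the deep networks and strong augmentations used in practice, and none of it is specific to Isola’s lab.

```python
# Minimal sketch of contrastive self-supervised learning (InfoNCE-style):
# no labels, only the assumption that two views of the same input should
# map to nearby embeddings. Encoder and "augmentations" are toy stand-ins.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """z1[i] and z2[i] are embeddings of two views of the same example."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature          # pairwise cosine similarities
    targets = torch.arange(z1.shape[0])       # the matching view is the positive
    return F.cross_entropy(logits, targets)   # pick the true pair within the batch

encoder = torch.nn.Sequential(
    torch.nn.Linear(784, 128), torch.nn.ReLU(), torch.nn.Linear(128, 64),
)
x = torch.randn(32, 784)                      # a batch of unlabeled inputs
view1 = x + 0.1 * torch.randn_like(x)         # stand-in augmentation 1
view2 = x + 0.1 * torch.randn_like(x)         # stand-in augmentation 2
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()                               # learning proceeds without any labels
```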

The focus of Isola’s research is more about finding something new and surprising than about building complex systems that can outdo the latest machine-learning benchmarks.

While this approach has yielded much success in uncovering innovative techniques and architectures, it means the work sometimes lacks a concrete end goal, which can lead to challenges.

For instance, keeping a team aligned and the funding flowing can be difficult when the lab is focused on searching for unexpected results, he says.

“In a way, we’re always working in the dark. It’s high-risk and high-reward work. Every once in a while, we find some kernel of truth that’s new and surprising,” he says.

Along with pursuing knowledge, Isola is passionate about imparting knowledge to the next generation of scientists and engineers. Among his favorite courses to teach is 6.7960 (Deep Learning), which he and several other MIT faculty members launched four years ago.

The class has seen exponential growth, from 30 students in its initial offering to more than 700 this fall.

And while the popularity of AI means there is no shortage of interested students, the speed at which the field moves can make it difficult to separate the hype from truly significant advances.

“I tell the students they need to take everything we say in the class with a grain of salt. Maybe in a few years we’ll tell them something different. We’re really at the edge of knowledge with this course,” he says.

But Isola also emphasizes to students that, for all the hype surrounding the latest AI models, intelligent machines are far simpler than most people suspect.

“Human ingenuity, creativity, and emotions — many people believe these can never be modeled. That may turn out to be true, but I think intelligence is fairly simple once we understand it,” he says.

Although his current work focuses on deep-learning models, Isola is still fascinated by the complexity of the human brain and continues to collaborate with researchers who study cognitive sciences.

All the while, he has remained captivated by the beauty of the natural world that inspired his first interest in science.

Although he has less time for hobbies these days, Isola enjoys hiking and backpacking in the mountains or on Cape Cod, skiing and kayaking, or finding scenic places to spend time when he travels for scientific conferences.

And while he looks forward to exploring new questions in his lab at MIT, Isola can’t help but contemplate how the growing role of intelligent machines might change the course of his work.

He believes that artificial general intelligence (AGI), or the point where machines can learn and apply their knowledge as well as humans can, is not that far off.

“I don’t think AIs will just do everything for us and we’ll go and enjoy life at the beach. I think there is going to be this coexistence between smart machines and humans who still have a lot of agency and control. Now, I’m thinking about the interesting questions and applications once that happens. How can I help the world in this post-AGI future? I don’t have any answers yet, but it’s on my mind,” he says.
