What does the long run hold for generative AI?


Speaking at the “Generative AI: Shaping the Future” symposium on Nov. 28, the kickoff event of MIT’s Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI’s ChatGPT and Google’s Bard.

“Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure,” cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

“No one technology has ever surpassed everything else,” he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute’s Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools aren’t developed responsibly.

Generative AI is a term used to describe machine-learning models that learn to generate new content that looks like the data they were trained on. These models have exhibited some incredible capabilities, such as the ability to produce human-like creative writing, translate languages, generate functional computer code, or craft realistic images from text prompts.

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people’s lives across society.

In hosting Generative AI Week, MIT hopes to not only showcase this type of innovation, but also generate “collaborative collisions” among attendees, Kornbluth said.

Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems, she told the audience.

“I honestly can’t think of a challenge more closely aligned with MIT’s mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community,” she said.

While generative AI holds the potential to help solve some of the planet’s most pressing problems, the emergence of these powerful machine-learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. It is no longer a question of whether we can make machines that produce new content, she said, but how we can use these tools to enhance businesses and ensure sustainability.

“Today, we’ll discuss the possibility of a future where generative AI doesn’t just exist as a technological marvel, but stands as a source of hope and a force for good,” said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem told of his experiences as a boy watching Star Trek with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how this kind of large language model works. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT 3.5 is built on a machine-learning model that has 175 billion parameters and has been exposed to billions of pages of text on the web during training. (The newest iteration, ChatGPT 4, is even bigger.) It learns correlations between words in this massive corpus of text and uses this knowledge to propose what word might come next when given a prompt.
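The one-word-at-a-time process Brooks describes can be sketched with a toy next-word predictor. The short Python example below is purely illustrative: a bigram word-count table stands in for the 175 billion learned parameters, the tiny corpus is invented, and real models like ChatGPT use transformer networks over subword tokens rather than whole-word counts.

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for the billions of pages
# of text a real model is trained on.
corpus = (
    "the robot reads the book and the robot writes the story "
    "and the story ends"
).split()

# Count how often each word follows each other word -- a crude
# stand-in for the statistical knowledge a large model learns.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt_word, length=6):
    """Generate text one word at a time, always picking the word
    that most often followed the current one during 'training'."""
    out = [prompt_word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no known follower: stop generating
            break
        out.append(max(candidates, key=candidates.get))
    return " ".join(out)

print(generate("the"))
```

Even this toy version shows the key point Brooks made: the model never plans an entire phrase, it only ever asks "given what I have written so far, what word comes next?"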

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare’s famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

But while researchers still don’t fully understand exactly how these models work, Brooks assured the audience that generative AI’s seemingly incredible capabilities aren’t magic, and that doesn’t mean these models can do anything.

His biggest fears about generative AI don’t revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that an entire generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world’s problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

“What’s the conceit with generative AI? The conceit is that it’s somehow going to lead to artificial general intelligence. By itself, it is not,” Brooks said.

Following Brooks’ presentation, a group of MIT faculty spoke about their work using generative AI and took part in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

“One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products going out that claim to do miraculous things, but in the long run could be very harmful,” Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel “Metropolis,” read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator at CSAIL and the MIT Jameel Clinic; and Max Tegmark, professor of physics; and was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone’s emotions by using electromagnetic signals to understand how a person’s breathing and heart rate are changing.

But one key to integrating AI like this into the real world safely is to make sure we can trust it, Tegmark said. If we know an AI tool will meet the specifications we insist on, then “we no longer have to be afraid of building really powerful systems that go out and do things for us in the world,” he said.
