When Rodney Brooks talks about robotics and artificial intelligence, you need to listen. Currently the Panasonic Professor of Robotics Emeritus at MIT, he also co-founded three key firms, including Rethink Robotics, iRobot and his current endeavor, Robust.ai. Brooks also ran the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) for a decade starting in 1997.
In fact, he likes to make predictions about the future of AI and keeps a scorecard on his blog of how well he’s doing.
He knows what he’s talking about, and he thinks maybe it’s time to put the brakes on the screaming hype that is generative AI. Brooks thinks it’s impressive technology, but perhaps not quite as capable as many are suggesting. “I’m not saying LLMs are not important, but we have to be careful [with] how we evaluate them,” he told TechCrunch.
He says the trouble with generative AI is that, while it’s perfectly capable of performing a certain set of tasks, it can’t do everything a human can, and humans tend to overestimate its capabilities. “When a human sees an AI system perform a task, they immediately generalize it to things that are similar and make an estimate of the competence of the AI system; not just the performance on that, but the competence around that,” Brooks said. “And they’re usually very over-optimistic, and that’s because they use a model of a person’s performance on a task.”
He added that the problem is that generative AI is not human, or even human-like, and it’s a mistake to try to assign human capabilities to it. He says people see it as so capable that they even want to use it for applications that don’t make sense.
Brooks offers his latest company, Robust.ai, which builds warehouse robotics systems, as an example of this. Someone recently suggested to him that it would be cool and efficient to tell his warehouse robots where to go by building an LLM into his system. In his estimation, however, this is not a reasonable use case for generative AI and would actually slow things down. It’s much simpler to connect the robots to a stream of data coming from the warehouse management software.
“When you’ve got 10,000 orders that just came in and you have to ship them in two hours, you have to optimize for that. Language is not gonna help; it’s just going to slow things down,” he said. “We have massive data processing and massive AI optimization techniques and planning. And that’s how we get the orders completed fast.”
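Brooks’ point is that the warehouse management system already emits structured order data that the robots can act on directly, with no language model in the loop. Here is a minimal sketch of that idea in Python; the `Order` fields, the `assign_orders` function, the round-robin split, and the robot IDs are all hypothetical illustrations, not Robust.ai’s actual interface or planner.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from itertools import cycle


@dataclass
class Order:
    order_id: str
    pick_location: str   # bin/aisle the robot should drive to
    deadline: datetime   # ship-by time from the warehouse management system


def assign_orders(orders: list[Order], robot_ids: list[str]) -> dict[str, list[Order]]:
    """Spread orders across robots, most urgent first. No language parsing involved."""
    plan: dict[str, list[Order]] = {rid: [] for rid in robot_ids}
    robots = cycle(robot_ids)
    for order in sorted(orders, key=lambda o: o.deadline):
        plan[next(robots)].append(order)
    return plan


# Example: 10,000 orders due in two hours, split across 50 carts.
now = datetime.now()
orders = [Order(f"ORD-{i}", f"AISLE-{i % 200}", now + timedelta(hours=2)) for i in range(10_000)]
plan = assign_orders(orders, [f"cart-{i}" for i in range(50)])
print(len(plan["cart-0"]))  # 200 orders per cart with this naive split
```

A real system would presumably plan far more cleverly than a round-robin, but the shape of the problem is the same: the input is structured fields to optimize over, not sentences for a model to interpret.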
Another lesson Brooks has learned about robots and AI is that you can’t try to do too much. You should solve a solvable problem where robots can be integrated easily.
“We need to automate in places where things have already been cleaned up. So the example of my company is we’re doing pretty well in warehouses, and warehouses are actually pretty constrained. The lighting doesn’t change with those big buildings. There’s not stuff lying around on the floor because the people pushing carts would run into that. There’s no floating plastic bags going around. And largely it’s not in the interest of the people who work there to be malicious to the robot,” he said.
Brooks explains that it’s also about robots and humans working together, so his company designed these robots for practical purposes related to warehouse operations, as opposed to building a human-looking robot. In this case, the robot looks like a shopping cart with a handle.
“So the form factor we use is not humanoids walking around, even though I have built and delivered more humanoids than anyone else. These look like shopping carts,” he said. “It’s got a handlebar, so if there’s a problem with the robot, a person can grab the handlebar and do what they want with it.”
After all these years, Brooks has learned that it’s about making the technology accessible and purpose-built. “I always try to make technology easy for people to understand, and therefore we can deploy it at scale, and always look at the business case; the return on investment is also very important.”
Even with that, Brooks says we have to accept that there are always going to be hard-to-solve outlier cases with AI, ones that could take decades to resolve. “Without carefully boxing in how an AI system is deployed, there is always a long tail of special cases that take decades to discover and fix. Paradoxically, all those fixes are AI-complete themselves.”
Brooks adds that there’s a mistaken belief, mostly thanks to Moore’s law, that technology will always grow exponentially: the idea that if ChatGPT 4 is this good, imagine what ChatGPT 5, 6 and 7 will be like. He sees a flaw in that logic, because tech doesn’t always grow exponentially, in spite of Moore’s law.
He uses the iPod as an example. For a few iterations, it did in fact double in storage size, from 10GB all the way to 160GB. If it had continued on that trajectory, he figured, we would have had an iPod with 160TB of storage by 2017, but of course we didn’t. The models being sold in 2017 actually came with 256GB or 160GB because, as he pointed out, no one actually needed more than that.
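The arithmetic behind that extrapolation is easy to reproduce. Assuming one doubling per year starting from the 160GB model of 2007 (the cadence is an assumption for illustration; the article only gives the 10GB-to-160GB run and the 160TB endpoint), a few lines of Python show where the curve lands:

```python
# Naive "storage doubles every year" extrapolation for the iPod.
# Starting point: the 160GB iPod of 2007; one doubling per year is assumed.
capacity_gb = 160
for year in range(2008, 2018):
    capacity_gb *= 2
    print(f"{year}: {capacity_gb:,} GB (~{capacity_gb / 1024:.0f} TB)")

# By 2017 the projection reaches 163,840 GB, i.e. roughly 160TB,
# versus the 256GB or 160GB models actually on sale that year.
```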
Brooks acknowledges that LLMs could help at some point with domestic robots, where they could perform specific tasks, especially with an aging population and not enough people to take care of them. But even that, he says, could come with its own set of unique challenges.
“People say, ‘Oh, the large language models are gonna make robots be able to do things they couldn’t do.’ That’s not where the problem is. The problem with being able to do stuff is about control theory and all sorts of other hardcore math optimization,” he said.
Brooks explains that this could eventually lead to robots with useful language interfaces for people in care situations. “It’s not useful in the warehouse to tell an individual robot to go out and get one thing for one order, but it may be useful for eldercare in homes for people to be able to say things to the robots,” he said.