Google is training its robots to be more like humans


MOUNTAIN VIEW, Calif. — Researchers here at Google’s lab recently asked a robot to build a burger out of various plastic toy ingredients.

The mechanical arm knew enough to add ketchup after the meat and before the lettuce, but thought the right way to do so was to put the entire bottle inside the burger.

While that robot won’t be working as a line cook any time soon, it is representative of a bigger breakthrough announced by Google engineers on Tuesday. Using recently developed artificial intelligence software known as large language models, the researchers say they’ve been able to design robots that can help humans with a broader range of everyday tasks.

Instead of being given a laundry list of instructions that direct each of their movements one by one, the robots can now respond to complete requests, more like a human would.

In one demonstration last week, a researcher told a robot, “I’m hungry, can you get me a snack?” The robot then proceeded to search through a cafeteria, open a drawer, find a bag of chips, and bring it to the human.

It’s the first time language models have been integrated into robots, Google executives and researchers say.

“This is very fundamentally a different paradigm,” said Brian Ichter, a research scientist at Google and one of the authors of a new paper released Tuesday describing the progress the company has made.

Robots are already commonplace. Millions of them work in factories around the world, but they follow specific instructions and usually only focus on one or two tasks, such as moving a product down the assembly line or welding two pieces of metal together. The race to build a robot that can do a range of everyday tasks, and learn on the job, is much more complex. Tech companies big and small have labored to build such general-purpose robots for years.


Language models work by taking huge amounts of text uploaded to the internet and using it to train artificial intelligence software to guess what kinds of responses might come after certain questions or comments. The models have become so good at predicting the right response that engaging with one often feels like having a conversation with a knowledgeable human. Google and other companies, including OpenAI and Microsoft, have poured resources into building better models and training them on ever-bigger sets of text, in multiple languages.
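
As a rough illustration of that prediction step (and emphatically not Google’s actual system), the toy sketch below counts which word tends to follow which in a handful of sentences and then guesses the most likely continuation. Large language models apply the same basic idea at vastly greater scale and with far more sophisticated statistics.

```python
from collections import Counter, defaultdict

# Toy training text standing in for the web-scale corpora real models learn from.
corpus = [
    "can you get me a snack",
    "can you get me a drink",
    "can you bring me a snack",
]

# Count which word tends to follow each word (a bigram model).
next_words = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_words[current][following] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often in training."""
    candidates = next_words.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("get"))  # -> "me"
print(predict_next("a"))    # -> "snack" (seen twice, vs. "drink" once)
```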

The work is controversial. In July, Google fired an employee who claimed the software was sentient. The consensus among AI experts is that the models are not sentient, but many are concerned that they exhibit biases because they’ve been trained on huge amounts of unfiltered, human-generated text.

Some language models have shown themselves to be racist or sexist, or easily manipulated into spouting hate speech or lies when prompted with the right statements or questions.

In general, language models could give robots knowledge of high-level planning steps, said Carnegie Mellon assistant professor Deepak Pathak, who studies AI and robotics and was commenting on the field, not specifically Google. But those models won’t give robots all the information they need — for example, how much force to apply when opening a refrigerator. That knowledge has to come from somewhere else.

“It solves only the high-level planning issue,” he said.

Still, Google is forging ahead and has now melded the language models with some of its robots. Instead of encoding specific technical instructions for each task a robot can do, researchers can simply talk to the machines in everyday language. More important, the new software helps the robots parse complex multistep instructions on their own, letting them interpret requests they’ve never heard before and come up with responses and actions that make sense.
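
The article doesn’t describe Google’s interface in detail, but a hedged sketch of what such an integration might look like is below: the robot exposes a fixed library of low-level skills, and a language model is repeatedly asked which skill to run next until the request is satisfied. The skill names and the `choose_next_skill` stub are hypothetical, standing in for a real model call.

```python
# Illustrative only: a planning loop that maps a natural-language request
# onto a robot's fixed library of skills. The skill names are made up.

SKILLS = [
    "go to the cafeteria",
    "open the drawer",
    "pick up the bag of chips",
    "bring it to the person",
    "done",
]

def choose_next_skill(request: str, history: list[str]) -> str:
    """Stand-in for a language-model call: given the request and the steps
    taken so far, return the next skill to execute."""
    # A real system would prompt a large language model here; this stub
    # replays a canned plan so the example runs end to end.
    return SKILLS[len(history)] if len(history) < len(SKILLS) else "done"

def fulfill(request: str) -> list[str]:
    steps: list[str] = []
    while True:
        step = choose_next_skill(request, steps)
        if step == "done":
            return steps
        steps.append(step)  # a real robot would physically execute the skill here

print(fulfill("I'm hungry, can you get me a snack?"))
```

As Pathak notes above, a sketch like this only covers the high-level plan; knowing how to actually grip the bag or how hard to pull on the drawer has to come from somewhere else.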


Robots that can use language models could change how manufacturing and distribution facilities are run, said Zac Stewart Rogers, an assistant professor of supply chain management at Colorado State University.

“A human and a robot working together is always the most productive” now, he said. “Robots can do manual heavy lifting. Humans can do the nuanced troubleshooting.”

If robots were able to figure out complex tasks, distribution centers could be smaller, with fewer humans and more robots. That could mean fewer jobs for people, though Rogers pointed out that when automation causes a contraction in one area, jobs are generally created in others.

It’s also probably still a long way off. Artificial intelligence techniques such as neural networks and reinforcement learning have been used to train robots for years, and while they’ve led to some breakthroughs, progress remains slow. Google’s robots are nowhere near ready for the real world, and in interviews, Google researchers and executives said repeatedly that they are simply running a research lab and do not yet have plans to commercialize the technology.

But it’s clear Google and other Big Tech companies have a serious interest in robotics. Amazon uses many robots in its warehouses, is experimenting with drone delivery and earlier this month agreed to buy the maker of the Roomba vacuum cleaner robot for $1.7 billion. (Amazon founder Jeff Bezos owns The Washington Post.)


Tesla, which has developed some autonomous driving features for its cars, is also working on general-purpose robots.

In 2013, Google went on a spending spree, buying several robotics companies, including Boston Dynamics, the maker of the robot dogs that often go viral on social media. But the executive in charge of the program was accused of sexual misconduct and left the company soon after. In 2017, Google sold Boston Dynamics to Japanese telecom and tech investment giant SoftBank. The hype around ever-smarter robots designed by the most powerful tech companies faded.

In the language model project, Google researchers worked alongside those from Everyday Robots, a separate but wholly owned company inside Google that works specifically on building robots that can do a range of “repetitive” and “drudgerous” tasks. The bots are already at work in various Google cafeterias, wiping down counters and throwing out trash.
