At Google, Robots Go To School And Learn Using AI Algorithms


MOUNTAIN VIEW, Calif. — Researchers at Google’s lab recently asked a robot to build a hamburger out of various plastic toy ingredients.


The mechanical arm knew enough to add ketchup after the meat and before the lettuce, but figured the right way to do it was to put the whole bottle inside the burger.

While this robot won’t be working as a line cook anytime soon, it’s representative of a bigger breakthrough announced by Google engineers on Tuesday. Using newly developed artificial intelligence software known as large language models, the researchers say they have been able to design robots that can help humans with a wider range of everyday tasks.

Instead of needing a list of instructions directing each of their movements one by one, robots can now respond to full requests, more like a human would.

During a demonstration last week, a researcher said to a robot, “I’m hungry, can you get me a snack?” The robot then rummaged through a cafeteria, opened a drawer, found a bag of chips and brought it to the human.

This is the first time that language models have been embedded in robots, according to Google executives and researchers.

“It’s very fundamentally a different paradigm,” said Brian Ichter, a Google researcher and one of the authors of a new paper published Tuesday outlining the progress the company has made.

Robots are already commonplace. Millions of them work in factories around the world, but they follow specific instructions and usually only focus on one or two tasks, such as moving a product on the assembly line or welding two pieces of metal together. The race to build a robot that can perform a range of daily tasks and learn on the job is much more complex. Tech companies big and small have worked to build such general-purpose robots for years.


Language models work by taking huge amounts of text downloaded from the internet and using it to train artificial intelligence software to guess what kinds of responses might come after certain questions or comments. Models have become so good at predicting the correct answer that engaging with someone often feels like having a conversation with a knowledgeable human. Google and other companies, including OpenAI and Microsoft, have devoted resources to building better models and training them on increasingly large sets of text, in multiple languages.
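The next-word-prediction idea described above can be illustrated with a toy sketch. This is a hypothetical example, not Google's system: it uses simple bigram counts over a tiny made-up corpus in place of a real neural network, but the principle of guessing what comes next from observed text is the same.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus" (an assumption for illustration only).
corpus = "i am hungry can you get me a snack can you get me a drink".split()

# Count which word follows which in the corpus.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("can"))  # -> "you" (follows "can" both times it appears)
print(predict_next("me"))   # -> "a"
```

A real large language model replaces the bigram table with a neural network trained on vast amounts of internet text, which is what lets it produce answers that read like a knowledgeable human's.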

The work is controversial. In July, Google fired an employee who claimed he believed the software was sentient. The consensus among AI experts is that the models are not sentient, but many fear they are biased because they were trained on huge amounts of unfiltered human-generated text.

Some language models have proven to produce racist or sexist output, or to be easily manipulated into spreading hate speech or lies when prompted with the right statements or questions.

In general, language models could give robots knowledge of high-level planning steps, said Deepak Pathak, an assistant professor at Carnegie Mellon who studies AI and robotics and was commenting on the field generally, not on Google specifically. But these models won’t give robots all the information they need, for example, how much force to apply when opening a refrigerator. That knowledge must come from elsewhere.

“It only solves the high-level planning problem,” he said.
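The division of labor Pathak describes can be sketched in a few lines. This is a hypothetical illustration, not Google's code: the "language model" here is a canned lookup that returns high-level steps, and the low-level skills (which in a real robot would know about forces and grasps, and might be trained separately) just record what they execute.

```python
def plan_from_language_model(request):
    # Stand-in for a real language model: maps one request to a
    # hypothetical high-level plan (assumed steps, for illustration).
    canned_plans = {
        "get me a snack": ["go to drawer", "open drawer",
                           "pick up chips", "bring to person"],
    }
    return canned_plans.get(request, [])

def execute_skill(step, log):
    # A real low-level skill would control motors and forces;
    # this sketch only records that the step was carried out.
    log.append(f"executing: {step}")

def run(request):
    log = []
    for step in plan_from_language_model(request):
        execute_skill(step, log)
    return log

for line in run("get me a snack"):
    print(line)
```

The point of the sketch is the boundary: the language model only decides *what* to do and in what order; everything about *how* to do it physically has to come from somewhere else.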

Still, Google is moving forward and has now merged the language models with some of its robots. Instead of having to encode specific technical instructions for each task a robot can perform, researchers can simply talk to the robots in everyday language. More importantly, the new software helps robots break down complex, multi-step instructions on their own: they can now interpret instructions they’ve never heard before and come up with responses and actions that make sense.


Robots that can use language models could change the way manufacturing and distribution facilities are run, said Zac Stewart Rogers, assistant professor of supply chain management at Colorado State University.

“A human and a robot working together are always the most productive” right now, he said. “Robots can do the heavy lifting. Humans can do the nuanced troubleshooting.”

If robots were able to understand complex tasks, it could mean fulfillment centers could be smaller, with fewer humans and more robots. That could mean fewer jobs for people, though Rogers points out that typically when there’s a contraction due to automation in one area, jobs are created in other areas.

It’s also probably still a long way off. Artificial intelligence techniques such as neural networks and reinforcement learning have been used to train robots for years. This has led to some breakthroughs, but progress is still slow. Google’s robots are nowhere near ready for the real world, and in interviews Google researchers and executives have repeatedly said that they are simply running a research lab and have no plans yet to commercialize the technology.

But it’s clear that Google and other big tech companies have a serious interest in robotics. Amazon uses many robots in its warehouses, is experimenting with drone delivery and earlier this month agreed to buy robot vacuum maker Roomba for $1.7 billion. (Amazon founder Jeff Bezos owns The Washington Post).


Tesla, which has developed some self-driving features for its cars, is also working on general-purpose robots.

In 2013, Google went on a spending spree, buying several robotics companies, including Boston Dynamics, the maker of the robot dogs that often go viral on social media. But the executive responsible for the program was accused of sexual misconduct and left the company soon after. In 2017, Google sold Boston Dynamics to the Japanese telecommunications and tech investment giant SoftBank. The hype around ever-smarter robots designed by the most powerful tech companies has since died down.

As part of the language model project, Google researchers worked alongside those at Everyday Robots, a separate but wholly owned company within Google that works specifically on building robots that can perform a range of “repetitive” and “chore” tasks. The robots are already hard at work in various Google cafeterias, wiping down counters and dumping trash.
