Google wants to improve everyday robots with natural language

It won’t happen overnight, but at Google, smarter robots powered by natural language seem to be on the right track.

Alphabet, the parent company of Google, has many branches whose fields and projects sometimes overlap. This is notably the case for Everyday Robots (which builds robots designed to help with everyday tasks) and Google’s AI division, which collaborated on “Do As I Can, Not As I Say: Grounding Language in Robotic Affordances”.

More useful robots on the way

The project aims to concretely improve the former’s robots with the latter’s technology. For now, robots in general, and Everyday Robots’ machines in particular, are not only still a little slow and hesitant; above all, they can only respond to clear, short and simple voice instructions. Long or complex requests are not yet within their abilities.

Google’s objective here is to boost these robots by relying on natural language, more precisely by using a large language model (LLM), specifically the PaLM model. Thanks to a new method dubbed PaLM-SayCan, Everyday Robots’ machines can now act on a relatively vague phrase like “I spilled my drink, can you help me?”.

Rather than simply picking the single most plausible action from its list, as is usually the case, the robot compares several options. It then chooses the most suitable one, taking into account its environment and the steps needed to carry out its task.
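To make the idea concrete, here is a minimal sketch in Python of this kind of scoring, written under stated assumptions: the skill list and the two scoring functions are toy stand-ins invented for illustration, not Google’s actual code or API. In the real PaLM-SayCan system, the “say” score comes from the PaLM language model and the “can” score from value functions trained on the robot’s skills.

```python
# Illustrative sketch of SayCan-style action selection (not Google's code).
# say_score and can_score are toy stand-ins: in the paper, the first comes
# from the LLM (PaLM) and the second from learned affordance value functions.

SKILLS = ["find a sponge", "pick up the sponge", "bring it to the user", "done"]

def say_score(instruction: str, done_steps: list[str], skill: str) -> float:
    """Toy stand-in for the LLM: how useful is `skill` as the next step?"""
    next_index = len(done_steps)          # toy heuristic: follow the list order
    return 1.0 if SKILLS.index(skill) == next_index else 0.1

def can_score(skill: str) -> float:
    """Toy stand-in for the affordance model: can the robot do `skill` now?"""
    return 0.9  # pretend every skill is feasible in the current scene

def plan(instruction: str, max_steps: int = 10) -> list[str]:
    steps: list[str] = []
    for _ in range(max_steps):
        # SayCan's core idea: rank each option by usefulness * feasibility,
        # rather than committing to the single most plausible action.
        best = max(SKILLS, key=lambda s: say_score(instruction, steps, s) * can_score(s))
        if best == "done":
            break
        steps.append(best)
    return steps

print(plan("I spilled my drink, can you help me?"))
# -> ['find a sponge', 'pick up the sponge', 'bring it to the user']
```

The design point this toy captures is the combination of the two scores: a step the language model loves but the robot cannot physically perform in its current environment gets ranked down, and vice versa.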

Solid progress, but work remains

Tested on around a hundred such instructions in a kitchen setting, the robots managed to plan appropriate actions (like fetching a sponge) in 84% of cases and to carry them out in 74%. There is therefore still room for improvement, and the research and development phase looks set to last a while yet before these robots reach our daily lives.

The day we understand a robot’s intentions faster than a human’s is not here yet either.
