Google adds natural language understanding to the capabilities of its robots

“Ok Google! Give me a glass of water.” What if that instruction could get a robot to serve you a glass of water? On August 16, 2022, Alphabet, Google’s parent company, announced that it is equipping its robots, designed to help people with everyday tasks, with natural language understanding. Concretely, Google is bringing together two of its most ambitious research projects: robotics, via its Everyday Robots division, and its work on natural language. Here is a look at a pairing that is more complex than it seems.

Communicate in natural language

Robotic systems typically execute short, simple commands such as “bring me an apple”. It is easier to operate robots by issuing clear tasks tied to specific rewards. Conversely, robots struggle with long-horizon or imprecise tasks, such as a request for a healthy meal after a workout.

Google seeks to go further by grounding language models in a given environment. Concretely, the objective is to communicate with the robot without giving it a precise instruction, so that it can itself determine the task or tasks to carry out based on what it has been told, its environment, and its skills. The robot filters the request through a list of possible actions and selects the most probable one. Google calls this method, which consists of interpreting naturally spoken commands, evaluating possible actions, and planning the steps to fulfill the request, PaLM-SayCan.
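The selection step described above can be sketched in a few lines. In SayCan, each candidate action is scored twice: once by the language model (how useful the action is for the request) and once by the robot's value functions (how likely the action is to succeed in the current environment), and the product of the two scores decides. The sketch below is a minimal illustration of that idea, not Google's implementation: both scoring functions are hypothetical stubs with hand-set numbers.

```python
# Minimal sketch of SayCan-style action selection.
# Both scoring functions are hypothetical stubs; in the real system the
# language score comes from PaLM and the affordance score from learned
# value functions grounded in the robot's surroundings.

def language_score(instruction: str, action: str) -> float:
    """How relevant the language model judges the action to be (stub values)."""
    scores = {
        ("I spilled my drink", "find a sponge"): 0.6,
        ("I spilled my drink", "find an apple"): 0.1,
        ("I spilled my drink", "go to the counter"): 0.3,
    }
    return scores.get((instruction, action), 0.0)

def affordance_score(action: str) -> float:
    """How likely the robot is to succeed at the action right now (stub values)."""
    return {
        "find a sponge": 0.8,
        "find an apple": 0.9,
        "go to the counter": 0.7,
    }[action]

def choose_action(instruction: str, actions: list[str]) -> str:
    # SayCan multiplies the two scores and picks the highest-scoring action.
    return max(
        actions,
        key=lambda a: language_score(instruction, a) * affordance_score(a),
    )

actions = ["find a sponge", "find an apple", "go to the counter"]
print(choose_action("I spilled my drink", actions))  # → find a sponge
```

Note the role of the affordance term: even if the language model rates an action highly, the robot will not choose it when it has little chance of succeeding at it where it currently stands.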

74% of requests executed successfully

To evaluate it, Google placed robots in a kitchen and gave them tasks in natural language. For example, if someone says “I spilled my drink, can you help me?”, the robot can fetch a sponge. Likewise, the goal is for the robot to respond to requests like “I just worked out, please bring me a snack and a drink to recover.”

Google claims to have tested a total of 101 instructions to evaluate the performance of the robots and its PaLM-SayCan model. The robots planned the requests correctly 84% of the time and executed them successfully 74% of the time. However, Google has not published the list of instructions, so it is impossible to know how complex they were.

A technology not yet perfected

This is a small step towards seeing robots arrive in homes: much work remains, because natural language commands are endless and cover tasks of widely varying difficulty. For example, if the robot is asked to “clean up a broken glass”, it must work out whether to fetch a broom, a vacuum cleaner, or both, in which order, and where these objects are stored.

The robots do not yet respond to the “Ok Google” command, but it is likely that in time they will. Google aims to offer versatile robots that are as easy to control as single-task robots. For the moment, one of the only widely adopted domestic robots is iRobot’s Roomba autonomous vacuum cleaner (iRobot was recently acquired by Amazon), which performs a single task all the time: cleaning the floors.
