Until now available only in English, the multiple search feature integrated into Google Lens will soon be available in 70 languages, most likely including French.
Get ready to change your search habits. Multisearch, Google Lens's multiple search module, will soon be available in French. Currently offered only as an English-language beta in the United States, Multisearch will roll out in no fewer than 70 languages over the coming months.
Multiple search for all
As a reminder, Google Lens's multiple search function lets you launch a query in the search engine by combining keywords, images, and even voice. By combining all these elements, the search engine paves the way for new search habits. Above all, the system makes visual searches more natural: you can, for example, ask questions about what you see, just as you would with a friend. Combining several elements in a single query (image, text, voice) also makes searching easier and produces more relevant results.
At its Google I/O conference last May, Google also indicated that multiple search could be used to find results near your geographical location. To illustrate Multisearch Near Me (the name of the function), the company gave the example of submitting a photo of a dish along with a "near me" query. The search engine could then find a nearby restaurant likely to serve the desired dish. Google, which appears to have made sufficient progress on the project, has just announced its rollout this fall. However, don't expect to enjoy it immediately in our latitudes: Multisearch Near Me will initially be available only in English, and only in the United States.
More realistic translation via Google Lens
If you use Google Lens, you know that the app can translate text displayed in an image into more than a hundred languages. Thanks to its advances in artificial intelligence, Google Lens will soon be able to display those translations in a more realistic way.
The translated text will no longer float above the original text; it will simply replace it on the image in a realistic way. This feat is made possible by a machine-learning technique called generative adversarial networks (GANs). Google did not give further details about availability, but the feature already appears to be rolling out.