AudioLM is new software developed by Google researchers that generates audio continuing a given snippet in the same style while retaining its pitch. Among the sounds it can listen to and continue are human voices as well as musical instruments. This technique holds promise for speeding up the process of training AI to generate sounds, and it could eventually be used to automatically generate music to accompany videos.
AI-generated audio is already part of everyday life: home-assistant voices such as Alexa and Siri, for example, rely on natural language processing. AI music systems like Jukebox, which uses text data to generate song lyrics, have already produced impressive results, but most existing techniques require text-based transcriptions and prior training, which demands considerable time and human labor.
You can listen to all the examples at this link.