YouTube's new AI is cranking out music using voices from real singers
YouTube's Dream Track lets creators sample iconic voices without the hassle of paying royalties.
What you need to know
- Google DeepMind's new Lyria model can create music with instrumentals and vocals in a variety of genres, and it can maintain the complexity of rhythms, melodies, and vocals over long passages.
- YouTube and Google DeepMind are collaborating on two AI music experiments: Dream Track, which lets creators generate 30-second soundtracks for YouTube Shorts in the style of participating artists, and a set of Music AI tools designed to support artists' creative process.
- DeepMind is also using SynthID to watermark AI-generated audio in a way that is inaudible to humans; the watermark can be used to trace the origin of AI-generated music and deter misuse.
Google is formally diving into the world of AI music with a new experiment called Dream Track, an AI tool that whips up original tunes mimicking the style of your favorite singers.
Dream Track is already in the hands of a few YouTubers, and it cranks out 30-second tracks mimicking artists like Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Sia, T-Pain, Troye Sivan, and Papoose, Google announced in a blog post.
In a demo, a user tossed in a request like "a ballad about how opposites attract, upbeat acoustic," and Dream Track whipped up a short clip of a new song sung by a voice clone of Charlie Puth. The voice clone sounded just like the real Puth, complete with a backing track that matched the vibe.
Right now, only a "small group of select U.S. creators" can try out Dream Track, and it can only be used to create music for YouTube Shorts. It's rolling out with other AI music sidekicks that can create horn sections and entire orchestras from text prompts and humming. These new tools are still in development, but they'll be available to participants in YouTube's Music AI Incubator later this year.
Google's latest project is being developed with the DeepMind team using a new AI music generation model called Lyria. While these tools have the potential to revolutionize the way music is created, they also raise concerns about how to protect artists' intellectual property.
Earlier this year, the music world was buzzing about the song "Heart on My Sleeve," which used AI to sound like Drake and The Weeknd. Universal Music Group (UMG), the label behind Drake and The Weeknd, played the copyright card to yank that track off streaming platforms and social media sites. UMG eventually joined forces with Google to officially license its artists' voices for Dream Track.
The Mountain View-based company aims to address copyright concerns by developing tools that can identify AI-generated music. One such tool, called SynthID, tags AI-generated audio with a watermark that is inaudible to humans.
SynthID debuted in late August with the goal of identifying AI-generated images by embedding a digital watermark directly into their pixels and detecting it later. These watermarks are added whenever a Google AI model generates an image.
As for AI-generated music, Google says this watermark will keep your listening experience intact, and it should still be there even if you make changes to the song, like compressing it, speeding it up, or adding noise.