
YouTube Announces Broader Launch of ‘Dream Track’ AI Audio Generator

This will be handy.

YouTube has announced that all creators in the U.S. can now use its “Dream Track” audio generation tool, which enables you to create short, AI-generated audio clips based on text prompts.

Dream Track, which YouTube first announced back in 2023, is the result of collaborative work with various artists to create a library of audio cues for the AI system to pull from.

Based on Google DeepMind’s Lyria music model, the initial project, which was made available to a select group of U.S. creators, enabled users to generate unique, royalty-free soundtracks of up to 30 seconds, ideal for Shorts clips.

YouTube CEO Neal Mohan says that all U.S. creators can now use Dream Track to create instrumental audio clips for Shorts, as well as longer clips via the YouTube Create app. There are still some restrictions on the types of audio you can generate, however (you can’t generate vocals at this stage, for example).

But even so, this could be a handy accompaniment to your video creation efforts, with custom audio, unique to your content, helping to set the mood and enhance the presentation of your short clips.

It’s also a sneak peek at the future of audio creation on YouTube, with tools that can generate stock-style audio for your uploads.

As noted, Dream Track initially also included collaborations with various popular artists, and was touted by YouTube as a way to “explore how the technology could be used to create deeper connections between artists and creators, and ultimately, their fans.”

That, seemingly, isn’t where Dream Track has ended up, with YouTube opting for basic jingle creation instead. But there may be more to come for the project, which could grow into an even more powerful audio generation tool to enhance your content.

In this sense, this initial release could be more significant than it seems.

Either way, it’s worth an experiment. Dream Track is being rolled out to all U.S. creators from today.
