MusicGen, the AI music creation tool launched by Meta, is now online. Users can type a text description of the music they want and generate a track from it; the process is simple and requires no professional knowledge. If you are curious, give it a try.
In addition to Animated Drawings, which brings doodled characters to life, the Meta AI team, which has been pushing hard on generative AI, has not missed the progress in music generation. It announced that MusicGen, the deep-learning music generation model built by its Audiocraft research team, has been open-sourced on GitHub and can be run entirely on your own GPU hardware or on Google Colab (Facebook Research documents the setup steps). MusicGen’s ability to “adapt” an existing track via a text prompt is also available online, and you can try it out here: upload a music clip and it produces roughly 12 seconds of generated music.
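For readers who would rather run it locally, a minimal sketch of the text-to-music flow follows, based on the audiocraft README at release time. The checkpoint name ('small') and exact function signatures reflect that release and may have changed in later versions:

```python
# Minimal local text-to-music generation with audiocraft, following the
# README at release; assumes a GPU for reasonable generation speed.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('small')   # smallest released checkpoint
model.set_generation_params(duration=12)   # roughly matches the online demo's ~12 s clips

# Generate one clip from a text description.
wav = model.generate(['An 80s driving pop song with heavy drums and synth pads in the background'])

for idx, one_wav in enumerate(wav):
    # Saves as {idx}.wav with loudness normalization.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")
```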
The online version of MusicGen is very easy to use. I uploaded a melody of nearly 4 minutes; generating with a basic prompt took MusicGen more than 200 seconds to process, and a relatively complex demonstration prompt such as "An 80s driving pop song with heavy drums and synth pads in the background" takes longer still.
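In the open-source library, this "adapt an existing track" workflow corresponds to melody-conditioned generation with the 'melody' checkpoint, which steers generation using a chromagram extracted from the uploaded audio. A sketch along the lines of the README, with my_melody.mp3 as a placeholder file name:

```python
# Melody-conditioned generation ("adapting" an existing track), per the
# audiocraft README; the file path is a placeholder for your own clip.
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('melody')  # checkpoint trained for melody conditioning
model.set_generation_params(duration=12)

melody, sr = torchaudio.load('my_melody.mp3')  # e.g. the ~4-minute source track above
description = ['An 80s driving pop song with heavy drums and synth pads in the background']

# melody[None] adds a batch dimension: one melody for one description.
wav = model.generate_with_chroma(description, melody[None], sr)

audio_write('adapted', wav[0].cpu(), model.sample_rate, strategy="loudness")
```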
As far as the adaptation results go, I personally find them quite interesting, and I am also curious what music generated purely from text would sound like. Meta AI staff have likewise posted examples of MusicGen's adaptations on Twitter (embedded above).
Meta also confidently claimed that MusicGen produces better results than existing services such as MusicLM, Riffusion and Mousai. So what is the key to this better generation performance? The research team points to a difference from those systems: MusicGen does not require a self-supervised semantic representation, and it needs only about 50 autoregressive steps per second of audio.
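As a back-of-the-envelope check on that figure (my own illustration, not code from the paper), 50 token steps per second means a 12-second clip like the demo's takes about 600 decoding steps:

```python
# Rough arithmetic on the 50 Hz claim: a single-stage autoregressive pass
# over the audio tokens needs duration * 50 decoding steps, since the
# interleaved codebooks are predicted together within each step.
FRAME_RATE_HZ = 50   # audio token frames generated per second
CLIP_SECONDS = 12    # length of the online demo's output

steps = FRAME_RATE_HZ * CLIP_SECONDS
print(f"A {CLIP_SECONDS}s clip takes about {steps} autoregressive steps.")  # 600
```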