Tongyi Lab has recently open-sourced InspireMusic, an AIGC toolkit that integrates music, song, and audio generation. It aims to give researchers, developers, and music enthusiasts a comprehensive, easy-to-use creative platform that lowers the barrier to music creation and offers rich models and tools for generating diverse musical works. InspireMusic supports a variety of music styles, emotional expressions, and complex musical structure controls, and users can create easily through text descriptions or audio prompts.
InspireMusic provides researchers and developers with a full set of training and fine-tuning tools for music, song, and audio generation models, along with efficient models that improve generation quality. At the same time, the toolkit greatly lowers the barrier to music creation, letting music lovers generate diverse works from simple text descriptions or audio prompts.
InspireMusic's text-to-music generation model is particularly eye-catching. It covers a wide range of music styles, emotional expressions, and complex musical structure controls, giving users great creative freedom and flexibility. By entering a text description, users can generate music that matches their personal preferences: whether relaxing jazz or a childlike melody, InspireMusic can render it.
In addition, InspireMusic offers a flexible inference design with both a fast model and a high-quality model, so users who prioritize rapid generation and those who prioritize output quality can each find a mode that suits them.
At present, InspireMusic has open-sourced its training and inference code for music generation, which users can access through the GitHub repository, ModelScope Creation Space, and HuggingFace Spaces.
Going forward, Tongyi Lab plans to open-source InspireMusic's base models for singing and audio generation as well, inviting more researchers, developers, and users to try the toolkit and contribute to its development. With these joint efforts, InspireMusic should continue to improve and bring more surprises to the field of music creation.
GitHub repository: InspireMusic (https://github.com/FunAudioLLM/InspireMusic)
Online Demo:
ModelScope Creation Space: https://modelscope.cn/studios/iic/InspireMusic/summary
HuggingFace Spaces: https://huggingface.co/spaces/FunAudioLLM/InspireMusic
The open-sourcing of InspireMusic brings new possibilities to music creation and points to new directions for applying artificial intelligence in this field. We look forward to InspireMusic's continued development and to it bringing the joy of music creation to more people.