[CosyVoice Paper][CosyVoice Studio][CosyVoice Code]
For SenseVoice, please visit the SenseVoice repo and the SenseVoice space.
Clone and install
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If you failed to clone the submodule due to network failures, please run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
conda create -n cosyvoice python=3.8
conda activate cosyvoice
# pynini is required by WeTextProcessing; use conda to install it since it can be executed on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
# If you encounter sox compatibility issues
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
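Optionally, you can verify the installation before moving on. The snippet below is a minimal sanity check, not part of the official setup; it only confirms that pynini and WeTextProcessing (both installed above) import correctly:
# optional sanity check (illustrative, not part of the official setup)
import pynini  # installed via conda above
from tn.chinese.normalizer import Normalizer  # provided by WeTextProcessing
print('pynini import OK')
print(Normalizer().normalize('共计2.5万元'))  # numbers should come back spelled out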
Model download
We strongly recommend that you download our pretrained CosyVoice-300M, CosyVoice-300M-SFT, and CosyVoice-300M-Instruct models and the CosyVoice-ttsfrd resource.
If you are an expert in this field and only want to train your own CosyVoice model from scratch, you can skip this step.
# model download via the ModelScope SDK
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
# model download via git; please make sure git lfs is installed
mkdir -p pretrained_models
git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
Optionally, you can unzip the ttsfrd resource and install the ttsfrd package for better text normalization performance. Note that this step is not necessary; if you do not install the ttsfrd package, WeTextProcessing is used by default.
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd-0.3.6-cp38-cp38-linux_x86_64.whl
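The fallback behavior described above can be illustrated with a small sketch. This is hypothetical glue code, not the repository's actual loading logic:
# illustrative sketch of the ttsfrd -> WeTextProcessing fallback described
# above; hypothetical helper code, not the repo's actual implementation
try:
    import ttsfrd  # only available if the wheel above was installed
    backend = 'ttsfrd'
except ImportError:
    from tn.chinese.normalizer import Normalizer  # WeTextProcessing default
    backend = 'WeTextProcessing'
print('text normalization backend:', backend)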
Basic usage
For zero_shot/cross_lingual inference, please use the CosyVoice-300M model. For sft inference, please use the CosyVoice-300M-SFT model. For instruct inference, please use the CosyVoice-300M-Instruct model. First, add third_party/Matcha-TTS to your PYTHONPATH.
export PYTHONPATH=third_party/Matcha-TTS
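Alternatively, you can set the path from inside Python; a minimal equivalent, assuming you start the interpreter from the repository root:
# equivalent to the export above when running from the repo root
import sys
sys.path.append('third_party/Matcha-TTS')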
from cosyvoice.cli.cosyvoice import CosyVoice
from cosyvoice.utils.file_utils import load_wav
import torchaudio
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=True, load_onnx=False, fp16=True)
# sft usage
print(cosyvoice.list_avaliable_spks())
# change stream=True for chunk stream inference
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=False)):
    torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], 22050)
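As the comment above notes, passing stream=True yields the audio in chunks. A minimal sketch that stitches the chunks back together, assuming each chunk's tts_speech is a (1, samples) tensor as in the non-streaming case:
# chunked streaming sketch: collect chunks and concatenate along time;
# assumes each j['tts_speech'] is a (1, samples) tensor
import torch
chunks = []
for i, j in enumerate(cosyvoice.inference_sft('你好,我是通义生成式语音大模型,请问有什么可以帮您的吗?', '中文女', stream=True)):
    chunks.append(j['tts_speech'])
torchaudio.save('sft_stream.wav', torch.cat(chunks, dim=1), 22050)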
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-25Hz')  # or change to pretrained_models/CosyVoice-300M for 50Hz inference
# zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物,那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐,笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], 22050)
# cross_lingual usage
prompt_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_cross_lingual("<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that's coming into the family is a reason why sometimes we don't buy the whole thing.", prompt_speech_16k, stream=False)):
    torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], 22050)
# vc usage
prompt_speech_16k = load_wav('zero_shot_prompt.wav', 16000)
source_speech_16k = load_wav('cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
    torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], 22050)
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, support <laughter></laughter><strong></strong>[laughter][breath]
for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时,他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', "Theo 'Crimson', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.", stream=False)):
    torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], 22050)
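The comment above also mentions the inline [laughter] and [breath] tokens. A follow-up sketch using them; the input text and instruction here are made up for illustration, not taken from the repository:
# illustrative use of the [laughter]/[breath] inline tokens mentioned above;
# the input text and the instruction string are hypothetical examples
for i, j in enumerate(cosyvoice.inference_instruct('他讲了一个笑话[laughter],大家都笑了[breath],气氛轻松了不少。', '中文男', 'A warm, humorous storyteller with a relaxed pace.', stream=False)):
    torchaudio.save('instruct_tags_{}.wav'.format(i), j['tts_speech'], 22050)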
Start web demo
You can use our web demo page to get familiar with CosyVoice quickly. We support sft/zero_shot/cross_lingual/instruct inference in the web demo.
Please see the demo website for details.
# change iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
Advanced usage
For advanced users, we provide training and inference scripts in examples/libritts/cosyvoice/run.sh. You can get familiar with CosyVoice by following this recipe.
Build for deployment
Optionally, if you want to use grpc for service deployment, you can run the following steps; otherwise, you can simply skip them.
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
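If you want to call the FastAPI server without the bundled client.py, something like the sketch below should work; note that the /inference_sft route and the form fields are our assumptions, so check runtime/python/fastapi/server.py for the actual endpoints and parameters:
# minimal HTTP client sketch; the route name and form fields are ASSUMPTIONS,
# check runtime/python/fastapi/server.py for the real API
import requests
resp = requests.post(
    'http://127.0.0.1:50000/inference_sft',        # hypothetical route
    data={'tts_text': '你好', 'spk_id': '中文女'},  # hypothetical fields
)
with open('sft_from_api.wav', 'wb') as f:
    f.write(resp.content)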
You can ask questions directly in GitHub issues. You can also scan the QR code to join our official DingTalk chat group.
The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.