You can find the original ID_Animator project at this link: ID_Animator
1. ParlerTTS node: ComfyUI_ParlerTTS
2. Llama3_8B node: ComfyUI_Llama3_8B
3. HiDiffusion node: ComfyUI_HiDiffusion_Pro
4. ID_Animator node: ComfyUI_ID_Animator
5. StoryDiffusion node: ComfyUI_StoryDiffusion
6. Pops node: ComfyUI_Pops
7. stable-audio-open-1.0 node: ComfyUI_StableAudio_Open
8. GLM4 node: ComfyUI_ChatGLM_API
9. CustomNet node: ComfyUI_CustomNet
10. Pipeline_Tool node: ComfyUI_Pipeline_Tool
11. Pic2Story node: ComfyUI_Pic2Story
12. PBR_Maker node: ComfyUI_PBR_Maker
2024-06-15
1. Fixed the issue where animatediff frames were capped at 32. Thanks to ShmuelRonen for pointing this out.
2. Added conditional control with face_lora and lora_adapter; the model addresses are listed in the model notes below.
3. Added support for diffusers 0.28.0 and above.
--- Previous updates
1. The output is now individual frames, which makes it easier to connect other video-composition nodes; the original option to save a GIF animation has been removed.
2. Added a model-loading menu with clearer logic; you can put several motion models into the ".. ComfyUI_ID_Animator/models/animatediff_models" directory.
git clone https://github.com/smthemex/ComfyUI_ID_Animator.git
If a module is missing, open "if miss module check this requirements.txt" and install the missing module separately.
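As a minimal sketch, assuming the node is cloned into ComfyUI's custom_nodes directory and that the requirements file in the repo root carries the name quoted above, a manual install looks like this:

```sh
# run from the ComfyUI root directory
cd custom_nodes
git clone https://github.com/smthemex/ComfyUI_ID_Animator.git
cd ComfyUI_ID_Animator
# install only the modules you are missing, or the whole file at once
pip install -r "if miss module check this requirements.txt"
```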
3.1 dir.. ComfyUI_ID_Animator/models
3.2 dir.. ComfyUI_ID_Animator/models/animatediff_models
3.3 dir.. comfy/models/diffusers
3.4 dir.. comfy/models/checkpoints
3.5 dir.. ComfyUI_ID_Animator/models/image_encoder
3.6 dir.. ComfyUI_ID_Animator/models/adapter
3.7 other models
The first run will download the insightface models to the "X/user/username/.insightface/models/buffalo_l" directory
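Taken together, and assuming "comfy" in items 3.3 and 3.4 refers to the ComfyUI root directory, the expected folders can be prepared as below. This is only a sketch of the layout in items 3.1–3.6; which model files go into each folder depends on what you download.

```sh
# run from the ComfyUI root; mirrors items 3.1-3.6 above
mkdir -p models/diffusers models/checkpoints
mkdir -p custom_nodes/ComfyUI_ID_Animator/models/animatediff_models
mkdir -p custom_nodes/ComfyUI_ID_Animator/models/image_encoder
mkdir -p custom_nodes/ComfyUI_ID_Animator/models/adapter
# the insightface models (buffalo_l) are downloaded automatically on first run
```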
Because the author of "ID_Animator" did not specify an open-source license, I have temporarily set this project's license to Apache-2.0.
Xuanhua He: [email protected]
Quande Liu: [email protected]
Shengju Qian: [email protected]
@article{guo2023animatediff,
title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Liang, Zhengyang and Wang, Yaohui and Qiao, Yu and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
journal={International Conference on Learning Representations},
year={2024}
}
@article{guo2023sparsectrl,
title={SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models},
author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
journal={arXiv preprint arXiv:2311.16933},
year={2023}
}