Initial support for Tora (https://github.com/alibaba/Tora)
Converted model (included in the autodownload node):
https://huggingface.co/Kijai/CogVideoX-5b-Tora/tree/main
This week there have been some bigger updates that will most likely affect some old workflows; the sampler node in particular will probably need to be refreshed (re-created) if it errors out!
New features:
Initial support for the official I2V version of CogVideoX: https://huggingface.co/THUDM/CogVideoX-5b-I2V
Also needs diffusers 0.30.3
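For reference, this is roughly what the I2V model looks like when driven through diffusers directly (a minimal sketch assuming the CogVideoXImageToVideoPipeline API from diffusers 0.30.3; the wrapper's nodes handle all of this inside ComfyUI):

```python
# Minimal diffusers-only sketch of the official I2V model; the wrapper's
# ComfyUI nodes wrap this internally. Assumes diffusers >= 0.30.3.
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload submodules to keep VRAM low

image = load_image("input.png")  # the conditioning frame
frames = pipe(
    prompt="a ship sailing through a storm",
    image=image,
    num_frames=49,
    num_inference_steps=50,
).frames[0]
export_to_video(frames, "output.mp4", fps=8)
```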
Added initial support for CogVideoX-Fun: https://github.com/aigc-apps/CogVideoX-Fun
Note that while this one can do image2vid, it is NOT the official I2V model; that one should also be released very soon.
Added experimental support for onediff, which reduced sampling time by ~40% for me, reaching 4.23 s/it on a 4090 with 49 frames. This requires Linux, torch 2.4.0, and installing onediff and nexfort:
pip install --pre onediff onediffx
pip install nexfort
First run will take around 5 mins for the compilation.
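At the diffusers level, the compilation pattern looks roughly like this (a sketch assuming onediffx's compile_pipe with the nexfort backend; the node does the compilation internally):

```python
# Sketch of onediff/nexfort compilation on a diffusers pipeline; the
# ComfyUI node handles this internally. Assumes onediffx's compile_pipe.
import torch
from diffusers import CogVideoXPipeline
from onediffx import compile_pipe

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

# nexfort backend; the first call triggers the ~5 min compilation,
# subsequent runs reuse the compiled graph.
pipe = compile_pipe(pipe, backend="nexfort")
```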
The 5b model is now also supported for basic text2vid: https://huggingface.co/THUDM/CogVideoX-5b
It is also autodownloaded to ComfyUI/models/CogVideo/CogVideoX-5b; the text encoder is not needed as we use the ComfyUI T5.
Requires diffusers 0.30.1 (this is specified in requirements.txt)
Uses the same T5 model as SD3 and Flux; fp8 works fine too. Memory requirements depend mostly on the video length. VAE decoding seems to be the only big thing that takes a lot of VRAM when everything is offloaded; it peaks momentarily at around 13-14GB at that stage. Sampling itself takes only maybe 5-6GB.
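For comparison, a bare text2vid run at the diffusers level looks roughly like this (a sketch; inside ComfyUI the wrapper nodes handle loading and use ComfyUI's T5 instead of the pipeline's bundled one):

```python
# Bare diffusers text2vid sketch for CogVideoX-5b; in ComfyUI the wrapper
# nodes do the loading and swap in ComfyUI's T5. Assumes diffusers 0.30.1.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # sampling then needs only ~5-6GB
pipe.vae.enable_slicing()        # trims the VAE-decode peak, if your
                                 # diffusers version supports it

frames = pipe(
    prompt="a golden retriever running on a beach",
    num_frames=49,
    num_inference_steps=50,
).frames[0]
export_to_video(frames, "t2v.mp4", fps=8)
```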
Hacked in img2img to attempt a vid2vid workflow; it works interestingly with some inputs, but is highly experimental.
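The underlying idea is plain img2img applied to video latents: encode the input frames, add noise for only part of the schedule, and denoise from there. A rough sketch of that noising step (illustrative only, not the repo's exact code; the function name is made up):

```python
# Illustrative img2img-style noising for vid2vid; not the repo's exact code.
# `latents` are VAE-encoded input frames, e.g. shape [B, T, C, H, W].
import torch

def noise_for_vid2vid(latents, scheduler, num_steps, strength, generator=None):
    # Skip the first (1 - strength) of the schedule: strength=1.0 is
    # pure text2vid, small strengths stay close to the input video.
    scheduler.set_timesteps(num_steps)
    start = int(num_steps * (1.0 - strength))
    timesteps = scheduler.timesteps[start:]
    noise = torch.randn(latents.shape, generator=generator,
                        dtype=latents.dtype, device=latents.device)
    noisy = scheduler.add_noise(latents, noise, timesteps[:1])
    return noisy, timesteps  # denoise over the remaining timesteps only
```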
Also added temporal tiling as a means of generating endless videos (see the sketch below):
https://github.com/kijai/ComfyUI-CogVideoXWrapper
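Temporal tiling samples overlapping windows of frames and blends the latents across each overlap so every window continues the previous one. A rough sketch of the blend (illustrative only, not the repo's exact implementation):

```python
# Illustrative temporal-window blending for "endless" videos; not the
# repo's exact code. Latent tensors carry a leading frame (time) axis.
import torch

def blend_windows(prev_tail: torch.Tensor, next_head: torch.Tensor):
    # prev_tail / next_head: [overlap, C, H, W] latent frames from two
    # adjacent windows. Linearly crossfade so the seam is continuous.
    overlap = prev_tail.shape[0]
    w = torch.linspace(0, 1, overlap, device=prev_tail.device)
    w = w.view(overlap, 1, 1, 1)
    return prev_tail * (1 - w) + next_head * w
```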
Original repo: https://github.com/THUDM/CogVideo
CogVideoX-Fun: https://github.com/aigc-apps/CogVideoX-Fun
Controlnet: https://github.com/TheDenk/cogvideox-controlnet