ToonCrafter can interpolate two cartoon images by leveraging the pre-trained image-to-video diffusion priors. Please check our project page and paper for more information.
Showcases (animated results are on the project page):
- Cartoon frame interpolation: input starting frame, input ending frame → generated video
- Sketch-guided interpolation: input starting frame, input ending frame, input sketch guidance → generated video
- Reference-based sketch colorization: input sketch, input reference → colorization results
| Model | Resolution | GPU Mem. & Inference Time (A100, DDIM 50 steps) | Checkpoint |
|---|---|---|---|
| ToonCrafter_512 | 320x512 | TBD (`perframe_ae=True`) | Hugging Face |
Currently, our ToonCrafter supports generating videos of up to 16 frames at a resolution of 512x320. Inference time can be reduced by using fewer DDIM steps.
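For intuition about why fewer DDIM steps are faster: DDIM denoises over a small, evenly spaced subset of the model's training timesteps, and each retained step costs one denoiser forward pass. The snippet below is a generic illustration of that subsampling, not code from this repository, and the 1000-step training schedule is an assumption.

```python
# Generic illustration of DDIM timestep subsampling (not this repo's code).
NUM_TRAIN_TIMESTEPS = 1000  # assumption: a typical 1000-step training schedule

def ddim_timesteps(num_steps):
    """Pick num_steps evenly spaced timesteps to denoise over, high to low."""
    stride = NUM_TRAIN_TIMESTEPS // num_steps
    # Each retained timestep costs one denoiser forward pass, so 25 steps
    # takes roughly half the inference time of 50 steps.
    return list(range(NUM_TRAIN_TIMESTEPS - stride, -1, -stride))

print(len(ddim_timesteps(50)), len(ddim_timesteps(25)))  # -> 50 25
```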
Prerequisites: 3.8 <= Python <= 3.11, CUDA >= 11.3, ffmpeg, and git.
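A quick way to verify these prerequisites is a small check script like the following (an illustrative sketch, not part of the repository; it assumes PyTorch is the CUDA consumer, which the install script presumably sets up):

```python
# Illustrative prerequisite check (not part of the repository).
import shutil
import sys

ok = True
if not ((3, 8) <= sys.version_info[:2] <= (3, 11)):
    print(f"Python {sys.version_info[:2]} is outside the 3.8-3.11 range")
    ok = False
for tool in ("ffmpeg", "git"):
    if shutil.which(tool) is None:
        print(f"{tool} not found on PATH")
        ok = False
try:
    import torch  # CUDA availability is checked via the installed torch build
    if torch.cuda.is_available():
        print(f"torch {torch.__version__}, CUDA {torch.version.cuda}")
    else:
        print("CUDA is not available to PyTorch")
        ok = False
except ImportError:
    # Assumption: PyTorch is installed later by the install script.
    print("PyTorch not installed yet")
print("All prerequisites look OK" if ok else "Some prerequisites are missing")
```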
Install Python and Git first.
Give PowerShell unrestricted script access so the venv can work: run `Set-ExecutionPolicy Unrestricted` and answer "A" (Yes to All). Then clone the repository with `git clone https://github.com/sdbds/ToonCrafter-for-windows`.
Install by running `install.ps1` with PowerShell, or `install-cn.ps1` for users in China.
Download the pretrained ToonCrafter_512 model and put `model.ckpt` at `checkpoints/tooncrafter_512_interp_v1/model.ckpt`.
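Alternatively, the download can be scripted with the `huggingface_hub` client. This is a hedged sketch: the repo id `Doubiiu/ToonCrafter` is an assumption, so point it at wherever the checkpoint is actually hosted.

```python
# Hedged sketch: fetch the checkpoint with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Doubiiu/ToonCrafter",  # assumed Hugging Face repo; adjust if needed
    filename="model.ckpt",
    local_dir="checkpoints/tooncrafter_512_interp_v1",
)
print(f"Checkpoint saved to {path}")
```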
Run inference with `sh scripts/run.sh`.
On Windows, run the GUI with PowerShell via `run_gui.ps1`.
Calm down: our framework opens up the era of generative cartoon interpolation, but due to the variability of the generative video prior, the success rate is not guaranteed.
This project strives to impact the domain of AI-driven video generation positively. Users are granted the freedom to create videos using this tool, but they are expected to comply with local laws and utilize it responsibly. The developers do not assume any responsibility for potential misuse by users.