This repository contains code for the CVPR'24 paper GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos.
## Environment setup

Use the provided `Dockerfile` to build the environment (`docker build -t genhowto .`) or install the packages manually (`pip install diffusers==0.18.2 transformers xformers accelerate`).
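An optional sanity check (our addition, not part of the repository) to confirm the installed packages are importable before running the model:

```bash
# Optional sanity check: verify the pinned diffusers version and transformers import.
python -c "import diffusers, transformers; print(diffusers.__version__)"
```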
## Download GenHowTo model weights

Use the `download_weights.sh` script or download the GenHowTo weights manually:

- `GenHowTo-STATES-96h-v1` for generating state transformations.
- `GenHowTo-ACTIONS-96h-v1` for generating actions.
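A short usage sketch follows; the resulting directory layout shown in the comment is an assumption inferred from the `--weights_path` used in the prediction command below, not guaranteed by the script.

```bash
# Download the weights; the layout below is assumed based on the example
# --weights_path weights/GenHowTo-STATES-96h-v1 used later in this README.
bash download_weights.sh
ls weights/
# expected (assumption): GenHowTo-STATES-96h-v1  GenHowTo-ACTIONS-96h-v1
```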
## Get predictions
```bash
python genhowto.py --weights_path weights/GenHowTo-STATES-96h-v1 \
                   --input_image path/to/image.jpg \
                   --prompt "your prompt" \
                   --output_path path/to/output.jpg \
                   --num_images 1 \
                   [--num_steps_to_skip 2]
```
`--num_steps_to_skip` is the number of steps to skip in the diffusion process. The higher the number, the more similar the generated image will be to the input image.

To replicate our evaluation, please follow the instructions in the `evaluation` directory.
## Citation

```bibtex
@inproceedings{soucek2024genhowto,
  title={GenHowTo: Learning to Generate Actions and State Transformations from Instructional Videos},
  author={Sou\v{c}ek, Tom\'{a}\v{s} and Damen, Dima and Wray, Michael and Laptev, Ivan and Sivic, Josef},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2024}
}
```
## Acknowledgements

This work was partly supported by the EU Horizon Europe Programme under the project EXA4MIND (No. 101092944) and the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90140). Part of this work was done within the University of Bristol’s Machine Learning and Computer Vision (MaVi) Summer Research Program 2023. Research at the University of Bristol is supported by EPSRC UMPIRE (EP/T004991/1) and EPSRC PG Visual AI (EP/T028572/1).