CORUN 🤝 Colabator
NeurIPS 2024 Spotlight ✨
This is the official PyTorch code for the paper.
Real-world Image Dehazing with Coherence-based Label Generator and Cooperative Unfolding Network
Chengyu Fang, Chunming He, Fengyang Xiao, Yulun Zhang, Longxiang Tang, Yuelin Zhang, Kai Li, and Xiu Li
Advances in Neural Information Processing Systems 2024
⚠️ We found that the previous installation script installed incorrect versions of PyTorch and NumPy, which led to erroneous experimental results. Users who used the repository code before 2024-10-23 should reconfigure the environment with the new script and make sure PyTorch 2.1.2 is installed.
We provide two types of dataset loading functions for model training: one loads clean images and corresponding depth maps to generate hazy images using the RIDCP Data Generation Pipeline, and the other directly loads paired clean and degraded images. You can choose the appropriate method based on your dataset and task.
For the haze generation method, we support reading the RIDCP500 dataset (where depth maps are stored as .npy files) as well as the OTS/ITS datasets (where depth maps are stored as .mat files). If your dataset contains paired clean images and depth maps, you can also use your own dataset. If your dataset does not include depth maps, you can generate corresponding depth maps using methods such as RA-Depth. For the paired degraded-clean method, you can use any paired degraded-clean image pairs for training and testing.
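For illustration, here is a minimal, hypothetical sketch of the two ingredients involved in the haze generation method: loading a depth map in either storage format, and synthesizing haze with the atmospheric scattering model that pipelines like RIDCP's build on. The .mat key name and the fixed beta/A values are assumptions; the actual pipeline samples these parameters and applies additional degradations.

```python
import numpy as np
import scipy.io as sio

def load_depth(path):
    """Load a depth map stored as .npy (e.g. RIDCP500) or .mat (e.g. OTS/ITS)."""
    if path.endswith(".npy"):
        return np.load(path)
    if path.endswith(".mat"):
        # ASSUMPTION: the depth array is stored under the key "depth";
        # inspect your .mat files and adjust the key if needed.
        return sio.loadmat(path)["depth"]
    raise ValueError(f"Unsupported depth format: {path}")

def synthesize_haze(clean, depth, beta=1.0, A=0.8):
    """Toy haze synthesis via the atmospheric scattering model:
    I(x) = J(x) * t(x) + A * (1 - t(x)), with t(x) = exp(-beta * d(x)).
    The real RIDCP pipeline randomizes beta/A and adds further degradations.
    """
    t = np.exp(-beta * depth)[..., None]  # per-pixel transmission, broadcast over RGB
    return clean * t + A * (1.0 - t)
```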
git clone https://github.com/cnyvfang/CORUN-Colabator.git
conda create -n corun_colabator python=3.9
conda activate corun_colabator
# If necessary, replace pytorch-cuda=12.1 with a version compatible with your GPU driver.
conda install pytorch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 pytorch-cuda=12.1 -c pytorch -c nvidia
cd basicsr_modified
pip install tb-nightly -i https://mirrors.aliyun.com/pypi/simple # Run this line only if you are in mainland China
pip install -r requirements.txt
python setup.py develop
cd ..
pip install -r requirements.txt
python setup.py develop
python init_modules.py
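Given the version warning above, a quick sanity check of the environment may save you a failed run. This snippet is not part of the repository, just an illustrative check:

```python
import numpy as np
import torch

print(torch.__version__)           # expect 2.1.2 (possibly with a CUDA build tag)
print(np.__version__)
print(torch.cuda.is_available())   # should be True on a GPU machine
assert torch.__version__.startswith("2.1.2"), "Wrong PyTorch; rerun the setup script"
```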
Download the pre-trained da-clip weights and place them in ./pretrained_weights/. You can download the daclip weights we used from Google Drive. You can also choose other types of CLIP models and corresponding weights from OpenCLIP; if you do, don't forget to modify your options accordingly.
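If you do swap in another CLIP model, loading it through OpenCLIP looks roughly like this. The model name and pretrained tag below are placeholders, not the weights we used, so keep them in sync with your option files:

```python
import open_clip

# Placeholder model/pretrained names -- choose ones matching your options.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
```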
sh options/train_corun_with_depth.sh
sh options/train_colabator_with_transmission.sh
✨ To fine-tune your own model using Colabator, you only need to add your network to corun_colabator/archs, define your own configuration file following sample_options, and run the script.
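As a rough sketch (assuming the registry mechanism of the underlying BasicSR framework, and with a hypothetical file and class name), registering a custom network could look like this:

```python
# corun_colabator/archs/my_arch.py  (hypothetical file)
import torch.nn as nn
from basicsr.utils.registry import ARCH_REGISTRY

@ARCH_REGISTRY.register()
class MyDehazeNet(nn.Module):
    """Hypothetical arch; its class name must match the network type
    referenced in your option file."""
    def __init__(self, in_ch=3, out_ch=3, feat=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, 1, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, out_ch, 3, 1, 1),
        )

    def forward(self, x):
        return self.body(x)
```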
Download the pre-trained CORUN weights and place them in ./pretrained_weights/. You can download the CORUN weights from Google Drive. (We will update them before the camera-ready version.)
CUDA_VISIBLE_DEVICES=0 sh options/valid.corun.sh
# OR
CUDA_VISIBLE_DEVICES=0 python3 corun_colabator/simple_test.py \
  --opt options/test_corun.yml \
  --input_dir /path/to/testset/images \
  --result_dir ./results/CORUN \
  --weights ./pretrained_weights/CORUN.pth \
  --dataset RTTS
Calculate the NIMA and BRISQUE results.
CUDA_VISIBLE_DEVICES=0 python evaluate.py --input_dir /path/to/results
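If you prefer to compute these no-reference metrics yourself rather than through evaluate.py, one option (an assumption, not necessarily what the script uses) is the pyiqa package; the result path below is a placeholder:

```python
import torch
import pyiqa

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
nima = pyiqa.create_metric("nima", device=device)        # higher is better
brisque = pyiqa.create_metric("brisque", device=device)  # lower is better

img = "./results/CORUN/example.png"  # placeholder path
print(float(nima(img)), float(brisque(img)))
```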
We achieved state-of-the-art performance on RTTS and Fattal's datasets and corresponding downstream tasks. More results can be found in the paper. To quickly use the results of our experiments without manual inference or retraining, you can download all files dehazed/restored by our model from Google Drive.
Visual comparison on RTTS
Visual comparison on Fattal’s data
Visual comparison of object detection on RTTS
If you find the code helpful in your research or work, please cite the following paper.
@misc{fang2024realworld,
title={Real-world Image Dehazing with Coherence-based Label Generator and Cooperative Unfolding Network},
author={Chengyu Fang and Chunming He and Fengyang Xiao and Yulun Zhang and Longxiang Tang and Yuelin Zhang and Kai Li and Xiu Li},
year={2024},
eprint={2406.07966},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
The code is based on BasicSR. Please also follow its license. Thanks for their awesome work.