IS-MVSNet
1.0.0
Our paper has been accepted to ECCV 2022!
IS-MVSNet, short for importance-sampling-based MVSNet, is a simple yet effective multi-view reconstruction method.
This repository provides a MindSpore-based implementation of IS-MVSNet. You are welcome to star and watch this repository for further updates.
# Centos 7.9.2009 is recommended.
# CUDA == 11.1, GCC == 7.3.0, Python == 3.7.9
conda create -n ismvsnet python=3.7.9
conda activate ismvsnet
conda install mindspore-gpu=1.7.0 cudatoolkit=11.1 -c mindspore -c conda-forge # Install mindspore == 1.7.0
pip install numpy opencv-python tqdm Pillow
The pretrained weights for the backbone are already placed under ./weights. You can download the weights for stages 1~3 from the pretrained weights.
DATAROOT
└───data
| └───tankandtemples
| └───intermediate
| └───Playground
| │ └───rmvs_scan_cams
| │ │ 00000000_cam.txt
| │ │ 00000001_cam.txt
| │ │ ...
| │ └───images
| │ │ 00000000.jpg
| │ │ 00000001.jpg
| │ │ ...
| │ └───pair.txt
| │ └───Playground.log
| └───Family
| └───...
| └───advanced
└───weights
└───src
└───validate.py
└───point_cloud_generator.py
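The `rmvs_scan_cams/*_cam.txt` and `pair.txt` files above follow the layout commonly used by MVSNet-style pipelines: a camera file holds a 4x4 extrinsic matrix, a 3x3 intrinsic matrix, and a depth range line, while `pair.txt` lists, for each reference view, its candidate source views with matching scores. As a rough illustration (the exact parsers live in `src/`; the function names below are hypothetical), they can be read like this:

```python
import numpy as np

def read_cam_file(path):
    """Parse an MVSNet-style *_cam.txt: 'extrinsic' + 16 values,
    'intrinsic' + 9 values, then a depth line (min, interval)."""
    with open(path) as f:
        tokens = f.read().split()
    e = tokens.index("extrinsic")
    extrinsic = np.array(tokens[e + 1:e + 17], dtype=np.float64).reshape(4, 4)
    i = tokens.index("intrinsic")
    intrinsic = np.array(tokens[i + 1:i + 10], dtype=np.float64).reshape(3, 3)
    depth_min, depth_interval = map(float, tokens[i + 10:i + 12])
    return extrinsic, intrinsic, depth_min, depth_interval

def read_pair_file(path):
    """Parse pair.txt: total view count, then per reference view its id,
    the number of source views, and (src_id, score) pairs."""
    with open(path) as f:
        tokens = f.read().split()
    n = int(tokens[0])
    pairs, pos = {}, 1
    for _ in range(n):
        ref, n_src = int(tokens[pos]), int(tokens[pos + 1])
        pos += 2
        # keep only the source ids, skipping the interleaved scores
        pairs[ref] = [int(tokens[pos + 2 * k]) for k in range(n_src)]
        pos += 2 * n_src
    return pairs
```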
python validate.py
The predicted depth maps will be saved to 'results/{dataset_name}/{split}/depth'.
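MVS pipelines in this family commonly store per-view depth maps as PFM (Portable Float Map) files; assuming that format here (check the repo's I/O code for the exact one it uses), a minimal round-trip sketch looks like this:

```python
import numpy as np

def write_pfm(path, image, scale=1.0):
    """Write a float32 HxW depth map as a little-endian grayscale PFM."""
    image = np.ascontiguousarray(image.astype(np.float32))
    with open(path, "wb") as f:
        f.write(b"Pf\n")  # 'Pf' = single-channel float map
        f.write(f"{image.shape[1]} {image.shape[0]}\n".encode())
        f.write(f"{-scale}\n".encode())  # negative scale => little-endian
        f.write(image[::-1].tobytes())   # PFM stores rows bottom-up

def read_pfm(path):
    """Read a grayscale PFM back into a float32 HxW array."""
    with open(path, "rb") as f:
        assert f.readline().strip() == b"Pf"
        w, h = map(int, f.readline().split())
        scale = float(f.readline())
        data = np.frombuffer(f.read(), "<f4" if scale < 0 else ">f4")
        return data.reshape(h, w)[::-1].copy()  # flip back to top-down
```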
python point_cloud_generator.py
The fused point clouds will be saved to 'results/{dataset_name}/{split}/points'.
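Fused point clouds in MVS benchmarks are typically exchanged as PLY files. As a sketch of what such an export involves (the actual writer lives in `point_cloud_generator.py`; this standalone function is only illustrative), an ASCII PLY with optional per-point colors can be written like this:

```python
import numpy as np

def write_ply(path, points, colors=None):
    """Write an Nx3 float array (optionally with Nx3 uint8 colors) as ASCII PLY."""
    n = len(points)
    header = ["ply", "format ascii 1.0", f"element vertex {n}",
              "property float x", "property float y", "property float z"]
    if colors is not None:
        header += ["property uchar red", "property uchar green", "property uchar blue"]
    header.append("end_header")
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for i in range(n):
            row = "{:.6f} {:.6f} {:.6f}".format(*points[i])
            if colors is not None:
                row += " {} {} {}".format(*(int(c) for c in colors[i]))
            f.write(row + "\n")
```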
If you find this repository helpful, please consider citing our paper:
@InProceedings{ismvsnet,
author="Wang, Likang
and Gong, Yue
and Ma, Xinjun
and Wang, Qirui
and Zhou, Kaixuan
and Chen, Lei",
editor="Avidan, Shai
and Brostow, Gabriel
and Ciss{\'e}, Moustapha
and Farinella, Giovanni Maria
and Hassner, Tal",
title="IS-MVSNet: Importance Sampling-Based MVSNet",
booktitle="Computer Vision -- ECCV 2022",
year="2022",
publisher="Springer Nature Switzerland",
address="Cham",
pages="668--683",
abstract="This paper presents a novel coarse-to-fine multi-view stereo (MVS) algorithm called importance-sampling-based MVSNet (IS-MVSNet) to address a crucial problem of limited depth resolution adopted by current learning-based MVS methods. We proposed an importance-sampling module for sampling candidate depth, effectively achieving higher depth resolution and yielding better point-cloud results while introducing no additional cost. Furthermore, we proposed an unsupervised error distribution estimation method for adjusting the density variation of the importance-sampling module. Notably, the proposed sampling module does not require any additional training and works reasonably well with the pre-trained weights of the baseline model. Our proposed method leads to up to {$}20{\times}{$} promotion on the most refined depth resolution, thus significantly benefiting most scenarios and excellently superior on fine details. As a result, IS-MVSNet outperforms all the published papers on TNT's intermediate benchmark with an F-score of 62.82{\%}. Code is available at github.com/NoOneUST/IS-MVSNet.",
isbn="978-3-031-19824-3"
}