IS-MVSNet
1.0.0
Our paper has been accepted as a conference paper at ECCV 2022!
IS-MVSNet, also known as Importance Sampling-based MVSNet, is a simple yet effective multi-view reconstruction method.
This repo provides a MindSpore-based implementation of IS-MVSNet. You can star and watch this repo for further updates.
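The core idea, per the paper's abstract, is to sample candidate depths non-uniformly, placing more hypotheses where the matching distribution concentrates its mass. Below is a minimal NumPy sketch of that general idea (per-pixel inverse-CDF sampling); it is illustrative only, not the repo's actual implementation, and all names in it are made up for the example.

# Minimal sketch of importance-based depth sampling (illustrative only,
# not the repo's implementation): next-stage candidates are drawn via the
# inverse CDF of the per-pixel probability over coarse hypotheses, so
# high-probability depth ranges receive denser samples.
import numpy as np

def sample_depth_candidates(depth_hyps, probs, num_samples):
    # depth_hyps: (D,) ascending coarse depth hypotheses
    # probs: (D,) per-pixel probabilities over those hypotheses
    cdf = np.cumsum(probs)
    cdf /= cdf[-1]                                    # normalize to a proper CDF
    u = (np.arange(num_samples) + 0.5) / num_samples  # evenly spaced quantiles
    return np.interp(u, cdf, depth_hyps)              # inverse-CDF mapping

# Probability mass concentrated near 2.0 m yields candidates packed there.
hyps = np.linspace(1.0, 3.0, 8)
p = np.array([0.01, 0.02, 0.05, 0.40, 0.40, 0.07, 0.03, 0.02])
print(sample_depth_candidates(hyps, p, 6))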
# Centos 7.9.2009 is recommended.
# CUDA == 11.1, GCC == 7.3.0, Python == 3.7.9
conda create -n ismvsnet python=3.7.9
conda activate ismvsnet
conda install mindspore-gpu=1.7.0 cudatoolkit=11.1 -c mindspore -c conda-forge # Install mindspore == 1.7.0
pip install numpy opencv-python tqdm Pillow
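As a quick sanity check (not part of the official instructions), you can verify that the expected MindSpore version is installed and that the GPU backend is usable:

# Sanity check for the environment above: confirm the MindSpore version
# and that the GPU device target can be selected.
import mindspore
from mindspore import context

print(mindspore.__version__)                 # expect 1.7.0
context.set_context(device_target="GPU")     # fails if the GPU backend is unusable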
The pre-trained backbone weights have already been placed under ./weights. The weights for stages 1 to 3 can be downloaded from the pre-trained weights link.
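For reference, restoring a checkpoint in MindSpore typically looks like the sketch below; the file name backbone.ckpt is a placeholder, not a path guaranteed to exist in this repo.

# Illustrative sketch of loading pre-trained weights with MindSpore's
# standard checkpoint API; "backbone.ckpt" is a hypothetical file name.
from mindspore import load_checkpoint, load_param_into_net

params = load_checkpoint("./weights/backbone.ckpt")  # hypothetical file name
# net = build_is_mvsnet()          # construct the model from src/ first (hypothetical helper)
# load_param_into_net(net, params) # then copy the parameters into it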
DATAROOT
└───data
| └───tankandtemples
| └───intermediate
| └───Playground
| │ └───rmvs_scan_cams
| │ │ 00000000_cam.txt
| │ │ 00000001_cam.txt
| │ │ ...
| │ └───images
| │ │ 00000000.jpg
| │ │ 00000001.jpg
| │ │ ...
| │ └───pair.txt
| │ └───Playground.log
| └───Family
| └───...
| └───advanced
└───weights
└───src
└───validate.py
└───point_cloud_generator.py
python validate.py
The depth predictions will be saved to results/{dataset_name}/{split}/depth.
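To eyeball one predicted depth map, something like the following works, assuming the maps are stored as NumPy arrays; the .npy extension and the file path here are assumptions, so adjust them to whatever validate.py actually writes.

# Illustrative only: colorize one predicted depth map with OpenCV.
# The path and the .npy format are assumptions, not guaranteed by the repo.
import cv2
import numpy as np

depth = np.load("results/tankandtemples/intermediate/depth/00000000.npy")  # hypothetical path
vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_vis.png", cv2.applyColorMap(vis, cv2.COLORMAP_JET))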
python point_cloud_generator.py
The fused point clouds will be saved to results/{dataset_name}/{split}/points.
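To inspect a fused point cloud, assuming it is written as a .ply file (the standard format for Tanks and Temples submissions), open3d can be used; note it is an extra dependency not listed in the setup above, and the path below is an assumption.

# Illustrative only: view a fused point cloud, assuming .ply output.
# open3d is an extra dependency (pip install open3d); the path is hypothetical.
import open3d as o3d

pcd = o3d.io.read_point_cloud("results/tankandtemples/intermediate/points/Playground.ply")
print(pcd)                                   # prints the point count
o3d.visualization.draw_geometries([pcd])     # opens an interactive viewer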
If you find this repo helpful, please consider citing our paper:
@InProceedings{ismvsnet,
author="Wang, Likang
and Gong, Yue
and Ma, Xinjun
and Wang, Qirui
and Zhou, Kaixuan
and Chen, Lei",
editor="Avidan, Shai
and Brostow, Gabriel
and Ciss{\'e}, Moustapha
and Farinella, Giovanni Maria
and Hassner, Tal",
title="IS-MVSNet: Importance Sampling-Based MVSNet",
booktitle="Computer Vision -- ECCV 2022",
year="2022",
publisher="Springer Nature Switzerland",
address="Cham",
pages="668--683",
abstract="This paper presents a novel coarse-to-fine multi-view stereo (MVS) algorithm called importance-sampling-based MVSNet (IS-MVSNet) to address a crucial problem of limited depth resolution adopted by current learning-based MVS methods. We proposed an importance-sampling module for sampling candidate depth, effectively achieving higher depth resolution and yielding better point-cloud results while introducing no additional cost. Furthermore, we proposed an unsupervised error distribution estimation method for adjusting the density variation of the importance-sampling module. Notably, the proposed sampling module does not require any additional training and works reasonably well with the pre-trained weights of the baseline model. Our proposed method leads to up to $20\times$ promotion on the most refined depth resolution, thus significantly benefiting most scenarios and excellently superior on fine details. As a result, IS-MVSNet outperforms all the published papers on TNT's intermediate benchmark with an F-score of 62.82\%. Code is available at github.com/NoOneUST/IS-MVSNet.",
isbn="978-3-031-19824-3"
}