DOI: 10.1145/3208806.3208821

Super-resolution of interpolated downsampled semi-dense depth map

Published: 20 June 2018

ABSTRACT

We study depth map reconstruction for the specific task of fast, rough depth approximation from sparse depth samples obtained with low-cost depth sensors or SLAM algorithms. We propose a model that interpolates downsampled semi-dense depth values and then performs super-resolution. We compare our method with state-of-the-art approaches that transfer RGB information to depth. The results suggest that the proposed approach can be used to approximately estimate high-resolution depth maps.
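
As a rough illustration of the two-stage pipeline described above (interpolate a downsampled semi-dense depth map, then super-resolve it), the following minimal Python sketch uses classical interpolation and bicubic upsampling in place of the paper's learned model; the function name `rough_depth_estimate`, the SciPy routines, and the chosen resolutions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-stage pipeline shape from the abstract:
# (1) interpolate sparse/semi-dense depth samples on a downsampled grid,
# (2) upsample ("super-resolve") the result to the target resolution.
# Classical interpolators stand in for the paper's learned model.
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import zoom

def rough_depth_estimate(sample_uv, sample_depth, low_res, high_res):
    """sample_uv: (N, 2) pixel coordinates (row, col) on the low-res grid.
    sample_depth: (N,) depth values at those coordinates.
    low_res, high_res: (H, W) shapes of the intermediate and output maps."""
    # Stage 1: fill the downsampled grid from the sparse samples.
    rows, cols = np.mgrid[0:low_res[0], 0:low_res[1]]
    dense_low = griddata(sample_uv, sample_depth, (rows, cols), method="linear")
    # Linear interpolation leaves NaNs outside the convex hull; fall back to nearest.
    nearest = griddata(sample_uv, sample_depth, (rows, cols), method="nearest")
    dense_low = np.where(np.isnan(dense_low), nearest, dense_low)
    # Stage 2: upsample to the target resolution (bicubic as a placeholder).
    factors = (high_res[0] / low_res[0], high_res[1] / low_res[1])
    return zoom(dense_low, factors, order=3)

# Example: 200 random depth samples -> 60x80 intermediate map -> 480x640 output.
rng = np.random.default_rng(0)
uv = rng.uniform([0, 0], [60, 80], size=(200, 2))
d = rng.uniform(0.5, 5.0, size=200)
depth_hr = rough_depth_estimate(uv, d, low_res=(60, 80), high_res=(480, 640))
print(depth_hr.shape)  # (480, 640)
```

In the paper both stages are presumably learned; the sketch only shows how sparse samples, an intermediate low-resolution grid, and the upsampling step fit together.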

  • Published in

    Web3D '18: Proceedings of the 23rd International ACM Conference on 3D Web Technology
    June 2018
    199 pages
    ISBN: 9781450358002
    DOI: 10.1145/3208806

    Copyright © 2018 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Qualifiers

    • abstract

    Acceptance Rates

    Overall Acceptance Rate: 27 of 71 submissions, 38%
