DOI: 10.1145/3387168.3387171

Detecting Protuberant Saliency from a Depth Image

Published: 25 May 2020

ABSTRACT

Human visual attention enables quick perception of noticeable regions in an image, and computational models of it have been studied in computer vision for decades. Detecting visual saliency in a scene, for example, makes it possible to estimate which details humans will find interesting before the scene is fully analyzed, and it forms an important basis for many downstream visual detection and tracking tasks. With the growing availability of low-cost 3D sensors, many studies have examined how to incorporate 3D information into visual attention models. Despite the many advantages of depth data, however, relatively few studies of visual attention in depth images have explored how to fully exploit the structural information encoded in the depth data itself. In this paper, Protuberant saliency is proposed to detect saliency in a depth image effectively. The proposed approach exploits the protuberance information inherent in a depth structure, estimating human eye fixations in a depth scene directly. It is robust to isometric deformation and to varying orientation of a depth region. Experimental results show that the rotation-invariant and flexible architecture of Protuberant saliency remains effective under these challenging conditions.
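
The abstract does not spell out how protuberance is measured, so the sketch below is only a hypothetical illustration of the general idea, not the paper's algorithm: score each pixel of a depth map by how much closer it is to the camera than the average depth on a surrounding ring of neighbors. Aggregating over the whole ring makes the score invariant to in-plane rotation, matching the rotation-invariance property the abstract claims; the radius, sample count, and toy data below are arbitrary assumptions for illustration.

```python
# Hypothetical center-versus-ring protuberance score (NOT the paper's method).
import numpy as np

def ring_offsets(radius, n_samples=16):
    """Integer pixel offsets sampled uniformly on a circle of given radius."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    return [(int(round(radius * np.sin(a))), int(round(radius * np.cos(a))))
            for a in angles]

def protuberance_map(depth, radius=8):
    """Score each pixel by how much nearer it is than its surrounding ring.

    depth: 2-D array of depth values (smaller = closer to the camera).
    Returns a map of the same shape, clipped at zero so only regions
    protruding toward the camera respond.
    """
    offsets = ring_offsets(radius)
    ring_sum = np.zeros_like(depth, dtype=np.float64)
    for dy, dx in offsets:
        # Shift the depth map by (dy, dx). np.roll wraps around at the
        # borders, which is acceptable for a sketch; a real implementation
        # should pad the image instead.
        ring_sum += np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
    ring_mean = ring_sum / len(offsets)
    # Positive where the center is closer than its ring, i.e. protuberant.
    return np.clip(ring_mean - depth, 0.0, None)

if __name__ == "__main__":
    # Toy example: a flat plane at depth 100 with a bump protruding to 80.
    depth = np.full((64, 64), 100.0)
    depth[24:40, 24:40] = 80.0
    saliency = protuberance_map(depth, radius=12)
    print(saliency.max())  # the bump region should score highest
```

Because the response is pooled over all ring directions rather than compared against a single neighbor, rotating a protuberant region in the image plane leaves its score essentially unchanged, which is one plausible reading of the rotation-invariance claim in the abstract.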

Published in

ICVISP 2019: Proceedings of the 3rd International Conference on Vision, Image and Signal Processing
August 2019, 584 pages
ISBN: 9781450376259
DOI: 10.1145/3387168

Copyright © 2019 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

      Qualifiers

      • research-article
      • Research
      • Refereed limited

Acceptance Rates

ICVISP 2019 Paper Acceptance Rate: 126 of 277 submissions, 45%
Overall Acceptance Rate: 186 of 424 submissions, 44%