ABSTRACT
Human visual attention enables quick perception of noticeable regions in an image, and computational models of visual attention have been actively studied in computer vision for decades. Detecting visual saliency in a scene, for example, makes it possible to estimate in advance which details humans find interesting, and this in turn forms an important basis for a variety of subsequent tasks in visual detection and tracking. With the increasing availability of low-cost 3D sensors, many studies have examined how to incorporate 3D information into visual attention models. Despite the many advantages of depth data, relatively few studies on visual attention in depth images have delved into how to fully exploit the structural information of depth perception from the depth data itself. In this paper, Protuberant saliency is proposed to effectively detect saliency in a depth image. The proposed approach exploits the inherent protuberance information encoded in a depth structure, and directly estimates human eye fixation in a depth scene. It is robust to isometric deformation and varying orientation of a depth region. The experimental results show that the rotation-invariant and flexible architecture of Protuberant saliency remains effective under these challenging conditions.
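The core intuition above, that regions protruding toward the viewer in the depth structure attract fixation, can be illustrated with a minimal sketch. The function below is a hypothetical simplification and not the paper's actual formulation: it scores each pixel by how much closer to the camera it is than the mean depth of its local neighborhood, assuming smaller depth values mean closer to the sensor.

```python
import numpy as np

def protuberance_saliency(depth, radius=3):
    """Toy depth-saliency sketch (illustrative only, not the paper's method).

    A pixel is scored as 'protuberant' when its depth is smaller (closer
    to the camera) than the mean depth of its square neighborhood.
    """
    h, w = depth.shape
    k = 2 * radius + 1
    # replicate border values so windows at the edges stay full-sized
    pad = np.pad(depth, radius, mode='edge')
    saliency = np.zeros((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            window = pad[y:y + k, x:x + k]
            # protuberance: neighborhood mean depth minus center depth,
            # clipped at zero so recessed regions score nothing
            saliency[y, x] = max(0.0, window.mean() - depth[y, x])
    m = saliency.max()
    return saliency / m if m > 0 else saliency
```

A square window is used here purely for brevity; a circular (isotropic) neighborhood would better reflect the rotation-invariant behavior the abstract describes.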
Index Terms: Detecting Protuberant Saliency from a Depth Image