MAS3K: An Open Dataset for Marine Animal Segmentation

  • Conference paper
  • In: Benchmarking, Measuring, and Optimizing (Bench 2020)

Abstract

Recent advances in marine animal research have created significant demand for fine-grained marine animal segmentation techniques. Deep learning has shown remarkable success in a variety of object segmentation tasks. However, deep learning-based marine animal segmentation remains largely uninvestigated due to the lack of a marine animal dataset. To this end, we carefully construct the first open Marine Animal Segmentation dataset, called MAS3K, which consists of more than three thousand images of diverse marine animals, with common and camouflaged appearances, captured under different underwater conditions, such as low illumination, turbid water, and photographic distortion. Each image in the MAS3K dataset has rich annotations, including an object-level segmentation annotation, a category name, an animal camouflage method (if applicable), and attribute annotations. In addition, based on MAS3K, we systematically evaluate six cutting-edge object segmentation models using five widely used metrics. We perform a comprehensive analysis and report detailed qualitative and quantitative benchmark results in the paper. Our work provides valuable insights into marine animal segmentation and will effectively boost development in this direction.
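The abstract describes benchmarking segmentation models by comparing predicted masks against the dataset's object-level annotations. As a minimal illustration of how such mask-quality scoring typically works, the sketch below computes two common metrics, mean absolute error (MAE) and intersection-over-union (IoU), for a single prediction; the function name and threshold are hypothetical choices, not the paper's actual evaluation code, and the paper's own five metrics may differ.

```python
import numpy as np

def evaluate_mask(pred, gt, threshold=0.5):
    """Score one predicted soft mask against a binary ground-truth mask.

    pred: float array with values in [0, 1].
    gt:   binary (0/1 or bool) array of the same shape.
    Returns (mae, iou), two widely used mask-quality metrics.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=bool)

    # MAE: mean absolute difference between the soft prediction
    # and the binary ground truth, averaged over all pixels.
    mae = np.abs(pred - gt.astype(np.float64)).mean()

    # IoU: binarize the prediction, then divide the overlap area
    # by the union area of the two masks.
    pred_bin = pred >= threshold
    inter = np.logical_and(pred_bin, gt).sum()
    union = np.logical_or(pred_bin, gt).sum()
    iou = inter / union if union > 0 else 1.0

    return mae, iou
```

In a benchmark setting, these per-image scores would be averaged over the whole test split for each model under comparison.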


Notes

  1. https://unsplash.com/
  2. https://www.google.com/imghp


Author information


Correspondence to Junyu Dong or Geng Chen.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, L., Rigall, E., Dong, J., Chen, G. (2021). MAS3K: An Open Dataset for Marine Animal Segmentation. In: Wolf, F., Gao, W. (eds.) Benchmarking, Measuring, and Optimizing. Bench 2020. Lecture Notes in Computer Science, vol. 12614. Springer, Cham. https://doi.org/10.1007/978-3-030-71058-3_12


  • DOI: https://doi.org/10.1007/978-3-030-71058-3_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-71057-6

  • Online ISBN: 978-3-030-71058-3

  • eBook Packages: Computer Science (R0)
