
Do We Need Sound for Sound Source Localization?

  • Conference paper
Computer Vision – ACCV 2020 (ACCV 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12627)


Abstract

In sound source localization using both visual and aural information, it remains unclear how much the image and sound modalities each contribute to the result; in other words, do we need both image and sound for sound source localization? To address this question, we develop an unsupervised learning system that solves sound source localization by decomposing the task into two steps: (i) “potential sound source localization”, which localizes possible sound sources using only visual information, and (ii) “object selection”, which identifies which objects are actually sounding using aural information. Our overall system achieves state-of-the-art performance in sound source localization, and, more importantly, we find that step (i) achieves comparable performance despite being restricted to visual information. From this observation and further experiments, we show that visual information is dominant in “sound” source localization when evaluated on the currently adopted benchmark dataset. Moreover, we show that the majority of sound-producing objects in this dataset can be identified from visual information alone, and thus that the dataset is inadequate for evaluating a system’s ability to leverage aural information. As an alternative, we present an evaluation protocol that requires both visual and aural information to be leveraged, and we verify this property through several experiments.
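As a rough illustration of the two-step decomposition described above, the following is a minimal PyTorch-style sketch: a visual-only module proposes a heatmap of potential sound sources, and an audio-conditioned module keeps only the locations whose visual features match the audio embedding. All module names, feature dimensions, and the selection rule are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the two-step pipeline from the abstract:
# (i) visual-only localization of potential sound sources,
# (ii) audio-based selection of the sources that are actually sounding.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PotentialSourceLocalizer(nn.Module):
    """Step (i): localize possible sound sources from the image alone."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Toy visual encoder standing in for a pretrained CNN backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv producing a per-location "potential source" score map.
        self.score = nn.Conv2d(feat_dim, 1, 1)

    def forward(self, image: torch.Tensor):
        feats = self.backbone(image)                 # (B, C, H', W')
        heatmap = torch.sigmoid(self.score(feats))   # (B, 1, H', W')
        return feats, heatmap


class SoundingObjectSelector(nn.Module):
    """Step (ii): use audio to decide which potential sources are sounding."""

    def __init__(self, feat_dim: int = 128, audio_dim: int = 64):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, feat_dim)

    def forward(self, visual_feats, heatmap, audio_embedding):
        a = F.normalize(self.audio_proj(audio_embedding), dim=-1)  # (B, C)
        v = F.normalize(visual_feats, dim=1)                       # (B, C, H', W')
        # Cosine similarity between the audio embedding and every location.
        sim = torch.einsum("bc,bchw->bhw", a, v).unsqueeze(1)      # (B, 1, H', W')
        # Keep only potential sources that also match the audio.
        return heatmap * torch.sigmoid(sim)


if __name__ == "__main__":
    image = torch.randn(2, 3, 224, 224)
    audio = torch.randn(2, 64)  # e.g. a pooled spectrogram embedding
    localizer, selector = PotentialSourceLocalizer(), SoundingObjectSelector()
    feats, potential = localizer(image)
    sounding = selector(feats, potential, audio)
    print(potential.shape, sounding.shape)  # visual-only map vs. audio-refined map

Comparing the visual-only map with the audio-refined map is exactly the kind of contrast the paper's analysis relies on: if the two are nearly identical on a benchmark, the audio stream is contributing little to localization on that data.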



Acknowledgements

This research was supported by JST ACCEL (JPMJAC1602), the JST-Mirai Program (JPMJMI19B2), and JSPS KAKENHI (JP17H06101, JP19H01129, and JP19H04137).

Author information

Correspondence to Takashi Oya or Shohei Iwase.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 18037 KB)


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Oya, T., Iwase, S., Natsume, R., Itazuri, T., Yamaguchi, S., Morishima, S. (2021). Do We Need Sound for Sound Source Localization? In: Ishikawa, H., Liu, CL., Pajdla, T., Shi, J. (eds) Computer Vision – ACCV 2020. ACCV 2020. Lecture Notes in Computer Science, vol 12627. Springer, Cham. https://doi.org/10.1007/978-3-030-69544-6_8


  • DOI: https://doi.org/10.1007/978-3-030-69544-6_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69543-9

  • Online ISBN: 978-3-030-69544-6

  • eBook Packages: Computer Science, Computer Science (R0)
