
Decoding Auditory Saliency from Brain Activity Patterns during Free Listening to Naturalistic Audio Excerpts

  • Original Article
  • Published in: Neuroinformatics

Abstract

In recent years, natural stimuli such as audio excerpts and video streams have received increasing attention in neuroimaging studies. Compared with conventional simple, idealized, and repeated artificial stimuli, natural stimuli contain unrepeated, dynamic, and complex information that is closer to real life. However, there is no direct correspondence between the stimuli and any specific sensory or cognitive function of the brain, which makes it difficult to apply traditional hypothesis-driven analysis methods such as the general linear model (GLM). Moreover, traditional data-driven methods such as independent component analysis (ICA) lack quantitative modeling of the stimuli, which may limit the power of the analysis. In this paper, we propose a sparse-representation-based decoding framework to explore the neural correlates between computational audio features and functional brain activities under free-listening conditions. First, we adopt a biologically plausible auditory saliency feature to quantitatively model the audio excerpts, and we develop a sparse representation/dictionary learning method to learn an over-complete dictionary of brain activity patterns. Then, we reconstruct the auditory saliency features from the learned fMRI-derived dictionaries. After that, a group-wise analysis procedure is conducted to identify the associated brain regions and networks. Experiments showed that the auditory saliency feature can be well decoded from brain activity patterns by our methods, and that the identified brain regions and networks are consistent and meaningful. Finally, our method is evaluated against ICA, and the experimental results demonstrate the superiority of our approach.
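To make the pipeline above concrete, the following is a minimal, illustrative sketch of the three decoding steps: dictionary learning on fMRI signals, sparse reconstruction of the saliency time course, and selection of the associated spatial maps. It is not the authors' implementation: the data shapes are hypothetical placeholders, scikit-learn's MiniBatchDictionaryLearning and Lasso stand in for the paper's online dictionary-learning and sparse-coding routines, and all parameter values are assumptions.

```python
# Illustrative sketch only -- random placeholder data, hypothetical
# shapes, and scikit-learn routines standing in for the paper's own
# online dictionary-learning implementation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Hypothetical data: an fMRI matrix (n_timepoints x n_voxels) and an
# auditory saliency time course (n_timepoints,) computed from the audio.
n_t, n_vox = 200, 2000
fmri = rng.standard_normal((n_t, n_vox))
saliency = rng.standard_normal(n_t)

# Step 1: learn an over-complete dictionary of temporal atoms. Rows of
# the input are voxel time series, so each learned component is a shared
# temporal pattern and its codes form a spatial loading map.
n_atoms = 300  # > n_t, i.e. over-complete in the temporal dimension
dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                 batch_size=256, random_state=0)
loadings = dl.fit_transform(fmri.T)  # (n_voxels, n_atoms) spatial maps
atoms = dl.components_.T             # (n_timepoints, n_atoms) dictionary

# Step 2: sparsely reconstruct the saliency feature from the temporal atoms.
reg = Lasso(alpha=0.05)
reg.fit(atoms, saliency)
reconstructed = reg.predict(atoms)

# Step 3: atoms with nonzero weights point to the spatial maps (columns
# of `loadings`) that would feed a group-wise analysis across subjects.
r, _ = pearsonr(saliency, reconstructed)
print(f"decoding correlation r = {r:.3f}, "
      f"{np.count_nonzero(reg.coef_)} of {n_atoms} atoms selected")
```

In this layout each dictionary atom is a temporal pattern, so the atoms given nonzero Lasso weights indicate which spatial loading maps would be carried forward to the group-wise identification of brain regions and networks.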





Acknowledgements

This work was supported by the National Key R&D Program of China under contract No. 2017YFB1002201. S. Zhao was supported by the Fundamental Research Funds for the Central Universities under grant 3102017zy030 and the China Postdoctoral Science Foundation under grant 2017M613206. J. Han was supported by the National Natural Science Foundation of China under Grants 61473231 and 61522207. T. Liu was supported by NIH R01 DA-033393, NIH R01 AG-042599, NSF CAREER Award IIS-1149260, NSF CBET-1302089, NSF BCS-1439051, and NSF DBI-1564736. L. Guo was supported by the National Natural Science Foundation of China under Grant 61333017.

Author information


Corresponding authors

Correspondence to Junwei Han or Tianming Liu.


About this article


Cite this article

Zhao, S., Han, J., Jiang, X. et al. Decoding Auditory Saliency from Brain Activity Patterns during Free Listening to Naturalistic Audio Excerpts. Neuroinform 16, 309–324 (2018). https://doi.org/10.1007/s12021-018-9358-0

