ISCA Archive Interspeech 2021

AVLnet: Learning Audio-Visual Language Representations from Instructional Videos

Andrew Rouditchenko, Angie Boggust, David Harwath, Brian Chen, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Hilde Kuehne, Rameswar Panda, Rogerio Feris, Brian Kingsbury, Michael Picheny, Antonio Torralba, James Glass

Current methods for learning visually grounded language from videos often rely on text annotation, such as human generated captions or machine generated automatic speech recognition (ASR) transcripts. In this work, we introduce the Audio-Video Language Network (AVLnet), a self-supervised network that learns a shared audio-visual embedding space directly from raw video inputs. To circumvent the need for text annotation, we learn audio-visual representations from randomly segmented video clips and their raw audio waveforms. We train AVLnet on HowTo100M, a large corpus of publicly available instructional videos, and evaluate on image retrieval and video retrieval tasks, achieving state-of-the-art performance. Finally, we perform analysis of AVLnet’s learned representations, showing our model utilizes speech and natural sounds to learn audio-visual concepts.
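To make the training setup concrete, the following is a minimal PyTorch sketch of contrastive audio-visual embedding learning of the kind the abstract describes: an audio branch and a visual branch are projected into a shared space and trained so that matched clip/audio pairs score higher than mismatched in-batch pairs. The encoder architectures, feature dimensions, log-mel audio input, and the symmetric InfoNCE-style loss are illustrative assumptions, not the exact AVLnet configuration reported in the paper.

# Hypothetical sketch of contrastive audio-visual embedding training.
# Encoders, dimensions, and the loss are illustrative assumptions,
# not the exact AVLnet architecture or objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    # Maps an audio segment (represented here as a log-mel spectrogram,
    # an assumption) to a unit-norm embedding.
    def __init__(self, n_mels=40, dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, dim, kernel_size=5, padding=2), nn.ReLU(),
        )
    def forward(self, spec):                     # spec: (B, n_mels, T)
        h = self.conv(spec)                      # (B, dim, T)
        return F.normalize(h.mean(dim=-1), dim=-1)  # temporal pooling -> (B, dim)

class VideoEncoder(nn.Module):
    # Projects pre-extracted visual clip features (e.g., from a 2D/3D CNN)
    # into the same shared embedding space.
    def __init__(self, feat_dim=2048, dim=512):
        super().__init__()
        self.proj = nn.Linear(feat_dim, dim)
    def forward(self, feats):                    # feats: (B, feat_dim)
        return F.normalize(self.proj(feats), dim=-1)

def contrastive_loss(a, v, temperature=0.07):
    # Symmetric InfoNCE over in-batch negatives: the matched audio/clip
    # pair on the diagonal is the positive for each row and column.
    logits = a @ v.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random tensors standing in for one training batch.
audio_enc, video_enc = AudioEncoder(), VideoEncoder()
spec = torch.randn(8, 40, 200)                   # 8 audio segments
feats = torch.randn(8, 2048)                     # 8 corresponding clip features
loss = contrastive_loss(audio_enc(spec), video_enc(feats))
loss.backward()

In this kind of setup, retrieval at test time reduces to nearest-neighbor search in the shared space: embed a query clip (or audio segment) and rank candidates by cosine similarity.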


doi: 10.21437/Interspeech.2021-1312

Cite as: Rouditchenko, A., Boggust, A., Harwath, D., Chen, B., Joshi, D., Thomas, S., Audhkhasi, K., Kuehne, H., Panda, R., Feris, R., Kingsbury, B., Picheny, M., Torralba, A., Glass, J. (2021) AVLnet: Learning Audio-Visual Language Representations from Instructional Videos. Proc. Interspeech 2021, 1584-1588, doi: 10.21437/Interspeech.2021-1312

@inproceedings{rouditchenko21_interspeech,
  author={Andrew Rouditchenko and Angie Boggust and David Harwath and Brian Chen and Dhiraj Joshi and Samuel Thomas and Kartik Audhkhasi and Hilde Kuehne and Rameswar Panda and Rogerio Feris and Brian Kingsbury and Michael Picheny and Antonio Torralba and James Glass},
  title={{AVLnet: Learning Audio-Visual Language Representations from Instructional Videos}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={1584--1588},
  doi={10.21437/Interspeech.2021-1312},
  issn={2308-457X}
}