In this paper, we present a novel system that separates the voice of a target speaker from multi-speaker signals by making use of a reference signal from the target speaker. We achieve this by training two separate neural networks: (1) a speaker recognition network that produces speaker-discriminative embeddings; (2) a spectrogram masking network that takes both the noisy spectrogram and the speaker embedding as input, and produces a mask. Our system significantly reduces the speech recognition WER on multi-speaker signals, with minimal WER degradation on single-speaker signals.
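The masking step described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's architecture: the real masking network is a deep model, whereas here a single hypothetical linear layer (`W`) stands in for it, and all shapes are illustrative. The sketch only shows the data flow: each spectrogram frame is concatenated with the target speaker's embedding, mapped to a soft mask in (0, 1), and the mask is multiplied element-wise with the noisy magnitude spectrogram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes: T frames, F frequency bins, D-dim speaker embedding.
T, F, D = 100, 257, 256

noisy_mag = np.abs(rng.standard_normal((T, F)))  # noisy magnitude spectrogram
d_vector = rng.standard_normal(D)                # target-speaker embedding
d_vector /= np.linalg.norm(d_vector)             # embeddings are L2-normalized

# Hypothetical single linear layer standing in for the masking network.
W = rng.standard_normal((F + D, F)) * 0.01

# Concatenate the (tiled) speaker embedding onto every frame.
frames = np.concatenate([noisy_mag, np.tile(d_vector, (T, 1))], axis=1)

# Sigmoid squashes the output into (0, 1), giving a soft mask.
mask = 1.0 / (1.0 + np.exp(-(frames @ W)))

# Element-wise masking keeps only the target speaker's energy.
enhanced_mag = mask * noisy_mag
```

Because the mask lies in (0, 1), it can only attenuate time-frequency bins, never amplify them; the enhanced spectrogram is then passed downstream (e.g. to a recognizer) in place of the noisy one.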
Cite as: Wang, Q., Muckenhirn, H., Wilson, K., Sridhar, P., Wu, Z., Hershey, J.R., Saurous, R.A., Weiss, R.J., Jia, Y., Moreno, I.L. (2019) VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking. Proc. Interspeech 2019, 2728-2732, doi: 10.21437/Interspeech.2019-1101
@inproceedings{wang19h_interspeech,
  author={Quan Wang and Hannah Muckenhirn and Kevin Wilson and Prashant Sridhar and Zelin Wu and John R. Hershey and Rif A. Saurous and Ron J. Weiss and Ye Jia and Ignacio Lopez Moreno},
  title={{VoiceFilter: Targeted Voice Separation by Speaker-Conditioned Spectrogram Masking}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={2728--2732},
  doi={10.21437/Interspeech.2019-1101}
}