ISCA Archive Interspeech 2019

Towards Achieving Robust Universal Neural Vocoding

Jaime Lorenzo-Trueba, Thomas Drugman, Javier Latorre, Thomas Merritt, Bartosz Putrycz, Roberto Barra-Chicote, Alexis Moinet, Vatsal Aggarwal

This paper explores the potential universality of neural vocoders. We train a WaveRNN-based vocoder on 74 speakers spanning 17 languages. This vocoder is shown to be capable of generating speech of consistently good quality (98% relative mean MUSHRA when compared to natural speech) regardless of whether the input spectrogram comes from a speaker or style seen during training or from an out-of-domain scenario, provided the recording conditions are studio-quality. When the recordings show significant changes in quality, or when moving towards non-speech vocalizations or singing, the vocoder still significantly outperforms speaker-dependent vocoders, but operates at a lower average relative MUSHRA of 75%. These results are shown to be consistent across languages, regardless of whether they were seen during training (e.g. English or Japanese) or unseen (e.g. Wolof, Swahili, Amharic).
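The "relative MUSHRA" figures quoted above express a system's mean listening-test score as a percentage of the mean score given to natural speech in the same test. A minimal sketch of that computation, with purely illustrative ratings (the function name and the numbers are assumptions, not taken from the paper):

```python
# Sketch of the relative-MUSHRA metric: a system's mean rating expressed
# as a percentage of the mean rating of the natural-speech reference.
# All names and ratings below are illustrative, not from the paper.

def relative_mushra(system_scores, natural_scores):
    """Mean system score as a percentage of the mean natural-speech score."""
    sys_mean = sum(system_scores) / len(system_scores)
    nat_mean = sum(natural_scores) / len(natural_scores)
    return 100.0 * sys_mean / nat_mean

# Hypothetical listener ratings on the 0-100 MUSHRA scale.
natural = [92, 95, 90, 88]
vocoded = [90, 93, 88, 87]

print(round(relative_mushra(vocoded, natural), 1))  # → 98.1
```

A value near 100% means listeners rated the vocoded samples nearly as highly as the natural recordings, which is how the paper's 98% headline result should be read.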


doi: 10.21437/Interspeech.2019-1424

Cite as: Lorenzo-Trueba, J., Drugman, T., Latorre, J., Merritt, T., Putrycz, B., Barra-Chicote, R., Moinet, A., Aggarwal, V. (2019) Towards Achieving Robust Universal Neural Vocoding. Proc. Interspeech 2019, 181-185, doi: 10.21437/Interspeech.2019-1424

@inproceedings{lorenzotrueba19_interspeech,
  author={Jaime Lorenzo-Trueba and Thomas Drugman and Javier Latorre and Thomas Merritt and Bartosz Putrycz and Roberto Barra-Chicote and Alexis Moinet and Vatsal Aggarwal},
  title={{Towards Achieving Robust Universal Neural Vocoding}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={181--185},
  doi={10.21437/Interspeech.2019-1424}
}