ISCA Archive Odyssey 2022

Language-Independent Speaker Anonymization Approach Using Self-Supervised Pre-Trained Models

Xiaoxiao Miao, Xin Wang, Erica Cooper, Junichi Yamagishi, Natalia Tomashenko

Speaker anonymization aims to protect the privacy of speakers while preserving the spoken linguistic content of speech. Current mainstream neural network speaker anonymization systems are complicated, comprising an F0 extractor, a speaker encoder, an automatic speech recognition acoustic model (ASR AM), a speech synthesis acoustic model, and a speech waveform generation model. Moreover, because the ASR AM is language-dependent and trained on English data, it is hard to adapt the system to other languages. In this paper, we propose a simpler self-supervised learning (SSL)-based method for language-independent speaker anonymization that requires no explicit language-dependent model and can easily be applied to other languages. Extensive experiments were conducted on the VoicePrivacy Challenge 2020 datasets in English and the AISHELL-3 dataset in Mandarin to demonstrate the effectiveness of our proposed SSL-based language-independent speaker anonymization method.
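To make the pipeline described in the abstract concrete, below is a minimal, purely illustrative sketch of a speaker anonymization flow: frame-level content features from an SSL model, an F0 contour, and a pseudo-speaker embedding drawn from an external pool are combined and passed to a synthesis model. All function bodies, dimensions, and the pool-averaging anonymization strategy are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

# Hypothetical dimensions, for illustration only.
SSL_DIM = 768      # frame-level output size of an SSL content encoder (assumed)
SPK_DIM = 192      # speaker-embedding size (assumed)
N_FRAMES = 200

def extract_ssl_content(wave: np.ndarray) -> np.ndarray:
    """Stand-in for a language-independent SSL content encoder (stub)."""
    return np.random.randn(N_FRAMES, SSL_DIM)

def extract_f0(wave: np.ndarray) -> np.ndarray:
    """Stand-in for an F0 extractor; one value per frame (stub)."""
    return np.abs(np.random.randn(N_FRAMES))

def extract_speaker_embedding(wave: np.ndarray) -> np.ndarray:
    """Stand-in for a speaker encoder producing an utterance-level embedding (stub)."""
    return np.random.randn(SPK_DIM)

def anonymize_speaker(original: np.ndarray, pool: np.ndarray, k: int = 10) -> np.ndarray:
    """One common anonymization strategy (assumed here): average the k pool
    embeddings farthest from the original speaker's embedding."""
    dist = np.linalg.norm(pool - original, axis=1)
    farthest = pool[np.argsort(dist)[-k:]]
    return farthest.mean(axis=0)

def synthesize(content: np.ndarray, f0: np.ndarray, spk: np.ndarray) -> np.ndarray:
    """Stand-in for a synthesis/waveform model conditioned on content, F0,
    and the pseudo-speaker embedding (stub)."""
    return np.random.randn(16000)

# Toy end-to-end run on random data.
wave = np.random.randn(32000)
pool = np.random.randn(500, SPK_DIM)          # embeddings from an external speaker pool
content = extract_ssl_content(wave)
f0 = extract_f0(wave)
pseudo_spk = anonymize_speaker(extract_speaker_embedding(wave), pool)
anonymized_wave = synthesize(content, f0, pseudo_spk)
print(anonymized_wave.shape)
```

Because the content encoder in this sketch is an SSL model rather than an ASR acoustic model, no language-specific component appears in the flow, which is the property the paper exploits for cross-lingual use.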


doi: 10.21437/Odyssey.2022-39

Cite as: Miao, X., Wang, X., Cooper, E., Yamagishi, J., Tomashenko, N. (2022) Language-Independent Speaker Anonymization Approach Using Self-Supervised Pre-Trained Models. Proc. The Speaker and Language Recognition Workshop (Odyssey 2022), 279-286, doi: 10.21437/Odyssey.2022-39

@inproceedings{miao22_odyssey,
  author={Xiaoxiao Miao and Xin Wang and Erica Cooper and Junichi Yamagishi and Natalia Tomashenko},
  title={{Language-Independent Speaker Anonymization Approach Using Self-Supervised Pre-Trained Models}},
  year={2022},
  booktitle={Proc. The Speaker and Language Recognition Workshop (Odyssey 2022)},
  pages={279--286},
  doi={10.21437/Odyssey.2022-39}
}