ISCA Archive Interspeech 2013

Voice conversion in high-order eigen space using deep belief nets

Toru Nakashika, Ryoichi Takashima, Tetsuya Takiguchi, Yasuo Ariki

This paper presents a voice conversion technique that uses Deep Belief Nets (DBNs) to build high-order eigen spaces of the source and target speakers, in which converting the source speech to the target speech is easier than in the traditional cepstrum space. DBNs have a deep architecture that automatically discovers abstractions that maximally express the original input features. If we train the DBNs using only the speech of an individual speaker, the output features at the highest layer can be expected to contain less phonological information and relatively more speaker individuality. After training DBNs for a source speaker and a target speaker, we connect the two and convert the speaker-individuality abstractions using Neural Networks (NNs). The converted abstraction of the source speaker is then brought back to the cepstrum space by an inverse process of the target speaker's DBN. We conducted speaker voice conversion experiments and confirmed the efficacy of our method with respect to subjective and objective criteria, comparing it with the conventional Gaussian Mixture Model-based method.
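The conversion pipeline described in the abstract (encode with the source speaker's DBN, map between the two high-order spaces with an NN, then decode with the inverse process of the target speaker's DBN) can be sketched roughly as below. This is a minimal, hypothetical Python/NumPy illustration, not the authors' implementation: the names DBN, convert, and nn_map are invented for exposition, all weights are assumed to be already trained, and a real system would likely use a Gaussian-Bernoulli visible layer for real-valued cepstral features rather than the plain sigmoid reconstruction shown here.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DBN:
    """A stack of RBMs with pre-trained parameters (hypothetical sketch)."""

    def __init__(self, weights, hidden_biases, visible_biases):
        self.W = weights            # list of (visible_dim, hidden_dim) matrices, bottom to top
        self.b_h = hidden_biases    # list of hidden-bias vectors, bottom to top
        self.b_v = visible_biases   # list of visible-bias vectors, bottom to top

    def encode(self, v):
        """Propagate a cepstral frame up to the highest-layer abstraction."""
        h = v
        for W, b in zip(self.W, self.b_h):
            h = sigmoid(h @ W + b)
        return h

    def decode(self, h):
        """Inverse process: project a highest-layer abstraction back to the cepstrum space."""
        v = h
        for W, b in zip(reversed(self.W), reversed(self.b_v)):
            v = sigmoid(v @ W.T + b)
        return v

def convert(frame, dbn_src, dbn_tgt, nn_map):
    """Convert one source cepstral frame to the target speaker.

    nn_map is any callable mapping the source high-order features
    to the target high-order features (e.g. a trained feed-forward NN).
    """
    h_src = dbn_src.encode(frame)   # source-speaker abstraction
    h_tgt = nn_map(h_src)           # NN mapping between the two eigen spaces
    return dbn_tgt.decode(h_tgt)    # back to cepstrum via the target speaker's DBN

In this sketch the NN mapping and the two DBNs are trained separately; frame-by-frame conversion then only requires the three forward passes shown in convert.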


doi: 10.21437/Interspeech.2013-102

Cite as: Nakashika, T., Takashima, R., Takiguchi, T., Ariki, Y. (2013) Voice conversion in high-order eigen space using deep belief nets. Proc. Interspeech 2013, 369-372, doi: 10.21437/Interspeech.2013-102

@inproceedings{nakashika13_interspeech,
  author={Toru Nakashika and Ryoichi Takashima and Tetsuya Takiguchi and Yasuo Ariki},
  title={{Voice conversion in high-order eigen space using deep belief nets}},
  year=2013,
  booktitle={Proc. Interspeech 2013},
  pages={369--372},
  doi={10.21437/Interspeech.2013-102}
}