ISCA Archive Interspeech 2014

Long short-term memory recurrent neural network architectures for large scale acoustic modeling

Haşim Sak, Andrew Senior, Françoise Beaufays

Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) architecture designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we explore LSTM RNN architectures for large-scale acoustic modeling in speech recognition. We recently showed that LSTM RNNs are more effective than DNNs and conventional RNNs for acoustic modeling, considering moderately sized models trained on a single machine. Here, we introduce the first distributed training of LSTM RNNs, using asynchronous stochastic gradient descent optimization on a large cluster of machines. We show that a two-layer deep LSTM RNN, where each LSTM layer has a linear recurrent projection layer, can exceed state-of-the-art speech recognition performance. This architecture makes more effective use of model parameters than the other architectures considered, converges quickly, and outperforms a deep feed-forward neural network with an order of magnitude more parameters.
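To make the projection idea concrete, below is a minimal NumPy sketch of one step of an LSTM cell followed by a linear recurrent projection layer (the LSTMP architecture the abstract describes). The layer sizes (40 inputs, 800 cells, 512 projected units) are illustrative assumptions, not the paper's configurations, and the peephole connections used in the paper are omitted for brevity.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMPCell:
    """LSTM cell with a linear recurrent projection (LSTMP sketch).

    The projection r_t = W_rm @ m_t feeds back in place of the full
    cell output, so every recurrent weight matrix has n_proj columns
    instead of n_cell, cutting the dominant parameter cost.
    """

    def __init__(self, n_in, n_cell, n_proj, seed=0):
        rng = np.random.default_rng(seed)
        # Input, forget, cell-input, and output gates over [x_t, r_{t-1}],
        # stacked into one matrix for a single matrix-vector product per step.
        self.W = rng.normal(0.0, 0.1, size=(4 * n_cell, n_in + n_proj))
        self.b = np.zeros(4 * n_cell)
        # Linear recurrent projection: n_cell -> n_proj.
        self.W_rm = rng.normal(0.0, 0.1, size=(n_proj, n_cell))
        self.n_cell, self.n_proj = n_cell, n_proj

    def step(self, x_t, r_prev, c_prev):
        z = self.W @ np.concatenate([x_t, r_prev]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c_t = f * c_prev + i * np.tanh(g)   # new cell state
        m_t = o * np.tanh(c_t)              # gated cell output
        r_t = self.W_rm @ m_t               # projected recurrent state
        return r_t, c_t

# Usage: run the cell over a short sequence of random feature frames.
cell = LSTMPCell(n_in=40, n_cell=800, n_proj=512)
r, c = np.zeros(cell.n_proj), np.zeros(cell.n_cell)
for x_t in np.random.default_rng(1).normal(size=(10, 40)):
    r, c = cell.step(x_t, r, c)
print(r.shape)  # (512,)

Stacking a second such cell on the projected outputs r_t yields the two-layer deep LSTMP network described in the abstract; the projection is what keeps the recurrent matrices small enough for the parameter budget the paper reports.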


doi: 10.21437/Interspeech.2014-80

Cite as: Sak, H., Senior, A., Beaufays, F. (2014) Long short-term memory recurrent neural network architectures for large scale acoustic modeling. Proc. Interspeech 2014, 338-342, doi: 10.21437/Interspeech.2014-80

@inproceedings{sak14_interspeech,
  author={Haşim Sak and Andrew Senior and Françoise Beaufays},
  title={{Long short-term memory recurrent neural network architectures for large scale acoustic modeling}},
  year=2014,
  booktitle={Proc. Interspeech 2014},
  pages={338--342},
  doi={10.21437/Interspeech.2014-80}
}