Speech recognition is increasingly taking advantage of deep learning, and substantial benefits have been obtained with modern Recurrent Neural Networks (RNNs). The most popular RNNs are Long Short-Term Memory (LSTM) networks, which typically reach state-of-the-art performance in many tasks thanks to their ability to learn long-term dependencies and their robustness to vanishing gradients. Nevertheless, LSTMs have a rather complex design, with three multiplicative gates, which might hinder their efficient implementation. An attempt to simplify LSTMs has recently led to Gated Recurrent Units (GRUs), which are based on just two multiplicative gates.
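For context, a standard GRU is commonly written as follows (one widely used convention; the symbol names are ours, with \sigma the logistic sigmoid, \odot element-wise multiplication, and z_t and r_t the update and reset gates):

    z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)
    r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)
    \tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h)
    h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t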
This paper builds on these efforts by further revising GRUs and proposing a simplified architecture that is potentially more suitable for speech recognition. The contribution of this work is twofold. First, we propose removing the reset gate from the GRU design, resulting in a more efficient single-gate architecture. Second, we propose replacing tanh with ReLU activations in the state update equations. Results show that, in our implementation, the revised architecture reduces the per-epoch training time by more than 30% and consistently improves recognition performance across different tasks, input features, and noisy conditions when compared to a standard GRU.
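Under the notation above, the two changes amount to dropping the r_t equation and swapping the candidate-state nonlinearity; a sketch of what the abstract describes (not necessarily the paper's exact formulation) is:

    z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)
    \tilde{h}_t = \mathrm{ReLU}(W_h x_t + U_h h_{t-1} + b_h)
    h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t

A minimal NumPy sketch of one recurrence step under these assumptions (function and parameter names are illustrative, not the paper's):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def revised_gru_step(x_t, h_prev, Wz, Uz, bz, Wh, Uh, bh):
        # Single update gate; the reset gate is removed entirely.
        z = sigmoid(Wz @ x_t + Uz @ h_prev + bz)
        # ReLU candidate state replaces the usual tanh.
        h_cand = np.maximum(0.0, Wh @ x_t + Uh @ h_prev + bh)
        # Linear interpolation between previous and candidate states.
        return z * h_prev + (1.0 - z) * h_cand

With only z and h_cand to compute per step, the cell performs one fewer matrix product pair and one fewer element-wise product than a standard GRU, which is consistent with the reported training-time reduction.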
Cite as: Ravanelli, M., Brakel, P., Omologo, M., Bengio, Y. (2017) Improving Speech Recognition by Revising Gated Recurrent Units. Proc. Interspeech 2017, 1308-1312, doi: 10.21437/Interspeech.2017-775
@inproceedings{ravanelli17_interspeech,
  author={Mirco Ravanelli and Philemon Brakel and Maurizio Omologo and Yoshua Bengio},
  title={{Improving Speech Recognition by Revising Gated Recurrent Units}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={1308--1312},
  doi={10.21437/Interspeech.2017-775}
}