
Deep recurrent neural network for mobile human activity recognition with high throughput

  • Original Article
  • Artificial Life and Robotics

Abstract

In this paper, we propose a high-throughput method for human activity recognition from raw accelerometer data using a deep recurrent neural network (DRNN), and we investigate various architectures and their combinations to find the best parameter values. Here, "high throughput" means a short computation time per recognition. We explored the parameters and architectures of the DRNN using a training dataset of 432 trials covering 6 activity classes from 7 people. The maximum recognition rates were 95.42% on test data of 108 segmented trials, each containing a single activity class, and 83.43% on 18 multi-activity sequential trials; the best traditional methods reached 71.65% and 54.97%, respectively. The effectiveness of the selected parameters was further confirmed on an additional dataset. Regarding throughput, the constructed DRNN required only 1.347 ms per recognition, whereas the best traditional method required 11.031 ms, of which 11.027 ms was spent on feature calculation. These advantages stem from the compact architecture of the real-time-oriented DRNN.
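For illustration, the following is a minimal sketch of a compact recurrent classifier that maps raw tri-axial accelerometer windows to 6 activity classes, in the spirit of the DRNN described above. It is written in PyTorch purely for illustration and is not the authors' implementation; the hidden size, number of layers, dropout rate, and 128-sample window length are assumptions, not the reported architecture.

    # Minimal sketch (assumed PyTorch), not the authors' reported model.
    import torch
    import torch.nn as nn

    class ActivityDRNN(nn.Module):
        def __init__(self, n_features=3, hidden_size=64, n_layers=2, n_classes=6):
            super().__init__()
            # Stacked LSTM over the raw accelerometer sequence
            # (no hand-crafted feature extraction step).
            self.rnn = nn.LSTM(n_features, hidden_size, num_layers=n_layers,
                               batch_first=True, dropout=0.5)
            self.fc = nn.Linear(hidden_size, n_classes)

        def forward(self, x):            # x: (batch, time, 3 axes)
            out, _ = self.rnn(x)         # hidden states for every time step
            return self.fc(out[:, -1])   # classify from the final time step

    # Example: a batch of 32 windows, 128 samples each, 3 axes.
    model = ActivityDRNN()
    logits = model(torch.randn(32, 128, 3))
    print(logits.shape)                  # torch.Size([32, 6])

Because the model consumes raw samples directly, inference avoids the per-window feature calculation that dominates the runtime of the traditional methods compared in the abstract.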



Acknowledgements

This work was supported by JSPS KAKENHI Grant number 26280041.

Author information

Corresponding author

Correspondence to Takeshi Nishida.

About this article


Cite this article

Inoue, M., Inoue, S. & Nishida, T. Deep recurrent neural network for mobile human activity recognition with high throughput. Artif Life Robotics 23, 173–185 (2018). https://doi.org/10.1007/s10015-017-0422-x
