Using Frame Similarity for Low Energy Software-Only IoT Video Recognition

  • Conference paper
  • First Online:
Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS 2019)

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 11733))

Abstract

Embedded video-processing applications are ubiquitous and must be low-energy in order to extend battery life. Convolutional Neural Networks (CNNs), frequently used for this task, fail to exploit the intrinsic redundancy present in videos: the similarity between consecutive frames means that analyzing every frame can be avoided. Moreover, although several hardware solutions for low-energy execution have been proposed, they require extra or dedicated hardware, which makes them unattractive for low-cost applications. In this work we propose a technique that uses frame similarity to identify and process only the areas that differ significantly between two subsequent frames. Our technique reduces energy consumption by discarding unneeded operations, and can be deployed on low-cost hardware readily available for IoT applications. We obtain speedups of 12-80x in CNN execution with software-only modifications that require no network retraining and have little impact on accuracy.
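The abstract describes detecting regions that changed significantly between consecutive frames so that only those regions are re-processed by the CNN. The paper's exact difference metric, region granularity, and threshold are not given here; the sketch below illustrates the general idea with illustrative assumptions (8x8 tiles, a mean-absolute-difference threshold, and the names `changed_tiles`, `tile`, and `threshold` are all hypothetical).

```python
import numpy as np

def changed_tiles(prev, curr, tile=8, threshold=10.0):
    """Return a boolean mask marking tiles whose mean absolute
    per-pixel difference between consecutive frames exceeds
    `threshold`. Only marked tiles would be forwarded to the CNN;
    unmarked tiles reuse the previous frame's results."""
    h, w = curr.shape
    ty, tx = h // tile, w // tile
    # Signed arithmetic avoids uint8 wrap-around in the subtraction.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    # Group pixels into (ty, tx) tiles and average the difference
    # inside each tile.
    tiles = diff[:ty * tile, :tx * tile].reshape(ty, tile, tx, tile)
    return tiles.mean(axis=(1, 3)) > threshold

# Toy example: a 16x16 grayscale frame where only the top-left
# 8x8 tile changes between two consecutive frames.
prev = np.zeros((16, 16), dtype=np.uint8)
curr = prev.copy()
curr[:8, :8] = 50  # significant change confined to one tile
mask = changed_tiles(prev, curr)
```

Under these assumptions, `mask` flags only the top-left tile, so three of the four tiles skip CNN processing entirely, which is the source of the energy savings the paper reports.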



Acknowledgements

This work was supported by CAPES, CNPQ and FAPERGS.

Author information

Correspondence to Larissa Rozales Gonçalves.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Gonçalves, L.R., Draghetti, L.K., Rech, P., Carro, L. (2019). Using Frame Similarity for Low Energy Software-Only IoT Video Recognition. In: Pnevmatikatos, D., Pelcat, M., Jung, M. (eds) Embedded Computer Systems: Architectures, Modeling, and Simulation. SAMOS 2019. Lecture Notes in Computer Science(), vol 11733. Springer, Cham. https://doi.org/10.1007/978-3-030-27562-4_11

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-27562-4_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-27561-7

  • Online ISBN: 978-3-030-27562-4

  • eBook Packages: Computer Science, Computer Science (R0)
