
Recent Trends in Deep Learning with Applications

  • Chapter
Cognitive Computing for Big Data Systems Over IoT

Part of the book series: Lecture Notes on Data Engineering and Communications Technologies (LNDECT, volume 14)

Abstract

Deep learning methods play a vital role in Internet of Things analytics. Deep learning is one of the main subgroups of machine learning algorithms. Raw data is collected from devices, but collecting data from all situations and pre-processing it is complex, and continuously monitoring data through sensors is also complex and expensive. Deep learning algorithms help address these issues. A deep learning method represents data at multiple levels, from lower-level features to higher-level features: the higher levels provide more abstract views of the information than the lower levels, which contain the raw data. Deep learning is a developing methodology and has been widely applied in art, image captioning, machine translation, natural language processing, object detection, robotics, and visual tracking. The main reasons for adopting deep learning algorithms include faster processing, low-cost hardware, and recent advances in machine learning techniques. This review paper gives an understanding of deep learning methods and their recent advances in the Internet of Things.
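The layered representation the abstract describes can be sketched in a few lines. The snippet below is a minimal illustration only, not the chapter's method: it stacks two dense ReLU layers with randomly initialised weights (standing in for learned parameters) to show how hypothetical raw sensor readings are mapped through a lower-level and then a higher-level, more compact representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One dense layer with a ReLU nonlinearity: maps the input
    # features into a more abstract representation.
    return np.maximum(0.0, x @ w + b)

# Hypothetical raw IoT sensor readings: 4 samples, 16 values each.
raw = rng.normal(size=(4, 16))

# Randomly initialised weights; in a trained network these would
# be learned, e.g. by backpropagation.
w1, b1 = rng.normal(size=(16, 8)) * 0.1, np.zeros(8)
w2, b2 = rng.normal(size=(8, 4)) * 0.1, np.zeros(4)

low = layer(raw, w1, b1)    # lower-level features (16 -> 8 dims)
high = layer(low, w2, b2)   # higher-level, more abstract features (8 -> 4 dims)

print(raw.shape, low.shape, high.shape)  # (4, 16) (4, 8) (4, 4)
```

Each successive layer shrinks the representation, which is one simple way the "more abstract" higher levels of the hierarchy can be realised.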



Author information


Corresponding author

Correspondence to K. Balaji.



Copyright information

© 2018 Springer International Publishing AG

About this chapter


Cite this chapter

Balaji, K., Lavanya, K. (2018). Recent Trends in Deep Learning with Applications. In: Sangaiah, A., Thangavelu, A., Meenakshi Sundaram, V. (eds) Cognitive Computing for Big Data Systems Over IoT. Lecture Notes on Data Engineering and Communications Technologies, vol 14. Springer, Cham. https://doi.org/10.1007/978-3-319-70688-7_9

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-70688-7_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70687-0

  • Online ISBN: 978-3-319-70688-7

  • eBook Packages: Engineering (R0)
