Abstract
Deep learning methods play a vital role in Internet of Things (IoT) analytics. Deep learning is one of the main subfields of machine learning. IoT devices continuously produce raw data, and collecting data from every situation and pre-processing it is complex; continuously monitoring data through sensors is likewise complex and expensive. Deep learning algorithms help address these issues. A deep learning model learns representations of data at multiple levels, from low-level features to very high-level features; the higher-level features encode more abstract information than the lower levels, which operate on raw data. Deep learning is a rapidly developing methodology and has been widely applied in art, image captioning, machine translation, natural language processing, object detection, robotics, and visual tracking. Its adoption is driven by faster processing, low-cost hardware, and recent advances in machine learning techniques. This review paper gives an overview of deep learning methods and their recent advances in the Internet of Things.
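As a minimal sketch of the layered representations described above, the following example stacks two convolutional stages so that the first captures low-level features (edges, textures) and the second composes them into more abstract, higher-level features. This assumes PyTorch is available; the network, layer sizes, and names are illustrative and not taken from the chapter.

```python
# A minimal sketch (assuming PyTorch) of hierarchical feature learning:
# lower layers see raw pixels; deeper layers see increasingly abstract features.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Lower-level representation: edges and simple textures.
        self.low = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Higher-level representation: compositions of the lower-level features.
        self.high = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.low(x)    # e.g. (N, 16, 14, 14) for 28x28 input
        x = self.high(x)   # e.g. (N, 32, 7, 7)
        return self.classifier(x.flatten(1))

# Usage: a batch of four 28x28 grayscale images (MNIST-sized).
logits = SmallConvNet()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```

Each stage halves the spatial resolution while increasing the number of feature channels, which is the standard way such networks trade raw spatial detail for increasingly abstract descriptions of the input.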