Abstract
As Spark evolves into a unified data processing engine, adding features with each new release, its programming abstractions evolve as well. The RDD was the core programming abstraction when Spark was introduced to the world in 2012. Spark 1.6 introduced a new set of abstractions, called the Structured APIs, which are now the preferred way of performing data processing for the majority of use cases. The Structured APIs were designed to enhance developer productivity with easy-to-use, intuitive, and expressive APIs. In this model, the data must be organized into a structured format, and the data computation logic must follow a certain structure. Armed with these two pieces of information, Spark can perform optimizations that speed up data processing applications.
Copyright information
© 2018 Hien Luu
Cite this chapter
Luu, H. (2018). Spark SQL (Foundations). In: Beginning Apache Spark 2. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3579-9_4
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3578-2
Online ISBN: 978-1-4842-3579-9
eBook Packages: Professional and Applied Computing; Apress Access Books; Professional and Applied Computing (R0)