Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing
Highlights
- An architectural framework and principles for energy-efficient Cloud computing are defined.
- Energy-aware resource provisioning and allocation algorithms are presented.
- Quality of Service requirements are met efficiently.
- Open research challenges in energy-efficient Cloud computing are explored.
Introduction
Cloud computing can be classified as a new paradigm for the dynamic provisioning of computing services supported by state-of-the-art data centers that usually employ Virtual Machine (VM) technologies for consolidation and environment isolation purposes [1]. Cloud computing delivers an infrastructure, platform, and software (applications) as services that are made available to consumers in a pay-as-you-go model. In industry these services are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) respectively. Many computing service providers including Google, Microsoft, Yahoo, and IBM are rapidly deploying data centers in various locations around the world to deliver Cloud computing services.
A recent Berkeley report [2] stated: “Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service”. Cloud computing offers significant benefits to IT companies by relieving them of the need to set up basic hardware and software infrastructure, thus enabling them to focus on innovation and on creating business value for their services. Moreover, developers with innovative ideas for new Internet services no longer require large capital outlays in hardware to deploy their service or human expense to operate it [2].
To realize the full potential of Cloud computing, Cloud service providers must be flexible in their service delivery to meet various consumer requirements, while keeping consumers isolated from the underlying infrastructure. Until recently, high performance was the sole concern in data center deployments, and this demand was met without paying much attention to energy consumption. However, an average data center consumes as much energy as 25,000 households [3]. As energy costs increase while energy availability dwindles, the focus needs to shift from optimizing data center resource management for pure performance to optimizing it for energy efficiency, while maintaining high service-level performance.
Cloud service providers therefore need to adopt measures to ensure that their profit margins are not dramatically eroded by high energy costs. Rising energy costs are a serious threat, as they increase the Total Cost of Ownership (TCO) and reduce the Return on Investment (ROI) of Cloud infrastructures. There is also increasing pressure from governments worldwide to reduce carbon footprints, which have a significant impact on climate change. For example, the Japanese government has established the Japan Data Center Council to address the soaring energy consumption of data centers [4]. Recently, leading computing service providers formed a global consortium known as The Green Grid [5] to promote energy efficiency in data centers and minimize their environmental impact. Thus, providers need to minimize the energy consumption of Cloud infrastructures while ensuring service delivery.
Lowering the energy usage of data centers is a challenging and complex problem because computing applications and data are growing so quickly that increasingly larger servers and disks are needed to process them within the required time. Green Cloud computing is envisioned to achieve not only efficient processing and utilization of computing infrastructure, but also minimal energy consumption [6]. This is essential for ensuring that the future growth of Cloud computing is sustainable. Otherwise, Cloud computing, with its increasingly pervasive front-end client devices interacting with back-end data centers, will cause an enormous escalation of energy usage. To address this problem and drive Green Cloud computing, data center resources need to be managed in an energy-efficient manner. In particular, Cloud resources need to be allocated not only to satisfy Quality of Service (QoS) requirements specified by users via Service Level Agreements (SLAs), but also to reduce energy usage.
The main objective of this work is to present our vision, discuss open research challenges in energy-aware resource management, and develop efficient policies and algorithms for virtualized data centers so that Cloud computing can be a more sustainable and eco-friendly mainstream technology to drive commercial, scientific, and technological advancements for future generations. Specifically, our work aims to:
- Define an architectural framework and principles for energy-efficient Cloud computing.
- Investigate energy-aware resource provisioning and allocation algorithms that provision data center resources to client applications in a way that improves the energy efficiency of the data center, without violating the negotiated SLAs.
- Develop autonomic and energy-aware mechanisms that self-manage changes in the state of resources effectively and efficiently to satisfy service obligations and achieve energy efficiency.
- Develop algorithms for energy-efficient mapping of VMs to suitable Cloud resources, in addition to dynamic consolidation of VM resource partitions.
- Explore open research challenges in energy-efficient resource management for virtualized Cloud data centers to facilitate advancement of state-of-the-art operational Cloud environments.
The rest of the paper is organized as follows. Section 2 discusses related work, followed by the Green Cloud architecture and principles for energy-efficient Cloud computing presented in Section 3. The proposed energy-aware resource allocation algorithms are discussed in Section 4. A performance analysis of the proposed energy-aware resource provisioning and allocation algorithms is presented in Section 5. In Section 6 we discuss our vision of open research challenges in energy-efficient Cloud computing. Section 7 concludes the paper with a summary and future research directions.
Section snippets
Related work
One of the first works in which power management was applied at the data center level was done by Pinheiro et al. [7]. The authors proposed a technique for minimizing power consumption in a heterogeneous cluster of computing nodes serving multiple web applications. The main technique applied to minimize power consumption is concentrating the workload on the minimum number of physical nodes and switching idle nodes off. This approach requires dealing with the
Architectural framework
Clouds aim to drive the design of next-generation data centers by architecting them as networks of virtual services (hardware, database, user interface, application logic) so that users can access and deploy applications from anywhere in the world, on demand, at competitive costs depending on their QoS requirements [30]. Fig. 1 shows the high-level architecture for supporting energy-efficient service allocation in a Green Cloud computing infrastructure [6]. There are basically four main
Energy-aware allocation of data center resources
Recent developments in virtualization have resulted in its proliferation across data centers. By supporting the movement of VMs between physical nodes, virtualization enables dynamic migration of VMs according to performance requirements. When VMs do not use all of their provided resources, they can be logically resized and consolidated onto the minimum number of physical nodes, while idle nodes can be switched to sleep mode to eliminate idle power consumption and reduce the total energy consumption
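The consolidation idea described here can be illustrated with a minimal sketch: VMs are sorted by decreasing CPU demand, and each is placed on the host whose estimated power draw increases the least, so that already-active hosts are filled first and idle hosts can stay switched off. The linear power model and all names below are illustrative assumptions, not the exact algorithm evaluated in the paper.

```python
def host_power(utilization, p_idle=70.0, p_max=100.0):
    """Assumed linear power model: idle power plus a component
    proportional to CPU utilization (0.0-1.0)."""
    return p_idle + (p_max - p_idle) * utilization

def place_vms(vm_demands, host_capacities):
    """Best-fit-decreasing consolidation sketch: sort VMs by CPU demand
    and place each on the host whose power draw increases the least.
    Returns {vm_index: host_index} for every VM that fits."""
    load = [0.0] * len(host_capacities)
    allocation = {}
    for vm, demand in sorted(enumerate(vm_demands), key=lambda x: -x[1]):
        best_host, best_delta = None, float("inf")
        for h, cap in enumerate(host_capacities):
            if load[h] + demand > cap:
                continue  # host cannot accommodate this VM
            # An empty host counts as switched off (zero power), so
            # activating it incurs its full idle power as a penalty.
            before = host_power(load[h] / cap) if load[h] > 0 else 0.0
            delta = host_power((load[h] + demand) / cap) - before
            if delta < best_delta:
                best_host, best_delta = h, delta
        if best_host is not None:
            allocation[vm] = best_host
            load[best_host] += demand
    return allocation

# Three VMs fit on one of two identical hosts, so the second host
# is never activated and can remain in sleep mode.
print(place_vms([0.5, 0.3, 0.2], [1.0, 1.0]))
```

Because the idle power of an inactive host is charged as part of the power increase, the heuristic naturally packs VMs onto as few hosts as possible.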
Performance analysis
In this section, we discuss a performance analysis of the energy-aware allocation heuristics presented in Section 4. In our experiments, the time needed to perform a live migration of a VM is calculated as the size of its memory divided by the available network bandwidth. This is justified because, to enable live migration, the images and data of VMs must be stored on Network Attached Storage (NAS); therefore, copying the VM’s storage is not required. Live migration creates an extra CPU load;
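The migration-time model stated above (memory size divided by available network bandwidth) can be written as a one-line helper; the function name and unit choices below are illustrative assumptions:

```python
def migration_time_seconds(vm_memory_mb, bandwidth_mbps):
    """Estimate live-migration time as VM memory over available network
    bandwidth; valid when VM images and data reside on NAS, so the VM's
    disk storage does not need to be copied."""
    # Convert megabytes to megabits (x8) before dividing by Mbit/s.
    return vm_memory_mb * 8.0 / bandwidth_mbps

# Example: migrating a VM with 1024 MB of RAM over a 1 Gbit/s link.
print(migration_time_seconds(1024, 1000))  # ~8.2 s
```

Note that this is an optimistic lower bound: it ignores the extra CPU load and the iterative memory-copy rounds incurred by real live-migration implementations.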
Open challenges
The virtualization technology on which Cloud computing environments heavily rely provides the ability to transfer VMs between physical nodes using live or offline migration. This enables dynamic consolidation of VMs onto the minimum number of physical nodes according to current resource requirements. As a result, idle nodes can be switched off or put into a power-saving mode (e.g. sleep, hibernate) to reduce the total energy consumption of the data center. In this paper we have
Concluding remarks and future directions
This work advances the Cloud computing field in two ways. First, it plays a significant role in reducing data center energy consumption costs, and thus helps to develop a strong and competitive Cloud computing industry. Second, consumers are becoming increasingly conscious of the environment. A recent study shows that data centers represent a large and rapidly growing energy consumption sector of the economy and a significant source of CO2 emissions [40]. Reducing greenhouse gas
Acknowledgments
This is a substantially extended version of the keynote paper presented at PDPTA 2010 [6]. We thank Yoganathan Sivaram (Melbourne University), external reviewers and the Guest Editor of this special issue for their suggestions on enhancing the quality of the paper.
References (41)
- et al., Joint resource and network scheduling with adaptive offset determination for optical burst switched grids, Future Generation Computer Systems (2010)
- et al., A novel approach for distributed application scheduling based on prediction of communication events, Future Generation Computer Systems (2010)
- et al., From infrastructure delivery to service management in clouds, Future Generation Computer Systems (2010)
- et al., Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility, Future Generation Computer Systems (2009)
- P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, A. Warfield, Xen and the art of...
- et al., A view of cloud computing, Communications of the ACM (2009)
- J. Kaplan, W. Forrest, N. Kindler, Revolutionizing Data Center Energy Efficiency, McKinsey & Company, Tech....
- Ministry of Economy, Trade and Industry, Establishment of the Japan data center council, Press...
- The green grid consortium, 2011. URL:...
- R. Buyya, A. Beloglazov, J. Abawajy, Energy-efficient management of data center resources for cloud computing: a...
- Managing energy and server resources in hosting centers
- Energy-efficient server clusters, Power-Aware Computer Systems
- VirtualPower: coordinated power management in virtualized enterprise systems, ACM SIGOPS Operating Systems Review
- No “power” struggles: coordinated multi-level power management for the data center, SIGARCH Computer Architecture News
- Power and performance management of virtualized computing environments via lookahead control, Cluster Computing
- Energy aware consolidation for cloud computing, Cluster Computing
- pMapper: power and migration cost aware application placement in virtualized systems
- Optimal power allocation in server farms
Anton Beloglazov is a Ph.D. student at the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the Computer Science and Software Engineering department of the University of Melbourne, Australia. He completed his Bachelor’s and Master’s degrees in Informatics and Computer Science at the faculty of Automation and Computer Engineering of Novosibirsk State Technical University, Russian Federation. During his Ph.D. studies, Anton Beloglazov is actively involved in research on energy- and performance-efficient resource management in virtualized data centers for Cloud computing. He has been contributing to the development of the CloudSim toolkit, a modern open-source framework for modeling and simulation of Cloud computing infrastructures and services. Anton Beloglazov has publications in internationally recognized conferences and journals, such as the 7th International Workshop on Middleware for Grids, Clouds and e-Science (MGC 2009); the 10th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing (CCGrid 2010); and Software: Practice and Experience (SPE 2009). He is a frequent reviewer for research conferences and journals, such as the European Conference on Information Systems (ECIS), the IEEE International Workshop on Internet and Distributed Computing Systems (IDCS), the International Conference on High Performance Computing (HiPC), the International Conference on Software, Telecommunications and Computer Networks (SoftCOM), and the Journal of Network and Computer Applications (JNCA). For further information please visit the website: http://beloglazov.info/.
Jemal Abawajy is an associate professor at Deakin University, Australia. Dr. Abawajy is the director of the “Pervasive Computing & Networks” research groups at Deakin University. The research group includes 15 Ph.D. students, several masters and honors students, and other staff members. Dr. Abawajy is actively involved in funded research in robust, secure and reliable resource management for pervasive computing (mobile, clusters, enterprise/data grids, web services) and networks (wireless and sensors) and has published more than 200 research articles in refereed international conferences and journals as well as a number of technical reports. Dr. Abawajy has given keynote/invited talks at many conferences. Dr. Abawajy has guest-edited several international journals and served as an associate editor of international conference proceedings. In addition, he is on the editorial board of several international journals. Dr. Abawajy has been a member of the organizing committee for over 100 international conferences, serving in various capacities including chair, general co-chair, vice-chair, best paper award chair, publication chair, session chair, and program committee member. He is also a frequent reviewer for international research journals (e.g., FGCS, TPDS and JPDC), research grant agencies, and Ph.D. examinations.
Rajkumar Buyya is Professor of Computer Science and Software Engineering, and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne, Australia. He is also serving as the founding CEO of Manjrasoft Pty Ltd., a spin-off company of the University, commercializing its innovations in Grid and Cloud Computing. He has authored and published over 300 research papers and four text books. The books on emerging topics that Dr. Buyya edited include High Performance Cluster Computing (Prentice Hall, USA, 1999), Content Delivery Networks (Springer, Germany, 2008) and Market-Oriented Grid and Utility Computing (Wiley, USA, 2009). He is one of the most highly cited authors in computer science and software engineering worldwide (h-index = 52, g-index = 112, 14,500+ citations). Software technologies for Grid and Cloud computing developed under Dr. Buyya’s leadership have gained rapid acceptance and are in use at several academic institutions and commercial enterprises in 40 countries around the world. Dr. Buyya has led the establishment and development of key community activities, including serving as foundation Chair of the IEEE Technical Committee on Scalable Computing and four IEEE conferences (CCGrid, Cluster, Grid, and e-Science). He has presented over 200 invited talks on his vision of IT futures and advanced computing technologies at international conferences and institutions in Asia, Australia, Europe, North America, and South America. These contributions and Dr. Buyya’s international research leadership have been recognized through the award of the “2009 IEEE Medal for Excellence in Scalable Computing” from the IEEE Computer Society, USA. For further information on Dr. Buyya, please visit his cyberhome: www.buyya.com.