Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing

https://doi.org/10.1016/j.future.2011.04.017

Abstract

Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and large carbon footprints. Therefore, we need Green Cloud computing solutions that can not only minimize operational costs but also reduce the environmental impact. In this paper, we define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, we present our vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves the energy efficiency of the data center while delivering the negotiated Quality of Service (QoS). In particular, we survey research in energy-efficient computing and propose: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider the QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges whose resolution can bring substantial benefits to both resource providers and consumers. We have validated our approach with a performance evaluation study using the CloudSim toolkit. The results demonstrate that the Cloud computing model offers significant cost savings and high potential for improving energy efficiency under dynamic workload scenarios.

Highlights

  • An architectural framework and principles for energy-efficient Cloud computing are defined.

  • Energy-aware resource provisioning and allocation algorithms are presented.

  • Quality of Service requirements are met efficiently.

  • Open research challenges in energy-efficient Cloud computing are explored.

Introduction

Cloud computing can be classified as a new paradigm for the dynamic provisioning of computing services, supported by state-of-the-art data centers that usually employ Virtual Machine (VM) technologies for consolidation and environment isolation purposes [1]. Cloud computing delivers infrastructure, platform, and software (applications) as services that are made available to consumers in a pay-as-you-go model. In industry these services are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), respectively. Many computing service providers, including Google, Microsoft, Yahoo, and IBM, are rapidly deploying data centers in various locations around the world to deliver Cloud computing services.

A recent Berkeley report [2] stated: “Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service”. The Cloud offers significant benefits to IT companies by relieving them of the need to set up basic hardware and software infrastructure, enabling them to focus more on innovation and on creating business value for their services. Moreover, developers with innovative ideas for new Internet services no longer require large capital outlays in hardware to deploy their service, or the human expense of operating it [2].

To fully realize the potential of Cloud computing, Cloud service providers have to ensure that they can be flexible in their service delivery to meet various consumer requirements, while keeping the consumers isolated from the underlying infrastructure. Until recently, high performance has been the sole concern in data center deployments, and this demand has been fulfilled without paying much attention to energy consumption. However, an average data center consumes as much energy as 25,000 households [3]. As energy costs increase while energy availability dwindles, there is a need to shift the focus of data center resource management from pure performance optimization to energy efficiency, while maintaining high service-level performance.

Therefore, Cloud service providers need to adopt measures to ensure that their profit margin is not dramatically reduced by high energy costs. The rising cost of energy is a serious threat, as it increases the Total Cost of Ownership (TCO) and reduces the Return on Investment (ROI) of Cloud infrastructures. There is also increasing pressure from governments worldwide to reduce carbon footprints, which have a significant impact on climate change. For example, the Japanese government has established the Japan Data Center Council to address the soaring energy consumption of data centers [4]. Recently, leading computing service providers have formed a global consortium known as The Green Grid [5] to promote energy efficiency for data centers and minimization of the environmental impact. Thus, providers need to minimize the energy consumption of Cloud infrastructures while ensuring service delivery.

Lowering the energy usage of data centers is a challenging and complex issue because computing applications and data are growing so quickly that increasingly large numbers of servers and disks are needed to process them within the required time. Green Cloud computing is envisioned to achieve not only efficient processing and utilization of a computing infrastructure, but also minimal energy consumption [6]. This is essential for ensuring that the future growth of Cloud computing is sustainable. Otherwise, Cloud computing, with increasingly pervasive front-end client devices interacting with back-end data centers, will cause an enormous escalation of energy usage. To address this problem and drive Green Cloud computing, data center resources need to be managed in an energy-efficient manner. In particular, Cloud resources need to be allocated not only to satisfy Quality of Service (QoS) requirements specified by users via Service Level Agreements (SLAs), but also to reduce energy usage.

The main objective of this work is to present our vision, discuss open research challenges in energy-aware resource management, and develop efficient policies and algorithms for virtualized data centers so that Cloud computing can be a more sustainable and eco-friendly mainstream technology to drive commercial, scientific, and technological advancements for future generations. Specifically, our work aims to:

  • Define an architectural framework and principles for energy-efficient Cloud computing.

  • Investigate energy-aware resource provisioning and allocation algorithms that provision data center resources to client applications in a way that improves the energy efficiency of a data center, without violating the negotiated SLAs.

  • Develop autonomic and energy-aware mechanisms for self-managing changes in the state of resources effectively and efficiently to satisfy service obligations and achieve energy efficiency.

  • Develop algorithms for energy-efficient mapping of VMs to suitable Cloud resources in addition to dynamic consolidation of VM resource partitions.

  • Explore open research challenges in energy-efficient resource management for virtualized Cloud data centers to facilitate advancements of the state-of-the-art operational Cloud environments.

The rest of the paper is organized as follows. Section 2 discusses related work, followed by the Green Cloud architecture and principles for energy-efficient Cloud computing presented in Section 3. The proposed energy-aware resource allocation algorithms are discussed in Section 4. A performance analysis of the proposed energy-aware resource provisioning and allocation algorithms is presented in Section 5. In Section 6 we discuss our vision of open research challenges in energy-efficient Cloud computing. Section 7 concludes the paper with a summary and future research directions.

Section snippets

Related work

One of the first works in which power management was applied at the data center level was done by Pinheiro et al. [7]. In this work the authors proposed a technique for minimizing power consumption in a heterogeneous cluster of computing nodes serving multiple web applications. The main technique applied to minimize power consumption is concentrating the workload on the minimum number of physical nodes and switching idle nodes off. This approach requires dealing with the
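To make the consolidation idea concrete, the following Python sketch illustrates the general principle (it is not Pinheiro et al.'s actual algorithm): workloads are packed onto as few nodes as possible with a first-fit-decreasing heuristic, and nodes left idle become candidates for switching off. All names and values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Node:
    capacity: float      # normalized CPU capacity (1.0 = fully available)
    load: float = 0.0    # load currently assigned to this node

def pack_workloads(demands, nodes):
    """Assign each demand to the first node with spare capacity
    (first-fit decreasing); return the nodes left idle."""
    for d in sorted(demands, reverse=True):        # largest demands first
        for node in nodes:
            if node.load + d <= node.capacity:
                node.load += d
                break
        else:
            raise RuntimeError(f"demand {d:.2f} does not fit on any node")
    return [n for n in nodes if n.load == 0.0]     # candidates to switch off

nodes = [Node(capacity=1.0) for _ in range(4)]
idle = pack_workloads([0.5, 0.3, 0.4, 0.2], nodes)
print(f"{len(idle)} node(s) can be switched off")  # -> 2 for these demands
```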

Architectural framework

Clouds aim to drive the design of the next generation data centers by architecting them as networks of virtual services (hardware, database, user-interface, application logic) so that users can access and deploy applications from anywhere in the world on demand at competitive costs depending on their QoS requirements [30]. Fig. 1 shows the high-level architecture for supporting energy-efficient service allocation in a Green Cloud computing infrastructure [6]. There are basically four main

Energy-aware allocation of data center resources

Recent developments in virtualization have resulted in its proliferation across data centers. By supporting the movement of VMs between physical nodes, virtualization enables dynamic migration of VMs according to performance requirements. When VMs do not use all the provided resources, they can be logically resized and consolidated onto the minimum number of physical nodes, while idle nodes can be switched to sleep mode to eliminate idle power consumption and reduce the total energy consumption
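As an illustration of such consolidation, the sketch below implements a simple threshold-based policy: a host whose CPU utilization falls below a lower bound has all its VMs selected for migration so the host can be put to sleep. The threshold value and all names are illustrative assumptions, not the exact policy proposed in this paper.

```python
LOWER_THRESHOLD = 0.3   # assumed lower utilization bound (illustrative)

def utilization(host):
    """Fraction of a host's CPU capacity used by its VMs."""
    return sum(vm["cpu"] for vm in host["vms"]) / host["capacity"]

def plan_consolidation(hosts):
    """Return (vms_to_migrate, hosts_to_sleep) for under-utilized hosts."""
    migrations, to_sleep = [], []
    for host in hosts:
        if host["vms"] and utilization(host) < LOWER_THRESHOLD:
            migrations.extend(host["vms"])   # migrate everything away...
            to_sleep.append(host["id"])      # ...then switch the host to sleep
    return migrations, to_sleep

hosts = [
    {"id": "h0", "capacity": 100.0, "vms": [{"id": "vm0", "cpu": 10.0}]},
    {"id": "h1", "capacity": 100.0, "vms": [{"id": "vm1", "cpu": 70.0}]},
]
migs, sleep = plan_consolidation(hosts)
print(migs, sleep)   # h0 is under-utilized: vm0 migrates, h0 sleeps
```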

Performance analysis

In this section, we discuss a performance analysis of the energy-aware allocation heuristics presented in Section 4. In our experiments, we calculate the time needed to perform a live migration of a VM as the size of its memory divided by the available network bandwidth. This is justified because, to enable live migration, the images and data of VMs must be stored on Network Attached Storage (NAS); therefore, copying the VM’s storage is not required. Live migration creates an extra CPU load;
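The migration-time estimate used in the experiments can be expressed directly in code. The sketch below follows the stated formula (VM memory size divided by available network bandwidth); the unit conversion and the example values are our own assumptions.

```python
def migration_time(memory_mb: float, bandwidth_mbps: float) -> float:
    """Estimated live-migration time in seconds: memory size / bandwidth.
    Memory is given in megabytes, bandwidth in megabits per second."""
    return (memory_mb * 8) / bandwidth_mbps   # MB -> megabits, then / (Mb/s)

print(f"{migration_time(1024, 1000):.1f} s")  # 1 GB of RAM over a 1 Gb/s link
```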

Open challenges

The virtualization technology, on which Cloud computing environments heavily rely, provides the ability to transfer VMs between physical nodes using live or offline migration. This enables the technique of dynamic consolidation of VMs onto the minimum number of physical nodes according to the current resource requirements. As a result, idle nodes can be switched off or put into a power-saving mode (e.g. sleep, hibernate) to reduce the total energy consumption of the data center. In this paper we have
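A back-of-the-envelope calculation shows why switching idle nodes off or to sleep matters: idle servers still draw a large fraction of peak power. The power figures below (idle draw around 70% of a 250 W peak, roughly 10 W in sleep) are common assumptions in the literature rather than measurements from this paper.

```python
IDLE_POWER_W = 175.0    # assumed idle draw of one server (~70% of 250 W peak)
SLEEP_POWER_W = 10.0    # assumed draw in sleep mode

def energy_saved_kwh(idle_nodes: int, hours: float) -> float:
    """Energy saved by putting idle nodes to sleep for `hours` hours."""
    return idle_nodes * (IDLE_POWER_W - SLEEP_POWER_W) * hours / 1000.0

print(f"{energy_saved_kwh(10, 24):.1f} kWh/day")  # 10 idle nodes, one day
```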

Concluding remarks and future directions

This work advances the Cloud computing field in two ways. First, it plays a significant role in reducing data center energy consumption costs, and thus helps to develop a strong and competitive Cloud computing industry. Second, consumers are increasingly becoming conscious of the environment. A recent study shows that data centers represent a large and rapidly growing energy consumption sector of the economy and a significant source of CO2 emissions [40]. Reducing greenhouse gas

Acknowledgments

This is a substantially extended version of the keynote paper presented at PDPTA 2010 [6]. We thank Yoganathan Sivaram (Melbourne University), external reviewers and the Guest Editor of this special issue for their suggestions on enhancing the quality of the paper.

References (41)

  • E. Pinheiro, R. Bianchini, E.V. Carrera, T. Heath, Load balancing and unbalancing for power and performance in...
  • J.S. Chase et al.

    Managing energy and server resources in hosting centers

  • E. Elnozahy et al.

    Energy-efficient server clusters

    Power-Aware Computer Systems

    (2003)
  • R. Nathuji et al.

    VirtualPower: coordinated power management in virtualized enterprise systems

    ACM SIGOPS Operating Systems Review

    (2007)
  • R. Raghavendra et al.

    No “power” struggles: coordinated multi-level power management for the data center

    SIGARCH Computer Architecture News

    (2008)
  • D. Kusic et al.

    Power and performance management of virtualized computing environments via lookahead control

    Cluster Computing

    (2009)
  • S. Srikantaiah et al.

    Energy aware consolidation for cloud computing

    Cluster Computing

    (2009)
  • M. Cardosa, M. Korupolu, A. Singh, Shares and utilities based power consolidation in virtualized server environments,...
  • A. Verma et al.

    pMapper: power and migration cost aware application placement in virtualized systems

  • A. Gandhi et al.

    Optimal power allocation in server farms

    Anton Beloglazov is a Ph.D. student at the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the Computer Science and Software Engineering department of the University of Melbourne, Australia. He has completed his Bachelor’s and Master’s degrees in Informatics and Computer Science at the faculty of Automation and Computer Engineering of Novosibirsk State Technical University, Russian Federation. During his Ph.D. studies, Anton Beloglazov is actively involved in research on energy- and performance-efficient resource management in virtualized data centers for Cloud computing. He has been contributing to the development of the CloudSim toolkit, a modern open-source framework for modeling and simulation of Cloud computing infrastructures and services. Anton Beloglazov has publications in internationally recognized conferences and journals, such as the 7th International Workshop on Middleware for Grids, Clouds and e-Science (MGC 2009); the 10th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing (CCGrid 2010); and Software: Practice and Experience (SPE 2009). He is a frequent reviewer for research conferences and journals, such as the European Conference on Information Systems (ECIS), the IEEE International Workshop on Internet and Distributed Computing Systems (IDCS), the International Conference on High Performance Computing (HiPC), the International Conference on Software, Telecommunications and Computer Networks (SoftCOM), and the Journal of Network and Computer Applications (JNCA). For further information please visit the website: http://beloglazov.info/.

    Jemal Abawajy is an associate professor, Deakin University, Australia. Dr. Abawajy is the director of the “Pervasive Computing & Networks” research groups at Deakin University. The research group includes 15 Ph.D. students, several masters and honors students and other staff members. Dr. Abawajy is actively involved in funded research in robust, secure and reliable resource management for pervasive computing (mobile, clusters, enterprise/data grids, web services) and networks (wireless and sensors) and has published more than 200 research articles in refereed international conferences and journals as well as a number of technical reports. Dr. Abawajy has given keynote/invited talks at many conferences. Dr. Abawajy has guest-edited several international journals and served as an associate editor of international conference proceedings. In addition, he is on the editorial board of several international journals. Dr. Abawajy has been a member of the organizing committee for over 100 international conferences serving in various capacity including chair, general co-chair, vice-chair, best paper award chair, publication chair, session chair and program committee. He is also a frequent reviewer for international research journals (e.g., FGCS, TPDS and JPDC), research grant agencies, and Ph.D. examinations.

    Rajkumar Buyya is Professor of Computer Science and Software Engineering, and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne, Australia. He is also serving as the founding CEO of Manjrasoft Pty Ltd., a spin-off company of the University, commercializing its innovations in Grid and Cloud Computing. He has authored and published over 300 research papers and four text books. The books on emerging topics that Dr. Buyya edited include High Performance Cluster Computing (Prentice Hall, USA, 1999), Content Delivery Networks (Springer, Germany, 2008) and Market-Oriented Grid and Utility Computing (Wiley, USA, 2009). He is one of the highly cited authors in computer science and software engineering worldwide (h-index = 52, g-index = 112, 14,500+ citations). Software technologies for Grid and Cloud computing developed under Dr. Buyya’s leadership have gained rapid acceptance and are in use at several academic institutions and commercial enterprises in 40 countries around the world. Dr. Buyya has led the establishment and development of key community activities, including serving as foundation Chair of the IEEE Technical Committee on Scalable Computing and four IEEE conferences (CCGrid, Cluster, Grid, and e-Science). He has presented over 200 invited talks on his vision of IT futures and advanced computing technologies at international conferences and institutions in Asia, Australia, Europe, North America, and South America. These contributions and the international research leadership of Dr. Buyya are recognized through the award of the “2009 IEEE Medal for Excellence in Scalable Computing” from the IEEE Computer Society, USA. For further information on Dr. Buyya, please visit his cyberhome: www.buyya.com.
