Elsevier

Advances in Computers

Volume 82, 2011, Pages 47-111

Chapter 3 - A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems

https://doi.org/10.1016/B978-0-12-385512-1.00003-7

Abstract

Traditionally, the development of computing systems has focused on performance improvements driven by the demand of applications from consumer, scientific, and business domains. However, the ever-increasing energy consumption of computing systems has started to limit further performance growth due to overwhelming electricity bills and carbon dioxide footprints. Therefore, the goal of computer system design has shifted to power and energy efficiency. To identify open challenges in the area and facilitate future advancements, it is essential to synthesize and classify the research on power- and energy-efficient design conducted to date. In this study, we discuss causes and problems of high power/energy consumption, and present a taxonomy of energy-efficient design of computing systems covering the hardware, operating system, virtualization, and data center levels. We survey various key works in the area and map them onto our taxonomy to guide future design and development efforts. This chapter concludes with a discussion of advancements identified in energy-efficient computing and our vision for future research directions.

Introduction

The primary focus of computing system designers and the industry has been on improving system performance. Driven by this objective, performance has been steadily growing thanks to more efficient system design and the increasing density of components described by Moore's law [1]. Although the performance per watt ratio has been constantly rising, the total power drawn by computing systems is hardly decreasing. On the contrary, it has been increasing every year, as illustrated by the estimated average power use across three classes of servers presented in Table I [2]. If this trend continues, the cost of the energy consumed by a server during its lifetime will exceed the hardware cost [3]. The problem is even worse for large-scale compute infrastructures, such as clusters and data centers. It was estimated that in 2006 IT infrastructures in the United States consumed about 61 billion kWh at a total electricity cost of about 4.5 billion dollars [4]. This is more than double what was consumed by IT in 2000. Moreover, under current efficiency trends, energy consumption is projected to double again by 2011, resulting in an annual cost of 7.4 billion dollars.

Energy consumption is not only determined by hardware efficiency; it also depends on the resource management system deployed on the infrastructure and the efficiency of applications running in the system. This interconnection between energy consumption and the different levels of computing systems can be seen in Fig. 1. Energy efficiency impacts end-users in terms of resource usage costs, which are typically determined by the total cost of ownership (TCO) incurred by a resource provider. Higher power consumption results not only in higher electricity bills but also in additional requirements for the cooling system and power delivery infrastructure, that is, uninterruptible power supplies (UPS), power distribution units (PDU), and so on. With the growing density of computer components, the cooling problem becomes crucial, as more heat has to be dissipated per square meter. The problem is especially important for 1U and blade servers. These types of servers are the most difficult to cool because of the high density of components and the resulting lack of space for airflow. Blade servers bring the advantage of more computational power in less rack space. For example, 60 blade servers can be installed into a standard 42U rack [5]. However, such a system requires more than 4000 W to supply the resources and cooling system, compared with about 2500 W for the same rack filled with 1U servers. Moreover, peak power consumption tends to limit further performance improvements due to constraints of power distribution facilities. For example, to power a server rack in a typical data center, it is necessary to provide about 60 A [6]. Even if the cooling problem can be addressed for future systems, it is likely that delivering current in such data centers will reach the limits of power delivery.
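The rack-level figures above translate into substantial electricity bills. The sketch below estimates the annual energy cost of a constant load; the rack power values (4000 W for blades, 2500 W for 1U servers) are taken from the text, while the electricity price of $0.10/kWh is an illustrative assumption.

```python
# Rough annual electricity cost for a rack under constant load.
# Rack power values come from the text above; the price per kWh
# is an illustrative assumption, not a figure from the chapter.

HOURS_PER_YEAR = 24 * 365  # 8760 h

def annual_cost_usd(power_watts, price_per_kwh=0.10):
    """Annual electricity cost for a constant load in watts."""
    energy_kwh = power_watts / 1000 * HOURS_PER_YEAR
    return energy_kwh * price_per_kwh

blade_rack = annual_cost_usd(4000)  # 42U rack of 60 blade servers
one_u_rack = annual_cost_usd(2500)  # same rack filled with 1U servers
print(f"Blade rack: ${blade_rack:,.0f}/year, 1U rack: ${one_u_rack:,.0f}/year")
```

Even before cooling inefficiencies are accounted for, the 1500 W difference between the two configurations amounts to over a thousand dollars per rack per year under these assumptions.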

Apart from the overwhelming operating costs and the total cost of acquisition (TCA), another rising concern is the environmental impact in terms of carbon dioxide (CO2) emissions caused by high energy consumption. Therefore, the reduction of power and energy consumption has become a first-order objective in the design of modern computing systems. The roots of energy-efficient computing, or Green IT, practices can be traced back to 1992, when the U.S. Environmental Protection Agency launched Energy Star, a voluntary labeling program designed to identify and promote energy-efficient products in order to reduce greenhouse gas emissions. Computers and monitors were the first labeled products. This has led to the widespread adoption of the sleep mode in electronic devices. At that time, the term "green computing" was introduced to refer to energy-efficient personal computers [7]. Around the same time, the Swedish Confederation of Professional Employees developed the TCO certification program, a series of end-user and environmental requirements for IT equipment including video adapters, monitors, keyboards, computers, peripherals, IT systems, and even mobile phones. This program was later extended to include requirements on ergonomics, magnetic and electrical field emission levels, energy consumption, noise level, and the use of hazardous compounds in hardware. The Energy Star program was revised in October 2006 to include stricter efficiency requirements for computer equipment, along with a tiered ranking system for approved products.

There are a number of industry initiatives aiming at the development of standardized methods and techniques for the reduction of the energy consumption in computer environments. They include Climate Savers Computing Initiative (CSCI), Green Computing Impact Organization, Inc. (GCIO), Green Electronics Council, The Green Grid, International Professional Practice Partnership (IP3), with membership of companies such as AMD, Dell, HP, IBM, Intel, Microsoft, Sun Microsystems, and VMware.

Energy-efficient resource management was first introduced in the context of battery-powered mobile devices, where energy consumption has to be reduced in order to improve the battery lifetime. Although techniques developed for mobile devices can be applied or adapted for servers and data centers, these kinds of systems require specific methods. In this chapter, we discuss ways to reduce power and energy consumption in computing systems, as well as recent research works that deal with power and energy efficiency at the hardware and firmware, operating system (OS), virtualization, and data center levels. The main objective of this work is to give an overview of recent research advancements in energy-efficient computing, identify common characteristics, and classify the approaches. In addition, the aim is to show the level of development in the area and discuss open research challenges and directions for future work. The remainder of this chapter is organized as follows: in the next section, power and energy models are introduced; in Section 3, we discuss problems caused by high power and energy consumption; in Sections 4 through 8, we present the taxonomy and survey of research on the energy-efficient design of computing systems at the hardware and firmware level (including dynamic component deactivation, dynamic performance scaling such as dynamic voltage and frequency scaling, and the Advanced Configuration and Power Interface), the operating system level (the on-demand governor of the Linux kernel, ECOsystem, Nemesis OS, the Illinois GRACE project, Linux/RK, Coda and Odyssey, and PowerNap), the virtualization level (Xen, VMware, the Kernel-based Virtual Machine, and energy management for hypervisor-based VMs), and the data center level; this is followed by a conclusion and directions for future work in Section 9.

Power and Energy Models

To understand power and energy management mechanisms, it is essential to clarify the terminology. Electric current is the flow of electric charge, measured in amperes; one ampere corresponds to one coulomb of electric charge transferred by a circuit per second. Power and energy can be defined in terms of the work that a system performs. Power is the rate at which the system performs the work, while energy is the total amount of work performed over a period of time. Power and energy are measured in watts (W) and joules (J), respectively.
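For a constant load, the relationship between the two quantities is simply E = P·t. The following small sketch, using illustrative values, shows the conversion between joules and the kilowatt-hours used for billing elsewhere in this chapter:

```python
# Power (W) is the rate of doing work; energy (J) is power integrated
# over time. For constant power: E = P * t.

def energy_joules(power_watts, seconds):
    """Energy consumed by a constant load over a time interval."""
    return power_watts * seconds

def joules_to_kwh(joules):
    # 1 kWh = 1000 W * 3600 s = 3.6e6 J
    return joules / 3.6e6

# A 200 W server running for one hour (illustrative values):
e = energy_joules(200, 3600)   # 720,000 J
print(joules_to_kwh(e))        # 0.2 kWh
```

This distinction matters throughout the chapter: lowering power does not necessarily lower energy if the work then takes proportionally longer to complete.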

Problems of High Power and Energy Consumption

The energy consumption of computing facilities raises various monetary, environmental, and system performance concerns. A recent study on the power consumption of server farms [2] has shown that in 2005 the electricity used by servers worldwide (including their associated cooling and auxiliary equipment) cost 7.2 billion dollars. The study also indicates that the electricity consumption in that year had doubled compared to the consumption in 2000. Clearly, there are environmental issues with the

Taxonomy of Power/Energy Management in Computing Systems

A large volume of research has been done in the area of power- and energy-efficient resource management in computing systems. As power and energy management techniques are closely connected, from this point on we will refer to them collectively as power management. As shown in Fig. 5, high-level power management techniques can be divided into static power management (SPM) and dynamic power management (DPM). From the hardware point of view, SPM comprises all the optimization methods that are applied at design time at the circuit, logic,

Hardware and Firmware Level

As shown in Fig. 6, DPM techniques applied at the hardware and firmware level can be broadly divided into two categories: dynamic component deactivation (DCD) and dynamic performance scaling (DPS). DCD techniques are built upon the idea of clock gating parts of an electronic component, or disabling them completely, during periods of inactivity.

The problem could be easily solved if transitions between power states caused negligible power and performance overhead. However, transitions to
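The savings achievable by DPS techniques, and DVFS in particular, follow from the standard CMOS dynamic power model, P_dynamic = C·V²·f. The sketch below uses illustrative capacitance, voltage, and frequency values (not figures from this chapter) to show why scaling voltage together with frequency reduces power much faster than it reduces performance:

```python
# Standard CMOS dynamic power model underlying DVFS:
#   P_dynamic = C * V^2 * f
# (C: switched capacitance, V: supply voltage, f: clock frequency).
# Because supply voltage can often be lowered along with frequency,
# power falls roughly cubically with frequency while execution time
# grows only linearly, so energy per task can decrease.

def dynamic_power(capacitance, voltage, frequency):
    """Dynamic power dissipation in watts for the C*V^2*f model."""
    return capacitance * voltage**2 * frequency

# Illustrative operating points (assumed, not measured):
full = dynamic_power(1e-9, 1.2, 2.0e9)  # full speed: 2 GHz at 1.2 V
half = dynamic_power(1e-9, 0.9, 1.0e9)  # half speed: 1 GHz at 0.9 V
print(half / full)  # power ratio is far below the 0.5 performance ratio
```

Here halving the frequency while dropping the voltage from 1.2 V to 0.9 V cuts dynamic power to about 28% of the full-speed value, which is the basic trade-off DVFS governors exploit.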

Operating System Level

In this section, we discuss research works that deal with power-efficient resource management at the OS level. The taxonomy of the characteristics used to classify the works is presented in Fig. 7. To highlight the most important characteristics of the works, they are summarized in Table II (the full table is given in Appendix A).
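Among the OS-level works surveyed, the Linux on-demand governor (Section 6.1) is representative: it raises the CPU to its highest frequency when utilization crosses an upper threshold and otherwise picks the lowest frequency that can sustain the current load. The following is a minimal sketch of that policy only; the frequency table and the 80% threshold are illustrative assumptions, and the real governor's parameters and kernel interface differ.

```python
# Simplified sketch of an ondemand-style frequency governor.
# Frequency table and threshold are illustrative assumptions,
# not the Linux kernel's actual defaults or interface.

FREQS_MHZ = [800, 1600, 2400]  # assumed available P-states, ascending
UP_THRESHOLD = 0.80            # jump to max frequency above 80% load

def next_freq(utilization, current_mhz):
    """Pick the next CPU frequency given recent utilization (0..1)."""
    if utilization > UP_THRESHOLD:
        return FREQS_MHZ[-1]
    # Otherwise choose the lowest frequency that keeps projected
    # utilization at or below the threshold.
    needed = utilization * current_mhz / UP_THRESHOLD
    for f in FREQS_MHZ:
        if f >= needed:
            return f
    return FREQS_MHZ[-1]

print(next_freq(0.95, 1600))  # busy CPU: jump to 2400 MHz
print(next_freq(0.20, 2400))  # lightly loaded: step down to 800 MHz
```

The asymmetry (jump up fast, step down conservatively) keeps the performance penalty of scaling small while still harvesting energy savings during low load, which is the design choice the surveyed governor is known for.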

Virtualization Level

The virtualization level enables the abstraction of an OS and the applications running on it from the hardware. Physical resources can be split into a number of logical slices called VMs. Each VM can accommodate an individual OS, creating for the user a view of a dedicated physical resource and ensuring performance and failure isolation between VMs sharing a single physical machine. The virtualization layer lies between the hardware and the OS and, therefore, a virtual machine monitor (VMM) takes

Data Center Level

In this section, we discuss recent research efforts in the area of power management at the data center level. Most approaches to energy-efficient resource management at the data center level are based on the idea of consolidating the workload onto the minimum number of physical resources. Switching off idle resources reduces energy consumption and increases resource utilization, thereby lowering the TCO and speeding up the return on investment (ROI).
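The consolidation idea can be viewed as a bin-packing problem: pack VM resource demands onto as few hosts as possible so that the remaining hosts can be switched off. The first-fit-decreasing heuristic below is a simplified illustration of this general idea with made-up demands and capacities; it is not a specific algorithm from any of the surveyed works, which typically also account for migration costs and performance constraints.

```python
# Hedged sketch: workload consolidation as bin packing, using the
# first-fit-decreasing heuristic. Demands and capacities are
# normalized, illustrative values (fraction of one host's CPU).

def consolidate(vm_demands, host_capacity):
    """Place VMs on as few hosts as possible (first-fit decreasing)."""
    free = []        # remaining capacity per active host
    placement = {}   # vm name -> host index
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, cap in enumerate(free):
            if cap >= demand:          # fits on an already-active host
                free[i] -= demand
                placement[vm] = i
                break
        else:                          # no fit: power on a new host
            free.append(host_capacity - demand)
            placement[vm] = len(free) - 1
    return placement, len(free)

demands = {"vm1": 0.5, "vm2": 0.3, "vm3": 0.4, "vm4": 0.2, "vm5": 0.1}
plan, n_hosts = consolidate(demands, host_capacity=1.0)
print(n_hosts)  # total demand of 1.5 packed onto 2 hosts
```

With five VMs whose demands sum to 1.5 host-capacities, the heuristic activates only two hosts; any further hosts in the pool could be switched off or put into a low-power state, which is precisely the source of the energy savings discussed in this section.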

Conclusions and Future Directions

In recent years, energy efficiency has emerged as one of the most important design requirements for modern computing systems, ranging from single servers to data centers and Clouds, as they continue to consume enormous amounts of electrical power. Apart from the high operating costs incurred by computing resources, this leads to significant emissions of CO2 into the environment. For example, IT infrastructures currently contribute about 2% of the total CO2 footprint. Unless energy-efficient

Acknowledgments

We would like to thank Adam Wierman (California Institute of Technology), Kresimir Mihic (Stanford University), and Saurabh Kumar Garg (University of Melbourne) for their constructive comments and suggestions on improving this chapter.

References (70)

  • R. Buyya et al.

    Cloud computing and emerging IT platforms: vision, hype, and reality for delivering computing as the 5th utility

    Future Generation Comput. Syst.

    (2009)
  • G.E. Moore

    Cramming more components onto integrated circuits

    Proc. IEEE

    (1998)
  • J.G. Koomey

    Estimating Total Power Consumption by Servers in the US and the World

    (2007)
  • L. Barroso

    The Price of Performance

    (2005)
  • R. Brown et al.

    Report to Congress on Server and Data Center Energy Efficiency: Public Law 109–431

    (2008)
  • L. Minas et al.

    Energy Efficiency for Information Technology: How to Reduce Power Consumption in Servers and Data Centers

    (2009)
  • P. Ranganathan et al.

    Ensemble-level power management for dense blade servers

  • S. Rowe
  • V. Venkatachalam et al.

    Power reduction techniques for microprocessor systems

ACM Comput. Surv.

    (2005)
  • L.A. Barroso et al.

    The case for energy-proportional computing

    Computer

    (2007)
  • X. Fan et al.

    Power provisioning for a warehouse-sized computer

  • M. Blackburn

    Five Ways to Reduce Data Center Server Power Consumption

    (2008)
  • Thermal Guidelines for Data Processing Environments

    (2004)
  • G. Dhiman et al.

A system for online power prediction in virtualized environments using Gaussian mixture models

  • G. Koch

    Discovering multi-core: extending the benefits of Moore's law

    Technology

    (2005)
  • F. Petrini et al.

    What are the future trends in high-performance interconnects for parallel computers?

  • C. Pettey
  • S. Devadas et al.

    A survey of optimization techniques targeting low power VLSI circuits

  • V. Tiwari et al.

    Technology mapping for low power

  • V. Pallipadi et al.

    The ondemand governor

  • E. Elnozahy et al.

    Energy-efficient server clusters

    Power Aware Comput. Syst.

    (2003)
  • E. Pinheiro et al.

    Load balancing and unbalancing for power and performance in cluster-based systems

  • L. Benini et al.

    A survey of design techniques for system-level dynamic power management

    IEEE Trans. VLSI Syst.

    (2000)
  • S. Albers

    Energy-efficient algorithms

    Commun. ACM

    (2010)
  • M.B. Srivastava et al.

    Predictive system shutdown and other architectural techniques for energy efficient programmable computation

    IEEE Trans. VLSI Syst.

    (1996)
  • C.H. Hwang et al.

    A predictive system shutdown method for energy saving of event-driven computation

    ACM Trans. Des. Autom. Electron. Syst.

    (2000)
  • F. Douglis et al.

    Adaptive disk spin-down policies for mobile computers

    Comput. Syst.

    (1995)
  • G. Buttazzo

    Scalable applications for energy-aware processors

  • M. Weiser et al.

    Scheduling for reduced CPU energy

    Mobile Comput.

    (1996)
  • K. Govil et al.

Comparing algorithms for dynamic speed-setting of a low-power CPU

  • A. Wierman et al.

    Power-aware speed scaling in processor sharing systems

  • L.L. Andrew et al.

    Optimality, fairness, and robustness in speed scaling designs

  • A. Weissel et al.

    Process cruise control: event-driven clock scaling for dynamic power management

  • K. Flautner et al.

    Automatic performance setting for dynamic voltage scaling

    Wireless Netw.

    (2002)
  • S. Lee et al.

    Run-time voltage hopping for low-power real-time systems
