
Journal of Systems and Software

Volume 155, September 2019, Pages 91-103

In Practice
BrownoutCon: A software system based on brownout and containers for energy-efficient cloud computing

https://doi.org/10.1016/j.jss.2019.05.031

Highlights

  • Model that manages the containers and resources in a fine-grained manner.

  • Software system based on Docker Swarm to provide energy-efficient approaches.

  • Evaluations of the proposed software system on the French Grid’5000 infrastructure.

Abstract

VM consolidation and Dynamic Voltage Frequency Scaling (DVFS) approaches have proven effective in reducing energy consumption in cloud data centers. However, the existing approaches cannot function efficiently when the whole data center is overloaded. An approach called brownout has been proposed to address this limitation by dynamically deactivating or activating optional microservices or containers. In this paper, we propose a brownout-based software system for container-based clouds that handles overloads and reduces power consumption. We present its design and implementation based on Docker Swarm containers. The proposed system integrates with existing Docker Swarm deployments without modifying their configurations. To demonstrate the potential of the BrownoutCon software in offering energy-efficient services in brownout situations, we implemented several policies to manage containers and conducted experiments on the French Grid’5000 cloud infrastructure. The results show that the policies currently implemented in our software system can save about 10%–40% more energy than the existing baselines while ensuring quality of service.

Introduction

Cloud computing has been regarded as a new paradigm for resource and service provisioning, offering vital benefits to the IT industry by lowering operational costs and human expenses. However, the huge amount of energy consumption and carbon emissions resulting from cloud data centers has become a significant concern for researchers. Nowadays, data centers contain thousands of servers, their sizes range from 300 to 4500 square meters, and they can consume more than 27,000 kWh of energy per day (Mastelic et al., 2015). It is estimated that, in 2010, data centers consumed 1.1% to 1.5% of the total electricity worldwide (Mastelic et al., 2015). Moreover, the excessive usage of brown energy to generate power increases carbon emissions. It is also reported that data centers account for about 2% of the total carbon emissions released into the atmosphere worldwide (Lavallée, 2014). Recently, some dominant service providers have established a community, called the Green Grid, to promote energy-efficient techniques that minimize the environmental impact of data centers (Beloglazov et al., 2012).

Unfortunately, reducing energy consumption is a challenging mission, as applications and data are growing more complex and consuming more computational resources (Liu et al., 2012). Applications and data generally need to be processed within a required time, and large, powerful servers are provisioned to meet this requirement. To ensure the sustainability of future growth, cloud data centers are required to utilize their computing infrastructure efficiently and minimize energy consumption. To address this problem, the concept of the green cloud was proposed, which aims to reduce power consumption, energy cost and carbon emissions, and to optimize renewable energy usage (Kong and Liu, 2015; Buyya et al., 2018). Therefore, in addition to resource provisioning and Quality of Service (QoS) assurance, data centers are required to be energy-efficient.

The dominant methods to improve resource utilization and reduce energy consumption are Virtual Machine (VM) consolidation (Beloglazov et al., 2012) and Dynamic Voltage Frequency Scaling (DVFS) (Kim et al., 2011). VM consolidation migrates VMs away from underutilized hosts to minimize the number of active hosts, and the idle hosts are switched to a low-power mode to save energy. DVFS reduces energy usage by dynamically scaling the voltage and frequency: when a host is underutilized, the frequency is scaled down to reduce power. These approaches have proven effective in reducing a data center's power consumption; however, when the whole data center is overloaded, neither of them functions efficiently. For example, VMs cannot be migrated if all the hosts are overloaded.
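For illustration only, the sketch below shows one common way DVFS can be driven on a Linux host through the standard cpufreq sysfs interface: when utilization is low, the CPU governor is switched to a power-saving mode. The utilization threshold and governor names are assumptions, and this is not necessarily the mechanism used in the works cited above.

    # Minimal sketch: switch a Linux host's CPU frequency governor based on utilization,
    # using the standard cpufreq sysfs interface. Threshold and governor names are
    # illustrative; availability depends on the platform's cpufreq driver.
    import glob

    LOW_UTIL_THRESHOLD = 0.3  # assumed utilization threshold, not taken from the paper

    def set_governor(governor: str) -> None:
        """Write the chosen governor (e.g. 'powersave' or 'performance') for every CPU core."""
        for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
            with open(path, "w") as f:  # requires root privileges
                f.write(governor)

    def apply_dvfs(cpu_utilization: float) -> None:
        # When the host is underutilized, scale down to save power; otherwise run at full speed.
        set_governor("powersave" if cpu_utilization < LOW_UTIL_THRESHOLD else "performance")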

In data centers, another reason for high energy consumption is that computing resources are utilized inefficiently by applications. Thus, applications are increasingly built with the microservice paradigm in order to utilize resources more efficiently. Microservices are self-contained application components that encapsulate their logic and expose their functionality via interfaces to enable flexible deployment and replacement. With microservices or components, developers and users gain technological heterogeneity, resilience, scalability, ease of deployment, organizational alignment, composability and optimization for replaceability (Newman, 2015). In addition, microservices bring the benefit of more fine-grained control over application resource utilization.

To overcome the limitations of VM consolidation and DVFS, as well as to improve application resource utilization, we take advantage of brownout, a paradigm inspired by voltage reductions used to cope with emergencies, in which light bulbs emit less light to save power (Xu and Buyya, 2019). Brownout has also been applied to cloud scenarios, especially for microservices or application components that can be temporarily deactivated to enhance system robustness. In brownout-compliant microservices, a control knob called the dimmer expresses the probability that an optional microservice is executed (Klein et al., 2014). When requests burst and the system becomes overloaded, brownout is triggered to temporarily degrade the user experience, thereby relieving the overload and saving energy.
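To make the dimmer concrete, the following is a minimal sketch of a brownout-compliant request handler in which the dimmer value is the probability that the optional part of the application runs for a given request. The function names (serve_core, serve_recommendations) are illustrative placeholders, not part of any cited system.

    # Minimal sketch of a brownout-compliant request handler: the dimmer value is the
    # probability that the optional microservice is executed for a given request.
    import random

    class Dimmer:
        def __init__(self, value: float = 1.0):
            self.value = value  # 1.0 = always run optional parts, 0.0 = never run them

    def serve_core() -> str:
        return "core content"        # mandatory functionality

    def serve_recommendations() -> str:
        return "recommended items"   # optional functionality

    def handle_request(dimmer: Dimmer) -> dict:
        response = {"mandatory": serve_core()}    # the mandatory part always runs
        if random.random() < dimmer.value:        # optional part runs with probability dimmer.value
            response["optional"] = serve_recommendations()
        return response

    # Under overload, the brownout controller lowers the dimmer, e.g. handle_request(Dimmer(0.2)).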

Microservices can be equipped with the brownout feature. An example of an online shopping system with a recommendation engine is introduced in Klein et al. (2014). The recommendation engine enhances the functionality of the system and increases profits by recommending products to users. However, because the engine is not an essential component and requires more resources than other components, it is not mandatory to keep it running all the time, especially under overloaded conditions when requests experience long delays or are not served at all. Deactivating the engine enables service providers to serve more requests with essential requirements or QoS constraints. Apart from this example, the brownout paradigm can also be applied to other systems that allow application components to be deactivated, especially container-based systems in which applications are built with the microservice paradigm. With container technology, microservices can be functionally isolated, so deactivating some microservices will not affect the others. In addition, as microservices are lightweight, they can be deactivated and activated quickly to support the brownout approach.

In this paper, we propose and develop a software system, called BrownoutCon, which applies the brownout approach to deliver energy-efficient resource scheduling. The implementation of BrownoutCon is based on Docker Swarm (Docker, 2017), which provides container cluster management. The software system is designed and implemented as an add-on for Docker Swarm and does not require modifying Docker Swarm's configuration. The system also uses the public APIs of Grid'5000 (2017), a real testbed that provides power measurements for hosts. The aims of BrownoutCon are twofold: (1) providing an open-source software system based on brownout and Docker Swarm to manage containers; (2) offering an extensible software system for conducting research on reducing energy consumption and handling overloads in cloud data centers.
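As a sketch of what such an add-on can do without touching Docker Swarm's configuration, the snippet below uses the Docker SDK for Python to look up optional services and pause or resume them by scaling their replicas. The 'brownout=optional' service label and the replica counts are assumptions for illustration, not BrownoutCon's actual conventions.

    # Sketch of an external add-on that deactivates/reactivates optional microservices
    # through the Docker SDK for Python (docker-py), leaving Docker Swarm's own
    # configuration untouched. The 'brownout=optional' label is a hypothetical convention.
    import docker

    client = docker.from_env()

    def optional_services():
        # Services marked as optional via a label.
        return client.services.list(filters={"label": "brownout=optional"})

    def deactivate(service):
        service.scale(0)          # drop all replicas: the optional microservice stops serving

    def reactivate(service, replicas: int = 1):
        service.scale(replicas)   # restore replicas when the overload is relieved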

BrownoutCon is designed and implemented following the brownout-enabled system model from our previous works (Xu et al., 2016; Xu and Buyya, 2017). The system model distinguishes mandatory containers from optional containers, identified according to whether a container can be temporarily deactivated. The brownout controller is the key part of the system model: it manages brownout and provides the scheduling policies for containers. The problem of designing the brownout controller splits into several sub-problems (a control-loop sketch combining them is given after the list):

  1. Predicting the future workloads, so that the system can avoid overloads and foster system robustness.

  2. Determining whether a host is overloaded, so that the brownout controller can be triggered to relieve the overload.

  3. Deciding when to disable containers, so that the system can relieve overloads and reduce energy consumption while ensuring QoS constraints.

  4. Selecting the containers to be disabled, so that a better trade-off can be achieved between energy reduction and QoS constraints.

  5. Deciding when to turn hosts on or switch them into low-power mode, so that idle hosts can be put into low-power mode to save power consumption.
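The skeleton below shows, under several simplifying assumptions, how these five sub-problems could be combined into a single control loop. Every policy body is only a placeholder and the threshold and interval are assumed values; the concrete policies implemented in BrownoutCon are described in Section 4.

    # Skeleton of a brownout controller loop covering the five sub-problems above.
    # The threshold, interval and placeholder policies are illustrative assumptions.
    import time

    OVERLOAD_THRESHOLD = 0.8   # assumed CPU-utilization threshold for overload detection
    INTERVAL = 5               # seconds between scheduling rounds

    def predict_utilization(host):            # (1) workload prediction (placeholder)
        return host.get("cpu", 0.0)

    def select_optional_containers(host):     # (4) container selection (placeholder)
        return host.get("optional", [])

    def deactivate(container):
        print("deactivating", container)      # would call the container engine here

    def scale_hosts(hosts, predicted):        # (5) host scaling (placeholder)
        pass                                  # switch idle hosts into low-power mode here

    def control_loop(hosts):
        while True:
            predicted = {h["name"]: predict_utilization(h) for h in hosts}
            overloaded = [h for h in hosts
                          if predicted[h["name"]] > OVERLOAD_THRESHOLD]   # (2) overload detection
            if overloaded:                                                # (3) brownout trigger
                for host in overloaded:
                    for container in select_optional_containers(host):
                        deactivate(container)
            scale_hosts(hosts, predicted)
            time.sleep(INTERVAL)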

Compared with VM consolidation approaches, a software system based on brownout and containers has two advantages: (1) a container can be stopped or restarted in seconds, while a VM migration may take minutes, so scheduling with containers is more lightweight and flexible than with VMs; (2) the brownout-based approach provides an additional energy-efficiency option apart from VM consolidation and DVFS, and it can also be combined with VM consolidation to achieve better energy efficiency, especially when the whole data center is overloaded.

To evaluate the proposed system in practice, we conduct our experiments on the Grid'5000 (2017) real testbed. We also evaluate the performance of the proposed system with real traces derived from Wikipedia workloads.
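As a sketch of how such traces can drive an experiment, the snippet below reads a per-interval request-rate trace and issues HTTP requests at that rate against a deployed application. The trace file format, interval length and target URL are illustrative assumptions rather than the exact harness used in our evaluation.

    # Minimal sketch of replaying a request-rate trace (e.g. derived from Wikipedia logs)
    # against a deployed service. Trace format, interval length and URL are assumptions.
    import csv
    import threading
    import time
    import urllib.request

    TARGET_URL = "http://swarm-manager:8080/"   # hypothetical service endpoint
    INTERVAL_SECONDS = 60                       # assumed length of each trace interval

    def send_request():
        try:
            urllib.request.urlopen(TARGET_URL, timeout=10).read()
        except OSError:
            pass  # a real harness would record failures and response times

    def replay(trace_file: str) -> None:
        with open(trace_file) as f:
            rates = [int(row[0]) for row in csv.reader(f)]   # requests per interval
        for rate in rates:
            gap = INTERVAL_SECONDS / max(rate, 1)            # spacing between requests
            for _ in range(rate):
                threading.Thread(target=send_request).start()
                time.sleep(gap)
            if rate == 0:
                time.sleep(INTERVAL_SECONDS)                 # idle interval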

The main contributions of our work are as follows:

  • Proposed an effective system model that enables the brownout approach to manage containers and resources in a fine-grained manner;

  • Designed and developed a software system based on Docker Swarm to provide energy-efficient approaches for cloud data centers;

  • Experimental evaluation of our proposed software system on the French Grid’5000 infrastructure, enabling service providers to deploy microservices in an energy-efficient manner while ensuring QoS constraints.

The rest of the paper is organized as follows. Section 2 discusses the related work, followed by the system design and implementation in Section 3. The brownout-based policies implemented in BrownoutCon are presented in Section 4. In Section 5, we introduce our experimental setup and evaluate the performance of the implemented policies on the Grid’5000 testbed. Conclusions and future work are presented in Section 6.


Related work

It is estimated that U.S. data centers will consume 140 billion kWh of electricity annually by the year 2020, which equals the annual output of about 50 brown power plants with high carbon emissions (Delforge, Bawden). To minimize operational expenses and impacts on the environment, a variety of state-of-the-art works have been conducted to reduce data center energy consumption.

There is a close relationship between resource utilization and energy consumption, as inefficient

System architecture, design and implementation

The purpose of BrownoutCon is to provide a software system based on brownout and containers for energy-efficient cloud data centers. The system takes advantage of the public APIs of Docker Swarm and is evaluated on the Grid’5000 testbed. The system is designed to be extensible, which means new components can be added without modifying the original code or configuration of Docker Swarm and Grid’5000.

Our software system is deployed on Docker Swarm master and worker nodes. Docker Swarm

Policies implemented in BrownoutCon

To demonstrate the capability of the BrownoutCon software system, we incorporated several policies originally evaluated via simulation in Xu et al. (2016). As noted in Section 1, the scheduling problem can be divided into several sub-problems: (1) workload prediction; (2) overload detection; (3) brownout trigger; (4) selection of containers to deactivate; and (5) host scaling. In this section, we will introduce the implemented policies for reference. It is noted that the introduced policies are

Performance evaluation

In this section, we evaluate our proposed software prototype system by conducting experiments on the Grid’5000 infrastructure. The goals of this section are: (1) evaluating the behavior of the software system in an experimental environment, and (2) demonstrating the suitability of the proposed system for experimental evaluation of scheduling policies in a practical setting.

Conclusions and future work

In this paper, we proposed the design and development of a software system based on brownout and containers for energy-efficient clouds, called BrownoutCon. BrownoutCon is a transparent system based on Docker Swarm for container management and, by using Docker Swarm's APIs, does not require modifying its default configurations. BrownoutCon can be customized for implementing brownout-based algorithms, which dynamically activate or deactivate containers to handle overloads and reduce energy

Acknowledgments

This work is partially supported by the China Scholarship Council (CSC) and an Australian Research Council (ARC) Discovery Project. We thank Marcos Assuncao from INRIA (France) for providing access to the Grid’5000 infrastructure. We thank the Editors-in-Chief (Prof. Paris Avgeriou and Prof. David C. Shepherd), the Area Editor (Prof. Helen Karatza), and the anonymous reviewers for their excellent comments on improving the paper. We also thank Sukhpal Singh Gill and Shashikant Ilager for their comments on improving


References (46)

  • A. Beloglazov et al.

    Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers

    Concurr. Comput. Pract. Exper.

    (2012)
  • A. Beloglazov et al.

    Openstack neat: a framework for dynamic and energy-efficient consolidation of virtual machines in openstack clouds

    Concurr. Comput. Pract. Exper.

    (2015)
  • D. Bernstein

Containers and cloud: from LXC to Docker to Kubernetes

    IEEE Cloud Comput.

    (2014)
  • E. Boutin et al.

    Apollo: scalable and coordinated scheduling for cloud-scale computing

Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14)

    (2014)
  • R. Buyya et al.

    A manifesto for future generation cloud computing: research directions for the next decade

    ACM Comput. Surv.

    (2018)
  • Q. Chen et al.

    Utilization-based VM consolidation scheme for power efficiency in cloud data centers

    Proceedings of the IEEE International Conference on Communication Workshop

    (2015)
  • Delforge, P., 2014. Data center efficiency assessment - scaling up energy efficiency across the data center industry:...
  • G. Dhiman et al.

    vgreen: a system for energy efficient computing in virtualized environments

    Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design

    (2009)
  • Docker, 2017. Docker documentation....
  • W. Dou et al.

An energy-aware virtual machine scheduling method for service QoS enhancement in clouds over big data

    Concurr. Comput. Pract. Exper.

    (2016)
  • S.S. Gill et al.

    A taxonomy and future directions for sustainable cloud computing: 360 view

    ACM Comput. Surv.

    (2018)
  • Í. Goiri et al.

    Parasol and greenswitch: Managing datacenters powered by renewable energy

    Proceedings of the ACM SIGARCH Computer Architecture News

    (2013)
  • Grid'5000, 2017. Grid'5000:home....

Minxian Xu received the B.Sc. degree in 2012 and the M.Sc. degree in 2015, both in software engineering from the University of Electronic Science and Technology of China. He is working towards the PhD degree at the Cloud Computing and Distributed Systems (CLOUDS) Laboratory, School of Computing and Information Systems, the University of Melbourne, Australia. His research interests include resource scheduling and optimization in cloud computing with special focus on energy efficiency. He has co-authored several peer-reviewed papers published in prominent international journals and conferences, such as ACM Computing Surveys, IEEE Transactions on Sustainable Computing, IEEE Transactions on Automation Science and Engineering, Concurrency and Computation: Practice and Experience, and the International Conference on Service-Oriented Computing.

Rajkumar Buyya is a Redmond Barry Distinguished Professor and Director of the Cloud Computing and Distributed Systems (CLOUDS) Laboratory at the University of Melbourne, Australia. He is also serving as the founding CEO of Manjrasoft, a spin-off company of the University, commercializing its innovations in Cloud Computing. He served as a Future Fellow of the Australian Research Council during 2012–2016. He has authored over 625 publications and seven text books including "Mastering Cloud Computing" published by McGraw Hill, China Machine Press, and Morgan Kaufmann for Indian, Chinese and international markets respectively. He is one of the highly cited authors in computer science and software engineering worldwide (h-index=123, g-index=271, 79,800+ citations). Dr. Buyya is recognized as a "Web of Science Highly Cited Researcher" in 2016, 2017 and 2018 by Thomson Reuters, a Fellow of IEEE, and Scopus Researcher of the Year 2017 with Excellence in Innovative Research Award by Elsevier for his outstanding contributions to Cloud computing. He served as the founding Editor-in-Chief of the IEEE Transactions on Cloud Computing. He is currently serving as Co-Editor-in-Chief of Journal of Software: Practice and Experience, which was established over 45 years ago. For further information on Dr. Buyya, please visit his cyberhome: www.buyya.com
