DOI: 10.1145/3297280.3297536
poster

Performance overhead of container orchestration frameworks for management of multi-tenant database deployments

Published: 08 April 2019

ABSTRACT

The prevailing approach in the literature on service-level objectives for multi-tenant databases is to group tenants according to their SLA class in separate database processes and to find an optimal co-placement of tenants across a cluster of nodes. To implement performance isolation between co-located database processes, request scheduling is preferred over hypervisor-based virtualization, which introduces a significant performance overhead. A relevant question is whether more lightweight container technology such as Docker is a viable alternative for running high-performance database workloads. Moreover, the recent rise and industry adoption of container orchestration (CO) frameworks for the automated placement of cloud-based applications raises the question of what additional performance overhead CO frameworks introduce in this context. In this paper, we evaluate the performance overhead introduced by the Docker engine and two representative CO frameworks, Docker Swarm and Kubernetes, when running and managing a CPU-bound Cassandra workload in OpenStack. Firstly, we find that Docker engine deployments running in host mode exhibit negligible performance overhead in comparison to native OpenStack deployments. Secondly, we find that virtual IP networking introduces a substantial overhead in Docker Swarm and Kubernetes compared to Docker engine deployments, due to virtual network bridges. This calls for service networking approaches that run in true host mode yet still support network isolation between containers. Thirdly, volume plugins for persistent storage have a large impact on the overall resource model of a database workload; more specifically, we show that a CPU-bound Cassandra workload turns into an I/O-bound workload in both Docker Swarm and Kubernetes, because their local volume plugins introduce a disk I/O performance bottleneck that does not appear in Docker engine deployments. These findings imply that placement decisions computed for native or Docker engine deployments cannot be reused for Docker Swarm and Kubernetes.
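
As a minimal illustration of the deployment modes compared above, the following sketch contrasts default bridged networking with host-mode networking when running a single Cassandra node under the Docker engine. The image tag, published ports, and data directory are illustrative assumptions and are not taken from the paper's experimental setup.

    # Illustrative sketch only; image tag, ports, and paths are assumptions.
    # Option 1: default bridged networking. Traffic to the Cassandra ports
    # traverses a virtual bridge and NAT rules on the host.
    docker run -d --name cassandra-bridged \
      -p 9042:9042 -p 7000:7000 \
      -v /var/lib/cassandra:/var/lib/cassandra \
      cassandra:3.11

    # Option 2: host-mode networking. The container shares the host network
    # namespace, so no virtual bridge sits on the data path; this is the Docker
    # engine configuration reported above as having negligible overhead.
    docker run -d --name cassandra-host \
      --network host \
      -v /var/lib/cassandra:/var/lib/cassandra \
      cassandra:3.11

In Docker Swarm and Kubernetes, by contrast, service-level virtual IP networking and local volume plugins insert additional layers on these same network and storage paths, which is where the abstract locates the observed networking and disk I/O overheads.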


Published in

    SAC '19: Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing
    April 2019
    2682 pages
    ISBN: 9781450359337
    DOI: 10.1145/3297280

    Copyright © 2019 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 8 April 2019


    Qualifiers

    • poster

    Acceptance Rates

    Overall Acceptance Rate: 1,650 of 6,669 submissions, 25%
