Dynamic performance-energy tradeoff consolidation with contention-aware resource provisioning in containerized clouds.
ABSTRACT: Containers have emerged as a more portable and efficient solution than virtual machines for cloud infrastructure, providing a flexible way to both build and deploy applications. Quality of service, security, performance, and energy consumption, among others, are essential aspects of their deployment, management, and orchestration. Inappropriate resource allocation can lead to resource contention, entailing reduced performance, poor energy efficiency, and other potentially damaging effects. In this paper, we present a set of online job allocation strategies to optimize quality of service, energy savings, and completion time, considering contention for shared on-chip resources. We formulate job allocation as a multilevel dynamic bin-packing problem, which provides a lightweight runtime solution that minimizes contention and energy consumption while maximizing utilization. The proposed strategies are based on two- and three-level scheduling policies combining container selection, capacity distribution, and contention-aware allocation. The energy model accounts for the joint execution of applications of different types on shared resources, generalized by the job concentration paradigm. We provide an experimental analysis of eighty-six scheduling heuristics with scientific workloads of memory- and CPU-intensive jobs. The proposed techniques outperform classical solutions in terms of quality of service, energy savings, and completion time by 21.73-43.44%, 44.06-92.11%, and 16.38-24.17%, respectively, leading to cost-efficient resource allocation for cloud infrastructures.
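To make the bin-packing framing concrete, the following is a minimal, hypothetical sketch of a contention-aware first-fit allocation heuristic in the spirit of the abstract; it is not the authors' implementation, and the Job/Server fields, the "count collocated memory-intensive jobs" penalty, and the scoring rule are illustrative assumptions only.

```python
# Minimal sketch (assumption, not the paper's strategies): place each arriving
# job on the feasible server with the lowest contention score, where contention
# is approximated by the number of already-collocated memory-intensive jobs.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Job:
    name: str
    cores: int
    mem_gb: int
    kind: str            # "cpu" or "mem" (memory-intensive), assumed labels

@dataclass
class Server:
    cores: int
    mem_gb: int
    jobs: List[Job] = field(default_factory=list)

    def fits(self, job: Job) -> bool:
        # Capacity check: the bin-packing feasibility constraint.
        used_cores = sum(j.cores for j in self.jobs)
        used_mem = sum(j.mem_gb for j in self.jobs)
        return (used_cores + job.cores <= self.cores
                and used_mem + job.mem_gb <= self.mem_gb)

    def contention(self, job: Job) -> int:
        # Assumed penalty: collocating memory-intensive jobs stresses the
        # shared memory hierarchy, so count existing jobs of the same kind.
        return sum(1 for j in self.jobs if j.kind == job.kind == "mem")

def allocate(job: Job, servers: List[Server]) -> Optional[Server]:
    """Contention-aware first-fit: pick the feasible server with lowest penalty."""
    feasible = [s for s in servers if s.fits(job)]
    if not feasible:
        return None                      # would trigger queueing or scale-out
    best = min(feasible, key=lambda s: s.contention(job))
    best.jobs.append(job)
    return best

# Example usage with a small hypothetical cluster and job stream.
servers = [Server(cores=16, mem_gb=64) for _ in range(3)]
for j in [Job("j1", 4, 8, "mem"), Job("j2", 4, 8, "mem"), Job("j3", 8, 16, "cpu")]:
    allocate(j, servers)
```

In this sketch the two memory-intensive jobs end up on different servers because the penalty discourages collocation; the paper's multilevel policies additionally layer container selection and capacity distribution on top of such an allocation step.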
SUBMITTER: Canosa-Reyes RM
PROVIDER: S-EPMC8775309 | biostudies-literature
REPOSITORIES: biostudies-literature