Virtualization plays an important role in efficiently providing resources to users in a cloud environment. It can take various forms, such as server virtualization, memory virtualization, and storage virtualization, and virtual machines (VMs) are the primary means of achieving it. Although virtual machine (VM) migration has been used to avoid conflicts on traditional system resources such as CPU and memory, micro-architectural resources such as shared caches, memory controllers, and non-uniform memory access (NUMA) affinity have relied only on intra-system scheduling to reduce contention.
Virtual Machine Scheduling: Overview
In virtualization-based cloud systems, virtual machines (VMs) share physical resources. Although resource sharing can improve the overall utilization of limited resources, contention for those resources often leads to significant performance degradation. To mitigate the effect of such contention, cloud systems dynamically reschedule VMs using live migration, changing the placement of running VMs. However, VM migration has traditionally been used to resolve conflicts or balance load on allocatable system resources such as CPUs, memory, and I/O sub-systems; it can be triggered by monitoring the usage of these resources across the VMs in a cloud system.
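The monitoring-driven migration described above can be sketched as follows. This is a minimal illustration, not a real cloud controller: the host names, the 85% usage threshold, and the `select_destination` policy are all invented for the example.

```python
# Hypothetical sketch: flag a host for VM migration when any monitored
# resource (CPU, memory) exceeds a usage threshold, and pick the
# least-loaded host as the destination. All values are illustrative.

def needs_migration(usage, threshold=0.85):
    """Return True if any monitored resource exceeds the threshold."""
    return any(v > threshold for v in usage.values())

def select_destination(hosts):
    """Pick the host with the lowest average resource usage."""
    return min(hosts, key=lambda h: sum(h["usage"].values()) / len(h["usage"]))

hosts = [
    {"name": "host-a", "usage": {"cpu": 0.92, "mem": 0.80}},
    {"name": "host-b", "usage": {"cpu": 0.30, "mem": 0.40}},
    {"name": "host-c", "usage": {"cpu": 0.55, "mem": 0.60}},
]

overloaded = [h for h in hosts if needs_migration(h["usage"])]
if overloaded:
    dest = select_destination([h for h in hosts if h not in overloaded])
    # A real system would now live-migrate a VM from the overloaded
    # host to dest; here we only report the placement decision.
    print(overloaded[0]["name"], "->", dest["name"])
```

A production scheduler would also account for migration cost and avoid oscillation, but the trigger logic follows this shape: monitor, compare against thresholds, and relocate.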
In the meantime, the advent of multi-core processors has enabled the sharing of micro-architectural resources such as shared caches and memory controllers. Contention for such micro-architectural resources has emerged as a major source of performance variance, as an application can be affected by co-running applications even when it receives the same share of CPU, memory, and I/O. For a single system, several prior studies have mitigated the impact of contention on shared caches and memory controllers by carefully scheduling threads. These studies rely on the heterogeneity of applications' memory behaviors within a system boundary: they group applications to share a cache so as to minimize the system's overall cache misses. However, if a single system runs applications with similar cache behaviors, such intra-system scheduling cannot mitigate contention.
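The grouping idea behind such intra-system scheduling can be illustrated with a simple heuristic: pair the most cache-intensive application with the least cache-intensive one, so that no shared cache hosts two heavy applications. The miss-rate numbers and pairing rule below are illustrative assumptions, not taken from any cited study.

```python
# Illustrative contention-aware co-scheduling heuristic: sort
# applications by cache miss rate, then repeatedly pair the heaviest
# remaining application with the lightest one. Miss rates are made up.

def pair_by_cache_pressure(apps):
    """apps: list of (name, miss_rate). Returns (heavy, light) pairs."""
    ordered = sorted(apps, key=lambda a: a[1])
    pairs = []
    while len(ordered) >= 2:
        light = ordered.pop(0)   # lowest miss rate
        heavy = ordered.pop(-1)  # highest miss rate
        pairs.append((heavy[0], light[0]))
    return pairs

apps = [("A", 0.40), ("B", 0.05), ("C", 0.35), ("D", 0.10)]
print(pair_by_cache_pressure(apps))  # pairs heavy with light
```

Note that this heuristic only helps when miss rates differ: if all applications have similar cache behavior, every pairing is equally contended, which is exactly the limitation of intra-system scheduling noted above.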
VM Scheduling Basics
A VM is a logical instance of a computer system that operates like a physical one. In a cloud environment, a user's request to access a physical resource is first received by a VM, which then assigns that resource to the user according to the specified policy or constraints. VM schedulers perform this task: they dynamically assign VMs to users so that users can carry out their operations in the cloud environment. This improves resource utilization and also balances load across systems, so all servers share the user-requested services equally. Each cloud environment incorporates a suitable VM scheduling policy for efficient utilization of resources. VM scheduling is necessary to maintain the Quality of Service (QoS) and Service Level Agreements (SLAs) that the cloud service provider promises customers when they subscribe to cloud services.
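The load-balancing behavior described above can be sketched with a least-loaded placement policy: each incoming request goes to the server currently carrying the fewest requests. The server names and unit request weights are illustrative assumptions.

```python
# Minimal sketch of a load-balancing VM scheduling policy: assign each
# user request to the least-loaded server, so servers end up sharing
# the requested services roughly equally.

def schedule(requests, servers):
    """Assign each request to the server with the least current load."""
    load = {s: 0 for s in servers}
    placement = {}
    for req in requests:
        target = min(load, key=load.get)  # least-loaded server
        load[target] += 1
        placement[req] = target
    return placement, load

placement, load = schedule(["r1", "r2", "r3", "r4"], ["s1", "s2"])
print(load)  # both servers carry two requests each
```

Real schedulers weigh requests by CPU, memory, and I/O demand rather than counting them, but the balancing principle is the same.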
Figure 1: Virtual Machine Scheduling in Cloud Computing Environment
VM scheduling is a three-step process:
- Resource Discovery and Filtering: the Data Center Broker looks for the resources present in the system and gathers information about them.
- Resource Selection: a decision stage in which a specific resource is selected based on certain constraints.
- Task Submission: the task is handed to the selected resource for execution.
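The three steps above can be sketched end to end as follows. The resource attributes (CPU count, cost), the minimum-CPU filter, and the cheapest-first selection rule are hypothetical examples, not part of any particular broker implementation.

```python
# Hypothetical sketch of the three-step broker flow: discover and
# filter resources, select one under a constraint, then submit the task.

def discover(resources, min_cpus):
    """Step 1: filter resources that meet the basic requirement."""
    return [r for r in resources if r["cpus"] >= min_cpus]

def select(candidates):
    """Step 2: pick the cheapest qualifying resource."""
    return min(candidates, key=lambda r: r["cost"])

def submit(task, resource):
    """Step 3: hand the task to the chosen resource."""
    return f"{task} -> {resource['name']}"

resources = [
    {"name": "vm-small", "cpus": 2, "cost": 1},
    {"name": "vm-large", "cpus": 8, "cost": 4},
    {"name": "vm-medium", "cpus": 4, "cost": 2},
]
candidates = discover(resources, min_cpus=4)
print(submit("task-42", select(candidates)))
```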
Virtualization techniques mainly aim at scalability, availability, throughput, and optimal resource utilization. During operation, VMs are sometimes transferred from one system to another without disturbing the VMs operating in parallel; this is called VM migration. Migrations are performed to improve resource utilization and to reduce downtime.
Wood, T., Shenoy, P., and Venkataramani, A., "Black-box and gray-box strategies for virtual machine migration", in Proceedings of the 4th USENIX Conference on Networked Systems Design and Implementation (NSDI), 2007.
Blagodurov, S., Zhuravlev, S., Dashti, M., and Fedorova, A., "A case for NUMA-aware contention management on multicore processors", in Proceedings of the USENIX Annual Technical Conference (ATC), 2011.
Zhuravlev, S., Blagodurov, S., and Fedorova, A., "Addressing shared resource contention in multicore processors via scheduling", in Proceedings of the 15th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2010.