Virtualization is a technology that provides a way for a machine (Host) to run another operating system (guest virtual machines) on top of the host operating system.
Included with SUSE Linux Enterprise are the latest open-source virtualization technologies, Xen and KVM. With these Hypervisors, SUSE Linux Enterprise can be used to provision, de-provision, install, monitor and manage multiple virtual machines (VM Guests) on a single physical system. Out of the box, SUSE Linux Enterprise can create virtual machines running both modified, highly tuned, paravirtualized operating systems and fully virtualized unmodified operating systems. Full virtualization allows the guest OS to run unmodified and requires the presence of either Intel* Virtualization Technology (Intel VT) or AMD* Virtualization (AMD-V).
The primary component of the operating system that enables virtualization is a Hypervisor (or virtual machine manager), which is a layer of software that runs directly on server hardware. It controls platform resources, sharing them among multiple VM Guests and their operating systems by presenting virtualized hardware interfaces to each VM Guest.
SUSE Linux Enterprise is an enterprise-class Linux server operating system that offers two types of Hypervisors: Xen and KVM. Both Hypervisors support virtualization on 64-bit x86-based hardware architectures. Both Xen and KVM support full virtualization mode. In addition, Xen supports paravirtualized mode. SUSE Linux Enterprise with Xen or KVM acts as a virtualization host server (VHS) that supports VM Guests with its own guest operating systems. The SUSE VM Guest architecture consists of a Hypervisor and management components that constitute the VHS, which runs many application-hosting VM Guests.
In Xen, the management components run in a privileged VM Guest often referred to as Dom0. In KVM, where the Linux kernel acts as the hypervisor, the management components run directly on the VHS.
Virtualization design provides a large number of capabilities to your organization. Virtualization of operating systems is used in many different computing areas. For example, it finds its applications in:
Server consolidation: Many servers can be replaced by one big physical server, so hardware is consolidated, and guest operating systems are converted to virtual machines. This also provides the ability to run older software on new hardware.
Isolation: A guest operating system can be fully isolated from the Host running it, so if the virtual machine is corrupted, the Host system is not harmed.
Migration: A process to move a running virtual machine to another physical machine. Live migration is an extended feature that allows this move without disconnection of the client or the application.
Disaster recovery: Virtualized guests are less dependent on the hardware, and the Host server provides snapshot features to be able to restore a known running system without any corruption.
Dynamic load balancing: A migration feature that brings a simple way to load-balance your service across your infrastructure.
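In practice, the migration and load-balancing features above are typically driven through the virsh command-line tool. As a hedged sketch (the guest name sles-guest and the destination host are placeholder examples, not values from this guide):

```shell
# Build the virsh command for a live migration: the guest keeps running
# while its memory is transferred to the destination VM Host Server.
# The guest name and destination URI below are hypothetical examples.
migrate_cmd() {
  printf 'virsh migrate --live %s %s\n' "$1" "$2"
}

# Show the command that would move "sles-guest" to host2 over SSH;
# run it for real only on a host where virsh and the guest exist.
migrate_cmd sles-guest qemu+ssh://host2.example.com/system
```

Because the guest's memory is copied while it continues to run, clients stay connected; only the final switch-over pauses the guest briefly.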
Virtualization brings a lot of advantages while providing the same service as a hardware server.
First, it reduces the cost of your infrastructure. Servers are mainly used to provide a service to a customer, and a virtualized operating system can provide the same service, with:
Less hardware: You can run several operating systems on one host, so hardware maintenance is reduced.
Less power/cooling: Less hardware means you do not need to invest more in electric power, backup power, and cooling when you need more services.
Save space: Data center space is saved because you do not need more physical servers (fewer servers than services running).
Less management: Using a VM Guest simplifies the administration of your infrastructure.
Agility and productivity: Virtualization provides migration capabilities, live migration, and snapshots. These features reduce downtime and provide an easy way to move your service from one place to another without any service interruption.
Guest operating systems are hosted on virtual machines in either full virtualization (FV) mode or paravirtual (PV) mode. Each virtualization mode has advantages and disadvantages.
Full virtualization mode lets virtual machines run unmodified operating systems, such as Windows* Server 2003, but requires the computer running as the VM Host Server to support hardware-assisted virtualization technology, such as AMD* Virtualization or Intel* Virtualization Technology.
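Whether a given host can run fully virtualized guests can be checked from the CPU flags: vmx indicates Intel VT, svm indicates AMD-V. A minimal sketch, assuming a Linux host where /proc/cpuinfo is readable:

```shell
# Report which hardware virtualization extension, if any, the CPU advertises.
detect_virt() {
  # $1: the space-separated CPU flags (from the "flags" line of /proc/cpuinfo)
  case " $1 " in
    *" vmx "*) echo "Intel VT" ;;
    *" svm "*) echo "AMD-V" ;;
    *)         echo "none (only paravirtualized guests possible)" ;;
  esac
}

detect_virt "$(grep -m1 '^flags' /proc/cpuinfo 2>/dev/null | cut -d: -f2)"
```

Note that the flag may also be present but disabled in the BIOS/EFI setup, so an empty result there is worth double-checking in the firmware settings.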
Some guest operating systems hosted in full virtualization mode can be configured to run the Novell* Virtual Machine Drivers instead of drivers originating from the operating system. Running virtual machine drivers improves performance dramatically on guest operating systems, such as Windows Server 2003. For more information, see Appendix A, Virtual Machine Drivers.
Paravirtual mode does not require the host computer to support hardware-assisted virtualization technology, but does require the guest operating system to be modified for the virtualization environment. Typically, operating systems running in paravirtual mode enjoy better performance than those requiring full virtualization mode.
Operating systems currently modified to run in paravirtual mode are referred to as paravirtualized operating systems and include SUSE Linux Enterprise Server and NetWare® 6.5 SP8.
VM Guests not only share CPU and memory resources of the host system, but also the I/O subsystem. Because software I/O virtualization techniques deliver less performance than bare metal, hardware solutions that deliver almost “native” performance have been developed recently. SUSE Linux Enterprise Server supports the following I/O virtualization techniques:
Fully Virtualized (FV) drivers emulate widely supported real devices, which can be used with an existing driver in the VM Guest. Since the physical device on the VM Host Server may differ from the emulated one, the hypervisor needs to process all I/O operations before handing them over to the physical device. Therefore all I/O operations need to traverse two software layers, a process that not only significantly impacts I/O performance, but also consumes CPU time.
Paravirtualization (PV) allows direct communication between the hypervisor and the VM Guest. With less overhead involved, performance is much better than with full virtualization. However, paravirtualization requires either the guest operating system to be modified to support the paravirtualization API or paravirtualized drivers. See Section 7.1.1, “Availability of Paravirtualized Drivers” for a list of guest operating systems supporting paravirtualization.
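As an illustration of what a paravirtualized device looks like in practice, a guest's libvirt domain configuration can select the virtio model for a disk instead of an emulated IDE/SATA controller (the source path below is an illustrative placeholder, not a path from this guide):

```xml
<!-- Illustrative libvirt disk definition using the paravirtualized
     virtio bus; the guest must have a virtio block driver. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```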
Directly assigning a PCI device to a VM Guest (PCI pass-through) avoids the performance cost of emulation in performance-critical paths. With PCI pass-through, a VM Guest can directly access the real hardware using a native driver, getting almost native performance. This method does not allow sharing devices: each device can only be assigned to a single VM Guest. PCI pass-through needs to be supported by the VM Host Server CPU, chipset and the BIOS/EFI. The VM Guest needs to be equipped with drivers for the device. See Section 13.6, “Adding a PCI Device with Virtual Machine Manager” or Section 13.7, “Adding a PCI Device with virsh” for setup instructions.
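For orientation, a directly assigned PCI device appears in the guest's libvirt configuration as a hostdev element. The PCI address below is a placeholder; it would be replaced by the address of a real device on the VM Host Server:

```xml
<!-- Illustrative PCI pass-through entry: the guest gets direct access
     to the host PCI device at 0000:03:00.0 via its native driver. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```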
The latest I/O virtualization technique, Single Root I/O Virtualization (SR-IOV), combines the benefits of the aforementioned techniques: performance and the ability to share a device with several VM Guests. SR-IOV requires special I/O devices that are capable of replicating resources so they appear as multiple separate devices. Each such “pseudo” device can be directly used by a single guest. However, for network cards, for example, the number of concurrent queues that can be used is reduced, potentially reducing performance for the VM Guest compared to paravirtualized drivers. On the VM Host Server, SR-IOV must be supported by the I/O device, the CPU and chipset, the BIOS/EFI, and the hypervisor; see Section 13.8, “Adding SR-IOV Devices” for setup instructions.
VFIO stands for Virtual Function I/O. It allows safe, non-privileged user space drivers. The VFIO driver is a device-agnostic framework to expose direct device access in a protected environment. Its primary purpose is to replace the KVM PCI-specific device assignment. For more information, see Section 28.3.5, “VFIO: Secure Direct Access to Devices”.