What is virtualization?
Virtualization is a technology that allows you to create virtual, simulated environments from a single physical machine. Through this process, IT professionals can get more value from previous hardware investments and use a physical machine’s full capacity by distributing resources that are traditionally bound to hardware across many different environments.
Used for decades, virtualization is a powerful technology within IT infrastructure that can be used to increase efficiency, retain flexibility, and improve scalability. Because multiple operating systems can share the same physical hardware, virtualization can improve resource use, reduce costs associated with physical maintenance, and boost security through isolated systems.
Whether you’re a virtualization administrator running test environments on your workstation or a large organization running a multitude of virtual machines (VMs) across your hybrid cloud platform, virtualization plays a key role in modern IT infrastructure and workloads.
How does virtualization work?
Virtualization depends on 2 important concepts: virtual machines and hypervisors.
Virtual machines
A virtual machine (VM) is a computing environment that functions as an isolated system with its own CPU, operating system (OS), memory, network interface, and storage, created from a pool of hardware resources. A VM can be defined by a single data file. As an isolated environment, it can be moved from 1 computer to another, opened in either, and be expected to work the same.
Virtualization allows virtual machines with multiple different operating systems to run simultaneously on a single physical device—like running a macOS or Windows environment on a Linux® system. Each operating system runs the same way an OS or application normally would on the host hardware, so the end user’s experience inside a VM is nearly identical to working directly on a physical machine.
Hypervisors
Sometimes called a virtual machine monitor (VMM), a hypervisor is software that separates a system’s physical resources and divides those resources so that virtual environments can use them as needed. A hypervisor takes physical resources (such as CPU, memory, and storage) from the hardware and allocates them to multiple VMs at once, enabling the creation of new VMs and the management of existing ones. Hypervisors can sit on top of an operating system (like on a laptop) or be installed directly onto hardware (like a server). The physical hardware, when used as a hypervisor, is called the host, while the many VMs that use its resources are guests.
When the virtual environment is running and a user or program issues an instruction that requires additional resources from the physical environment, the hypervisor relays the request to the physical system and stores the changes in a cache—which all happens at close to native speed.
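The resource allocation a hypervisor performs can be pictured as carving guest allocations out of one physical pool. The sketch below is a toy illustration, not a real hypervisor API; the class names, VM names, and sizes are made up.

```python
# Toy model of a hypervisor allocating a host's physical resources to guests.
# All names and numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class Host:
    cpus: int
    memory_gb: int
    guests: dict = field(default_factory=dict)

    def create_vm(self, name, cpus, memory_gb):
        # Refuse the request if the remaining pool cannot satisfy it.
        used_cpus = sum(g["cpus"] for g in self.guests.values())
        used_mem = sum(g["memory_gb"] for g in self.guests.values())
        if cpus > self.cpus - used_cpus or memory_gb > self.memory_gb - used_mem:
            raise RuntimeError(f"insufficient resources for {name}")
        self.guests[name] = {"cpus": cpus, "memory_gb": memory_gb}

host = Host(cpus=16, memory_gb=64)
host.create_vm("web", cpus=4, memory_gb=8)
host.create_vm("db", cpus=8, memory_gb=32)
print(len(host.guests))  # 2 guests now share one physical host
```

A real hypervisor does far more (scheduling, memory overcommit, device emulation), but the core idea is the same: guests draw from one physical pool, and the hypervisor arbitrates.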
There are 2 types of hypervisors, and which one you choose depends on your needs.
Type 1: Also referred to as a native or bare-metal hypervisor, it runs directly on the host’s hardware to manage guest operating systems. It takes the place of a host operating system, and VM resources are scheduled directly to the hardware by the hypervisor. This type of hypervisor is most common in an enterprise datacenter or other server-based environments.
Type 2: Also known as a hosted hypervisor, it runs on a conventional operating system as a software layer or application. It works by abstracting guest operating systems from the host operating system. VM resources are scheduled against a host operating system, which is then executed against the hardware. This type is better for individual users who want to run multiple operating systems on a personal computer.
What is KVM?
KVM (Kernel-based Virtual Machine) is an open source type 1 hypervisor that’s a component of modern Linux distributions. VMs running with KVM benefit from the performance features of Linux, and users can take advantage of the fine-grained control provided by the OS.
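On a Linux host, KVM support is exposed through the `/dev/kvm` device node and the `kvm` kernel modules. The snippet below is a minimal check, a sketch only; dedicated tooling such as virt-host-validate performs much more thorough validation.

```python
# Minimal check for KVM availability on a Linux host. Returns may legitimately
# be False/empty on machines without KVM (or without hardware virtualization).
import os

def kvm_available():
    """Return True if /dev/kvm exists, i.e. the kernel exposes KVM."""
    return os.path.exists("/dev/kvm")

def kvm_modules_loaded():
    """Return names of loaded kvm-related kernel modules, if readable."""
    try:
        with open("/proc/modules") as f:
            return [line.split()[0] for line in f if line.startswith("kvm")]
    except OSError:
        return []

print(kvm_available(), kvm_modules_loaded())
```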
The benefits of virtualization
Virtualization allows hardware systems to function at their highest capacity. With virtualization, multiple operating systems can run alongside each other and share the same virtualized hardware resources for optimized efficiency. Teams can make more use of their computing resources to support important applications and workloads. Some benefits of virtualization include:
- Server consolidation: By virtualizing servers, many virtual servers can be placed on each physical server to improve hardware utilization. Because a host machine can be divided into multiple VMs, resources can be allocated where they are needed, taking full advantage of the hardware’s capacity. Hosting multiple VMs on a single piece of physical hardware also reduces the space, power, and maintenance the same workloads would otherwise require.
- Cost savings: Improved hardware utilization can mean savings on additional physical resources, like servers and storage, as well as a reduced need for power, space, and cooling in the datacenter.
- Isolated environments: Because they’re separated from the rest of a system, VMs won’t interfere with what’s running on the host hardware, and they are a good option for testing new applications or setting up a production environment.
- Faster application migration: Administrators no longer have to wait for every application to be certified on new hardware. Because VM configurations are defined by software, VMs can be quickly created, removed, cloned, and migrated. You can control a VM remotely, and you can automate the management of VMs.
- Efficient environments: During regression tests, teams can create or copy a test environment, eliminating the need for dedicated testing hardware or redundant development servers. With the right training and knowledge, teams can optimize environments to gain greater capabilities and density.
- Disaster recovery: VMs provide additional disaster recovery options by enabling failover that could previously only be achieved through additional hardware. Disaster recovery options reduce the time it takes to repair and set up the impacted server, leading to greater adaptability.
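Because VM configurations are defined by software, lifecycle tasks like cloning, starting, and snapshotting can be scripted. As one hedged sketch, the snippet below builds libvirt `virsh` and `virt-clone` command lines without executing them, so they can be reviewed before wiring in something like `subprocess.run`; the VM and snapshot names are examples.

```python
# Build (but do not run) libvirt command lines for common VM lifecycle tasks.
# Domain and snapshot names are illustrative placeholders.
def virsh_cmd(action, domain, *extra):
    return ["virsh", action, domain, *extra]

clone = ["virt-clone", "--original", "web01", "--name", "web02", "--auto-clone"]
start = virsh_cmd("start", "web02")
snapshot = virsh_cmd("snapshot-create-as", "web02", "pre-upgrade")

for cmd in (clone, start, snapshot):
    print(" ".join(cmd))
```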
Types of virtualization
Server virtualization
Server virtualization is 1 of the most common types of virtualization, especially in enterprise IT environments. Made possible by hypervisors that separate and distribute physical resources, server virtualization is a process that involves partitioning a server so that the resources can be used to serve multiple functions.
Desktop virtualization
Desktop virtualization allows a central administrator (or automated administration tool) to deploy desktop environments to multiple physical machines at once. Desktop virtualization allows admins to perform mass configurations, updates, and security checks on all virtual desktops.
Data virtualization
Data virtualization, more commonly known as a data federation or a global namespace, allows data that’s been distributed to be consolidated into a single source. It brings together data from multiple sources, easily accommodates new data sources, and transforms data according to user needs. A global namespace sits in front of multiple data sources and allows them to be treated as a single source, delivering the needed data—in the required form—at the right time to any application or user.
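The idea of a global namespace can be pictured as one lookup interface placed in front of several backing sources. The toy sketch below illustrates only the concept; the source names and records are invented, and real data virtualization layers also handle query translation, caching, and transformation.

```python
# Toy illustration of a global namespace: one query path, many backing sources.
class FederatedSource:
    def __init__(self, *sources):
        self.sources = sources  # each source is a dict of key -> record

    def get(self, key):
        # Consult each backing source in order; callers see a single namespace.
        for source in self.sources:
            if key in source:
                return source[key]
        raise KeyError(key)

warehouse = {"order-1001": {"status": "shipped"}}
crm = {"cust-42": {"name": "Ada"}}
catalog = FederatedSource(warehouse, crm)
print(catalog.get("cust-42")["name"])  # one lookup path across both sources
```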
Storage virtualization
Storage virtualization allows you to manage and access storage from many devices as if it were a single storage device. All storage devices on a network can have their storage pooled into 1 location. Storage virtualization improves efficiency for storage actions like archiving and recovery and maximizes the storage usage available in an infrastructure.
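The pooling idea can be sketched as individual devices contributing to one capacity that volumes are then carved from. This is a toy model only; device sizes and volume names are illustrative, and real storage virtualization also handles striping, redundancy, and thin provisioning.

```python
# Toy sketch of storage virtualization: devices pooled into one capacity.
class StoragePool:
    def __init__(self, device_sizes_gb):
        self.capacity_gb = sum(device_sizes_gb)  # pooled view of all devices
        self.volumes = {}

    @property
    def free_gb(self):
        return self.capacity_gb - sum(self.volumes.values())

    def create_volume(self, name, size_gb):
        if size_gb > self.free_gb:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = size_gb

pool = StoragePool([500, 500, 1000])  # 3 devices, one 2000 GB pool
pool.create_volume("vm-images", 800)
print(pool.free_gb)  # 1200
```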
Application virtualization
Application virtualization provides a way for applications to be deployed and used by making them available outside of the OS on which they were originally installed. By separating the application from its OS, it can be used remotely by running it in a virtual environment. This approach allows greater flexibility for management and deployment. Application virtualization differs from desktop virtualization because the application runs virtually while the OS on the user’s device runs as normal.
Network functions virtualization
Used by telecommunications service providers, network functions virtualization (NFV) separates a network’s key functions (like directory services, file sharing, and IP configuration) so they can be distributed among environments. Once software functions are independent of the physical machines they once lived on, specific functions can be packaged together into a new network and assigned to an environment. Virtualizing networks reduces the number of physical components—like switches, routers, and cables—that are needed to create multiple, independent networks.
Virtualization vs. containerization
Virtualization and containerization are 2 approaches to computing environments that isolate IT components from the rest of the physical system. However, each approach works differently.
As previously explained, virtualization allows VMs to function at full capacity apart from their physical hardware with differing operating systems. Alternatively, containerization allows software or applications to be packaged in a container that shares the host OS and can be moved and run in any environment for increased flexibility.
With virtualization, VMs are created to run their own OS and applications. Virtualization allows multiple operating systems to run alongside each other and share the same virtualized hardware resources from a single physical machine.
Containerization packages software code together into its own container. This process allows apps within the container to be moved and then run in any environment and on any infrastructure.
Red Hat® OpenShift® includes a feature that allows you to easily migrate your virtual machines to OpenShift and manage VMs alongside containers for maximum visibility.
Virtualization and cloud computing
Both virtualization and containerization are technologies that make cloud computing possible. Cloud computing is the act of running workloads within clouds—which are IT environments that abstract, pool, and share scalable resources across a network.
Public and private clouds virtualize resources into shared pools, add a layer of administrative control, and deliver those resources with automated self-service functions. The virtualization, management, and automation software that creates clouds all sit on top of the operating system, which maintains the connections among physical resources, virtual data pools, management software, automation scripts, and customers.
Virtualization allows workloads running within cloud environments to access resources across a network, making it possible to deliver scalable and flexible IT resources to users over the internet.
What is virtual machine migration?
VM migration refers to the transfer of a VM from 1 host or platform to another. The goal of VM migration is to improve resource utilization, optimize performance, increase flexibility, and improve scalability. Migrating VMs can give organizations the right combination of consistency, efficiency, and support for future operations and cloud-based applications.
VM migration can happen for various reasons, and there are many types of migration. The 2 main types are live migration and cold migration. In a live migration, the VM continues to run on the source host while its memory pages are transferred to the destination host; a scheduled cutover then moves the VM to the new host with minimal disruption. In a cold migration, the VM is shut down before it is transferred from the source host to the destination host; this approach is often used when moving entirely between platforms or regions.
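The difference between the 2 types can be sketched as a toy state machine: live migration copies memory pages while the guest keeps running and then cuts over, while cold migration stops the guest first. States and page counts below are illustrative, not from any real migration tool.

```python
# Toy model of live vs. cold VM migration. Purely illustrative.
def migrate(vm, live=True):
    log = []
    if not live:
        vm["state"] = "shut off"          # cold migration stops the guest first
        log.append("guest stopped")
    log.append(f"copying {vm['memory_pages']} memory pages "
               f"(guest {'running' if live else 'stopped'})")
    vm["host"] = "destination"
    if live:
        log.append("cutover: guest now running on destination")
    else:
        vm["state"] = "running"
        log.append("guest restarted on destination")
    return log

vm = {"state": "running", "memory_pages": 4096, "host": "source"}
for line in migrate(vm, live=True):
    print(line)
```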
A migration strategy might favor 1 type of migration over another based on need and the specific platforms involved.
Why choose Red Hat for virtualization?
Red Hat’s trusted products and partner ecosystem come together in 1 comprehensive virtualization solution. Whether you have virtual workloads, containerized workloads, or a mix of both, Red Hat OpenShift Virtualization provides tools to build, operate, and scale with confidence. Migrate your virtual machines to Red Hat OpenShift Virtualization, a modern application platform based on KVM and KubeVirt that integrates virtual and containerized workloads to provide flexibility without added complexity. The included migration toolkit for virtualization provides the tools you need to start your migration in a few simple steps.
Together with Red Hat OpenShift Virtualization, you can use Red Hat Ansible® Automation Platform to accelerate delivery, from migration at scale to Day 2 operations and remediation. With this flexible approach, you can automate tasks to improve the speed and efficiency of IT operations while preserving your investment in virtualization technology and the applications that depend on it.
With Red Hat, you can also manage and modernize at your own pace. You can monitor the performance of your VMs from a single console with Red Hat Advanced Cluster Management for Kubernetes. With additional support options and partner integrations for storage, backup, disaster recovery, and networking, you can rely on Red Hat to keep your virtual infrastructure running smoothly throughout the hybrid cloud and help you modernize when you’re ready.