Unit 3
Virtualization is the creation of virtual (rather than physical) versions of computing resources, such as servers, storage devices, and networks. It enables more efficient use of hardware and is a key enabler of cloud environments. Here's a deeper look at what it means:
Types of Virtualization
● Server Virtualization: This involves partitioning a physical server into multiple virtual
servers, each capable of running its own operating system (OS). This is often done using
hypervisors, which are software tools that manage VMs.
● Storage Virtualization: Combining multiple storage devices into a single virtual unit.
This allows easier management, scalability, and more efficient storage allocation in cloud
environments.
● Network Virtualization: This involves creating a virtualized network that abstracts the
physical network infrastructure, allowing the cloud environment to scale easily, support
flexible networking, and improve resource utilization.
● Desktop Virtualization: Allows virtual desktop instances to be created on remote
servers, providing users with access to a desktop environment from anywhere, which is
key for cloud-based applications.
How Virtualization Works in the Cloud
In the cloud, virtualization works on top of physical hardware using a hypervisor, or virtual machine monitor (VMM). The hypervisor enables multiple virtual machines (VMs) to run simultaneously on a physical server by abstracting the hardware from the virtual machines.
● Type 1 Hypervisor (Bare-metal): This type of hypervisor runs directly on the physical
hardware, providing greater performance. It’s typically used in large-scale data centers
and enterprise environments.
● Type 2 Hypervisor (Hosted): This type runs on top of an existing operating system. It is
often used for smaller-scale or less resource-intensive cloud services.
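Whether a host can run a hypervisor efficiently depends on hardware virtualization extensions (Intel VT-x or AMD-V). A minimal sketch, assuming a Linux host where these extensions show up as CPU flags in /proc/cpuinfo (the helper function name is ours, not a standard API):

```python
# Check for hardware virtualization support by inspecting CPU flags.
# Intel CPUs expose the "vmx" flag, AMD CPUs expose "svm".
def virtualization_support(cpuinfo_text: str) -> str:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return "none detected"

# Typical usage on a Linux host:
#   with open("/proc/cpuinfo") as f:
#       print(virtualization_support(f.read()))
```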
Cloud Deployment Models
● Public Cloud: Virtualized resources are owned and managed by a third-party cloud
provider (e.g., AWS, Microsoft Azure, Google Cloud).
● Private Cloud: Virtualized resources are used by a single organization, often for security
or regulatory reasons.
● Hybrid Cloud: Combines both public and private clouds, utilizing virtualization for
resource scalability and flexibility.
Need of Virtualization
There are five major needs of virtualization, which are described below:
1. Enhanced Performance: Modern hardware is powerful enough that a single machine can serve several workloads at once, so running them in separate virtual machines makes full use of that capacity.
2. Under-utilized Hardware and Software Resources: Servers frequently run at a small fraction of their capacity; virtualization consolidates workloads so existing resources are used more efficiently.
3. Shortage of Space: Consolidating many workloads onto fewer physical machines reduces the floor space a data center requires.
4. Greening Initiatives: Fewer physical servers mean lower power consumption and cooling requirements, reducing the environmental footprint of the data center.
5. Rise of Administrative Costs: Every physical server adds administration overhead (monitoring, patching, backups); virtualization reduces the number of physical machines to manage.
A hypervisor is a software layer that allows multiple virtual machines (VMs) to run on a single
physical machine. It essentially abstracts the physical hardware and manages the execution of
virtual environments. There are two main types of hypervisors: Type 1 and Type 2. Here’s an
in-depth look into hypervisors:
Functions of Hypervisors
1. VM Creation and Management: Hypervisors create and manage virtual machines,
allowing users to allocate resources like CPU, memory, storage, and network bandwidth
to each VM.
2. Hardware Abstraction: The hypervisor abstracts physical hardware for the VMs, presenting them with virtualized hardware so they can run different operating systems or applications without knowledge of the underlying hardware.
3. Resource Allocation and Scheduling: Hypervisors manage resource allocation for VMs, scheduling and prioritizing resource distribution to maintain optimal performance and fairness among VMs.
4. Isolation: Each virtual machine is isolated from others, providing security and stability. If
one VM crashes, others remain unaffected.
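The bookkeeping behind VM creation and resource allocation can be sketched as a toy model; this is an illustration of the idea, not a real hypervisor API, and all names below are ours:

```python
# Toy model of a hypervisor's resource bookkeeping: VMs are created only
# if the host still has capacity, and each VM's allocation is tracked
# independently of the others (isolation).
class Hypervisor:
    def __init__(self, cpus: int, memory_gb: int):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.vms = {}

    def create_vm(self, name: str, cpus: int, memory_gb: int) -> bool:
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            return False  # refuse: request would overcommit the host
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.vms[name] = {"cpus": cpus, "memory_gb": memory_gb}
        return True

    def destroy_vm(self, name: str) -> None:
        vm = self.vms.pop(name)  # freeing one VM never touches the others
        self.free_cpus += vm["cpus"]
        self.free_memory_gb += vm["memory_gb"]

host = Hypervisor(cpus=16, memory_gb=64)
assert host.create_vm("web", cpus=4, memory_gb=8)
assert not host.create_vm("big", cpus=32, memory_gb=8)  # exceeds capacity
```

Real hypervisors add scheduling, overcommit policies, and live migration on top of this basic accounting.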
Challenges of Hypervisors
1. Performance Overhead: Since hypervisors manage virtualization, there may be some
performance overhead compared to running directly on the hardware.
2. Complexity in Management: Large environments with many VMs can become complex
to manage, requiring robust orchestration and monitoring tools.
3. Resource Contention: Multiple VMs running on the same hardware can lead to resource
contention if not managed properly, affecting performance.
Examples of Hypervisors
● VMware ESXi: A leading Type 1 hypervisor used for server virtualization, with
extensive enterprise support and features.
● Microsoft Hyper-V: A Type 1 hypervisor that is widely used in Windows Server
environments.
● Xen: An open-source Type 1 (bare-metal) hypervisor widely used in server and cloud environments.
● KVM (Kernel-based Virtual Machine): A Type 1 hypervisor that is part of the Linux
kernel, used in Linux-based environments.
● Oracle VirtualBox: A Type 2 hypervisor suitable for personal use and development
environments.
Sensor Virtualization
Sensor virtualization creates a digital representation of physical sensors and the data they
generate, which can then be manipulated and used in simulated or virtual systems. By
virtualizing sensors, users can simulate sensor behavior in a controlled environment, enabling a
variety of applications such as testing sensor networks, developing software for sensor-based
systems, or running simulations where real sensor data might not be available.
This concept is often used in fields like Internet of Things (IoT), smart cities, autonomous
vehicles, robotics, and sensor networks, where large numbers of sensors are used to gather data
in real time.
How Sensor Virtualization Works
1. Data Modeling: In sensor virtualization, real-world sensor data is modeled. This can
include data such as temperature, humidity, pressure, motion, or other environmental
factors captured by sensors. The system must replicate the data streams that would
normally come from physical sensors.
2. Abstraction Layer: An abstraction layer is created between the virtual sensor and the
system interacting with it. The software will simulate sensor data that would behave
similarly to a real-world sensor, but without needing the physical hardware.
3. Simulation or Emulation: In some cases, sensor virtualization involves simulating
real-world conditions and sensor outputs based on predefined algorithms. Alternatively, it
could involve real-time emulation, where sensor data is generated dynamically based on
various factors, like changes in environment or interactions with the system.
4. Virtual Sensor Network: Multiple virtual sensors can be connected to form a network,
mimicking a real-world setup. This allows developers to test how sensors interact with
each other, and how data flows in a network, without the need to deploy a large number
of physical devices.
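The steps above can be sketched as a toy virtual sensor in Python. The class and field names are illustrative, not any standard IoT API; the sensor synthesizes readings around a baseline instead of sampling real hardware:

```python
import random

# Toy virtual temperature sensor: it generates simulated readings so that
# consumer code can be developed and tested before physical sensors exist.
class VirtualTemperatureSensor:
    def __init__(self, sensor_id: str, baseline_c: float = 21.0, noise: float = 0.5):
        self.sensor_id = sensor_id
        self.baseline_c = baseline_c
        self.noise = noise

    def read(self) -> dict:
        # Simulated reading: baseline plus a small random fluctuation.
        value = self.baseline_c + random.uniform(-self.noise, self.noise)
        return {"sensor_id": self.sensor_id, "temperature_c": round(value, 2)}

# A virtual sensor network is simply a collection of virtual sensors.
network = [VirtualTemperatureSensor(f"temp-{i}") for i in range(3)]
readings = [s.read() for s in network]
```

Swapping the simulated `read()` for one that queries real hardware would leave the consuming code unchanged, which is the point of the abstraction layer.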
LPAR (Logical Partition)
LPAR stands for Logical Partition. In cloud computing, it refers to a method of partitioning a physical server into multiple independent and isolated logical units. These
logical partitions behave like separate physical machines but share the same underlying hardware
resources. LPARs are commonly used in mainframe computing, but the concept is also
applicable in cloud environments.
1. Partitioning of Resources:
● In cloud computing, LPARs allow for the efficient division of physical resources (CPU,
memory, storage) of a physical server.
● Each logical partition runs its own operating system, which can be completely different
from other LPARs on the same physical server.
● It ensures that each LPAR is isolated from others, providing resource allocation and
management flexibility.
2. Virtualization:
● LPARs are often part of a virtualization strategy used to create multiple isolated
environments on a single physical host, similar to how virtual machines (VMs) work.
● In cloud environments, LPARs may be managed by a hypervisor, which allocates
resources dynamically to each partition.
3. Isolation:
● Each LPAR is independent, meaning that one LPAR cannot affect the operations of another, even if they share the same physical hardware.
● This isolation improves security and allows for better resource management in a shared environment, making it useful in cloud infrastructures.
4. IBM Association:
● The term LPAR is particularly associated with IBM Power Systems, where it is widely used to partition physical machines into smaller, independent environments.
● IBM's PowerVM is the hypervisor technology that facilitates the creation of LPARs on IBM hardware.
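LPAR planning boils down to checking that the partitions' fixed entitlements fit within the physical machine. A rough illustration (real entitlements are configured through the platform's management tools, not code like this; the fractional CPU units mimic the sub-processor granularity such systems allow):

```python
# Toy LPAR capacity check: a plan is valid only if the partitions'
# combined entitlements fit inside the physical machine's resources.
def plan_is_valid(physical: dict, lpars: list) -> bool:
    total_cpu = sum(l["cpu_units"] for l in lpars)
    total_mem = sum(l["memory_gb"] for l in lpars)
    return total_cpu <= physical["cpu_units"] and total_mem <= physical["memory_gb"]

machine = {"cpu_units": 8.0, "memory_gb": 256}
partitions = [
    {"name": "prod-db",  "cpu_units": 4.0, "memory_gb": 128},
    {"name": "test-app", "cpu_units": 2.5, "memory_gb": 64},
]
assert plan_is_valid(machine, partitions)  # 6.5 CPU units and 192 GB fit
```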
Sensor Virtualization is a concept that involves abstracting physical sensors from their
hardware and creating virtual representations of them. This allows multiple applications,
systems, or devices to access and interact with these virtual sensors as if they were physical
sensors, even though the data may be coming from a variety of sources or virtualized hardware.
It’s particularly relevant in IoT (Internet of Things) and cloud-based systems, where sensors are
used to gather real-time data, and sensor virtualization provides more flexibility and scalability in
managing these sensors.
Benefits of Sensor Virtualization
1. Scalability:
o Virtualizing sensors enables systems to scale easily. Instead of dealing with a
large number of physical sensors, you can create and manage virtual sensors that
can aggregate data from multiple sources.
o It allows for flexible and dynamic scaling in cloud or distributed environments
without the need for additional physical infrastructure.
2. Simplified Sensor Management:
o With sensor virtualization, managing sensors becomes easier because the
underlying physical sensors are abstracted away. Administrators can manage
virtual sensors using centralized platforms, reducing complexity.
o It becomes easier to integrate new sensors into the system since they can be added
virtually, without needing to modify each application that consumes the sensor
data.
3. Interoperability:
o Virtualization allows sensors with different protocols, interfaces, and types to be
accessed in a standardized way. This makes it easier to integrate various sensor
networks into larger, more complex systems.
o For example, sensors that use different communication protocols (e.g., Zigbee,
Bluetooth, or Wi-Fi) can be virtualized and accessed using a uniform API,
improving interoperability between different systems and devices.
4. Reduced Dependency on Physical Sensors:
o Virtual sensors can function independently of the physical hardware they
represent. This is especially useful in environments where sensors are subject to
failure or where sensor maintenance is challenging.
o If a physical sensor goes offline, the virtual sensor can still provide access to
previously collected data or synthesize data from other sources.
5. Cost Efficiency:
o By abstracting sensor data and using virtualization, organizations can reduce the
need for large amounts of physical sensors. Sensor virtualization allows for more
efficient use of existing sensor data and systems, leading to cost savings in
hardware and maintenance.
o Additionally, sensor virtualization can optimize resource allocation in large-scale
systems, leading to better utilization of available infrastructure.
6. Faster Development and Prototyping:
o Sensor virtualization allows developers to quickly create and test applications that
interact with sensor data, even before physical sensors are deployed. This speeds
up the development process and helps with rapid prototyping.
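The "uniform API" idea behind interoperability (point 3 above) can be sketched with an adapter pattern. The device classes are stand-ins we invented, not real Zigbee or Bluetooth drivers; the point is that consumers only ever see `read()`:

```python
# Stand-in device classes with deliberately mismatched interfaces,
# mimicking sensors that speak different protocols.
class ZigbeeSensor:
    def poll(self):
        return {"temp": 22.5}

class BluetoothSensor:
    def notify_value(self):
        return 23.1

class VirtualSensor:
    """Uniform interface: every virtual sensor exposes read() -> float."""
    def __init__(self, device, extract):
        self.device = device
        self.extract = extract  # protocol-specific extraction function

    def read(self) -> float:
        return self.extract(self.device)

sensors = [
    VirtualSensor(ZigbeeSensor(), lambda d: d.poll()["temp"]),
    VirtualSensor(BluetoothSensor(), lambda d: d.notify_value()),
]
values = [s.read() for s in sensors]  # consumers never see the protocol
```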
SAN stands for Storage Area Network. It is a specialized network that provides block-level
access to data storage. SANs are designed to enhance storage capabilities, performance, and
scalability by connecting servers to disk arrays or other storage devices through a high-speed
network, rather than relying on direct-attached storage (DAS) or network-attached storage
(NAS).
Types of SAN:
● Fibre Channel (FC) SAN: Uses the Fibre Channel protocol over dedicated high-speed links; common in enterprise data centers.
● iSCSI SAN: Carries SCSI block commands over standard TCP/IP networks, making it cheaper to deploy on existing Ethernet infrastructure.
● FCoE (Fibre Channel over Ethernet) SAN: Encapsulates Fibre Channel frames in Ethernet, converging storage and data traffic onto one network.
Benefits of SAN:
● High performance and low latency for block-level workloads such as databases and virtual machines.
● Centralized storage management and easier capacity expansion.
● Support for advanced features such as snapshots, replication, and storage-level backups.
NAS stands for Network-Attached Storage. It is a storage solution that connects to a network,
allowing multiple devices (such as computers, servers, or virtual machines) to access shared
storage via standard network protocols, such as TCP/IP. Unlike SAN (Storage Area Network),
which provides block-level storage, NAS provides file-level access to data.
● Data Sharing: NAS allows multiple users to access files over the network, making it
ideal for collaboration and file sharing.
● File-Level Protocols: It supports standard file-level protocols like SMB (Server
Message Block), NFS (Network File System), and AFP (Apple Filing Protocol) for
file access.
● Remote Access: Some NAS devices offer features like remote access or cloud
integration, so users can access files from anywhere with an internet connection.
● Redundancy and Data Protection: Many NAS devices support RAID (Redundant
Array of Independent Disks) configurations for data redundancy, helping protect
against drive failures.
NAS vs. SAN:
● NAS is ideal for file-level access and provides shared storage over a network. It is great
for environments where file sharing is key (e.g., collaboration, media storage, backups).
● SAN provides block-level access to storage, making it ideal for performance-intensive
applications (like databases and virtual machines). It is more complex to set up and
manage but offers better performance and scalability in large enterprise environments.
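The file-level vs. block-level distinction can be illustrated in code: NAS-style access names a file and lets the storage system manage layout, while SAN-style access addresses raw byte ranges, the way a filesystem or database engine addresses a SAN LUN. A rough sketch, using an ordinary temporary file to stand in for both:

```python
import os
import tempfile

# Create a scratch file to stand in for networked storage.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")

# "File-level" access (NAS-style): address data by file name; the
# storage system decides where the bytes physically live.
with open(path, "wb") as f:
    f.write(b"AAAABBBBCCCC")

# "Block-level" access (SAN-style): address raw byte offsets directly,
# reading fixed-size blocks without caring about file semantics.
BLOCK_SIZE = 4
with open(path, "r+b") as dev:
    dev.seek(1 * BLOCK_SIZE)      # jump straight to block 1
    block1 = dev.read(BLOCK_SIZE)
```

Block-level addressing is what lets databases and hypervisors lay out their own structures on SAN storage, at the cost of the convenient sharing semantics a file-level protocol provides.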
Cloud server virtualization is a core concept in cloud computing that allows multiple virtual servers (also
called virtual machines, or VMs) to run on a single physical server. This enables better resource
utilization, scalability, and isolation between different environments. Here's a breakdown:
How It Works
Virtualization uses software (a hypervisor) to create a layer between physical hardware and operating
systems, allowing multiple independent VMs to run on a single physical server.
Types of Hypervisors:
1. Type 1 (Bare-metal) – Runs directly on hardware (e.g., VMware ESXi, Microsoft Hyper-V, KVM).
2. Type 2 (Hosted) – Runs on top of an existing OS (e.g., VirtualBox, VMware Workstation).
Benefits in the Cloud:
● IaaS (Infrastructure as a Service): You get virtualized servers to run your applications.
● Rapid provisioning: Spin up servers in minutes.
● Resource efficiency: Better CPU, memory, and storage utilization.
● Isolation: Each VM is isolated, improving security and stability.
Security Considerations:
● VM sprawl (too many unused VMs)
● Hypervisor vulnerabilities
● Need for proper isolation and network segmentation between VMs
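VM sprawl is usually tackled by auditing the inventory for machines nobody has touched recently. A sketch of that check; in practice the inventory would come from the cloud provider's API, and the hard-coded list here is purely illustrative:

```python
from datetime import date, timedelta

# Flag VMs that have not been used within max_idle_days of `today`.
# Candidates for decommissioning (or at least for asking their owners).
def find_stale_vms(vms: list, today: date, max_idle_days: int = 30) -> list:
    cutoff = today - timedelta(days=max_idle_days)
    return [vm["name"] for vm in vms if vm["last_used"] < cutoff]

inventory = [
    {"name": "web-01",  "last_used": date(2024, 6, 1)},
    {"name": "test-99", "last_used": date(2024, 1, 15)},
]
stale = find_stale_vms(inventory, today=date(2024, 6, 10))
```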
A virtualized data center refers to a data center where traditional physical hardware (such as
servers, storage, and networking) is abstracted and virtualized through software. This allows for
the creation and management of virtual machines (VMs), virtual storage, and virtual
networks all running on the same physical infrastructure. The goal of virtualizing a data center is
to improve resource utilization, flexibility, scalability, and operational efficiency.
Challenges of a Virtualized Data Center
1. Complexity:
o While virtualization provides many benefits, it can also add complexity in terms
of management, monitoring, and troubleshooting.
2. Performance Overhead:
o Virtualization introduces some overhead, as resources are being abstracted and
shared between VMs. This can potentially impact performance, although this
overhead is often minimal with modern technologies.
3. Security Risks:
o The hypervisor can be a target for attacks. Proper security measures must be in
place to protect the virtualized environment.
o Misconfiguration can lead to vulnerabilities that expose multiple VMs or data to
risks.
4. Licensing & Compliance:
o Virtualization may introduce licensing challenges, as software licenses are often
tied to physical hardware or specific configurations.
o Compliance and regulatory requirements may need to be adjusted when migrating
to a virtualized environment.