
Virtualized Data Center (VDC): converted from a CDC.

* Transforming a Classic Data Center (CDC) into a Virtualized Data Center (VDC) requires virtualizing the core elements (application, storage, compute,
network) of the data center.

- Virtualization of each component is different.

- Using a phased approach to a virtualized infrastructure enables a smoother transition when virtualizing the core elements => no need to do all of them at once (just what is required).

Compute Virtualization: enables multiple OSes to run on the same physical machine at the same time, giving each one its own instance (VM) to run in.

- Creation of multiple VMs, each running an OS and its applications.

- Hypervisor: resides between the hardware and the VMs.

- Creates the illusion that there are multiple instances of the physical resources (compute, network, storage), one per VM.

Jargon (synonyms):
* Physical machine = host machine = compute = server
* Virtual machine = guest machine = virtual compute = virtual server

Need for Compute Virtualization:

Hardware:
- An OS is specifically designed and tested for a specific type of hardware; OS and hardware are tightly coupled.
- The hardware is there but NOT fully used when it runs only one OS (wasted CAPEX).
- With virtualization, multiple OSes of different types (macOS, Windows, Linux) run on the same hardware —> saves CAPEX.

Software:
- An application must run on a particular type of OS, which in turn must run on a particular type of hardware: it requires a CPU, runs on some OS, and has binary machine code specifically
generated for that kind of processor.
- With virtualization => no need for different software for each platform.
Hypervisor :
software that allows multiple operating systems to run concurrently on a physical machine and interact directly with the physical hardware.

Components: -Kernel: creates and manages the multiple VM instances.

-Virtual Machine Monitor (VMM): acts as the descriptor for each virtual machine (keeps its information).

Types :
-Bare-metal hypervisor: runs directly on the hardware of the physical machine (bare metal)
- It allows for different types of VMs to run on it.
-Hosted hypervisor: runs as an application on top of the operating system.
- VMs can be created and managed through this host layer, often using VM software.
VM Scheduling: The hypervisor kernel handles the scheduling of VMs, allowing one VM to run for a certain time and then switching to another.

-The OS has the kernel —> it does scheduling, handles access requests, provides network functionality, and manages storage.

-App: encapsulated. It is the process that runs.


-To manage these processes, we must keep track of what these processes are doing, who is the owner, what state they are in, what they are doing, what they
are accessing.
-Process management: where we keep information about processes in the process control block (PCB); process name, ID, priority, process state, and stack.

virtual machine monitor: keeps the info about the VM — the type of OS it is using, how much memory is allocated to it, storage, etc.
—> The one that must know what the VM is doing is the hypervisor.
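As an illustration of the parallel drawn here between a PCB and the per-VM record kept by the VMM, a minimal sketch in Python (the field names are invented examples, not any real hypervisor's data structures):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Process control block: what the OS kernel tracks per process."""
    pid: int
    name: str
    priority: int
    state: str = "ready"                              # ready / running / waiting
    registers: dict = field(default_factory=dict)     # saved CPU context

@dataclass
class VMDescriptor:
    """What the VMM/hypervisor tracks per VM (analogous to a PCB)."""
    vm_id: int
    guest_os: str                 # e.g. "Linux", "Windows"
    memory_mb: int                # RAM allocated to the VM
    storage_gb: int               # virtual disk size
    state: str = "powered-off"    # powered-off / running / suspended

# Example: the kernel tracks a process; the hypervisor tracks a VM.
p = PCB(pid=42, name="app.exe", priority=5)
vm = VMDescriptor(vm_id=1, guest_os="Linux", memory_mb=2048, storage_gb=40)
print(p, vm, sep="\n")
```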

1- Bare-Metal Hypervisor: VMware ESX/ESXi, Oracle VM

- Acts as an OS —> provides the illusion of hardware to each VM; if one VM crashes, the others will still work because they run in an isolated manner.

- Installs and runs on x86 bare-metal hardware (you boot into it).

- Fast (short path for each process) —> more efficient than a hosted hypervisor —> higher performing.

- Found mostly in cloud data centres.

2- Hosted Hypervisor: VMware Workstation/Server, Oracle VM VirtualBox

- Acts as an application —> installs and runs as an application on top of the host OS.

- Relies on the OS running on the physical machine.

- Use it if you do NOT want a separate boot environment —> used for debugging, software development, and testing.

- Slow (long path through the host OS) —> not efficient.

- Not suitable for real-time applications, serious applications, or commercial applications.

3- Both Bare-Metal and Hosted: KVM & Microsoft Hyper-V

- Good for cloud data centres.


Ways to run an application from one machine type on another (e.g., Windows to macOS) using a hosted hypervisor:

1) Install a Mac hypervisor (e.g., Parallels) -> run macOS in a VM -> run
the Mac application within the macOS VM.

2) Dual boot: when the BIOS boots up (machine powers up) -> we
can boot into Windows (or another OS) -> we can choose which one is the default and which one to configure.

- We have one OS installed -> it has all the physical storage, disk drives, and RAM ->
the hosted hypervisor will run on top of it.

How do you boot/get the OS


running ?

1- Turn on the machine, we have the BIOS (basic input/output system)


We run the BIOS before Windows.
Has basic functionality (e.g., how to talk to the keyboard) —> It has minimal functionality.
2- When you turn on the machine, the CPU kicks in —> The first instruction the CPU must execute is the instruction from where? The BIOS.
Where does this code of the BIOS exist? In a ROM (flash) chip on the motherboard.
3- This BIOS starts running —> it does monitoring, checking, discovering the system itself (testing RAM, input/output, discovering devices).
4- If you want to move to the BIOS mode, this is not an OS so far (you are just running the code in the BIOS for testing, discovering, monitoring).
5- How do you get to BIOS? What happens when you boot? Depends on the laptop you have —> it tells you as it is booting.
6- It shows the screen that shows us the BIOS —> You can boot the OS from different devices, pass forward.
We have several configurations in there —> you get different features, set passwords, etc.

7- After doing all these checking and monitoring configurations, the BIOS will start, then it will run the OS.
8- It will try to find where the executable is.
The BIOS will find where the image for the OS is (image meaning executable because the OS is a code or application that is executable).
9- It loads that OS into RAM, then transfers control to it, and the OS starts executing.

Flow: CPU → BIOS (do monitoring, check errors) → Finds OS to load (OS starts running).
(Boot diagram: ROM holds the BIOS; the hard disk drive holds the boot loader, the OS images (OS1 image.exe, OS2 image.exe), swap space, directories/files, and possibly a hypervisor.
The BIOS finds the boot loader in some fixed physical location on the hard disk —> reads it —> the loader finds the OS image (an executable) —> loads it into RAM —> transfers control to it —> the OS starts running.
For a bare-metal hypervisor, the loader boots the hypervisor into RAM instead —> it runs with its interface —> you can then load VMs.
There could be multiple OSes installed; the boot loader boots the chosen image from disk into memory.)
Benefits of Compute Virtualization:


-Server Consolidation: Running multiple virtual machines on a physical server —> no need for different hardware for different workloads.
-Isolation: If an app within a virtual machine crashes, it is fine: only that VM crashes, but everything else remains running.
-Encapsulation, Portability, and HW Independence:
A VM is a package with the App, OS, and HW resource configuration —> one physical machine can run multiple OSes and different types of apps.
A VM can be moved and copied from one location to another just like a file —> guaranteed to run on any type of hardware.
-Reduced Cost: One physical machine with different applications (VMs) running on it —> consolidate power, cooling, maintenance, and hardware.
X86 Hardware Virtualization: it is not possible to virtualize x86 directly; we must know where to place the OS and the hypervisor, and only a subset of the instructions is privileged.

—> The OS is designed to run on bare-metal hardware and to fully own the hardware.

—> x86 architecture offers four levels of privilege: Ring 0, 1, 2, and 3.

- User apps run in Ring 3; the OS runs in Ring 0 (most privileged) —> it has full control.

—> Challenges of virtualizing x86 hardware:

- Requires placing the virtualization layer below the OS layer, in Ring 0, where it has total control of almost everything.
- It is difficult to capture and translate privileged OS instructions at runtime.

The most important feature of the layer in Ring 0 is that it coordinates the access to resources —> if one process does something to a resource,
another process cannot overwrite it —> e.g., disk storage allocated to one process can be used by that process only.

What is an example of a privileged instruction that runs in Ring 0 but not in Ring 3?


• Read/write
• Anything that has to do with hardware (interrupt, program status, configuring RAM, configuring SCSI drive, reset instruction)
• Opening files, accessing files, creating files

What are non-privileged instructions? The things applications can do:


• Add, move operations
• Nothing to do with hardware
• Output something to a screen
Problem: The hypervisor must run on top of the hardware —> it must have total control of the machine. But the OS expects to have the most control over the
machine. So where do we run both?
• If we put the HW at the bottom and the OS above it in Ring 1, the OS will have less privilege —> if it wants to disable interrupts, the processor will not allow
it because it is in Ring 1, not Ring 0 (we have a conflict).
• If we put them both in the same ring, there will be a conflict —> how can both have full access at the same time?
• If we put the OS in Ring 1, it will have less privilege.

Where can we place the hypervisor and OS? The hypervisor must run in Ring 0, but to place the OS we have 3 techniques.

Techniques to virtualize compute:

1- Full Virtualization: hypervisor at Ring 0 and OS at Ring 1

We load the HV —> boot into it —> load the guest OS in Ring 1 —> the HV scans the guest OS executable (the original machine instructions of the OS) and overwrites the
privileged instructions —> we save another executable that no longer violates the privilege checks.

Each VM is assigned a Virtual Machine Monitor (VMM):


-Provides virtual components to each VM
- Performs Emulation using Hooks or Binary Translation (BT) of non-virtualizable OS instructions
- Binary Translation provides ‘Full Virtualization’ because the hypervisor completely decouples the guest operating system from the underlying hardware.
- Binary Translation: simple, non-privileged instructions (add, move) are kept as-is in the translated executable —> every instruction that is privileged
is replaced by a function call that will trap into the HV.
- Guest OS requires no modifications & not aware of being virtualized.
We scan the OS executable for privileged instructions —> for every privileged instruction we encounter, we replace it with a function call that goes into the VMM.
E.g., a privileged instruction (RST at address 6784) —> we replace it with a JMP to some address (1234), and this address is already in the VMM (in the HV).

- Hooking/trampoline: when you jump back and forth, privileged instructions are replaced with calls that
direct us to code in the VMM. We execute that code, then return (JMP 6784) to the next instruction.

The hypervisor must scan every code, every assembly instruction within the OS executable —> a lot of work and takes a lot of time.

- Why does Full Virtualization take a lot of time? Because it does a lot of binary translation —> We have to scan every instruction line by line, and
sometimes it misses.

Do we have to change the original executable of the OS? No, the changes are all made by the HV itself. The guest OS requires no modifications.

Nonvirtualized instructions (privilege) include sensitive kernel operations (CPU ops, memory management, interrupt handling and time keeping) —> If a
guest OS accesses CPU flags, the binary translation program replaces these with calls to the hypervisor or specific opcodes to trap into VMM.

Hello.c —> Hello.exe: when you compile Hello.c, you get the executable Hello.exe, which contains the assembly (machine) instructions.
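A minimal sketch of the binary-translation idea described above: scan the guest's instruction stream, replace each privileged instruction with a call that traps into VMM code which emulates it, and leave non-privileged instructions untouched. The opcode names and handlers are invented; real binary translators work on x86 machine code, not strings:

```python
PRIVILEGED = {"RST", "CLI", "WRCR3"}       # hypothetical privileged opcodes

def vmm_emulate(instr, vm_state):
    """VMM handler reached by the trampoline: emulate the privileged op."""
    if instr == "CLI":
        vm_state["interrupts_enabled"] = False   # only the VM's *virtual* flag
    elif instr == "RST":
        vm_state["pending_reset"] = True
    # ... then return to the next guest instruction

def binary_translate(guest_code):
    """Replace privileged instructions with calls into the VMM (hooks)."""
    translated = []
    for instr in guest_code:
        if instr in PRIVILEGED:
            translated.append(("CALL_VMM", instr))   # trampoline into the VMM
        else:
            translated.append(("NATIVE", instr))     # add/move etc. run as-is
    return translated

def run(translated, vm_state):
    for kind, instr in translated:
        if kind == "CALL_VMM":
            vmm_emulate(instr, vm_state)
        # NATIVE instructions would execute directly on the CPU

state = {"interrupts_enabled": True}
run(binary_translate(["ADD", "MOVE", "CLI", "ADD", "RST"]), state)
print(state)   # the guest believes it disabled interrupts and requested a reset
```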

2- Paravirtualization: both hypervisor and OS at Ring 0.

-Para: alongside.
-Paravirtualization allows communication between the guest OS and hypervisor to improve performance.
-Guest OS knows that it is virtualized
-Modified guest OS kernel is used —> The source code of the guest OS is modified —> All system HW resource access related code is modified with
Hypervisor APIs.
-Unmodified guest OS is not supported —> Compatibility and portability are poor
-Paravirtualization requires a special version of the OS that runs on top of the HV, so there is no problem running both the OS and the HV in the same ring
—> The OS code is written for virtualization (in the previous technique there was no modification of the OS —> the modification was made only by the HV).
—> All these privileged/non-virtualizable instructions are replaced by hypercalls (calls to the hypervisor) that trap into HV code
—> here, every privileged instruction in this modified OS version is already a call to the HV.

Modified code of OS, where for every privileged code, it is written specifically to run on top of HV.
—> Instead of having a sys call, we have a hyper call.

(Diagram: in the modified OS executable, the privileged instruction (RST at address 6784) is already written as a hypercall (Call HV-RST) that lands at address 1234 inside the VMM/HV and then returns with JMP 6784 —> the OS is a special version designed to run on top of the HV.)
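For contrast with binary translation, an equally hypothetical sketch of the paravirtualized case: the guest kernel source is already written to issue hypercalls instead of privileged instructions, so nothing has to be scanned or rewritten at run time:

```python
class Hypervisor:
    """Hypothetical hypercall interface exposed to a modified guest kernel."""
    def hypercall_disable_interrupts(self, vm_state):
        vm_state["interrupts_enabled"] = False    # virtual flag only

    def hypercall_set_page_table(self, vm_state, table):
        vm_state["page_table"] = table            # HV validates and applies it

class ParavirtGuestKernel:
    """Modified guest OS: privileged operations are written as hypercalls."""
    def __init__(self, hv):
        self.hv = hv
        self.state = {"interrupts_enabled": True, "page_table": None}

    def enter_critical_section(self):
        # Instead of executing CLI directly (which would need Ring 0),
        # the modified kernel asks the hypervisor.
        self.hv.hypercall_disable_interrupts(self.state)

guest = ParavirtGuestKernel(Hypervisor())
guest.enter_critical_section()
print(guest.state)
```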

3- Hardware Assisted Virtualization: the CPU is designed to speed up virtualization.

-Automatically done by the hardware/processor.
-It takes over handling the privileged instructions (full virtualization) or hypercalls (paravirtualization) on the fly
—> you don't have to do it ahead of time with binary translation or paravirtualization.
-Reduces the virtualization overhead caused by full and paravirtualization —> the CPU speeds it up.
-The guest VM state is stored in Virtual Machine Control Structures (VT-x) or Virtual Machine Control Blocks (AMD-V).

Process Switching:
If one process is running and we want to run another (one is in the running state and the other is in the ready state) —>
we copy the CPU register of the process running into the PCB, then load the CPU register from PCB into
CPU.
—> Rather than doing all this, when switching from one OS to another, it is done automatically by the
CPU (CPU does the copying) in Hardware Assisted Virtualization.
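A small sketch of the manual register save/restore described above, which is exactly the copying that hardware assist (VT-x/AMD-V storing state in a VMCS/VMCB) takes over. Register names and structures are illustrative only:

```python
# Hypothetical software context switch: copy CPU registers out of and into PCBs.
cpu = {"pc": 0, "sp": 0, "ax": 0, "bx": 0}

def context_switch(running_pcb, next_pcb):
    """Save the CPU registers into the running PCB, load the next PCB's."""
    running_pcb["registers"] = dict(cpu)        # manual save (costs time)
    running_pcb["state"] = "ready"
    cpu.update(next_pcb["registers"])           # manual restore (costs time)
    next_pcb["state"] = "running"

p1 = {"pid": 1, "state": "running", "registers": {"pc": 100, "sp": 900, "ax": 7, "bx": 0}}
p2 = {"pid": 2, "state": "ready",   "registers": {"pc": 200, "sp": 800, "ax": 0, "bx": 3}}

cpu.update(p1["registers"])
context_switch(p1, p2)   # with hardware assist, the CPU itself would do this copy
print(cpu, p1["state"], p2["state"])
```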
Xen Hypervisor: A bare-metal HV that employs paravirtualization —> free, open-source hypervisor software.

- Dom0 (host system / management domain): provides console management to the user (CLI, GUI, user commands) and sits between the Xen VMM (hypervisor) and the guest systems.

- Dom0 uses back-end drivers to allocate resources to the DomU virtual machines.

- A DomU guest handles all of its hardware access by using the hypervisor to pass requests to the host (Dom0).

- Any user with full access to Dom0 also has complete control over every active DomU.

- If you want to access hardware, you ask Dom0 —> Dom0 will ask the kernel/hypervisor, which will start this service.

Oracle VM Server (based on Xen Technology):

- User domain (DomU): to interact with the hypervisor or to access the hardware, you go through Dom0.
- Kernel: scheduling, creation, management.
- At the user level, you must initiate the hypercall through the Oracle VM Agent; then you can call the hypervisor.

Dom0 takes these hypercalls (from DomU1 and DomU2), then passes them down to the kernel (the hypervisor).
It also provides console management (CLI) —> If you want to manage, run, or start VMs, it is all done in Dom0.
Dom0 gets all the hypercalls from the virtual machines running in the user domains and provides console management (CLI, GUI).
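A hypothetical sketch of the request path described in these notes: a DomU guest cannot touch hardware itself, so its request goes to Dom0 (which owns the back-end drivers and console management), and Dom0 passes it down to the hypervisor kernel. Class and method names are invented for illustration:

```python
class XenHypervisor:
    """Kernel layer: scheduling, creation, management; talks to hardware."""
    def perform(self, request):
        return f"hypervisor executed: {request}"

class Dom0:
    """Host/management domain: console (CLI/GUI) plus back-end drivers."""
    def __init__(self, hv):
        self.hv = hv

    def handle_hypercall(self, domu_name, request):
        # Dom0 receives the guest's request and passes it to the hypervisor.
        print(f"Dom0: request from {domu_name} -> {request}")
        return self.hv.perform(request)

class DomU:
    """Guest domain: no direct hardware access; asks Dom0."""
    def __init__(self, name, dom0):
        self.name, self.dom0 = name, dom0

    def access_disk(self):
        return self.dom0.handle_hypercall(self.name, "read block 42")

hv = XenHypervisor()
dom0 = Dom0(hv)
print(DomU("DomU1", dom0).access_disk())
```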

The Xen architecture is better than the hardware-assisted virtualization architecture because:

• You don't do mode switching from user to kernel —> cheaper.
• We are running things in the user domain —> we don't have to switch mode from user to kernel (which raises an interrupt). So if you make a
hypercall from one user domain to another user domain, there is no need for kernel switching.
• Otherwise we would need user-to-kernel switching, which is an interrupt that involves CPU overhead, saving registers, etc.

Virtual Machine:
-From a User's Perspective: VM is a logical compute system.
Runs an OS and applications like a physical machine.
Contains virtual components such as CPU, RAM, disk, and NIC (all stored in an image, e.g., an AMI).
—> Give me a machine with an OS —> within that OS I run an app on top of it.

-From a Hypervisor’s Perspective: VM is a discrete set of files including Configuration, Virtual disk, Virtual BIOS, VM swap, and Log files.
Each file contains information about different aspects of the VM.
—> I am a hypervisor, I need to give information about the VM (how much storage, RAM, CPU it has, configuration of BIOS and hard disk).
—> All this information we keep in an image (AMI).
—> AMI has a package with all these files —> One of these files has all information about the VMs.
Virtual Machine Files:

• Virtual BIOS File:


Basic functions —> Stores the state of the virtual machine’s (VM’s) BIOS.

• Virtual Swap File:


Acts as a VM's paging file, backing up the VM RAM contents.
exists only when the VM is running.
Used when you want to increase the virtual memory of the OS beyond the physical RAM —> we decide how much we want to increase it by.
If you want to increase from 4 GB to 8 GB —> you create as much virtual memory (swap) as the OS needs.
—> The swap file size is the amount of virtual memory used for paging.
—> The swap file must be as big as that virtual memory for paging to work.
—> If a required page isn't in RAM, it's fetched from the swap file and placed into RAM.

• Virtual Disk File:


Stores the contents of the VM’s disk drive.
Appears as a physical disk drive to the VM.
VMs can have multiple disk drives.
D drive / C drive —> each VM's disk drive contents must be stored as a file on the physical hard disk, for every OS.

• Log File:
Keeps a log of VM activity (events, errors) and is useful for troubleshooting.
Captures events, errors, and anything related to OS (like event viewer) —> Every event (including entering a password) is recorded.

• Virtual Configuration File:


Stores configuration information chosen during VM creation.
Includes details such as virtual machine name, guest OS, virtual disk parameters, number of CPUs and memory sizes, number of adaptors, MAC
addresses, network adapters, SCSI controller type, and disk type.
Contains information about the OS that enables the hypervisor to run —> Specifies details such as virtual machine name, guest OS version (Linux,
Windows), virtual disk parameters, allocated CPUs and memory, adapters, MAC and IP addresses, SCSI controller type, disk type, and network
parameters.
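As a concrete but entirely made-up illustration of the kind of information such a configuration file holds, the same details expressed as a Python dictionary (real hypervisors use their own file formats, and none of these keys or values come from an actual product):

```python
# Hypothetical contents of a VM configuration file, shown as a Python dict.
vm_config = {
    "vm_name": "web-server-01",
    "guest_os": "Linux 64-bit",
    "num_cpus": 2,
    "memory_mb": 4096,
    "virtual_disks": [
        {"file": "web-server-01-disk0", "size_gb": 40, "controller": "SCSI"},
    ],
    "network_adapters": [
        {"type": "virtual-nic", "mac_address": "00:50:56:aa:bb:cc"},
    ],
    "scsi_controller": "example-controller",
}

# The hypervisor reads this information at power-on to know what virtual
# hardware to present to the guest (CPUs, RAM, disks, NICs, controller types).
for key, value in vm_config.items():
    print(f"{key}: {value}")
```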

Virtual Machine Hardware Components:

- Virtual CPU (vCPU): logical CPUs or cores assigned to the VM.


Virtual Machine Console:
• Provides mouse, keyboard, and screen functionality.
• used when installing an OS.
• Allows access to BIOS of the VM.
• Offers the ability to power the VM on/off and to reset it.
• Used for virtual hardware configuration and troubleshooting issues.
• Hypervisor has console (CLI, GUI).

Hypervisors support multi-core, hyper-threading, and CPU load- balancing features to optimize CPU resources.
- Multi-core processors: multiple processing units (cores) in a single CPU.
- Hyper-threading: makes a physical CPU appear as two or more logical CPUs

1) Multi-core Processors: one processor (socket) with multiple cores.
-Socket (processor): combines two or more cores into a single integrated circuit —> each socket has its own power.
-Virtual machines can be configured with one or more virtual CPUs.
-Virtual CPUs in virtual machines run on a physical CPU by the hypervisor.
-Hypervisor scheduler: optimizes the placement of virtual CPUs onto different sockets/processors to maximize the overall utilization and performance.

A VM that runs on multiple cores —> better performance —> by running the OS using time-sharing:
give each app or process a time slot to run —> one process is given to one core, and another process to another core —> at any instant of
time, we can have multiple apps/processes running at the same time.
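A toy sketch of the time-sharing idea above: the hypervisor scheduler hands each virtual CPU a time slot on some physical core, so at any instant several VMs can be running on different cores. The round-robin policy here is only an illustration, not how any particular hypervisor schedules:

```python
from itertools import cycle

physical_cores = ["core0", "core1", "core2", "core3"]
# Virtual CPUs belonging to three VMs: (vm name, vcpu index).
vcpus = [("VM1", 0), ("VM1", 1), ("VM2", 0), ("VM3", 0), ("VM3", 1)]

def schedule_time_slots(vcpus, cores, slots=3):
    """Round-robin: each time slot, place waiting vCPUs onto the cores."""
    queue = cycle(vcpus)
    for slot in range(slots):
        placement = {core: next(queue) for core in cores}
        print(f"slot {slot}: " +
              ", ".join(f"{c}->{vm}/vcpu{i}" for c, (vm, i) in placement.items()))

schedule_time_slots(vcpus, physical_cores)
```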

(Core comparison: each core has its own ALU, cache, and registers, but all are packaged in one socket (processor) —> more cores per socket is more expensive.
- Single-socket, single-core system: allows only one CPU to execute at a time.
- Dual-socket system (two processors) or dual-core processor: allows 2 CPUs to execute at the same time —> better performance than a single core.
- Quad-core processor: allows 4 CPUs to execute at the same time —> better performance than single or dual core.)

2) Hyper-threading: Makes a physical CPU appear as two Logical CPUs (LCPUs) —> Enables OS to schedule two or more threads simultaneously.
- Two LCPUs share the same physical resources —> While the current thread is stalled, CPU can execute another thread (Due to cache miss or data
dependency).
- Hypervisor running on a hyper-threading-enabled CPU provides improved performance and utilization.
- Every CPU has its own core/ALU —> If you want to do load balancing → increase performance by increasing the number of cores.

Why do we have duplicates of hardware CPUs (two sets of hardware registers)?


1- Manual Saving and Restoring:
- When switching processes, the CPU registers must be saved manually (copied into memory).
- Upon returning to the original process, the registers must be restored manually by copying them back.
2- Time and Resource Cost:

- The process of saving and restoring registers manually is expensive in terms of time, resources, and energy.
- This context-switching overhead can slow down the system.


3- Independent Registers for Each VM:
- By having separate sets of CPU registers for each Virtual Machine (VM), the system can switch contexts quickly.
- This eliminates the need for manual copying and restoration, thus saving time and improving efficiency.
HT (Hyper-Threading) Technology:
- Allows a single physical CPU core to execute two software threads simultaneously, creating multiple logical cores —> at a time, only one thread is using
the ALU.
- Makes context switching more efficient by adding additional hardware resources (interrupt controllers, general, control, and special registers).
- Enhances resource utilization and improves performance.
- Minimizes context-switching overhead compared to manual copying.
- Enabled if the number of threads is higher than the number of cores —> total thread count equals twice the number of cores (2 cores = 4 threads).
- Mechanism: - Adds a second set of core components to maintain thread states for both threads.
- Allows effortless switching between threads by toggling register sets.

When a hypervisor is running on multi-processor, hyper-threading-enabled compute systems, it needs to balance the load across CPUs to achieve good
performance —> it does this by migrating a thread from one logical CPU (over-utilized) to another (under-utilized) to keep the load balanced.
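A minimal sketch of that balancing step: find an over-utilized logical CPU and an under-utilized one and migrate a thread between them. The run queues and threshold are invented for illustration:

```python
# Hypothetical per-logical-CPU run queues (thread names).
lcpus = {
    "LCPU0": ["t1", "t2", "t3", "t4"],   # over-utilized
    "LCPU1": ["t5"],                      # under-utilized
    "LCPU2": ["t6", "t7"],
    "LCPU3": [],                          # idle
}

def balance(lcpus):
    """Migrate one thread from the busiest logical CPU to the least busy one."""
    busiest = max(lcpus, key=lambda c: len(lcpus[c]))
    idlest = min(lcpus, key=lambda c: len(lcpus[c]))
    if len(lcpus[busiest]) - len(lcpus[idlest]) > 1:
        thread = lcpus[busiest].pop()
        lcpus[idlest].append(thread)
        print(f"migrated {thread}: {busiest} -> {idlest}")

balance(lcpus)
print(lcpus)
```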
Example (cores 1, 2, 3): VM2 is communicating with VM3; VM1 is not communicating with any. How will the hypervisor use the cores?

- Power/Energy Saving: Hypervisor shuts down unused cores (core connected with VM1) to save power by
holding their state —> ex: matrix multiplication

- Load Balancing: Hypervisor distributes tasks across logical cores dynamically to prevent overloading and
improve performance.

(Each pair of logical CPUs maps onto one physical CPU.)

VM Affinity: link a VM to certain hardware / hypervisor —> each piece of hardware has different characteristics (e.g., licenses) —> assign the VM to the hardware /
hypervisor with the specific characteristics it needs to be launched on.
- When sharing hardware —> performance is reduced.
- Affinity can be assigned freely, unless you have a VM that requires special hardware —> then it must be assigned to that special hardware.

Types:

1- VM to CPU Affinity: All threads of the same VM run on a specific CPU.

2- VM to VM Affinity: A selected group of VMs is affinitized to the same hypervisor to improve performance when VMs are communicating with each other
heavily and for licensing reasons.
- If a group of VMs communicate heavily —> they are placed under the same hypervisor or close hypervisors to avoid delays.
3- Anti-Affinity: Ensures selected VMs are not together on a hypervisor for availability, green computing, or load balancing.
- allows VMs to migrate to different hypervisors in a cluster.
- used when there is no communication between VMs and when the exact location of a VM does not matter.
- helps with load balancing or consolidating VMs, especially in cloud datacenters.
- If a VM’s hardware is going crazy, we can’t affinitize it under the same hypervisor since that hardware acts strangely —> Anti-affinity allows us to
monitor, move the VM, and reassign it elsewhere.
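A sketch of how affinity and anti-affinity rules like the three types above might be checked when placing VMs onto hypervisors. The rule format and placement logic are hypothetical, just to make the idea concrete:

```python
hypervisors = {"HV1": [], "HV2": [], "HV3": []}

# Hypothetical rules: VM-to-VM affinity keeps chatty VMs together,
# anti-affinity keeps selected VMs apart (availability, security, balancing).
affinity_groups = [{"web1", "db1"}]          # communicate heavily -> same HV
anti_affinity_groups = [{"fw1", "fw2"}]      # replicas -> different HVs

def place(vm, hypervisors):
    for group in affinity_groups:
        if vm in group:
            for hv, vms in hypervisors.items():
                if any(other in vms for other in group):
                    return hv                              # join the group's HV
    for group in anti_affinity_groups:
        if vm in group:
            for hv, vms in hypervisors.items():
                if not any(other in vms for other in group):
                    return hv                              # no group member here
    return min(hypervisors, key=lambda hv: len(hypervisors[hv]))  # load balance

for vm in ["web1", "db1", "fw1", "fw2", "batch1"]:
    hv = place(vm, hypervisors)
    hypervisors[hv].append(vm)

print(hypervisors)   # web1/db1 end up together; fw1/fw2 end up apart
```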

Q: When is affinity a good option?

When there is no heavy communication, and we want to save power and consolidate resources, balance the load, or ensure availability.

Q: Why is anti-affinity good for security?

If two VMs share the same hardware under the same hypervisor, they also share RAM, disk, and internal cache, making them vulnerable to attacks.
—> Anti-affinity avoids this by separating the VMs, ensuring they don’t share resources and reducing the risk of security breaches.
- The hypervisor must manage the memory by assigning it to cores and CPUs.
- If you have physical RAM of 4 GB and a VM requires 2 GB —> you can run 1 VM, not 2, because the hypervisor also has to run, so it will
occupy part of the RAM before we bring up the OS of the VM.
- The hypervisor has a limited amount of physical RAM → memory overcommitment makes the RAM that is available to the VMs (not processes) look like more than what is really
allocated physically.
—> The hypervisor must do memory optimization and memory overcommitment using 3 techniques:

1- Transparent Page Sharing:


- allows multiple VMs to share identical memory pages by mapping them to the same physical page.
- used to save physical memory by avoiding redundancy.
- VMs with identical OS can share up to 90% of code and data if the pages are identical.
If VMs run the same OS type, the OS code is not duplicated in memory —> all VMs point to the same physical memory for shared pages, reducing the
need for additional physical RAM.
- Pages are shared as read-only until modification.
- Modification (Copy-On-Write - COW): When a VM attempts to modify a shared page (page content differs), the hypervisor:
- Generates a page fault.
- Creates a separate private copy for the modifying VM.
- Every VM will have its own page without disrupting other VMs.
—> ensures data integrity without disrupting other VMs.
- Optimizes memory usage by reducing physical RAM requirements.

(Diagram: two VMs (Linux and Windows) map virtual memory pages A and B onto physical RAM pages 1 and 2; identical pages share one physical page, while pages with different data get separate copies.)
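A toy model of transparent page sharing with copy-on-write: identical guest pages are mapped to one physical page, and a private copy is made only when a VM writes to a shared page. The hashing and page tables are simplified and invented:

```python
physical_pages = {}    # content hash -> physical page id
page_tables = {}       # (vm, virtual page) -> physical page id
next_page_id = 0

def map_page(vm, vpage, content):
    """Map a guest page; share it if identical content already exists."""
    global next_page_id
    key = hash(content)
    if key not in physical_pages:              # first copy: allocate a page
        physical_pages[key] = next_page_id
        next_page_id += 1
    page_tables[(vm, vpage)] = physical_pages[key]

def write_page(vm, vpage, new_content):
    """Copy-on-write: give the writing VM its own private physical page."""
    global next_page_id
    page_tables[(vm, vpage)] = next_page_id    # break the sharing
    next_page_id += 1

# Two Linux VMs load the same kernel page -> one physical page is used.
map_page("VM1", 0, "linux-kernel-page")
map_page("VM2", 0, "linux-kernel-page")
print(page_tables)            # both map to physical page 0

write_page("VM2", 0, "patched-kernel-page")   # VM2 modifies -> private copy
print(page_tables)            # VM2 now has its own physical page
```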

2- Memory Ballooning:

- A way to manage memory by letting the hypervisor take memory from one VM and give it to another.

- Hypervisor uses a balloon driver (BD) installed in the Guest OS of the VM with lower priority to reclaim memory.

- Hypervisor reallocates the reclaimed memory to a high-priority VM that needs more resources.

- Process: - When memory is needed:


1- The balloon driver inflates and reclaims memory from the lower-priority VM.
2- The Guest OS pushes unused pages to the disk.
3- The hypervisor takes the reclaimed memory and assigns it to the high-priority VM.
- When the memory shortage ends:
1- The balloon deflates, releasing memory back to the lower-priority VM.
2- The driver relinquishes the reclaimed memory to the Guest OS.
3- The Guest OS can then use the memory pages again.
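A schematic sketch of the ballooning handshake described above: the balloon driver inside the low-priority guest inflates by claiming guest pages, and the hypervisor hands the freed physical memory to the high-priority VM. All numbers and names are illustrative:

```python
class GuestVM:
    def __init__(self, name, memory_mb):
        self.name = name
        self.memory_mb = memory_mb     # physical memory currently backing the VM
        self.balloon_mb = 0            # memory claimed by the balloon driver

    def inflate_balloon(self, mb):
        """Balloon driver claims pages; the guest pushes unused pages to disk."""
        self.balloon_mb += mb
        self.memory_mb -= mb
        return mb                      # reclaimed by the hypervisor

    def deflate_balloon(self, mb):
        """Shortage over: memory is given back to the guest OS."""
        self.balloon_mb -= mb
        self.memory_mb += mb

low = GuestVM("low-priority", 4096)
high = GuestVM("high-priority", 4096)

reclaimed = low.inflate_balloon(1024)   # hypervisor asks the balloon to inflate
high.memory_mb += reclaimed             # reallocated to the VM that needs it
print(low.memory_mb, high.memory_mb)    # 3072 5120

high.memory_mb -= 1024                  # pressure eases; give the memory back
low.deflate_balloon(1024)
print(low.memory_mb, high.memory_mb)    # 4096 4096
```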
3- Memory Swapping
- A process where the hypervisor moves the VM's memory contents to disk when memory resources are insufficient.
- Each VM requires a swap file: created when the VM powers-on and deleted when the VM powers-off.
- Process:
1- Hypervisor swaps out the entire memory for VMs with lower activity levels (least active VM) to free up resources.
2- Hypervisor copies the VM's physical memory pages to the corresponding swap file and reallocates memory resources to more active VMs.
- It is resource-intensive and introduces significant performance degradation.
- Retrieving swapped data from disk takes time, causing delays in VM processes.
- Swapping is used as a last resort due to its high latency and impact on system efficiency.
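Finally, a simplified sketch of hypervisor-level swapping as a last resort: the least active VM's resident pages are copied to its swap file on disk and the freed memory can go to more active VMs. This is an illustration, not any vendor's algorithm:

```python
vms = {
    "VM1": {"activity": 0.9, "resident_pages": ["a", "b", "c"], "swap_file": []},
    "VM2": {"activity": 0.1, "resident_pages": ["x", "y"], "swap_file": []},
}

def swap_out_least_active(vms):
    """Copy the least active VM's memory pages to its swap file on disk."""
    victim = min(vms, key=lambda v: vms[v]["activity"])
    pages = vms[victim]["resident_pages"]
    vms[victim]["swap_file"].extend(pages)   # slow disk write
    vms[victim]["resident_pages"] = []
    return victim, len(pages)                # freed pages go to active VMs

victim, freed = swap_out_least_active(vms)
print(f"swapped out {victim}, freed {freed} pages")   # retrieving them later is slow
```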
