The Need for x86 Virtualization

History

Virtualization was first implemented more than 30 years ago by IBM as a way to logically partition mainframe computers into separate virtual machines. These partitions allowed mainframes to “multitask”: run multiple applications and processes at the same time. Since mainframes were expensive resources at the time, they were designed for partitioning as a way to fully leverage the investment.

The Need for x86 Virtualization

Virtualization was effectively abandoned during the 1980s and 1990s when client-server applications and inexpensive
x86 servers and desktops led to distributed computing. The broad adoption of Windows and the emergence of Linux
as server operating systems in the 1990s established x86 servers as the industry standard. The growth in x86 server
and desktop deployments led to new IT infrastructure and operational challenges. These challenges include:

 Low Infrastructure Utilization. Typical x86 server deployments achieve an average utilization of only 10% to
15% of total capacity, according to International Data Corporation (IDC), a market research firm.
Organizations typically run one application per server to avoid the risk of vulnerabilities in one application
affecting the availability of another application on the same server.

 Increasing Physical Infrastructure Costs. The operational costs to support growing physical infrastructure
have steadily increased. Most computing infrastructure must remain operational at all times, resulting in
power consumption, cooling and facilities costs that do not vary with utilization levels.

 Increasing IT Management Costs. As computing environments become more complex, the level of
specialized education and experience required for infrastructure management personnel and the associated
costs of such personnel have increased. Organizations spend disproportionate time and resources on
manual tasks associated with server maintenance, and thus require more personnel to complete these
tasks.

 Insufficient Failover and Disaster Protection. Organizations are increasingly affected by the downtime of
critical server applications and inaccessibility of critical end user desktops. The threat of security attacks,
natural disasters, health pandemics and terrorism has elevated the importance of business continuity
planning for both desktops and servers.

 High-Maintenance End-User Desktops. Managing and securing enterprise desktops presents numerous
challenges. Controlling a distributed desktop environment and enforcing management, access and security
policies without impairing users’ ability to work effectively is complex and expensive. Numerous patches and
upgrades must be continually applied to desktop environments to eliminate security vulnerabilities.

WHAT IS VIRTUALIZATION?

Virtualization is the creation of a virtual (rather than actual) version of something, such as an operating
system, a server, a storage device or network resources.

Mrigank
Virtualization allows multiple operating system instances to run concurrently on a single computer; it is a means of decoupling the operating system from the underlying hardware. Each “guest” OS is managed by a Virtual Machine Monitor (VMM), also known as a hypervisor. Because the virtualization layer sits between the guests and the hardware, it can control the guests’ use of CPU, memory, and storage, and can even allow a guest OS to migrate from one machine to another.

By using specially designed software, an administrator can convert one physical server into multiple virtual machines. Each virtual server acts like a unique physical device, capable of running its own operating system (OS).

Virtualization is not a new concept, but its complexity has been growing, and a number of new paradigms are emerging. I will try to demystify some of the concepts behind virtualization, briefly explain some of its basics, and finally look at some of the products and solutions out there.

To begin, let me introduce three very simple concepts regarding virtualization: the host operating system, the hypervisor, and the guest operating system.

As you can see in this slide… (explain the diagram).

Virtualization Components

The host operating system provides a host to one or more virtual machines
(or partitions) and shares physical resources with them. It’s where the
virtualization product or the partitioning product is installed.

Saumya

The guest operating system is the operating system installed inside a virtual machine (or a partition). In a virtualization solution the guest OS can be completely different from the host OS; in a partitioning solution the guest OS must be identical to the host OS.

A hypervisor, also called a virtual machine monitor (VMM), is a program that allows multiple operating systems to share a single hardware host. Each operating system appears to have the host’s processor, memory, and other resources all to itself. The task of the hypervisor is to handle resource and memory allocation for the virtual machines, ensuring they cannot disrupt each other, in addition to providing interfaces for higher-level administration and monitoring tools.
Hypervisors come in two types:

a) Bare-metal hypervisor: Type 1 hypervisors, also known as bare-metal, are software systems that run directly on the host’s hardware as a hardware control and guest operating system monitor. Bare-metal virtualization is the current enterprise data center leader.
b) Hosted hypervisor: Type 2 hypervisors, also known as hosted, are software applications running within a conventional operating system environment. This type of hypervisor is typically used in client-side virtualization solutions such as Microsoft’s Virtual PC; a small code sketch follows.
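
On Linux, the line between the two types blurs: the KVM module turns the kernel itself into a hypervisor, driven from user space through the /dev/kvm device. Below is a minimal sketch in C, assuming a Linux host with KVM loaded; it only queries the API version and creates an empty VM, while a real monitor such as QEMU would go on to attach memory and virtual CPUs.

/* Minimal KVM interaction: query the API version and create an empty VM. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void) {
    int kvm = open("/dev/kvm", O_RDWR);        /* hypervisor control device */
    if (kvm < 0) {
        perror("open /dev/kvm");
        return 1;
    }

    /* Returns 12 on all modern kernels; the KVM API has been stable for years. */
    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    int vm = ioctl(kvm, KVM_CREATE_VM, 0);     /* fd representing a new, empty VM */
    if (vm < 0) {
        perror("KVM_CREATE_VM");
        close(kvm);
        return 1;
    }
    printf("Created an empty VM (fd %d)\n", vm);

    close(vm);
    close(kvm);
    return 0;
}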
What are Protection Rings?

Protection rings are a mechanism to protect data and functionality from faults (fault tolerance) and malicious behavior (computer security). This approach is diametrically opposite to that of capability-based security.
Computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels, or layers, of privilege within the architecture of a computer system. This is generally hardware-enforced by CPU architectures that provide different CPU modes at the hardware or microcode level. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number). On most operating systems, Ring 0 is the level with the most privileges and interacts most directly with the physical hardware, such as the CPU and memory.
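
To make the ring concept concrete, here is a minimal sketch in C (assuming GCC or Clang on an x86 or x86-64 Linux machine): on x86 the current privilege level (CPL) is held in the two low bits of the CS segment register, so an ordinary user-space process reports ring 3, while only kernel code runs at ring 0.

/* Print the current x86 privilege level (CPL), stored in the low two
   bits of the CS segment selector. An ordinary process reports ring 3. */
#include <stdio.h>

int main(void) {
    unsigned short cs;
    __asm__("mov %%cs, %0" : "=r"(cs));   /* copy the code-segment selector */
    printf("CS selector = 0x%04hx, running in ring %d\n", cs, cs & 3);
    return 0;
}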

Navin

Types of Virtualization

1) Hardware Virtualization:

This is the most common type and is used in company IT departments as well as in data centers. The server’s hardware is virtualized, allowing us to run different operating systems and different applications simultaneously on the same hardware. This allows us to do server consolidation. And the benefits are obvious (only listing the critical ones here, and lower cost is a major advantage across all of these; a quick hardware-support check follows the list):

a) Fewer servers required for the same number of applications.
b) Less power consumption.
c) Less maintenance overhead for the IT staff.
d) Better resource utilization.
e) Easier (and faster) to add more capacity.
f) Patch management and upgrades become easier.
g) DRP (Disaster Recovery Planning) becomes easier. Entire virtual environments can be backed up and even migrated without any interruption to the service.
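
Hardware virtualization works best when the processor assists it directly (Intel VT-x or AMD-V). As a minimal sketch, assuming GCC or Clang on an x86-64 machine, the CPUID instruction reports whether that assistance is present: VT-x is advertised in leaf 1, ECX bit 5 (VMX), and AMD-V in leaf 0x80000001, ECX bit 2 (SVM).

/* Detect hardware-assisted virtualization via CPUID.
   Intel VT-x: CPUID leaf 1, ECX bit 5 (VMX).
   AMD-V:      CPUID leaf 0x80000001, ECX bit 2 (SVM). */
#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5))) {
        printf("Intel VT-x (VMX) supported\n");
    } else if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2))) {
        printf("AMD-V (SVM) supported\n");
    } else {
        printf("No hardware virtualization support detected\n");
    }
    return 0;
}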

2) Desktop Virtualization:

We have gotten into the habit of calling this VDI, since that is the term VMware uses for desktop virtualization; VDI stands for virtual desktop infrastructure. But this is not limited to just VMware: Citrix Systems has a similar offering called XenDesktop. What this means is that your end users’ computing environments (their OS, their applications, their downloads, their preferences, etc.) are all stored in VMs in a hosted environment, which could be hosted either in-house by the company’s IT or in a data center. So, the VMs are then managed in one single place for all the users in a department or company, and the computing environment is delivered remotely to the end users. The one reason why adoption has been a bit slow on this front is that, unlike server consolidation (hardware virtualization), desktop virtualization requires working across a lot of different organizations within the company, and it impacts the end users a lot more while the plan is being put in place and executed. The benefits are obvious:

1) Easier upgrades and patch management.
2) The IT desktop support process becomes much easier.
3) You can easily add more users as your organization grows, and provisioning of new applications and VMs takes minutes, not days or weeks.
4) Better resource utilization and less power consumption.
5) Easier recovery management.

Aditi

3) Storage Virtualization:

So, consolidating servers as well as desktops is all great, but what happens to the storage requirements then? Won’t the storage requirements also grow by leaps and bounds? This is the next question that you are going to get from your clients, internal or external. This also means that, since everything is in one place, one also needs a proper plan for disaster recovery and business continuity. So what does storage virtualization mean, then? It means making multiple storage devices appear as one common, shared pool of storage. A proper backup and restore strategy needs to be formed as well, and a proper DRP needs to be drawn up; both local and site failures need to be accounted for. We will present a more detailed DRP analysis in one of the whitepapers we are working on.

Slide

The five most significant advantages that can help enterprises enhance their overall performance:
Server unification and infrastructure maximization

Virtualization enables server unification: it integrates various servers spread across an enterprise so that they can share applications and databases along with other infrastructure and resources. This maximizes the infrastructure and leads to efficient resource usage for enterprises.

Reduction in cost of infrastructure

Virtualization enables administrators to reduce the number of servers, hardware devices and data centers, which not
only downsizes the infrastructure and maintenance costs but also reduces power consumption.

Enhanced management flexibility

Virtualization requires fewer servers to perform the routine, repetitive, time-consuming tasks such as configuration, upgrades, monitoring and system scheduling.

High accessibility of applications and data

Virtualization transforms applications and data in the virtual environment so that they remain highly accessible, without time delays or disruption. This makes it possible to fulfill the needs of both customers and staff in a secure environment.

Improves system compatibility

Even when customers use different operating systems and networks, virtualization makes it possible to form a virtual system unrestricted by OS and network limitations, which makes it easy for users and clients to remain connected with the enterprise.

Disadvantages

Bearing in mind all the advantages of virtualization, one could easily conclude that virtualization is the perfect technology for any enterprise. Virtualization does have many benefits, but it has some drawbacks as well.

Knowing these drawbacks allows you to take a realistic approach to virtualization and to judge fairly whether virtualization is suitable in a given scenario. Here are some of the most notable drawbacks of virtualization:

Virtualization Solutions Have a Single Point of Failure

One of the greatest drawbacks of virtualization is that it introduces a single point of failure. When the machine on which all the virtualized solutions run fails, or when the virtualization solution itself fails, everything crashes.

This might sound scary, but this risk is actually relatively easy to mitigate. Redundant capacity and regular backups of the virtualized operating systems (together with the virtualized applications) are a safeguard against data loss and downtime due to the single point of failure.

Virtualization Demands Powerful Machines

Virtualization might save money because less hardware is required, which allows an enterprise to decrease the physical number of machines, but this does not mean that it is possible to use archaic computers to run top-notch virtualization solutions.

Virtualization solutions are hardware-hungry, and they require a really powerful machine. If the machines used are not powerful, it is still possible to deploy virtualization solutions, but when there is not enough RAM and CPU power for them, their work will be seriously disrupted. Then again, it is still cheaper to add 4 GB of RAM to a machine to make it more powerful than to buy a new machine, right?

Virtualization Might Lead to Lower Performance

Even if the machines on which virtualized operating systems and virtualized applications run are powerful enough, performance issues are still possible. What is more, one of the most unpleasant facts is that, very often, an application that has no problems when it is not virtualized starts showing all sorts of issues once it is deployed in a virtualized environment.

For instance, stress tests in virtualized environments can yield very different (and misleading) results in comparison to stress tests on a dedicated machine.

Application Virtualization Is Not Always Possible

While in most cases it is not possible to predict whether a particular application will misbehave when virtualized, there are many applications that are known to experience performance degradation when virtualized. Databases are one of the most common examples: they require frequent disk operations, and when virtualization delays reading from or writing to the disk, this can render the whole application useless.

Still, nobody says that it is impossible to virtualize a database application; even real-time financial applications successfully run in virtualized environments. However, given the chance, it is much safer to avoid virtualizing such critical and demanding applications.

(Slides 18-19: video) Liza

Examples of situations in which virtualization can help you realize a quick return on investment include:
 Environments which require the availability of a library of servers, with different setups, for purposes such as software development (i.e., test scenarios), quality assurance testing, software support (quickly and easily reproducing a relatively large number of client environments) and demo centers. This is a common starting point for virtualization for most companies.
 Environments which need to deploy selected business applications to lightly used servers or to servers which have predictable resource consumption profiles. Applications that require different resources, or which require the same resources at different times, can typically be deployed on the same physical server.

Major Players:

So, who are the major players in this market? If you have been following the virtualization world, you already know that VMware, Citrix and Microsoft are the three leading players right now. Other players have entered the market as well: Red Hat, HP, IBM, Oracle and Virtual Iron Software are some of the other names. For end consumers like us, the more competition there is in the virtualization industry, the more affordable virtualization is going to be.

There’s a video after this.
