
3.1 Infrastructure as a Service (IAAS)


3.1.1 Introduction to Virtualization
3.1.1.1 Hypervisor, Virtual Machine, Machine Image

Virtual Machines (VMs)


An introduction to virtual machines (VMs), the technology for building virtualized computing environments
and the foundation of the first generation of cloud computing.

What is a virtual machine (VM)?


A virtual machine is a virtual representation, or emulation, of a physical computer. The VM is often
referred to as a guest, while the physical machine it runs on is referred to as the host.
Virtualization makes it possible to create multiple virtual machines, each with its own operating system
(OS) and applications, on a single physical machine. A VM cannot interact directly with the physical
computer. Instead, it needs a lightweight software layer called a hypervisor to coordinate between it
and the underlying physical hardware. The hypervisor allocates physical computing resources, such as
processors, memory, and storage, to each VM, and keeps each VM separate from the others so they
do not interfere with each other.
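Most hypervisors expose this host/guest relationship programmatically. Below is a minimal sketch, assuming the libvirt Python bindings and a local QEMU/KVM hypervisor (neither is prescribed by this text; libvirt is just one common hypervisor API), that connects to the host and lists its guests:

```python
# Minimal sketch: connect to the host's hypervisor and list its guest VMs.
# Assumes the libvirt Python bindings and a local QEMU hypervisor.
import libvirt

conn = libvirt.open("qemu:///session")   # connection to the host's hypervisor
for dom in conn.listAllDomains():        # each domain is one guest VM
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"guest '{dom.name()}': {'running' if running else 'not running'}")
conn.close()
```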

While this technology can go by many names, including virtual server, virtual server instance (VSI) and
virtual private server (VPS), this article will simply refer to them as virtual machines.

How virtualization works

When a hypervisor is used on a physical computer or server (also known as a bare metal server), it allows
the physical computer to separate its operating system and applications from its hardware. Then, it can
divide itself into several independent "virtual machines."

Each of these new virtual machines can then run its own operating system and applications
independently while still sharing the original resources of the bare metal server, which the hypervisor
manages. Those resources include processing power, memory (RAM), and storage.

There are two primary types of hypervisors.

Type 1 hypervisors run directly on the physical hardware (usually a server), taking the place of the OS.
Typically, you use a separate software product to create and manipulate VMs on the hypervisor. Some
management tools, like VMware’s vSphere, let you select a guest OS to install in the VM.

You can use one VM as a template for others, duplicating it to create new ones. Depending on your
needs, you might create multiple VM templates for different purposes, such as software testing,
production databases, and development environments.

Type 2 hypervisors run as an application within a host OS and usually target single-user desktop or
notebook platforms. With a Type 2 hypervisor, you manually create a VM and then install a guest OS in
it. You can use the hypervisor to allocate physical resources to your VM, manually setting the number of
processor cores and the amount of memory it can use. Depending on the hypervisor's capabilities, you can
also set options like 3D acceleration for graphics.
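As an illustration of this manual resource allocation, here is a hedged sketch using the libvirt Python bindings (an assumption; the document does not name a specific hypervisor API). The domain XML pins the VM to 2 vCPUs and 2 GiB of memory; the VM name and disk path are placeholders.

```python
# Hedged sketch: define and start a transient VM with fixed CPU and memory
# allocations, via the libvirt Python bindings (assumed, not from the text).
import libvirt

DOMAIN_XML = """
<domain type='qemu'>
  <name>demo-vm</name>
  <memory unit='GiB'>2</memory>   <!-- memory ceiling for the guest -->
  <vcpu>2</vcpu>                  <!-- number of virtual CPU cores -->
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///session")
dom = conn.createXML(DOMAIN_XML, 0)   # create and start a transient VM
print(f"started '{dom.name()}' with {dom.maxVcpus()} vCPUs")
conn.close()
```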
Advantages and benefits of VMs

VMs offer several benefits over traditional physical hardware:

Resource utilization and improved ROI: Because multiple VMs run on a single physical
computer, customers don’t have to buy a new server every time they want to run another OS,
and they can get more return from each piece of hardware they already own.

Scale: With cloud computing, it’s easy to deploy multiple copies of the same virtual machine
to better serve increases in load.

Portability: VMs can be relocated as needed among the physical computers in a network.
This makes it possible to allocate workloads to servers that have spare computing power.
VMs can even move between on-premises and cloud environments, making them useful
for hybrid cloud scenarios in which you share computing resources between your data center
and a cloud service provider.

Flexibility: Creating a VM is faster and easier than installing an OS on a physical server
because you can clone a VM with the OS already installed. Developers and software testers
can create new environments on demand to handle new tasks as they arise.

Security: VMs improve security in several ways when compared to operating systems running directly
on hardware. A VM is a file that can be scanned for malicious software by an external program. You
can create an entire snapshot of the VM at any point in time and then restore it to that state if it
becomes infected with malware, effectively taking the VM back in time. The fast, easy creation of VMs
also makes it possible to completely delete a compromised VM and then recreate it quickly, hastening
recovery from malware infections.
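As a concrete illustration of the snapshot-and-restore workflow, here is a minimal sketch, again assuming the libvirt Python bindings and an existing VM named "demo-vm" (both assumptions, not details from the text). It requires a snapshot-capable disk format such as qcow2.

```python
# Hedged sketch: capture a clean snapshot, then roll the VM back to it later.
import libvirt

SNAPSHOT_XML = "<domainsnapshot><name>clean-state</name></domainsnapshot>"

conn = libvirt.open("qemu:///session")
dom = conn.lookupByName("demo-vm")

snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)   # capture the current state
# ... the VM is used, and possibly compromised, here ...
dom.revertToSnapshot(snap, 0)                   # take the VM back in time
conn.close()
```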

Use cases for VMs

VMs have several uses, both for enterprise IT administrators and users. Here are a few options:

Cloud computing: For the last 10+ years, VMs have been the fundamental unit of compute in the cloud,
enabling dozens of different types of applications and workloads to run and scale successfully.

Support DevOps: VMs are a great way to support enterprise developers, who can configure VM
templates with the settings for their software development and testing processes. They can create
VMs for specific tasks such as static software tests, including these steps in an automated
development workflow. This all helps streamline the DevOps toolchain.

Test a new operating system: A VM lets you test-drive a new operating system on your desktop
without affecting your primary OS.

Investigate malware: VMs are useful for malware researchers who frequently need fresh machines on
which to test malicious programs.
Run incompatible software: Some users may prefer one OS while still needing a program that is only
available in another. One good example is the Dragon range of voice dictation software. Its vendor,
Nuance, has discontinued the macOS version of its product. However, running a desktop-focused
hypervisor—such as VMware Fusion or Parallels—enables you to run Windows in a VM, giving you
access to that version of the software.

Browse securely: Using a virtual machine for browsing enables you to visit sites without worrying
about infection. You can take a snapshot of your machine and then roll back to it after each browsing
session. This is something that a user could set up themselves, using a Type 2 desktop hypervisor.
Alternatively, an admin could provide a temporary virtual desktop located on the server.

Virtual Machine Images

A virtual machine image is a template for creating new instances. You can choose images from a catalog
to create instances, or save your own images from running instances. Specialists in those platforms often
create catalog images, making sure that they are created with the proper patches and that any software
is installed and configured with good default settings. The images can be plain operating systems or can
have software installed on them, such as databases, application servers, or other applications. Images
usually remove some data related to runtime operations, such as swap data and configuration files with
embedded IP addresses or host names.

Image development is becoming a larger and more specialized area. One of the outstanding features of
the IBM SmartCloud Enterprise is the image asset catalog. The asset catalog stores a set of additional
data about images, including a “Getting Started” page, a parameters file that specifies additional
parameters needed when creating an instance, and additional files to inject into the instance at startup.
It also hosts forums related to assets, to enable feedback and questions from users of images to the
people who created those images. Saving your own images from running instances is easy, but making
images that other people use requires more effort; the IBM SmartCloud Enterprise asset catalog
provides you with tools to do this.

Because many users share clouds, the cloud helps you track information about images, such as
ownership, history, and so on. The IBM SmartCloud Enterprise knows what organization you belong to
when you log in. You can choose whether to keep images private, exclusively for your own use, or to
share with other users in your organization. If you are an independent software vendor, you can also
add your images to the public catalog.

Some differences between Linux and Windows exist. The file-based nature of the Linux operating
system makes it easy to prepare for virtualization. An image can be manipulated as a file system even
when the instance is not running. Different files, such as a user's public SSH key and runtime
parameters, can be injected into the image before booting it. Cloud operators take advantage of this for
ease of development and to make optimizations. The same method of manipulating file systems
without booting the OS cannot be done in Windows.
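For example, injecting a public SSH key into a powered-off Linux image might look like the following sketch, which assumes the libguestfs Python bindings (one common tool for offline image editing; not named in the text). The image path, user name, and key are placeholders.

```python
# Sketch: edit a guest image's file system without booting the guest OS.
import guestfs

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts("/images/linux-template.qcow2", readonly=0)
g.launch()                         # starts the libguestfs helper appliance,
                                   # not the guest OS itself
roots = g.inspect_os()             # locate the guest's root filesystem
g.mount(roots[0], "/")
g.mkdir_p("/home/clouduser/.ssh")  # inject a public SSH key before first boot
g.write("/home/clouduser/.ssh/authorized_keys",
        b"ssh-ed25519 AAAA... clouduser@example")
g.shutdown()
g.close()
```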
Hypervisor

Types of Hypervisor: The hypervisor, also known as a Virtual Machine Monitor (VMM), is the software
layer which enables virtualization. It is responsible for creating the virtual environment on which the
guest virtual machines operate. It supervises the guest systems and makes sure resources are allocated
to the guests as necessary. Generally, hypervisors are classified into one of two categories: Type 1 and
Type 2.

1) Type 1:-
The Type 1 hypervisor is considered a native or bare metal hypervisor. This is the lowest-level
type of hypervisor, running directly on the host hardware. It is responsible for
allocation of all resources (disk, memory, CPU, and peripherals) to its guests. These hypervisors
typically have a rather small footprint and do not, themselves, require extensive resources.
Occasionally, they have very limited driver databases, limiting the hardware on which they can
be installed. Some also require a privileged guest virtual machine, known as a Domain-0 or
Dom0, to provide access to the management and control interface of the hypervisor itself. The
Type 1 hypervisor is what is most typically seen in the server virtualization environment.
2) Type 2:-
The Type 2 hypervisor is also known as a hosted hypervisor. The software is not
installed on bare metal, but loaded on top of an already live operating system. This has some
advantages in that it typically has fewer hardware issues, as the host operating system is
responsible for interfacing with the hardware. A Type 2 hypervisor can be used for application
portability, as is the case for the Java Virtual Machine (JVM), or to run operating systems. The
downside is that the additional overhead can cause a performance hit compared to Type 1.

3.1.2 Resource Virtualization


3.1.2.1 Server, Storage, Network
1. What is Server Virtualization?
Server virtualization is used to mask server resources from server users. This can include the
number and identity of operating systems, processors, and individual physical servers.

Server Virtualization Definition


Server virtualization is the process of dividing a physical server into multiple unique and isolated virtual
servers by means of a software application. Each virtual server can run its own operating system
independently.

Key Benefits of Server Virtualization:


Higher server availability
Cheaper operating costs
Reduced server complexity
Increased application performance
Faster workload deployment

Three Kinds of Server Virtualization:


Full Virtualization: Full virtualization uses a hypervisor, a type of software that directly
communicates with a physical server's disk space and CPU. The hypervisor monitors the physical server's
resources and keeps each virtual server independent and unaware of the other virtual servers. It also
relays resources from the physical server to the correct virtual server as it runs applications. The biggest
limitation of using full virtualization is that a hypervisor has its own processing needs. This can slow
down applications and impact server performance.
Para-Virtualization: Unlike full virtualization, para-virtualization involves the virtual servers
working together as a cohesive unit. Because each operating system on the virtual servers is aware of
the others in para-virtualization, the hypervisor does not need to use as much processing power to
manage the operating systems.
OS-Level Virtualization: Unlike full and para-virtualization, OS-level virtualization does not use a
hypervisor. Instead, the virtualization capability, which is part of the physical server's operating system,
performs all the tasks of a hypervisor. However, all the virtual servers must run that same operating
system in this server virtualization method.

Why Server Virtualization?


Server virtualization is a cost-effective way to provide web hosting services and effectively utilize
existing resources in IT infrastructure. Without server virtualization, servers only use a small part of their
processing power. This results in servers sitting idle because the workload is distributed to only a portion
of the network’s servers. Data centers become overcrowded with underutilized servers, causing a waste
of resources and power.

By having each physical server divided into multiple virtual servers, server virtualization allows each
virtual server to act as a unique physical device. Each virtual server can run its own applications and
operating system. This process increases the utilization of resources by making each virtual server act as
a physical server and increases the capacity of each physical machine.

2. Storage Virtualization
In computer science, storage virtualization is "the process of presenting a logical view of the
physical storage resources to"[1] a host computer system, "treating all storage media (hard disk, optical
disk, tape, etc.) in the enterprise as a single pool of storage."[2]
A "storage system" is also known as a storage array, disk array, or filer. Storage systems typically use
special hardware and software along with disk drives in order to provide very fast and reliable storage
for computing and data processing. Storage systems are complex, and may be thought of as a special
purpose computer designed to provide storage capacity along with advanced data protection features.
Disk drives are only one element within a storage system, along with hardware and special purpose
embedded software within the system.
Storage systems can provide either block accessed storage, or file accessed storage. Block access is
typically delivered over Fibre Channel, iSCSI, SAS, FICON or other protocols. File access is often provided
using NFS or SMB protocols.

Within the context of a storage system, there are two primary types of virtualization that can occur:

Block virtualization used in this context refers to the abstraction (separation) of logical
storage (partition) from physical storage so that it may be accessed without regard to physical storage or
heterogeneous structure. This separation allows the administrators of the storage system greater
flexibility in how they manage storage for end users.

File virtualization addresses the NAS challenges by eliminating the dependencies between the data
accessed at the file level and the location where the files are physically stored. This provides
opportunities to optimize storage use and server consolidation and to perform non-disruptive file
migrations.
3. Network Virtualization
In computing, network virtualization is the process of combining hardware and software
network resources and network functionality into a single, software-based administrative entity, a
virtual network. Network virtualization involves platform virtualization, often combined with resource
virtualization.
Network virtualization is categorized as either external virtualization, combining many networks
or parts of networks into a virtual unit, or internal virtualization, providing network-like functionality to
software containers on a single network server.
In software testing, software developers use network virtualization to test software under
development in a simulation of the network environments in which the software is intended to
operate. As a component of application performance engineering, network virtualization enables
developers to emulate connections between applications, services, dependencies, and end users in a
test environment without having to physically test the software on all possible hardware or system
software. The validity of the test depends on the accuracy of the network virtualization in emulating real
hardware and operating systems.

Components

Various equipment and software vendors offer network virtualization by combining any of the following:

• Network hardware, such as switches and network adapters, also known as network interface cards
(NICs)
• Network elements, such as firewalls and load balancers
• Networks, such as virtual LANs (VLANs) and containers such as virtual machines (VMs)
• Network storage devices
• Network machine-to-machine elements, such as telecommunications devices
• Network mobile elements, such as laptop computers, tablet computers, and smart phones
• Network media, such as Ethernet and Fibre Channel

External virtualization
External network virtualization combines or subdivides one or more local area networks (LANs) into
virtual networks to improve a large network's or data center's efficiency. A virtual local area network
(VLAN) and network switch comprise the key components. Using this technology, a system
administrator can configure systems physically attached to the same local network into separate virtual
networks. Conversely, an administrator can combine systems on separate local area networks (LANs)
into a single VLAN spanning segments of a large network.
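As a small concrete example of this, the following sketch creates a VLAN sub-interface on a Linux host using the pyroute2 library (an assumption; the text prescribes no tool). The interface name "eth0" and VLAN ID 100 are illustrative, and the commands need root privileges.

```python
# Sketch: put a host's traffic on a separate virtual network by creating a
# VLAN sub-interface on its physical NIC (pyroute2 is assumed, not prescribed).
from pyroute2 import IPRoute

ipr = IPRoute()
parent = ipr.link_lookup(ifname="eth0")[0]       # index of the physical NIC
ipr.link("add", ifname="eth0.100",
         kind="vlan", link=parent, vlan_id=100)  # tag traffic with VLAN 100
ipr.link("set", index=ipr.link_lookup(ifname="eth0.100")[0], state="up")
ipr.close()
```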

Internal virtualization
Internal network virtualization configures a single system with software containers, such
as Xen hypervisor control programs, or pseudo-interfaces, such as a VNIC, to emulate a physical network
with software. This can improve a single system's efficiency by isolating applications to separate
containers or pseudo-interfaces.

3.1.3 Amazon EC2, Eucalyptus


Amazon EC2
Amazon Elastic Compute Cloud (EC2) is the IaaS offering of Amazon Web Services (AWS). It provides
resizable compute capacity in the cloud in the form of virtual machine instances, which are launched
from machine images (AMIs) and billed on demand.

Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems)
Eucalyptus is a paid and open-source computer software for building Amazon Web
Services (AWS)-compatible private and hybrid cloud computing environments, originally developed by
the company Eucalyptus Systems. Eucalyptus is an acronym for Elastic Utility Computing Architecture for
Linking Your Programs To Useful Systems. Eucalyptus enables pooling compute, storage, and network
resources that can be dynamically scaled up or down as application workloads change. Mårten
Mickos was the CEO of Eucalyptus. In September 2014, Eucalyptus was acquired by Hewlett-Packard and
then maintained by DXC Technology. After DXC stopped developing the product in late 2017, AppScale
Systems forked the code and started supporting Eucalyptus customers.
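Because Eucalyptus exposes AWS-compatible APIs, standard AWS client code can target either cloud. Here is a minimal sketch with boto3, the AWS SDK for Python (the endpoint URL, credentials, and image ID are placeholders; "emi-..." follows the Eucalyptus machine-image naming style):

```python
# Sketch: launch one instance via the EC2-compatible API. The same client
# code works against AWS EC2 or a Eucalyptus endpoint.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://compute.cloud.example.internal:8773/",  # hypothetical
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
    region_name="us-east-1",
)

# Launch one instance from a machine image: the basic IaaS provisioning step.
resp = ec2.run_instances(ImageId="emi-12345678", InstanceType="m1.small",
                         MinCount=1, MaxCount=1)
print(resp["Instances"][0]["InstanceId"])
```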

Cloud Roles

Managers
 Availability of cloud resources
 Quality of cloud services
 Cloud usage billing and costing
 Establishing IT processes and best practices

Administrators
 Daily production and operational support of cloud platform
 Continuous monitoring and status reporting of cloud platform
 Maintaining service level agreements

Application Architects
 Developing and adapting applications to cloud deployments
 Information management and adapting data management to cloud deployments
 Cloud Service design, implementation, and lifecycle support

Users
 On-demand provisioning of compute, network, and storage resources
 Self-service configuration of cloud resources
 Transparency on service costs and levels

Eucalyptus Components
 Cloud controller (CLC)
 Walrus
 Storage controller
 Cluster controller
 VMware Broker (optional)
 Node controller

Cloud Controller (CLC)


The Cloud Controller (CLC) is the entry-point into the cloud for administrators, developers,
project managers, and end-users.
Functions:
 Monitoring the availability of resources on various components of the cloud infrastructure, including hypervisor nodes that are used to actually provision the instances and the cluster controllers that manage the hypervisor nodes
 Resource arbitration { Deciding which clusters will be used for provisioning the instances }
 Monitoring the running instances

Cluster Controller (CC)
The Cluster Controller (CC) generally executes on a cluster front-end machine, or any machine that has
network connectivity to both the nodes running NCs and to the machine running the CLC. CCs gather
information about a set of VMs and schedule VM execution on specific NCs. The CC also manages the
virtual instance network and participates in the enforcement of SLAs as directed by the CLC. All nodes
served by a single CC must be in the same broadcast domain (Ethernet).
Functions:
 To receive requests from the CLC to deploy instances
 To decide which NCs to use for deploying the instances
 To control the virtual network available to the instances
 To collect information about the NCs registered with it and report it to the CLC

Node Controller (NC)


 The Node Controller (NC) is executed on every node that is designated for hosting VM instances.
 The NC controls the execution, inspection, and termination of VM instances on the host where it
runs; fetches and cleans up local copies of instance images (the kernel, the root file system, and
the ramdisk image); and queries and controls the system software on its node (host OS and
hypervisor) in response to queries and control requests from the Cluster Controller. The Node
Controller is also responsible for the management of the virtual network endpoint.
Functions:
 Collection of data related to resource availability and utilization on the node, and reporting the data to the CC
 Instance life cycle management

Storage Controller
 The Storage Controller (SC) provides functionality similar to the Amazon Elastic Block Store
(Amazon EBS). The SC is capable of interfacing with various storage systems (NFS, iSCSI, SAN
devices, etc.).
 Elastic block storage exports storage volumes that can be attached to a VM and mounted or
accessed as a raw block device.
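Since the SC mirrors the Amazon EBS interface, a standard EBS-style client call can create and attach a volume. A hedged sketch with boto3 (the endpoint, availability zone, and instance ID are placeholders):

```python
# Sketch: create a block-storage volume and attach it to a running instance
# through the EBS-compatible API.
import boto3

ec2 = boto3.client("ec2",
                   endpoint_url="https://compute.cloud.example.internal:8773/")

vol = ec2.create_volume(Size=10, AvailabilityZone="cluster01")  # 10 GiB volume
ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId="i-0abc1234",
                  Device="/dev/vdb")  # appears in the VM as a raw block device
```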

Walrus
 Walrus allows users to store persistent data, organized as buckets and objects. You can use
Walrus to create, delete, and list buckets, or to put, get, and delete objects, or to set access
control policies.
 Walrus is interface-compatible with Amazon's Simple Storage Service (S3), providing a
mechanism for storing and accessing virtual machine images and user data.
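Because Walrus speaks the S3 protocol, a generic S3 client can be pointed at it by overriding the endpoint. A minimal sketch with boto3 (the URL, credentials, and bucket name are placeholders):

```python
# Sketch: bucket and object operations against an S3-compatible endpoint
# such as Walrus.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://walrus.cloud.example.internal:8773/",  # hypothetical
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)

s3.create_bucket(Bucket="vm-images")               # create a bucket
s3.put_object(Bucket="vm-images", Key="notes.txt",
              Body=b"persistent object data")      # store an object
for obj in s3.list_objects_v2(Bucket="vm-images").get("Contents", []):
    print(obj["Key"])                              # list the bucket's objects
```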

VMware Broker
 VMware Broker (Broker or VB) is an optional Eucalyptus component, which is available if you are
a Eucalyptus Subscriber.
 VMware Broker enables Eucalyptus to deploy virtual machines (VMs) on VMware infrastructure
elements. VMware Broker mediates all interactions between the CC and VMware hypervisors
(ESX/ESXi) either directly or through VMware vCenter.

Benefits of Eucalyptus


 Scalable data center infrastructure. Eucalyptus clouds are highly scalable, which enables an
organization to efficiently scale data center resources up or down according to the needs
of the enterprise.
 Elastic resource configuration. The elasticity of a Eucalyptus cloud allows users to flexibly
reconfigure computing resources as requirements change. This helps the enterprise workforce
remain adaptable to sudden changes in business needs.
 Open source innovation. Highly transparent and extensible, Eucalyptus' open source core
architecture remains entirely open and available for value-adding customizations and
innovations provided by the open source development community. The Eucalyptus open source
software core is available for free download at www.eucalyptus.com.