Cloudcomputing - Unit-I Notes

The document defines cloud computing and discusses its history, features, advantages, evolution, components, types of clouds, and major cloud service providers. Cloud computing delivers computing services over the internet and enables on-demand access to shared computing resources that can be rapidly provisioned with minimal management effort.

CLOUD COMPUTING

Definitions:

Def1: Cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale.

Def2: Cloud computing allocates virtualized computing resources that are fully provisioned and managed over the internet.

Def3: Cloud computing is an emerging computing technology that uses the internet and remote servers to maintain data and applications.

According to NIST (National Institute of Standards and Technology):

 Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, databases, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

History :

 The term "cloud computing" itself didn't emerge until the late 1990s. In 1997, the first
known usage of the term "cloud computing" appeared in a Compaq internal document.
However, the concept of cloud computing as we know it today didn't take off until the mid-
2000s.

 In 2002, Amazon Web Services (AWS) launched its first service, which provided developers
with access to Amazon's infrastructure. In 2006, Amazon launched Amazon Elastic Compute
Cloud (EC2), a web service that allowed users to rent computing power on demand. This
marked the beginning of the era of cloud computing, and many other companies soon
followed suit.

 In 2008, Google launched Google App Engine, a platform for building and hosting web
applications. Microsoft followed in 2010 with the launch of Windows Azure, which allowed
users to build, deploy, and manage applications on Microsoft's infrastructure.

 Since then, cloud computing has grown rapidly, with more and more businesses moving their
operations to the cloud. Today, cloud computing is an essential part of the IT landscape, with
major players like Amazon Web Services, Microsoft Azure, and Google Cloud Platform
dominating the market.

Features:

 On-demand self-service: A consumer can provision computing capabilities, such as server time and storage, as needed, automatically.

 Broad network access: Capabilities are available over the network and accessed through standard mechanisms that support use by various client devices.

 Resource pooling: The provider’s computing resources, such as storage, processing, memory, and network bandwidth, are pooled to serve multiple consumers using a multi-tenant model.

 Rapid elasticity: Cloud computing capabilities can be elastically provisioned and released to meet demand and load requirements.

 Measured service: Cloud systems automatically control and optimise resource use by leveraging a metering capability appropriate to the type of service (storage, processing, bandwidth, and active user accounts).
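The measured-service and pay-per-use ideas above can be sketched as a small metering calculation. This is a toy model for illustration only; the resource names and per-unit rates are invented and do not reflect any real provider's pricing.

```python
# Toy pay-per-use metering: multiply metered usage readings by per-unit
# rates. Rates and resource names are illustrative, not real pricing.

RATES = {
    "storage_gb_hours": 0.0001,   # $ per GB-hour stored
    "cpu_hours": 0.05,            # $ per vCPU-hour
    "bandwidth_gb": 0.02,         # $ per GB transferred out
}

def bill(usage: dict) -> float:
    """Compute a pay-per-use bill from metered usage readings."""
    return round(sum(RATES[res] * qty for res, qty in usage.items()), 2)

# 50 GB stored for a 720-hour month, 300 vCPU-hours, 100 GB egress:
monthly_usage = {"storage_gb_hours": 720 * 50, "cpu_hours": 300, "bandwidth_gb": 100}
print(bill(monthly_usage))  # 3.60 storage + 15.00 cpu + 2.00 bandwidth = 20.6
```

The point is that the consumer is billed only for metered consumption, which is what distinguishes a measured service from fixed-capacity ownership.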

Advantages:

 Easy backup and restore of data.

 Excellent Accessibility

 Low maintenance cost

 Pay – Per – Use model

 Unlimited Storage Capacity

Evolution of Cloud computing

 Cloud computing is all about renting computing services. The idea first emerged in the 1950s. Five technologies played a vital role in making cloud computing what it is today: distributed systems and their peripherals, virtualization, Web 2.0, service orientation, and utility computing.

Cloud Service Provider:

 In the mid-1990s, this concept was technically introduced. It became popular with the introduction of cloud service providers such as Google Cloud Platform, Microsoft Azure, and Amazon Web Services.

 A cloud service provider is a third-party company offering cloud-based platform, infrastructure, application, or storage services.

 The first cloud service provider was Salesforce.com.


List of cloud service providers (CSPs):

 Kamatera  Serverspace  Linode (2003)  Amazon Web Services
 Scala Hosting  Cloudways  OVHcloud  Liquid Web
 DigitalOcean  Vultr  CloudSigma  Microsoft Azure
 Google Cloud Platform  Salesforce  Oracle Cloud  Verizon Cloud
 Navisite  IBM Cloud  OpenNebula  Dell Cloud
 Various computing models have been in use since the 1960s.

 Some of these are peer-to-peer, client-server, and grid computing.

Cloud computing Vs Peer to Peer Architecture:

 Peer-to-peer architecture is a network of hosts in which resource sharing, processing, and communication control are completely decentralized.

 Each host acts as a server or provider of certain services.

 All clients on the network are equal in terms of providing and using resources.

 Peer-to-peer architecture is easy and inexpensive to implement. However, unlike cloud computing, it is only practical for very small organisations because of the lack of central data storage and administration.

 Cloud computing is easily scaled to meet growth demands.

Cloud computing Vs Client server Architecture:

 Client-server architecture is a form of distributed computing where clients depend on a number of providers for various services or resources.

 When a user runs an application from the cloud, it is part of a client-server application. However, cloud computing can provide increased performance, flexibility, and significant cost savings.

 In the client-server architecture, additional investment is required for accelerated deployment of new resources to meet sudden changes in demand.

Cloud Computing Vs Grid Computing:

 In grid computing, systems are geographically distributed but work together to perform a common task. In a grid, a cluster of loosely coupled computers works together to solve a single task.

 Grid computing involves virtualizing computing resources to store massive amounts of data, whereas in cloud computing an application does not access resources directly; it accesses them through a service over the internet.

 Cloud computing is flexible compared to grid computing, and in a grid the user typically does not pay for usage. Cloud computing is also more scalable than grid computing.
Components of Cloud computing

 Cloud computing consists of the following:

 Client

 Cloud network

 Cloud Application programming Interface (API)

Client : A client is an access device or software interface that a user can use to access cloud services.

There are different types of clients in terms of hardware and application software. These cloud
clients are divided into three broad categories, namely

 Mobile clients

 Thin Clients

 Thick clients

 Mobile clients: Mobile cloud applications are generally accessed via a mobile browser from a remote web server, typically without the need to install a client application on the phone.

 Thin clients: A thin client needs a server to function properly. It is heavily dependent on the central server for data processing and file retrieval. With a thin client, the server performs functions such as storage, file retrieval, and data processing. The classic example of a thin client is a web browser.

 Thick clients: A thick client performs operations independently of the server. It implements its own features.

Clients can include computers, mobile phones, smartphones, tablets, and servers. The client device communicates with cloud services by using cloud APIs and browsers.

Cloud Network: A network is the communication link between the user and cloud services. The internet is the common choice for accessing the cloud. Employing advanced network services, such as encryption and compression, during transit benefits both the service provider and the user.

Cloud Application Programming Interface: A cloud API is a set of programming instructions and tools that provides abstractions over a specific provider's cloud. APIs give programmers a common mechanism for connecting to a particular cloud service.
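As a sketch of what such a common mechanism looks like, the following toy client hides a provider-specific backend behind one interface. All class and method names here are invented for illustration; real cloud SDKs (AWS, Azure, etc.) have their own, different APIs.

```python
# Toy cloud API abstraction: one common interface, with a
# provider-specific implementation behind it. Names are illustrative.

class CloudAPI:
    """Common mechanism a program uses to talk to a particular cloud."""
    def create_server(self, name: str) -> str:
        raise NotImplementedError

class FakeProvider(CloudAPI):
    """In-memory stand-in for one provider's cloud backend."""
    def __init__(self):
        self.servers = {}
    def create_server(self, name: str) -> str:
        server_id = f"srv-{len(self.servers) + 1}"
        self.servers[server_id] = name
        return server_id

# Client code depends only on the CloudAPI abstraction:
api: CloudAPI = FakeProvider()
print(api.create_server("web-1"))   # srv-1
print(api.create_server("web-2"))   # srv-2
```

Swapping in a different provider class would leave the client code unchanged, which is the value of the API abstraction the text describes.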

Cloud Types

There are the following four types of cloud that can be deployed according to the organization's needs:

 Public Cloud

 Private Cloud

 Hybrid cloud

 Community cloud
Public cloud

 Public cloud is open to all to store and access information via the Internet using the pay-
per-usage method.
 Public Cloud provides a shared platform that is accessible to the general public through
an Internet connection.
 The public cloud operates on a pay-per-use model and is administered by a third party, i.e., the cloud
service provider.
 In the Public cloud, the same storage is being used by multiple users at the same time.
 In public cloud, computing resources are managed and operated by the Cloud Service
Provider (CSP).
Example:
Amazon Elastic Compute Cloud (EC2), IBM SmartCloud Enterprise, Google App Engine, and the Windows Azure Services Platform.

Advantages of Public Cloud:

 Public cloud costs less to adopt than private and hybrid clouds.
 Public cloud is maintained by the cloud service provider, so users do not need to worry about maintenance.
 Public cloud is easier to integrate, so it offers greater flexibility to consumers.
 Public cloud is location-independent because its services are delivered through the internet.
 Public cloud is highly scalable as per the requirement of computing resources.
 It is accessible by the general public, so there is no limit to the number of users.

Disadvantages of Public Cloud


 Public Cloud is less secure because resources are shared publicly.
 Performance depends upon the high-speed internet network link to the cloud provider.
 The client has no control over the data.

Private cloud
 Private cloud is also known as an internal cloud or corporate cloud.
 It is used by organizations to build and manage their own data centers, either internally or through a
third party. It can be deployed using open-source tools such as OpenStack and Eucalyptus.
 Private cloud provides computing services to a private internal network (within the
organization) and selected users instead of the general public.
 Private cloud provides a high level of security and privacy to data through firewalls and
internal hosting. It also ensures that operational and sensitive data are not accessible to
third-party providers.
 HP Data Centers, Microsoft, Elastra private cloud, and Ubuntu are examples of private clouds.
Based on location and management, the National Institute of Standards and Technology (NIST) divides private clouds into the following two types:

 On-premise private cloud

 Outsourced private cloud


Advantages of Private Cloud

 Private cloud provides a high level of security and privacy to the users.
 Private cloud offers better performance with improved speed and space capacity.
 It allows the IT team to quickly allocate and deliver on-demand IT resources.
 The organization has full control over the cloud because it is managed by the organization
itself, so it does not need to depend on anyone.
 It is suitable for organizations that require a separate cloud for their personal use and data
security is the first priority.
Disadvantages of Private Cloud
 Skilled people are required to manage and operate cloud services.
 Private cloud is accessible within the organization, so the area of operations is limited.
 Private cloud is not suitable for organizations that have a high user base, or that lack the
prebuilt infrastructure and sufficient manpower to maintain and manage the cloud.

Hybrid cloud
 Hybrid cloud is a combination of the public cloud and the private cloud. We can say:
 Hybrid Cloud = Public Cloud + Private Cloud
 Hybrid cloud is partially secure because the services running on the public cloud can be
accessed by anyone, while the services running on a private cloud can be accessed only by the
organization's users.
 The main aim of combining these clouds (public and private) is to create a unified, automated,
and well-managed computing environment.
 In the hybrid cloud, non-critical activities are performed by the public cloud and critical
activities by the private cloud.
 Hybrid clouds are mainly used in finance, healthcare, and universities.
Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365 (MS Office on
the Web and One Drive), Amazon Web Services.

Advantages of Hybrid Cloud:

 Hybrid cloud is suitable for organizations that require more security than the public cloud.
 Hybrid cloud helps you to deliver new products and services more quickly.
 Hybrid cloud provides an excellent way to reduce the risk.
 Hybrid cloud offers flexible resources because of the public cloud and secure resources
because of the private cloud.

Disadvantages of Hybrid Cloud:


 In a hybrid cloud, the security features are not as good as in a private cloud.
 Managing a hybrid cloud is complex because it is difficult to manage more than one type of
deployment model.
 In the hybrid cloud, the reliability of the services depends on cloud service providers.
Community cloud
Community cloud allows systems and services to be accessible to a group of organizations so they can share information within a specific community. It is owned, managed, and operated by one or more organizations in the community, a third party, or a combination of them.

Advantages of Community Cloud

 Community cloud is cost-effective because the whole cloud is being shared by several
organizations or communities.
 Community cloud is suitable for organizations that want to have a collaborative cloud with
more security features than the public cloud.
 It provides better security than the public cloud.
 It provides a collaborative and distributed environment.
 Community cloud allows us to share cloud resources, infrastructure, and other capabilities
among various organizations.

Disadvantages of Community Cloud


 Community cloud is not a good choice for every organization.
 Security features are not as good as the private cloud.
 It is not suitable if there is no collaboration.
 A fixed amount of data storage and bandwidth is shared among all community members.

Cloud computing Service Delivery Models

The following are the high-level cloud service delivery models, depending on what resources we use
and the benefits we get from the cloud.

1. Infrastructure – as – a – service:

In this model, we can use servers or storage in the cloud. We do not have to purchase and maintain our own IT hardware.

The user gets resources such as processing power, storage, network bandwidth, and CPU. Once the user acquires the infrastructure, he/she controls the OS, data, applications, services, and host-based security.

2. Platform – as- a – service:

In this model, we can use the cloud as a platform to develop and sell software applications.

The user is provided the hardware infrastructure, network, and operating system that form the hosting environment. The user can install applications and activate services from the hosting environment.

3. Software – as – a – service:

In this model, we can use various software applications, such as CRM and ERP, and collaboration tools on the web. We save by not having to buy or maintain IT hardware or applications.

The user is provided access to an application but does not control the hardware, network, security, or operating system. This is the largest public category of cloud services.
4. Business Process – as – a – service:

Business process as a service, or BPaaS, is a type of business process outsourcing (BPO) delivered
based on a cloud services model. BPaaS is connected to other services, including SaaS, PaaS and IaaS,
and is fully configurable

VIRTUALIZATION:

Virtualization is a key technology used in cloud computing to enable the creation of virtual
resources such as virtual machines, virtual networks, and virtual storage. Virtualization allows
multiple operating systems and applications to run on the same physical hardware without
interfering with each other.

In cloud computing, virtualization enables the creation of virtual machines (VMs) on top of physical
servers, allowing multiple users to share the same physical resources. Each VM can run its own
operating system, applications, and data, while appearing to the user as a separate physical machine.

Virtualization also allows for the creation of virtual networks and storage resources. Virtual networks
allow for the creation of logical networks that can span multiple physical locations, while virtual
storage allows for the creation of logical storage devices that can be dynamically provisioned and
scaled as needed.

Overall, virtualization is a key enabler of cloud computing, allowing for efficient resource utilization,
rapid provisioning of resources, and improved scalability and availability.

Levels of Virtualization Implementation :

 A traditional computer runs with a host operating system specially tailored for its hardware
architecture. After virtualization, different user applications managed by their own operating
systems (guest OS) can run on the same hardware, independent of the host OS. This is done
by adding a software layer, called a virtualization layer, known as the hypervisor or virtual
machine monitor (VMM). The VMs run applications with their own guest OS over the
virtualized CPU, memory, and I/O resources.
 The main function of the software layer for virtualization is to virtualize the physical
hardware of a host machine into virtual resources to be used by the VMs, exclusively. This
can be implemented at various operational levels.
 The virtualization software creates the abstraction of VMs by interposing a virtualization
layer at various levels of a computer system.
Common virtualization layers include:
 Instruction set architecture (ISA) level,

 Hardware level,

 Operating system level,

 Library support level,

 Application level.
Instruction Set Architecture Level :

 Virtualisation is implemented at the instruction set architecture level by transforming the
physical architecture of the system's instruction set completely into software. On the host
machine, the VMM installs the guest systems. These guest systems issue instructions for the
emulator to process and execute.
 The instructions are received by the emulator, which transforms them into the native
instruction set. These instructions are run on the host machine's hardware.
 The instructions include both processor-oriented instructions and I/O-specific ones.
 For an emulator to be successful, it needs to emulate all the tasks that a real machine can perform.
 Basic emulation, though, requires an interpreter. This interpreter interprets the source
code and converts it to a hardware-readable format for processing.
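The interpreter idea can be illustrated with a toy instruction set: the emulator walks guest instructions one at a time and maps each to a host-side action. This is a minimal sketch; the opcodes are invented and bear no relation to any real ISA.

```python
# Toy ISA-level emulation: interpret "guest" instructions one at a time,
# dispatching each opcode to a host-side action. Opcodes are invented.

def interpret(program):
    """Execute a list of (opcode, operand) guest instructions."""
    acc = 0  # emulated accumulator register
    for opcode, operand in program:
        if opcode == "LOAD":       # processor-oriented instruction
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "OUT":      # toy I/O-specific instruction
            print(acc)
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
    return acc

guest_code = [("LOAD", 40), ("ADD", 2), ("OUT", None)]
interpret(guest_code)   # the OUT instruction prints 42
```

A real emulator would additionally translate I/O requests to the host's devices, but the dispatch loop above is the essence of interpretation.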

Virtualisation at OS level :
Virtualisation at the OS level includes sharing of both the hardware and the OS. The physical
machine is separated from the logical structure by a separate virtualization layer. This layer is built on
top of the OS to enable the user to access multiple machines, each isolated from the others and
running independently.
The virtualisation technique at the OS level keeps the environment required for the proper
running of applications intact. It keeps the OS, the application-specific data structures, the user-level
libraries, the environment settings, and the other requisites separate. Thus an application is
unable to distinguish between the real and virtual environments.

The virtualisation layer replicates the operating environment established on the
physical system whenever demanded.

The following are the techniques to implement virtualisation at the OS level :

1. Jail: Jail is a FreeBSD (Berkeley Software Distribution)-based facility capable of
partitioning the OS environment while the simple root structure of the UNIX system is maintained.

In this implementation, the scope of requests made by privileged users is limited to the jail
itself. A process that runs in a partition is called an "in-jail process".

No process is an in-jail process on system boot after a fresh installation. However, a
process and all its descendants become "in-jail" after you place the process in a jail. More than
one jail cannot contain the same process.

A privileged process creates a jail by invoking a special system call named jail(2).
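A highly simplified model of the scoping rules above can be written as follows. This is a toy analogy with invented names; real FreeBSD jails are a kernel mechanism, not application objects.

```python
# Toy model of jail-style partitioning: a process placed in a jail can
# only reach resources of that jail, and its descendants inherit the
# jail. Names are illustrative, not the FreeBSD implementation.

class Jail:
    def __init__(self, name):
        self.name = name

class Process:
    def __init__(self, jail=None):
        self.jail = jail                     # None means not in any jail
    def spawn_child(self):
        return Process(jail=self.jail)       # descendants are also in-jail
    def can_access(self, jail):
        # A jailed process's requests are limited to its own jail.
        return self.jail is None or self.jail is jail

web_jail, db_jail = Jail("web"), Jail("db")
p = Process(jail=web_jail)
child = p.spawn_child()
print(p.can_access(web_jail), p.can_access(db_jail))   # True False
print(child.jail.name)                                  # web
```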

2. Linux Kernel – Mode Virtualization:

A mechanism similar to jail is the Linux VE system. The aim of this system is to allow a computer to
run multiple application environments, administered separately, while proper boundaries are
maintained between the environments. This virtualisation technique aims to improve the security of
the system and enables application hosting.

3. Ensim: To consolidate servers, reduce costs, and increase efficiency in managing and selling
websites, a similar technique is used by the Ensim Virtual Private Server (VPS). The native OS
of a server is virtualised by the Ensim VPS with the objective of partitioning the OS into separate
environments that can be used for computational purposes.

These separate environments are known as virtual private servers, and the independent
operation of these servers makes up the complete Ensim VPS.

The OS views a VPS as an application, whereas applications view the VPS as the native
OS. The Ensim VPS is implemented more robustly than the other two virtualization
techniques.

Virtualisation at Library Level:

Programming applications in most systems requires an extensive list of application programming
interfaces (APIs). In virtualisation at the user-level library, a different VE is provided through this kind
of abstraction. This VE is created above the OS layer and can expose a different class of binary
interfaces altogether.

This type of virtualisation is defined as the implementation of a different set of Application Binary
Interfaces (ABIs) and/or APIs on top of the base system, performing ABI/API emulation.
Virtualisation at Application level :

An application may be taken simply as a block of instructions being executed on a machine. The arrival
of the JVM brought a new dimension to virtualization, known as application-level virtualisation.

The core concept behind this type of virtualization is to create a virtual machine that works
separately at the application level and behaves toward a set of applications the way a normal
machine does.

Virtualisation Structure

Virtualisation is achieved through software known as the VMM (Virtual Machine Monitor) or
hypervisor.

This software is used in two ways, forming two different structures of virtualisation:

1. Hosted Virtualisation Structure

2. Bare-Metal Virtualisation Structure

Hosted Structure:

This structure enables us to run various guest systems of our own on top of a base OS
with the help of a VMM.

Popular examples on x86 base OSs are VMware Workstation (on Windows) and Parallels Desktop (on Mac).

I/O Access:

The virtual OS in this virtualisation has limited access to devices.

Only a definite set of I/O devices can be used.

The I/O connections to a given physical system are owned by the host system only.

Non-generic devices do not inform the VMM about themselves.

I/O requests must pass through the host OS to obtain pass-through facilities in the
hosted structure.

Advantages:

 Multiple guest systems are easily installed, configured, and run.

 After the VMM is installed, we can run several guest systems on various platforms without an
extra physical resource environment.

Drawbacks:

 The hosted structure is incapable of providing pass-through to many I/O devices.

 Performance of the host system may be degraded, because I/O requests made by the
guest systems must pass through the host OS.
Bare-Metal Structure:

In this virtualisation, the VMM is installed to establish direct communication with the hardware
used by the base system.

The VMM does not rely on the host system for pass-through permission.

I/O Access:

 The VMM can communicate directly with I/O devices.

 Shared usage of I/O devices between virtual systems requires the hypervisor to have a low-level
driver that connects with the device.

 The hypervisor has the capability of emulating the shared devices for the guest VMs.

 Partitioning involves assigning individual I/O devices to particular VMs and helps largely to
improve the performance of the I/O system.

 Hypervisor intervention is also kept at a minimum.

Advantages:

 I/O performance is improved (through I/O device partitioning).

 A real-time OS and a general-purpose OS can run in parallel.

 Interrupt latency can be kept bounded.

Drawbacks:

 The hypervisor must include supporting drivers for the hardware platform, apart from holding the
drivers required for sharing I/O devices.

 VMMs are harder to install in the bare-metal structure.
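The practical difference between the two structures is the I/O request path, which can be sketched as a toy call chain. The layer names are illustrative only:

```python
# Toy sketch of the I/O request path in the two virtualisation structures.
# Hosted: guest -> VMM -> host OS -> device (extra hop through the host OS).
# Bare-metal: guest -> VMM -> device (hypervisor has its own drivers).

def io_path(structure: str) -> list:
    if structure == "hosted":
        return ["guest OS", "VMM", "host OS", "device"]
    if structure == "bare-metal":
        return ["guest OS", "VMM (hypervisor driver)", "device"]
    raise ValueError(f"unknown structure {structure!r}")

print(" -> ".join(io_path("hosted")))
print(" -> ".join(io_path("bare-metal")))  # one fewer hop: better I/O performance
```

The shorter bare-metal path is why I/O performance improves there, while the extra host-OS hop is why hosted structures may degrade performance.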

Virtualization mechanisms

There are primarily three mechanisms used for virtualization of systems, which are as follows:

1. Binary Translation

2. Hardware Assist

3. Paravirtualization

Binary Translation

 Binary translation is a system virtualization technique. The sensitive instructions in the binary
of the guest OS are replaced either by hypervisor calls, which safely handle such sensitive
instructions, or by undefined opcodes that result in a CPU trap. Such a CPU trap is
handled by the hypervisor.

 On most modern CPUs, context-sensitive instructions are non-virtualizable. Binary
translation is a technique to overcome this limitation.

 The privileged instructions are translated into other instructions, which access the virtual
BIOS, memory management, and devices provided by the Virtual Machine Monitor, instead
of executing directly on the real hardware.
 Binary translation is mainly used with hosted virtualization.

 Control switches frequently between the virtual machines and the VMM.

 This degrades performance. To overcome this, the virtualization software
processes a group of instructions simultaneously.
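A toy model of the rewrite step: scan a block of guest code and replace "sensitive" opcodes with hypervisor traps before execution. The opcode names are invented for illustration and do not correspond to a real instruction set.

```python
# Toy binary translation: sensitive guest instructions are rewritten
# into hypervisor calls before execution; everything else runs directly.
# Opcodes are invented, not a real ISA.

SENSITIVE = {"WRITE_CR3", "OUTB"}   # pretend these touch privileged state

def translate(block):
    """Rewrite a basic block, trapping sensitive instructions."""
    out = []
    for instr in block:
        if instr in SENSITIVE:
            out.append(f"HYPERCALL({instr})")   # handled safely by the VMM
        else:
            out.append(instr)                   # executes directly on the CPU
    return out

guest_block = ["MOV", "ADD", "WRITE_CR3", "MOV"]
print(translate(guest_block))
# ['MOV', 'ADD', 'HYPERCALL(WRITE_CR3)', 'MOV']
```

Translating a whole block at once, as here, mirrors the optimisation mentioned above: processing a group of instructions together reduces the frequency of switches between the VM and the VMM.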

Hardware Assist

 As an alternative approach to binary translation, and in an attempt to enhance performance
and compatibility, hardware providers (e.g., Intel and AMD) started supporting virtualization
at the hardware level.

 In hardware-assisted virtualization (e.g., Intel VT-x, AMD-V), privileged and sensitive calls are
set to automatically trap to the hypervisor.

 This eliminates the need for binary translation or paravirtualization. Moreover, since the
translation is done on the hardware level, it significantly improves performance.

 A software VMM interrupts the execution of VM code every time it finds a privileged instruction,
which severely impacts performance. Hardware-assisted VMMs interrupt the execution of
the VM code only when the interruption is strictly necessary or cannot be avoided.

PARAVIRTUALISATION

 Paravirtualization involves modifying the OS kernel.

 The OS kernel acts as a bridge between the applications and the processing done at the
hardware level.
 Paravirtualization replaces nonvirtualizable instructions with hypercalls ( Calling the
hypervisor by the OS is known as hypercalls) that communicate directly with
the virtualization layer hypervisor.
 A hypercall is based on the same concept as a system call. System calls are used by an
application to request services from the OS and provide the interface between the
application or process and the OS.
 Hypercalls work the same way, except that the hypervisor is used. The hypervisor also provides
hypercall interfaces for other kernel operations, including memory management and
interrupt handling.
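The hypercall-as-system-call analogy can be sketched with a toy dispatch table: the modified guest kernel requests services from the hypervisor by name, just as an application requests services from the OS through the system-call table. All names below are invented for illustration.

```python
# Toy hypercall interface: the paravirtualized guest kernel calls the
# hypervisor through a dispatch table instead of executing privileged
# instructions directly. Names are illustrative only.

class Hypervisor:
    def __init__(self):
        self.pages_allocated = 0
    def hypercall(self, name, *args):
        # Dispatch like a system-call table, but inside the hypervisor.
        table = {"alloc_page": self._alloc_page}
        return table[name](*args)
    def _alloc_page(self, count):
        # Stand-in for a real memory-management hypercall.
        self.pages_allocated += count
        return self.pages_allocated

hv = Hypervisor()
# A paravirtualized guest kernel requests memory via hypercalls:
print(hv.hypercall("alloc_page", 4))   # 4
print(hv.hypercall("alloc_page", 2))   # 6
```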
Difference between Full Virtualization and Paravirtualization

 Isolation of the guest: In full virtualization, virtual machines permit the execution of instructions with an unmodified OS running in an entirely isolated way. In paravirtualization, a virtual machine does not implement full isolation of the OS but rather provides a different API, which is used when the OS is subjected to alteration.

 Security: Full virtualization is less secure; paravirtualization is more secure than full virtualization.

 Technique: Full virtualization uses binary translation and a direct approach for operations; paravirtualization uses hypercalls at compile time for operations.

 Speed: Full virtualization is slower in operation than paravirtualization; paravirtualization is faster.

 Portability: Full virtualization is more portable and compatible; paravirtualization is less portable and compatible.

 Examples: Examples of full virtualization are Microsoft and Parallels systems. Examples of paravirtualization are Microsoft Hyper-V, Citrix Xen, etc.

 Guest OS support: Full virtualization supports all guest operating systems without modification. In paravirtualization, the guest operating system has to be modified, and only a few operating systems support it.

 Hardware calls: In full virtualization, the guest operating system issues hardware calls. In paravirtualization, the guest operating system communicates directly with the hypervisor using drivers.

 Streamlining: Full virtualization is less streamlined compared to paravirtualization; paravirtualization is more streamlined.

 Degree of isolation: Full virtualization provides the best isolation; paravirtualization provides less isolation compared to full virtualization.
Open Source Virtualisation technologies

 Linux KVM (Kernel-based Virtual Machine)

 Xen Project

 Oracle VirtualBox

 oVirt (Open Virtual Datacenter)

 Microsoft Hyper-V

 Red Hat Virtualization

 GNOME Boxes

 Proxmox VE
KVM (Kernel Virtual Machine)

 KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86
hardware containing virtualization extensions (Intel VT or AMD-V).
 It consists of a loadable kernel module, kvm.ko, that provides the core virtualization
infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.
 Using KVM, one can run multiple virtual machines running unmodified Linux or Windows
images.
 Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter,
etc.
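On a Linux host, the kvm.ko module exposes the /dev/kvm device node when the CPU's virtualization extensions (Intel VT or AMD-V) are usable, so a common availability check is simply to look for that node. A minimal sketch (the function returns False on systems without KVM):

```python
# Minimal check for KVM availability on a Linux host: the kvm.ko module
# exposes /dev/kvm when the CPU's virtualization extensions are usable.
import os

def kvm_available() -> bool:
    """Return True if /dev/kvm exists (KVM module loaded, VT-x/AMD-V on)."""
    return os.path.exists("/dev/kvm")

print(kvm_available())
```

Tools such as QEMU use this device node to create and run KVM virtual machines.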

KVM features

Security : KVM uses a combination of security-enhanced Linux (SELinux) and secure virtualization
(sVirt) for enhanced VM security and isolation.

Storage : KVM is able to use any storage supported by Linux, including some local disks and network-
attached storage (NAS). Multipath I/O may be used to improve storage and provide redundancy.
KVM also supports shared file systems so VM images may be shared by multiple hosts.

Hardware support : KVM can use a wide variety of certified Linux-supported hardware platforms.
Because hardware vendors regularly contribute to kernel development, the latest hardware features
are often rapidly adopted in the Linux kernel.

Memory management : KVM inherits the memory management features of Linux, including non-
uniform memory access and kernel same-page merging.

Live migration : KVM supports live migration, which is the ability to move a running VM between
physical hosts with no service interruption. The VM remains powered on, network connections
remain active, and applications continue to run while the VM is relocated. KVM also saves a VM's
current state so it can be stored and resumed later.

Performance and scalability: KVM inherits the performance of Linux, scaling to match demand load
if the number of guest machines and requests increases. KVM allows the most demanding
application workloads to be virtualized and is the basis for many enterprise virtualization setups,
such as datacenters and private clouds.

Scheduling and resource control : In the KVM model, a VM is a Linux process, scheduled and
managed by the kernel. The Linux scheduler allows control of the resources allocated to a Linux
process and guarantees a quality of service for a particular process.

Lower latency and higher prioritization: The Linux kernel features real-time extensions that allow
VM-based apps to run at lower latency with better prioritization (compared to bare metal).

Xen Hypervisor

 Xen is the only bare-metal hypervisor available as open source.

 Through Xen, a host can run a number of OS images, or multiple different OSs, in parallel.

 The Xen hypervisor provides server virtualization, desktop virtualization, security applications,
IaaS, and embedded and hardware appliances.

 The Xen hypervisor is one of the most widely used virtualization techniques.

 Xen has been extended to be compatible with full virtualization using hardware-assisted
virtualization.

Features

 Robustness and Security: It offers a higher level of robustness and security to applications
than other hypervisors.
 Scope for other operating systems: The Xen hypervisor runs on the Linux OS working as the main
control stack, but it can also be adjusted to other systems as well.
 Isolation of Drivers: Xen allows the main device drivers to run inside a
VM; in case a driver suffers a crash or is compromised, it can be restarted by
rebooting the VM that contains the driver, without causing any effect on the other parts of
the system.
 Support for Paravirtualization: The Xen hypervisor provides optimisation support for
paravirtualised guests so that they can be run as VMs.

KVM Vs XEN
 Both are open-source technologies used for providing virtualization support for OSs.

 The Xen hypervisor is called a type-1 hypervisor; it provides isolation of the drivers from the
rest of the system.

 KVM is a type-2 virtualisation mechanism in which the drivers cannot be isolated.

XEN Architecture
 Xen Architecture and its mapping onto a classic x86 privilege model.
 A Xen based system is handled by Xen hypervisor, which is executed in the most privileged
mode and maintains the access of guest operating system to the basic hardware.
 Guest operating systems run within domains, which represent virtual machine
instances.
 In addition, particular control software, which has privileged access to the host and handles
all other guest OSs, runs in a special domain called Domain 0.
 This is the only domain loaded once the virtual machine manager has fully booted, and it hosts an
HTTP server that serves requests for virtual machine creation, configuration, and
termination.
 This component establishes the primary version of a shared virtual machine manager
(VMM), which is a necessary part of a cloud computing system delivering an Infrastructure-as-a-
Service (IaaS) solution.
 Xen is a microkernel hypervisor, which separates policy from mechanism.
 The Xen hypervisor implements all the mechanisms, leaving policy to be handled by
Domain 0; as shown in the figure, the hypervisor does not natively include any device drivers.
 It just provides a mechanism by which a guest OS can have direct access to the physical
devices. As a result, the size of the Xen hypervisor is kept rather small.
 Xen provides a virtual environment located between the hardware and the OS.

A number of vendors are in the process of developing commercial Xen hypervisors; among them are
Citrix XenServer and Oracle VM. The core components of a Xen system are the hypervisor, kernel, and
applications.

Various x86 implementations support four distinct security levels, termed rings: Ring 0, Ring 1,
Ring 2, and Ring 3.

 Ring 0 represents the most privileged level and Ring 3 represents the least privileged level.

 Almost all frequently used operating systems, except for OS/2, use only two levels:
Ring 0 for kernel code and Ring 3 for user applications and non-privileged OS programs.

 This gives Xen the chance to implement paravirtualization. It enables Xen to
keep the Application Binary Interface (ABI) unchanged, thus allowing a simple shift to Xen-
virtualized solutions from an application perspective.

 Due to the structure of the x86 instruction set, some instructions allow code executing in Ring 3
to switch to Ring 0 (kernel mode). Such an operation is performed at the hardware level, and hence
within a virtualized environment it will lead to a TRAP or a silent fault, thus preventing the
general operation of the guest OS, as it is now running in Ring 1.

This condition is basically triggered by a subset of system calls. To eliminate this situation, the
operating system implementation requires modification, and all the sensitive system calls need to be
re-implemented with hypercalls.

In fact, for proprietary operating systems this modification is not possible, as their codebase is not
accessible, and at the same time the underlying hardware has no support to execute them in a mode
more privileged than Ring 0.

Open source OSs like Linux can be easily modified, as their code is openly available, and Xen delivers
full support for their virtualization, while components of Windows are basically not compatible with
Xen, unless hardware-assisted virtualization is available.

As new releases of OSs are designed to be virtualized, the problem is getting resolved, and new
hardware now supports x86 virtualization.
Pros:

a) This tightly coupled collaboration between the operating system and the virtualized platform
enables the development of a lighter and more flexible hypervisor that delivers its functionality in an
optimized manner.

b) Xen efficiently balances large workloads that consume CPU, memory, disk input-output
and network input-output.

c) It also comes equipped with a special storage feature called Citrix StorageLink.

d) It also supports multiple processors, live migration from one machine to another, physical-server-
to-virtual-machine and virtual-server-to-virtual-machine conversion tools, centralized multiserver
management, and real-time performance monitoring over Windows and Linux.

Cons:

a) Xen is more reliable on Linux than on Windows.

b) Xen relies on third-party components to manage resources like drivers, storage, backup, recovery
and fault tolerance.

c) Xen deployment can become burdensome on your Linux kernel system as time passes.

d) Xen may sometimes increase the load on your resources through a high input-output rate and
may cause starvation of other VMs.

Cloud Computing Services

1. Infrastructure as a Service (IaaS)

2. Platform as a Service (PaaS)

3. Software as a Service (SaaS)

These are collectively named the SPI model.

Other services include: Security as a Service (SecaaS), Identity Management as a Service
(IdMaaS), Data center as a Service (DaaS), Database as a Service (DBaaS), Storage as a
Service, Hardware as a Service, and ERP as a Service.

Infrastructure as a Service (IAAS)

 IaaS is a model in which a customer pays for resources kept at the provider's facility,
or wherever the provider keeps its hardware.

 The provider owns the equipment and maintains it at a level specified in the previously
agreed-upon SLA (Service Level Agreement).

To be commercially successful, an IaaS service must include the following:

 Utility-style computing service with pay-per-use billing.

 Superior, world-class IT infrastructure and support.

 Virtualised servers, storage and network that form a shared pool of resources.

 Dynamic scalability of memory, bandwidth, storage and servers.

 Flexibility for users to add more or reduce the allocated resources.

 Automation of administrative tasks.

 Ability to view and manage resource utilisation.
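The pay-per-use billing in the first bullet can be sketched as a simple metering calculation: the customer is charged only for the compute hours, storage, and traffic actually consumed. All rates and figures below are hypothetical:

```python
def iaas_monthly_bill(vm_hours, rate_per_hour,
                      storage_gb, rate_per_gb,
                      egress_gb, rate_per_egress_gb):
    """Compute a pay-per-use bill: charges accrue only for what was consumed."""
    compute = vm_hours * rate_per_hour
    storage = storage_gb * rate_per_gb
    network = egress_gb * rate_per_egress_gb
    return round(compute + storage + network, 2)

# A VM run for 200 hours at $0.05/h, 50 GB storage at $0.10/GB,
# and 20 GB egress at $0.08/GB (illustrative rates only):
print(iaas_monthly_bill(200, 0.05, 50, 0.10, 20, 0.08))  # → 16.6
```

Real providers layer tiered rates, reservations, and discounts on top of this, but the principle, metered consumption rather than a fixed fee, is the same.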

As a user, you should ask the provider the following questions:

 How does the provider protect IT and non-IT infrastructure?

 How does it configure the security of the virtual machines?

 How does the provider validate the integrity of the virtual machine images?

 How does it protect data, applications, and infrastructure from attacks by other tenants in
the same cloud?

 What tools does the provider use to detect the security flaws?

 What are the physical locations where data will be stored?

 How and at what frequency are the backups provided? Is backup data encrypted?
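One of the questions above concerns validating the integrity of virtual machine images. A common answer is to compare a cryptographic digest of the image against a checksum published by the provider; a minimal sketch using Python's hashlib (the image bytes and checksum here are stand-ins):

```python
import hashlib
import io

def image_sha256(stream, chunk_size=1 << 20):
    """Hash a (possibly large) VM image in chunks to avoid loading it whole.
    `stream` is any binary file-like object, e.g. open("disk.img", "rb")."""
    digest = hashlib.sha256()
    for chunk in iter(lambda: stream.read(chunk_size), b""):
        digest.update(chunk)
    return digest.hexdigest()

def verify_image(stream, published_hex):
    """Accept the image only if its digest matches the published checksum."""
    return image_sha256(stream) == published_hex

# Demo with an in-memory stand-in for a disk image:
image_bytes = b"fake vm image contents"
published = hashlib.sha256(image_bytes).hexdigest()
print(verify_image(io.BytesIO(image_bytes), published))  # → True
```

A tampered or corrupted image changes the digest, so the comparison fails before the image is ever booted.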

Leveraging PaaS for Productivity

 Ubiquitous Access and Quick Deployment: PaaS enables rapid implementation, scalability,
and collaboration.
 Caching: A PaaS environment that supports caching resources will boost application
performance.
 Integrated Development Environment: A PaaS environment must have an IDE for application
development.
 Database: Each PaaS must provide a database for developers to store and access data. For
example, for its PaaS cloud, Force.com has a service called Database.com.
 Integration: Integration with external databases and web services and their compatibility is
ensured with leading cloud providers.
 Logging: The PaaS environment must have APIs to open and close log files, write event logs,
examine entries and send alerts.
 Identity Management: Developers in PaaS need to authenticate and manage users within
their applications.
 Messaging: The PaaS cloud must provide APIs to manage messages, such as the
ability to post messages to any queue, consume messages, and examine messages.
 Job Processing: The PaaS must provide APIs that allow developers to start, monitor,
pause and stop large processing jobs.
 Session Management: PaaS must provide the ability to view, access or change user sessions.
 Service Discovery: The PaaS platform must give developers a convenient way to discover
available services and the ability to search the cloud by service types.
Guidelines for selecting a PaaS Provider
1. Compatibility with other clouds: PaaS providers will claim portability to all other clouds. We need
to be wary of providers who claim they have everything for everyone.

2. Target customers: PaaS providers have certain target customers and architect their environments to
appeal to a particular group of users.

3. Avoid vendor lock-in: We must select a provider who facilitates cloud interoperability for our
application. It must be easily portable to another public or hybrid cloud, or even to a non-virtualised
internal infrastructure.

4. Platform management: Make sure that the PaaS provider can manage and maintain the
environment.

5. Lack of visibility: It is difficult to know whether we are running in a secure, robust environment.

6. Portability/interoperability with applications on another cloud: Unlike IaaS, where OS images can
be moved between clouds, applications developed on a PaaS involve the cloud provider's APIs and
customised language extensions. This makes porting of applications difficult.

7. Security: The end user has no information on the implemented security mechanisms.

8. Security for development code: Since the development code resides on third-party, shared
infrastructure, customers are wary of the security and privacy of the code.

Software as a Service

 In the SaaS cloud, the vendor supplies the hardware infrastructure, software and
applications. The customer interacts with the application through a portal. As the service
provider hosts the application as well as stores the user data, the end user is free to use the
service from anywhere.

 Users get to use the application over the internet without the onus of buying, implementing
or managing the software.

 SaaS and ASP (Application Service Provider) may appear to be the same, but they are
different.

 There are a large number of SaaS providers, such as Microsoft Live CRM, Google Apps, Trend
Micro, Symantec and Zoho.

List of Questions need to ask the SaaS provider

 How does the provider make sure that the users who sign up are not fraudsters and will not
start malicious activity?

 How and to what extent is security integrated with the SDLC at different phases, such as
architecture, coding, testing and deployment?

 What are the design and coding standards?

 What web security standards are being followed?

 How is customer data protected?
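As one concrete answer to the last question: customer credentials, at minimum, should never be stored in plain text. A common technique is a salted key-derivation function; the sketch below uses the standard-library pbkdf2_hmac (the iteration count is an illustrative value, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a salted hash; store (salt, iterations, digest) — never the password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def check_password(password, salt, iterations, digest):
    """Recompute the derivation and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, iters, digest = hash_password("s3cret")
print(check_password("s3cret", salt, iters, digest))   # → True
print(check_password("wrong", salt, iters, digest))    # → False
```

The random per-user salt defeats precomputed lookup tables, and the constant-time comparison avoids leaking information through timing, two details worth raising with any SaaS provider.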
