Cloud Computing - Unit-I Notes
Definitions:
Def1 (NIST): Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, databases, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Def2: Cloud computing allocates virtualized computing resources that are completely provisioned
and managed over the internet.
Def3: Cloud computing is an emerging computing technology that uses the internet and
remote servers to maintain data and applications.
History :
The term "cloud computing" itself didn't emerge until the late 1990s. In 1997, the first
known usage of the term "cloud computing" appeared in a Compaq internal document.
However, the concept of cloud computing as we know it today didn't take off until the mid-
2000s.
In 2002, Amazon Web Services (AWS) launched its first service, which provided developers
with access to Amazon's infrastructure. In 2006, Amazon launched Amazon Elastic Compute
Cloud (EC2), a web service that allowed users to rent computing power on demand. This
marked the beginning of the era of cloud computing, and many other companies soon
followed suit.
In 2008, Google launched Google App Engine, a platform for building and hosting web
applications. Microsoft followed in 2010 with the launch of Windows Azure, which allowed
users to build, deploy, and manage applications on Microsoft's infrastructure.
Since then, cloud computing has grown rapidly, with more and more businesses moving their
operations to the cloud. Today, cloud computing is an essential part of the IT landscape, with
major players like Amazon Web Services, Microsoft Azure, and Google Cloud Platform
dominating the market.
Features:
On-demand self-service: A consumer can provision computing capabilities, such as server time and
storage, as needed, automatically.
Broad network access: Capabilities are available over the network and accessed through
standard mechanisms used by a variety of client devices.
Measured service: Cloud systems automatically control and optimise the use of
resources by leveraging a metering capability appropriate to the type of service (storage, processing,
bandwidth, and active user accounts).
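The pay-per-use idea behind measured service can be sketched in a few lines of Python. The resource names and unit rates below are invented for illustration, not any provider's real price list.

```python
# Toy pay-per-use metering sketch. Resource names and rates are
# illustrative only; real providers publish their own price lists.
RATES = {                       # price per metered unit
    "storage_gb_hours": 0.0001,
    "cpu_hours": 0.05,
    "bandwidth_gb": 0.01,
}

def bill(usage):
    """Compute a pay-per-use bill from metered usage counters."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

monthly_usage = {"storage_gb_hours": 720 * 50,  # 50 GB kept for 720 hours
                 "cpu_hours": 300,
                 "bandwidth_gb": 120}
total = bill(monthly_usage)
print(round(total, 2))  # 3.6 + 15.0 + 1.2 = 19.8
```

The point is only that the consumer is billed for exactly what the meter records, which is what distinguishes measured service from a flat-rate hosting contract.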
Advantages:
Excellent accessibility
Cloud computing is all about renting computing services. The idea first emerged in the 1950s. Five
technologies played a vital role in making cloud computing what it is today: distributed systems and
their peripherals, virtualization, Web 2.0, service orientation, and utility computing.
The concept was technically introduced in the mid-1990s. It became popular with the
arrival of cloud service providers such as Google Cloud Platform, Microsoft Azure, and
Amazon Web Services.
All clients on the network are equal in terms of providing and using resources.
When a user runs an application from the cloud, it is part of a client-server
application. However, cloud computing can provide increased performance, flexibility,
and significant cost savings.
[Figure: a client device connecting to cloud services over a cloud network]
Client : A client is an access device or software interface that a user can use to access cloud services.
There are different types of clients in terms of hardware and application software. These cloud
clients are divided into three broad categories, namely
Mobile clients
Thin Clients
Thick clients
Mobile clients: Mobile clients generally access cloud services via a mobile
browser from a remote web server, typically without the need to install a client
application on the phone.
Thin clients: A thin client needs a server to function properly. It is heavily dependent on
the central server for data processing and file retrieval. With a thin client, the server
performs sensitive functions such as storage, file retrieval, and data processing. The classic
example of a thin client is a web browser.
Thick clients: A thick client performs operations independently of the server and
implements its own features.
Client types include computers, mobile phones, smartphones, tablets, and servers. The client device
communicates with cloud services by using cloud APIs and browsers.
Cloud Network: A network is the communication link between the user and cloud services. The
internet is the common choice for accessing the cloud. Employing advanced network services, such
as encryption and compression, during transit benefits both the service provider and the user.
Cloud Application Programming Interface: A cloud API is a set of programming instructions and tools
that provides an abstraction over a specific provider's cloud. APIs give programmers a common
mechanism for connecting to particular cloud services.
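As a sketch of what talking to a cloud API looks like, the snippet below builds (without sending) an authenticated REST request to create a VM. The endpoint, token, and field names are hypothetical; every provider documents its own URLs and authentication scheme.

```python
import json
import urllib.request

# Sketch of how a client talks to a cloud provider's REST API.
# The base URL, token, and request fields below are hypothetical.
API_BASE = "https://api.example-cloud.com/v1"

def make_create_vm_request(token, name, cpus, memory_gb):
    """Build (but do not send) an authenticated request to create a VM."""
    body = json.dumps({"name": name, "cpus": cpus, "memory_gb": memory_gb})
    return urllib.request.Request(
        url=API_BASE + "/instances",
        data=body.encode("utf-8"),
        headers={"Authorization": "Bearer " + token,
                 "Content-Type": "application/json"},
        method="POST",
    )

req = make_create_vm_request("demo-token", "web-01", cpus=2, memory_gb=4)
print(req.full_url, req.get_method())
```

The common pattern across real providers is the same: an HTTP verb, a resource URL, an authentication header, and a structured request body.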
Cloud Types
There are the following 4 types of cloud that you can deploy according to the organization's needs
Public Cloud
Private Cloud
Hybrid cloud
Community cloud
Public cloud
Public cloud is open to all to store and access information via the Internet using the pay-
per-usage method.
Public Cloud provides a shared platform that is accessible to the general public through
an Internet connection.
Public cloud operates on the pay-as-per-use model and is administered by a third party,
i.e., the cloud service provider.
In the Public cloud, the same storage is being used by multiple users at the same time.
In public cloud, computing resources are managed and operated by the Cloud Service
Provider (CSP).
Example:
Amazon Elastic Compute Cloud (EC2), IBM SmartCloud Enterprise, Google App
Engine, and Windows Azure Services Platform.
Public cloud can be adopted at a lower cost than private and hybrid clouds.
Public cloud is maintained by the cloud service provider, so consumers do not need to worry about
maintenance.
Public cloud is easier to integrate, so it offers greater flexibility to consumers.
Public cloud is location independent because its services are delivered through the internet.
Public cloud is highly scalable as per the requirement of computing resources.
It is accessible by the general public, so there is no limit to the number of users
Private cloud
Private cloud is also known as an internal cloud or corporate cloud.
It is used by organizations to build and manage their own data centers, internally or through a
third party. It can be deployed using open-source tools such as OpenStack and Eucalyptus.
Private cloud provides computing services to a private internal network (within the
organization) and selected users instead of the general public.
Private cloud provides a high level of security and privacy to data through firewalls and
internal hosting. It also ensures that operational and sensitive data are not accessible to
third-party providers.
HP Data Centers, Microsoft, Elastra private cloud, and Ubuntu are examples of private
clouds.
Based on location and management, the National Institute of Standards and Technology (NIST) divides
private cloud into the following two parts: on-site private cloud and outsourced private cloud.
Private cloud provides a high level of security and privacy to the users.
Private cloud offers better performance with improved speed and space capacity.
It allows the IT team to quickly allocate and deliver on-demand IT resources.
The organization has full control over the cloud because it is managed by the organization
itself, so there is no need to depend on anyone else.
It is suitable for organizations that require a separate cloud for their own use and for which data
security is the first priority.
Disadvantages of Private Cloud
Skilled people are required to manage and operate cloud services.
Private cloud is accessible within the organization, so the area of operations is limited.
Private cloud is not suitable for organizations with a large user base, or for organizations
that lack the prebuilt infrastructure or sufficient manpower to maintain and manage
the cloud.
Hybrid cloud
Hybrid Cloud is a combination of the public cloud and the private cloud. We can say:
Hybrid Cloud = Public Cloud + Private Cloud
Hybrid cloud is partially secure because the services which are running on the public cloud
can be accessed by anyone, while the services which are running on a private cloud can be
accessed only by the organization's users.
The main aim of combining these clouds (public and private) is to create a unified, automated,
and well-managed computing environment.
In the Hybrid cloud, non-critical activities are performed by the public cloud and critical
activities are performed by the private cloud.
Mainly, a hybrid cloud is used in finance, healthcare, and universities.
Example: Google Application Suite (Gmail, Google Apps, and Google Drive), Office 365 (MS Office on
the Web and One Drive), Amazon Web Services.
Hybrid cloud is suitable for organizations that require more security than the public cloud.
Hybrid cloud helps you to deliver new products and services more quickly.
Hybrid cloud provides an excellent way to reduce the risk.
Hybrid cloud offers flexible resources because of the public cloud and secure resources
because of the private cloud.
Community cloud
Community cloud is cost-effective because the whole cloud is shared by several
organizations or communities.
Community cloud is suitable for organizations that want to have a collaborative cloud with
more security features than the public cloud.
It provides better security than the public cloud.
It provides a collaborative and distributive environment.
Community cloud allows us to share cloud resources, infrastructure, and other capabilities
among various organizations.
The following are the high-level cloud service delivery models, depending on which resources we use
and the benefits we get from the cloud.
1. Infrastructure – as – a – service:
In this model, we can use servers or storage in the cloud. We do not need to
purchase and maintain our own IT hardware.
The user gets resources, such as processing power, storage, network bandwidth, and CPU.
Once the user acquires the infrastructure, he/she controls the OS, data, applications, services, and
host-based security.
2. Platform – as – a – service:
In this model, we can use the cloud as a platform to develop and sell software applications.
The user is provided the hardware infrastructure, network, and operating system to form a hosting
environment. The user can install applications and activate services from the hosting environment.
3. Software – as – a – service:
In this model, we can use various software applications , such as CRM and ERP, and collaboration
tools on the web. We save by not having to buy or maintain IT hardware or applications.
The user is provided access to an application. The user does not control the hardware, network,
security, or operating system. This is the largest public category of cloud services.
4. Business Process – as – a – service:
Business process as a service, or BPaaS, is a type of business process outsourcing (BPO) delivered
based on a cloud services model. BPaaS is connected to other services, including SaaS, PaaS, and IaaS,
and is fully configurable.
VIRTUALIZATION:
Virtualization is a key technology used in cloud computing to enable the creation of virtual
resources such as virtual machines, virtual networks, and virtual storage. Virtualization allows
multiple operating systems and applications to run on the same physical hardware without
interfering with each other.
In cloud computing, virtualization enables the creation of virtual machines (VMs) on top of physical
servers, allowing multiple users to share the same physical resources. Each VM can run its own
operating system, applications, and data, while appearing to the user as a separate physical machine.
Virtualization also allows for the creation of virtual networks and storage resources. Virtual networks
allow for the creation of logical networks that can span multiple physical locations, while virtual
storage allows for the creation of logical storage devices that can be dynamically provisioned and
scaled as needed.
Overall, virtualization is a key enabler of cloud computing, allowing for efficient resource utilization,
rapid provisioning of resources, and improved scalability and availability.
Virtualization can be implemented at several levels:
Instruction set architecture level,
Hardware level,
Operating system level,
User-level library level,
Application level.
Virtualisation at OS level :
Virtualisation at the OS level includes sharing of both the hardware and the OS. The physical
machine is separated from the logical structure by a separate virtualization layer. This layer is built on
top of the OS to give the user access to multiple machines, each isolated from the others
and running independently.
The virtualisation technique at the level of the OS keeps the environment required for proper
running of applications intact. It keeps the OS, the application-specific data structures, the user-level
libraries, the environmental settings, and the other requisites separate. Thus the application is
unable to distinguish between the real and virtual environments.
The virtualisation layer replicates the operating environment established on the
physical system whenever demanded.
1. Jail: Jail is FreeBSD (Berkeley Software Distribution)-based software capable of
partitioning the OS environment while the simple root structure of the UNIX system is maintained.
In this implementation, the scope of requests made by privileged users is limited to the jail
itself. A process that runs in a partition is called an "in-jail" process.
No process is an in-jail process on system boot after a fresh installation. However, a
process and all its descendants become "in-jail" after the process is placed in a jail. A process
cannot belong to more than one jail.
A privileged process creates the jail by invoking a special system call, jail(2).
2. Linux-VE: A work similar to Jail is the Linux-VE (Virtual Environment) system. The aim of this
system is to allow a computer to have multiple application environments run by administrators,
while proper boundaries are maintained between the environments. This virtualisation technique
aims to improve the security of the system and enables application hosting.
3. Ensim: To consolidate servers, reduce costs, and increase efficiency in managing and selling
websites, a similar technique is used by the Ensim Virtual Private Server (VPS). The native OS
of a server is virtualised by the Ensim VPS with the objective of partitioning the OS into separate
environments that can be used for computational purposes.
These separate environments are known as virtual private servers, and the independent
operation of these servers makes up the complete Ensim VPS.
The OS views a VPS as an application, whereas applications view the VPS as the native
OS. The Ensim VPS is a more robust implementation than the other two virtualization
techniques.
Virtualisation at the user-level library:
Programming applications in most systems requires an extensive list of application programming
interfaces. In virtualisation at the user-level library, a different virtual environment (VE) is provided
through this kind of abstraction. This VE is created above the OS layer and can expose a different
class of binary interfaces altogether.
Virtualisation at the application level:
An application may be viewed simply as a block of instructions being executed on a machine. The
arrival of the JVM brought a new dimension to virtualization, known as application-level
virtualisation. The core concept behind this type of virtualization is to create a virtual machine that
works separately at the application level and behaves towards a set of applications the way a normal
machine does.
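The JVM idea of a virtual machine interpreting a portable instruction set at the application level can be miniaturized into a toy stack machine. The four-opcode instruction set below is invented for illustration; the same "bytecode" runs wherever the interpreter runs.

```python
# Toy stack-based virtual machine, in the spirit of the JVM: the guest
# program is portable bytecode, and the interpreter plays the role of
# the virtual machine. The PUSH/ADD/MUL/PRINT opcodes are invented.
def run(program):
    stack = []
    output = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])          # push a constant
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            output.append(stack.pop())     # emit the top of the stack
        else:
            raise ValueError("unknown opcode: " + op)
    return output

# Compute (2 + 3) * 4 on the virtual machine.
bytecode = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",)]
print(run(bytecode))  # [20]
```

A real application-level VM adds classloading, garbage collection, and just-in-time compilation, but the isolation principle is the same: the program sees only the virtual machine, never the host directly.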
Virtualisation Structure
Virtualisation is achieved through software known as the VMM (Virtual Machine Monitor), or
hypervisor.
This software is used in two ways, forming two different structures of virtualisation.
Hosted Structure:
This structure enables us to run various guest application windows of our own on top of a base OS
with the help of VMM.
Popular hosted setups include VMware Workstation running on an x86 Windows or Linux base OS, and Parallels Desktop on the Mac.
I/O Access:
The I/O connections to a given physical system are owned by the host system only.
I/O requests must pass through the host OS to obtain pass-through facilities in the
hosted structure.
Advantages:
After the VMM is installed, we can run several guest systems on various platforms without an extra
physical resource environment.
Drawbacks:
Performance of the host system may be degraded, because the I/O requests made by the
guest systems must pass through the host OS.
Bare- Metal Structure :
In this virtualisation , VMM is installed to establish direct communication with the hardware that is
being used by the base system.
VMM does not rely on the host system for pass through permission.
I/O Access:
Shared usage of I/O devices between virtual systems requires the hypervisor to have a low-level
driver that connects with the device.
The hypervisor has the capability of emulating the shared devices for the guest VMs.
Partitioning involves assigning individual I/O devices to particular VMs and largely helps to
improve the performance of the I/O system.
Advantages:
Since the VMM communicates directly with the hardware rather than through a host OS,
performance is better than in the hosted structure.
Drawbacks:
The hypervisor must include supporting drivers for the hardware platform, apart from holding the
drivers required for sharing I/O devices.
Virtualization mechanisms
There are primarily three mechanisms used for virtualization of systems, which are as follows:
1. Binary Translation
2. Hardware Assist
3. Paravirtualization
Binary Translation
Binary translation is a system virtualization technique. The sensitive instructions in the binary
of Guest OS are replaced by either Hypervisor calls which safely handle such sensitive
instructions or by some undefined opcodes which result in a CPU trap. Such a CPU trap is
handled by the Hypervisor.
The privileged instructions are translated into other instructions, which access the virtual
BIOS, memory management, and devices provided by the Virtual Machine Monitor, instead
of executing directly on the real hardware.
Binary translation is mainly used with hosted virtualization.
Control switches frequently between the virtual machines and the VMM,
which degrades performance. To overcome this, the virtualization software
translates a group of instructions at a time.
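A toy model of binary translation: before a block of guest "code" executes, sensitive instructions are rewritten into hypervisor calls. The instruction names and the HYPERCALL notation are invented for illustration; a real translator works on machine code, not mnemonics.

```python
# Toy model of binary translation: sensitive instructions in a guest
# code block are rewritten into safe hypervisor calls before execution.
# Instruction names are illustrative mnemonics, not real translation rules.
SENSITIVE = {"CLI", "STI", "OUT"}   # pretend these touch real hardware state

def translate(block):
    """Rewrite sensitive instructions as hypercalls; pass the rest through."""
    translated = []
    for instr in block:
        if instr in SENSITIVE:
            translated.append("HYPERCALL(" + instr + ")")
        else:
            translated.append(instr)    # harmless instruction, run as-is
    return translated

guest_block = ["MOV", "CLI", "ADD", "OUT", "RET"]
print(translate(guest_block))
# ['MOV', 'HYPERCALL(CLI)', 'ADD', 'HYPERCALL(OUT)', 'RET']
```

Translating a whole block at once, as here, mirrors the optimization mentioned above: the VMM amortizes the cost of switching by handling a group of instructions per transition rather than trapping on each one.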
Hardware Assist
In hardware-assisted virtualization (e.g., Intel VT-x, AMD-V), privileged and sensitive calls are
set to automatically trap to the hypervisor.
This eliminates the need for binary translation or paravirtualization. Moreover, since the
translation is done on the hardware level, it significantly improves performance.
A software-only VMM interrupts the execution of the VM code every time it finds a privileged
instruction, which severely impacts performance. Hardware-assisted VMMs
interrupt the execution of the VM code only when the interruption is strictly necessary or
cannot be avoided.
PARAVIRTUALISATION
Compared with full virtualization:
Full virtualization supports all guest operating systems without modification; in paravirtualization,
the guest operating system has to be modified, and only a few operating systems support it.
Under full virtualization, the guest operating system issues hardware calls; under paravirtualization,
the guest operating system communicates directly with the hypervisor using drivers.
Platforms that support paravirtualization include:
Xen Project
Microsoft Hyper-V
Red Hat Virtualization
GNOME Boxes
Proxmox VE
KVM (Kernel-based Virtual Machine)
KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86
hardware containing virtualization extensions (Intel VT or AMD-V).
It consists of a loadable kernel module, kvm.ko, that provides the core virtualization
infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows
images.
Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter,
etc
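In practice a KVM guest is typically launched through QEMU. The sketch below only assembles a plausible qemu-system-x86_64 command line (the disk image path and sizes are placeholders) rather than executing it.

```python
# Assemble (but do not run) a QEMU command line for a KVM guest.
# The disk image path, memory size, and vCPU count are placeholders.
def qemu_cmd(disk_image, memory_mb=2048, vcpus=2):
    return [
        "qemu-system-x86_64",
        "-enable-kvm",            # use the kvm.ko module for hardware acceleration
        "-m", str(memory_mb),     # guest RAM in MiB
        "-smp", str(vcpus),       # number of virtual CPUs
        "-hda", disk_image,       # guest disk image
    ]

cmd = qemu_cmd("/var/lib/vms/guest01.qcow2")
print(" ".join(cmd))
```

Each argument corresponds to a piece of the "private virtualized hardware" described above: RAM, CPUs, and a disk that exist only from the guest's point of view.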
KVM features
Security : KVM uses a combination of security-enhanced Linux (SELinux) and secure virtualization
(sVirt) for enhanced VM security and isolation.
Storage : KVM is able to use any storage supported by Linux, including some local disks and network-
attached storage (NAS). Multipath I/O may be used to improve storage and provide redundancy.
KVM also supports shared file systems so VM images may be shared by multiple hosts.
Hardware support : KVM can use a wide variety of certified Linux-supported hardware platforms.
Because hardware vendors regularly contribute to kernel development, the latest hardware features
are often rapidly adopted in the Linux kernel.
Memory management : KVM inherits the memory management features of Linux, including non-
uniform memory access and kernel same-page merging.
Live migration : KVM supports live migration, which is the ability to move a running VM between
physical hosts with no service interruption. The VM remains powered on, network connections
remain active, and applications continue to run while the VM is relocated. KVM also saves a VM's
current state so it can be stored and resumed later.
Performance and scalability: KVM inherits the performance of Linux, scaling to match demand load
as the number of guest machines and requests increases. KVM allows the most demanding
application workloads to be virtualized and is the basis for many enterprise virtualization setups,
such as datacenters and private clouds.
Scheduling and resource control : In the KVM model, a VM is a Linux process, scheduled and
managed by the kernel. The Linux scheduler allows control of the resources allocated to a Linux
process and guarantees a quality of service for a particular process.
Lower latency and higher prioritization: The Linux kernel features real-time extensions that allow
VM-based apps to run at lower latency with better prioritization (compared to bare metal).
Xen Hypervisor
Through Xen, a host can run a number of OS images, or multiple different OSs, in
parallel.
Xen has been extended to be compatible with full virtualization using hardware-assisted
virtualization.
Features
Robustness and security: It offers a higher level of robustness and security to applications
than other hypervisors.
Scope for other operating systems: The Xen hypervisor runs with the Linux OS working as the main
control stack, but it can also be adjusted to other systems as well.
Isolation of drivers: The main device drivers can be allowed by the Xen hypervisor to run inside a
VM, and in case a driver suffers a crash or is compromised, it can be restarted by
rebooting the VM that contains the driver without affecting the other parts of
the system.
Support for paravirtualization: The Xen hypervisor provides optimisation support for
paravirtualised guests so that they can be run as VMs.
KVM vs Xen
Both are open-source technologies used for providing virtualization support for OSs.
KVM is a type-2 virtualisation mechanism in which the drivers cannot be isolated.
XEN Architecture
Xen Architecture and its mapping onto a classic x86 privilege model.
A Xen-based system is handled by the Xen hypervisor, which is executed in the most privileged
mode and controls the access of guest operating systems to the basic hardware.
Guest operating systems run within domains, which represent virtual machine
instances.
In addition, particular control software, which has privileged access to the host and handles
all other guest OS, runs in a special domain called Domain 0.
This is the only domain loaded once the virtual machine manager has fully booted, and it hosts an
HTTP server that serves requests for virtual machine creation, configuration, and
termination.
This component establishes the primary version of a shared virtual machine manager
(VMM), which is a necessary part of Cloud computing system delivering Infrastructure-as-a-
Service (IaaS) solution.
Xen is a microkernel hypervisor, which separates the policy from the mechanism.
The Xen hypervisor implements all the mechanisms, leaving the policy to be handled by
Domain 0, as shown in the figure. The hypervisor does not include any device drivers natively;
it just provides a mechanism by which a guest OS can have direct access to the physical
devices. As a result, the size of the Xen hypervisor is kept rather small.
Xen provides a virtual environment located between the hardware and the OS.
A number of vendors are in the process of developing commercial Xen hypervisors, among them
Citrix XenServer and Oracle VM. The core components of a Xen system are the hypervisor, kernel, and
applications.
Various x86 implementations support four distinct security levels, termed rings, i.e.,
Ring 0,
Ring 1,
Ring 2,
Ring 3
Ring 0 represents the level having most privilege and Ring 3 represents the level having least
privilege.
Almost all frequently used operating systems, except for OS/2, use only two levels:
Ring 0 for kernel code and Ring 3 for user applications and non-privileged OS programs.
This gives Xen a chance to implement paravirtualization. It enables Xen to
keep the Application Binary Interface (ABI) unchanged, thus allowing a simple shift to Xen-
virtualized solutions from an application perspective.
Due to the structure of the x86 instruction set, some instructions allow code executing in Ring 3
to switch to Ring 0 (kernel mode). Such an operation is performed at the hardware level, and hence,
within a virtualized environment, it will lead to a TRAP or a silent fault, thus preventing the
normal operation of the guest OS, as it is now running in Ring 1.
This condition is basically caused by a subset of system calls. To eliminate this situation, the
operating system implementation requires modification, and all the sensitive system calls need to be
re-implemented with hypercalls.
In fact, it is not possible to modify these calls to run safely in Ring 1, as their codebase is not
accessible; at the same time, the underlying hardware has no support for executing them in a mode
more privileged than Ring 0.
Open-source OSs like Linux can be easily modified, as their code is openly available, and Xen delivers
full support for virtualization, while components of Windows are basically not compatible with Xen
unless hardware-assisted virtualization is available.
As new releases of OSs are designed to be virtualized, the problem is being resolved, and new
hardware supports x86 virtualization.
Pros:
a) The tight collaboration between the operating system and the virtualized platform enables the
development of a lighter and more flexible hypervisor that delivers its functionality in an
optimized manner.
b) Xen efficiently supports the balancing of large workloads that consume CPU, memory, disk
input-output, and network input-output.
c) It also comes equipped with a special storage feature called Citrix StorageLink.
d) It also supports multiple processors, live migration from one machine to another, physical-to-
virtual and virtual-to-virtual machine conversion tools, centralized multiserver
management, and real-time performance monitoring over Windows and Linux.
Cons:
b) Xen relies on third-party components to manage resources like drivers, storage, backup, recovery,
and fault tolerance.
c) Xen deployment can become burdensome on your Linux kernel system as time passes.
d) Xen may sometimes increase the load on your resources through a high input-output rate and
may cause starvation of other VMs.
Infrastructure as a Service (IaaS)
IaaS is a model in which a customer pays for resources kept at the provider's facility,
or wherever the provider keeps its hardware.
The provider owns the equipment and maintains it at a level specified in the previously
agreed-upon SLA (Service Level Agreement).
Security questions to ask an IaaS provider include:
How does the provider validate the integrity of the virtual machine images?
How does it protect data, applications, and infrastructure from attacks by other tenants in
the same cloud?
What tools does the provider use to detect the security flaws?
How, and at what frequency, are backups performed? Is backup data encrypted?
Platform as a Service (PaaS)
Characteristics of a PaaS environment:
Ubiquitous access and quick deployment: PaaS enables rapid implementation, scalability,
and collaboration.
Caching : A PaaS environment that supports caching resources will boost application
performance
Integrated Development Environment: A PaaS environment must have an IDE for application
development
Database: Each PaaS must provide a database for developers to store and access data. For
example, for its PaaS cloud, Force.com has a service called database.com.
Integration: Integration with external databases and web services and their compatibility is
ensured with leading cloud providers.
Logging: The PaaS environment must have APIs to open and close log files, write event logs,
examine entries, and send alerts.
Identity management: Developers in PaaS need to authenticate and manage users within
their applications.
Messaging: The PaaS cloud must provide APIs to manage messages, such as the
ability to post a message to any queue, consume messages, and examine messages.
Job processing: The PaaS must provide APIs that allow developers to start, monitor,
pause, and stop large processing jobs.
Session management: PaaS must provide the ability to view, access, or change user sessions.
Service discovery: The PaaS platform must give developers a convenient way to discover available
services and the ability to search the cloud by service type.
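To make the messaging capability above concrete, here is a sketch of what such an API might look like to a developer. The class and method names are hypothetical, and Python's in-memory queue stands in for the hosted message service.

```python
import queue

# Sketch of the messaging capability a PaaS might expose. The class and
# method names are hypothetical; real platforms define their own SDKs.
# An in-memory queue stands in for the hosted message service.
class PaasQueue:
    def __init__(self, name):
        self.name = name
        self._q = queue.Queue()

    def post_message(self, body):
        """Post a message to the queue."""
        self._q.put(body)

    def consume_message(self):
        """Remove and return the next message, or None if the queue is empty."""
        try:
            return self._q.get_nowait()
        except queue.Empty:
            return None

jobs = PaasQueue("image-resize-jobs")
jobs.post_message({"image": "photo1.png", "width": 640})
print(jobs.consume_message())
print(jobs.consume_message())
```

The same post/consume/examine shape underlies the job-processing and session-management APIs listed above: the platform hosts the state, and the application only calls the service interface.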
Guidelines for selecting a PaaS Provider
1. Compatibility with other clouds: PaaS providers will claim portability to all other clouds. We need
to be wary of providers who claim they have everything for everyone.
2. Target customers: PaaS providers have certain target customers and architect their environments
to appeal to particular groups of users.
3. Avoid vendor lock-in: We must select a provider who facilitates cloud interoperability for our
application. It must be easy to port the application to another public or hybrid cloud, or even to a
non-virtualised internal infrastructure.
4. Platform management: Make sure that the PaaS provider can manage and maintain the
environment.
6. Portability/interoperability with applications on another cloud: Unlike IaaS, where OS images can
be moved between clouds, applications developed on a PaaS involve the cloud provider's APIs and
customised language extensions. This makes porting of applications difficult.
8. Security for development code: Since the development code resides on third-party, shared
infrastructure, customers are wary of the security and privacy of the code.
Software as a Service
In the SaaS cloud, the vendor supplies the hardware infrastructure, software, and
applications. The customer interacts with the application through a portal. As the service
provider hosts the application as well as stores the user data, the end user is free to use the
service from anywhere.
Users get to use the application over the internet without the onus of buying, implementing,
or managing the software.
SaaS and ASP (Application Service Provider) may appear to be the same, but they are
different.
There are a large number of SaaS providers, such as Microsoft Live CRM, Google Apps, Trend
Micro, Symantec, and Zoho.
Security questions to ask a SaaS provider include:
How does the provider make sure that users who sign up are not fraudsters and will not
start malicious activity?
How, and to what extent, is security integrated with the SDLC at different phases, such as
architecture, coding, testing, and deployment?