
CLOUD COMPUTING UNIT 2 NOTES

Architectural Constraints of Web Services (REST)


1. Uniform Interface:
This is the key constraint that differentiates a REST API from a non-REST API. It suggests that there
should be a uniform way of interacting with a given server, irrespective of the device or type of
application (website, mobile app).
There are four guiding principles (a small sketch follows the list below):
• Resource-Based
• Manipulation of Resources Through Representations
• Self-descriptive Messages
• Hypermedia as the Engine of Application State (HATEOAS)
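A minimal sketch of the uniform interface, assuming a hypothetical resource URI (https://api.example.com/books) and Python's requests library: the same resource is manipulated only through the standard HTTP methods.

import requests

# Hypothetical example: one resource URI, manipulated only through standard
# HTTP methods. The URL and payload are illustrative, not a real service.
BASE = "https://api.example.com/books"

requests.get(f"{BASE}/42")                                   # read a representation
requests.post(BASE, json={"title": "Cloud 101"})             # create a new resource
requests.put(f"{BASE}/42", json={"title": "Cloud 102"})      # replace the resource
requests.delete(f"{BASE}/42")                                # remove the resource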
2. Stateless:
It means that the necessary state to handle a request is contained within the request itself, and the
server does not store anything related to the session (see the sketch below).
• Statelessness enables greater availability, since the server does not have to maintain, update or
communicate session state.
• The drawback is that the client may need to send a lot of data with every request, which reduces the
scope for network optimization and requires more bandwidth.
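A hedged sketch of statelessness, again against a hypothetical endpoint: every request carries the credentials and context the server needs, so nothing is kept in a server-side session between calls.

import requests

# Hypothetical stateless calls: the token is resent with every request
# instead of being stored in a session on the server.
headers = {"Authorization": "Bearer <token>"}

requests.get("https://api.example.com/orders", headers=headers)
requests.get("https://api.example.com/orders/7", headers=headers)  # token sent again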
3. Cacheable:
• Every response should indicate whether it is cacheable and for how long it can be cached on the
client side (a small sketch follows below).
• For subsequent requests, the client can return the data from its own cache, so there is no need to
send the same request to the server again.
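An illustrative sketch, assuming a Flask service (the route and payload are made up): the response carries a Cache-Control header telling clients how long they may reuse it from their cache.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/prices")
def prices():
    resp = jsonify({"gold": 1925.4})
    # Tell clients the response may be cached for 60 seconds.
    resp.headers["Cache-Control"] = "public, max-age=60"
    return resp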
4. Client-Server:
A REST application should have a client-server architecture.
The client is the party that requests resources and is not concerned with data storage, which remains
internal to the server; the server is the party that holds the resources and is not concerned with the
user interface or user state.
• The two can evolve independently.
• The client does not need to know anything about the business logic, and the server does not need to
know anything about the front-end UI.
5. Layered system:
An application architecture needs to be composed of multiple layers.
• Each layer knows only about the layer immediately next to it, and there can be many intermediary
servers between the client and the end server.
• Intermediary servers may improve system availability by enabling load-balancing and by providing
shared caches.
Publish-Subscribe Model
• Pub/Sub allows services to communicate asynchronously, with latencies on the order of 100
milliseconds.
• Pub/Sub is used for streaming analytics and data integration pipelines to ingest and distribute data.
It's equally effective as a messaging-oriented middleware for service integration or as a queue to
parallelize tasks.
• Pub/Sub enables you to create systems of event producers and consumers, called publishers and
subscribers. Publishers communicate with subscribers asynchronously by broadcasting events, rather
than by synchronous remote procedure calls (RPCs). A minimal in-memory sketch of the pattern follows below.
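A minimal in-memory sketch of the publish-subscribe pattern in Python (the topic name and handlers are made up): publishers hand events to a broker, which fans them out to subscribers, so producers and consumers never call each other directly. Real services such as Cloud Pub/Sub add durable queues, acknowledgements and network transport on top of this idea.

from collections import defaultdict

class Broker:
    """Toy broker: routes published events to registered subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Delivery is decoupled from the publisher: it never knows who listens.
        for callback in self.subscribers[topic]:
            callback(event)

broker = Broker()
broker.subscribe("orders", lambda e: print("billing saw", e))
broker.subscribe("orders", lambda e: print("shipping saw", e))
broker.publish("orders", {"id": 7, "item": "disk"})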

Basics of Virtualization
Virtualization is the "creation of a virtual version of something, such as a server, a desktop, a storage
device, an operating system or network resources".
• Creation of a virtual machine over the existing operating system and hardware is known as hardware
virtualization.
• A virtual machine provides an environment that is logically separated from the underlying hardware.
• The machine on which the virtual machine is created is known as the host machine, and the virtual
machine itself is referred to as the guest machine.
Virtualization needs
• Money saving
• Dramatic increase in control
• Simplified disaster recovery
Virtualization initiatives
• Virtual CPU & Memory
• Virtual Networking
• Virtual disk
• Virtual machine
Types of Virtualization
1. Hardware Virtualization
2. Operating System Virtualization
3. Server Virtualization
4. Storage Virtualization
Hardware Virtualization
• When the virtual machine software or virtual machine manager (VMM) is installed directly on the
hardware system, it is known as hardware virtualization.
• The main job of the hypervisor is to control and monitor the processor, memory and other hardware
resources.
• After virtualizing the hardware system, we can install different operating systems on it and run
different applications on those operating systems.
• Usage:
• Hardware virtualization is mainly done for the server platforms, because controlling virtual machines is
much easier than controlling a physical server.
Operating System Virtualization
• When the virtual machine software or virtual machine manager (VMM) is installed on the host
operating system instead of directly on the hardware system, it is known as operating system
virtualization.
• Usage:
• Operating System Virtualization is mainly used for testing the applications on different platforms of OS.
Storage Virtualization
Storage virtualization is the process of grouping the physical storage from multiple network
storage devices so that it looks like a single storage device.
• Storage virtualization is also implemented by using software applications.
• Usage:
• Storage virtualization is mainly done for back-up and recovery purposes.
Server Virtualization
• When the virtual machine software or virtual machine manager (VMM) is installed directly on the
server system, it is known as server virtualization.
• Usage:
• Server virtualization is done because a single physical server can be divided into multiple virtual
servers on demand and to balance the load.

Instruction Set Architecture Level (ISA)


ISA virtualization can work through ISA emulation.
This is used to run legacy code written for a different hardware configuration.
Such code runs on a virtual machine that emulates the original ISA.
• With this, binary code that originally needed some additional layers to run is now capable of running
on x86 machines.
• It can also be tweaked to run on x64 machines.
• With ISA emulation, it is possible to make the virtual machine hardware agnostic (a toy sketch follows below).
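A toy sketch of ISA emulation using a made-up two-instruction guest ISA (not a real instruction set): each guest instruction is decoded and interpreted by host code, which is the basic mechanism that lets binaries for one architecture run on another. Real emulators such as QEMU decode actual machine code and usually add binary translation for speed.

# Toy "guest" ISA with two instructions, interpreted on the host.
def emulate(program):
    regs = {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "LOAD":                 # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":                # ADD dst, src
            regs[args[0]] += regs[args[1]]
    return regs

print(emulate([("LOAD", "r0", 2), ("LOAD", "r1", 3), ("ADD", "r0", "r1")]))  # {'r0': 5, 'r1': 3}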

Hardware Abstraction Level (HAL)


True to its name, HAL lets virtualization take place at the level of the hardware.
The virtual machine is formed at this level and manages the hardware through the virtualization
process.
• It allows the virtualization of each of the hardware components, such as the input-output devices,
the memory, the processor, etc.
• It lets multiple users share the same hardware by running multiple virtualization instances at the
very same time.
• This is mostly used in cloud-based infrastructure.

Virtualization Structures
Before virtualization, the operating system manages the hardware. After virtualization, a virtualization
layer is inserted between the hardware and the operating system. The virtualization layer is
responsible for converting portions of the real hardware into virtual hardware.
• Therefore, different operating systems such as Linux and Windows can run on the same physical
machine simultaneously.
• Depending on the position of the virtualization layer, there are several classes of VM architectures.
• Hypervisor architecture
• Paravirtualization
• Host-based virtualization

Hypervisor architecture
The hypervisor supports hardware-level virtualization on bare metal devices like CPU, memory, disk and
network interfaces.
• The hypervisor software sits directly between the physical hardware and its OS.
• This virtualization layer is referred to as either the VMM or the hypervisor.
• The hypervisor provides hypercalls for the guest OSes and applications.
• The device drivers and other changeable components are outside the hypervisor.
• The size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic
hypervisor.
Tools of Virtualization
KVM (Kernel-based Virtual Machine):
• KVM is an open-source virtualization tool for Linux that relies on the CPU virtualization extensions
(AMD-V or Intel VT).
• It can be operated in either emulation or hardware mode; however, without the CPU extensions the
overall performance will be poor.
• It was designed for the command line.
• KVM has a decent management interface that enables users to perform actions such as launching and
stopping virtual machines or taking screenshots with ease. A hedged scripting sketch follows below.
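A hedged scripting sketch using the libvirt Python bindings (the libvirt-python package), which is one common way to automate KVM guests; it assumes libvirtd is running and that the user is allowed to open the qemu:///system URI.

import libvirt

# List KVM/QEMU guests known to the local libvirt daemon and whether they run.
conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(dom.name(), state)
conn.close()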
Ganeti:
• Ganeti is a virtual machine cluster management tool originally developed by Google.
• The solution stack uses either Xen, KVM, or LXC as the virtualization platform.
• Ganeti was initially started as a VMware alternative for managing networks, storage and virtual
machines.
• It was designed to handle cluster management of virtual servers and offer quick and easy recovery
after physical failures using commodity hardware.
Virtualization of CPU
CPU virtualization is a cloud-computing technique in which a single CPU acts as if it were multiple
machines working together.
• Virtualization has existed since the 1960s and became popular with hardware virtualization, or CPU
virtualization.
• Virtualization mainly focuses on efficiency and performance-related operations by saving time.
• CPU virtualization goes by different names depending on the CPU manufacturer: for Intel CPUs the
feature is called Intel Virtualization Technology (Intel VT); for AMD CPUs it is called AMD-V.
• CPU virtualization is often disabled by default in the BIOS and needs to be enabled in order for an
operating system to take advantage of it (a quick check is sketched below).
• CPU virtualization involves a single CPU acting as if it were multiple separate CPUs. The most
common reason for doing this is to run multiple different operating systems on one machine.
• CPU virtualization emphasizes performance and runs guest instructions directly on the available CPUs
whenever possible.
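A quick, Linux-only check (a rough sketch, not a definitive test): the vmx flag in /proc/cpuinfo indicates Intel VT-x and svm indicates AMD-V. The feature may still be switched off in the BIOS/UEFI even when the flag is present.

# Look for hardware virtualization flags in the CPU feature list on Linux.
with open("/proc/cpuinfo") as f:
    flags = f.read()

if "vmx" in flags:
    print("Intel VT-x capable CPU")
elif "svm" in flags:
    print("AMD-V capable CPU")
else:
    print("No hardware virtualization flags found")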

Types of CPU Virtualization

Disaster Recovery
Disaster recovery is one of the important factors for cloud deployments.
Disaster recovery defines the factors that ensure service availability and trust and helps to develop
credibility for the cloud vendor.
• The objective of the disaster recovery plan is to provide critical IT services within a stated period of
time following the declaration of a disaster and to perform the following activities:
• Protect and maintain the currency of vital records.
• Select a site or vendor that is capable of supporting the requirements of the critical application
workload.
A disaster recovery plan includes procedures that will ensure the optimum availability of the critical
business functions.
• When disaster recovery plans fail, the failures primarily result from a lack of high availability,
planning, preparation, and maintenance prior to the occurrence of the disaster.
• To prevent gaps, disaster recovery plans, recovery procedures, technology platforms and disaster
recovery vendor contracts must be updated concurrently with changes.
