CC Notes-2: Cloud Computing (KCS-713)
SOA makes software components reusable with the help of common communication
standards (basic protocols, syntax and semantics, software and hardware architecture), in such a way
that they can be rapidly incorporated into new applications without deep integration
with the data applications, APIs, and devices across the IT organization.
Each service in SOA embodies the code and data integrations required to execute a
complete business function (e.g., checking a customer's credit or calculating a monthly loan
payment).
Service interfaces in SOA provide loose coupling: consumers need little or no knowledge of how
the integration is implemented underneath.
Simple protocols such as HTTP and SOAP (Simple Object Access Protocol) are used to send
requests to read or change data.
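As a rough illustration, a consumer might invoke such a service over plain HTTP; the endpoint URL, parameter name, and response field below are hypothetical, assumed only for this sketch (Python, using the requests library).

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical credit-check service endpoint; the consumer only knows the
# interface (URL, parameters, response fields), not how the check is implemented.
SERVICE_URL = "https://example.com/services/credit-check"

def check_customer_credit(customer_id: str) -> bool:
    """Call the (hypothetical) credit-check service and return its verdict."""
    response = requests.get(SERVICE_URL, params={"customerId": customer_id}, timeout=10)
    response.raise_for_status()        # fail loudly on HTTP errors
    result = response.json()           # e.g. {"customerId": "C42", "approved": true}
    return bool(result.get("approved", False))

if __name__ == "__main__":
    print(check_customer_credit("C42"))
```

Because the consumer depends only on this interface, the provider can change how the credit check is implemented without breaking callers.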
Benefits
Greater business agility.
Ability to leverage legacy functionality (taking functionality built for one platform and
extending it to others).
Improved collaboration between business & IT.
Examples of SOA
By 2010, SOA implementations were well established in organizations such as the following:
Delaware Electric turned to SOA to integrate systems that previously did not talk to each
other.
Cisco adopted SOA to make sure that its product-ordering experience was consistent
across all products and channels, by exposing ordering processes as services that Cisco
divisions and business partners could incorporate into their web sites.
While the defining concepts of Service-Oriented Architecture vary from company to company,
there are six key tenets that overarch the broad concept of Service-Oriented Architecture. These
core values include:
Business value
Strategic goals
Intrinsic inter-operability
Shared services
Flexibility
Evolutionary refinement
Each of these core values can be seen on a continuum from older forms of distributed computing,
to Service-Oriented Architecture, to cloud computing (which is often seen as an offshoot of
Service-Oriented Architecture).
There are three roles in each of the Service-Oriented Architecture building blocks: the service
provider; the service broker (also called the service registry or service repository); and the service
requester/consumer.
The service provider works in conjunction with the service registry, debating the whys and
hows of the services being offered, such as security, availability, what to charge, and
more. This role also determines the service category and if there need to be any trading
agreements.
The service broker makes information regarding the service available to those requesting
it. The scope of the broker is determined by whoever implements it.
The service requester locates entries in the broker registry and then binds them to the
service provider. They may or may not be able to access multiple services; that depends
on the capability of the service requester.
Web services that conform to the REST architectural style, called RESTful Web services, provide
interoperability between computer systems on the Internet. RESTful Web services allow requesting
systems to access and manipulate textual representations of Web resources by using a uniform and
predefined set of stateless operations. Other kinds of Web services, such as SOAP Web
services, expose their own arbitrary sets of operations.
"Web resources" were first defined on the World Wide Web as documents or files
identified by their URLs. However, today they have a much more generic and abstract
definition that encompasses everything, entity, or action that can be identified, named,
addressed, handled, or performed, in any way whatsoever, on the Web. In a RESTful
Web service, requests made to a resource's URI will elicit a response with
a payload formatted in HTML, XML, JSON, or some other format. The response can
confirm that some alteration has been made to the resource state, and the response can
provide hypertext links to other related resources. When HTTP is used, as is most
common, the operations (HTTP methods) available are GET, HEAD, POST, PUT,
PATCH, DELETE, CONNECT, OPTIONS and TRACE.
By using a stateless protocol and standard operations, RESTful systems aim for fast
performance, reliability, and the ability to grow by reusing components that can be
managed and updated without affecting the system as a whole, even while it is running.
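A minimal sketch of these stateless operations against a hypothetical resource collection (the base URI and JSON fields are assumptions, not part of any real API):

```python
import requests  # pip install requests

# Hypothetical RESTful resource collection. The URI identifies what is acted on,
# and the HTTP method says how. Each request is stateless: it carries everything
# the server needs to process it.
BASE = "https://api.example.com/books"

# Create a resource (POST to the collection URI)
created = requests.post(BASE, json={"title": "Cloud Computing", "year": 2023}, timeout=10)
book_uri = created.headers.get("Location", f"{BASE}/1")  # servers usually return the new URI

# Read a representation of the resource (GET)
book = requests.get(book_uri, timeout=10).json()

# Replace the resource state (PUT) or modify part of it (PATCH)
requests.put(book_uri, json={"title": "Cloud Computing", "year": 2024}, timeout=10)
requests.patch(book_uri, json={"year": 2024}, timeout=10)

# Remove the resource (DELETE)
requests.delete(book_uri, timeout=10)
```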
Client-server architecture
Separating the user interface concerns of the client from the data storage concerns of the server
improves the portability of the client and the scalability of the server, and allows the two to
evolve independently.
Statelessness
Each request from the client must contain all of the information necessary to service it; no client
context is stored on the server between requests, so any session state is held on the client.
Cacheability
As on the World Wide Web, clients and intermediaries can cache responses. Responses
must, implicitly or explicitly, define themselves as either cacheable or non-cacheable to
prevent clients from reusing stale or inappropriate data in response to further requests.
Well-managed caching partially or completely eliminates some client-server interactions,
further improving scalability and performance.
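A small sketch of how a server might explicitly declare a response cacheable, using Python's standard http.server module; the resource payload and the 60-second lifetime are illustrative assumptions.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 1, "status": "ok"}).encode()
        self.send_response(200)
        # Explicitly declare the response cacheable for 60 seconds, so clients
        # and intermediaries may reuse it without another round trip.
        self.send_header("Cache-Control", "public, max-age=60")
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```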
Layered system
A client cannot ordinarily tell whether it is connected directly to the end server, or to an
intermediary along the way. If a proxy or load balancer is placed between the client and
server, it won't affect their communications and there won't be a need to update the client
or server code. Intermediary servers can improve system scalability by enabling load
balancing and by providing shared caches. Also, security can be added as a layer on top
of the web services, clearly separating business logic from security logic. Adding
security as a separate layer enforces security policies. Finally, layering also means that a server
can call multiple other servers to generate a response to the client.
Uniform interface
The uniform interface constraint is fundamental to the design of any RESTful system. It
simplifies and decouples the architecture, which enables each part to evolve
independently. The four constraints for this uniform interface are: resource identification in
requests (each resource is identified by its URI), resource manipulation through representations
(a client modifies a resource by sending a representation of it, such as JSON or XML),
self-descriptive messages (each message carries enough information, such as its media type, to
describe how to process it), and hypermedia as the engine of application state (responses contain
links that tell the client which actions and resources are available next).
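As a rough illustration of the last two constraints, a resource representation can describe itself and carry links that drive the client's next steps; the field names and URIs below are made up for this sketch.

```python
# A hypothetical JSON representation of an order resource. The media type tells the
# client how to interpret it (self-descriptive message), and the embedded links let the
# client discover what it can do next (hypermedia as the engine of application state).
order_representation = {
    "orderId": 1001,
    "status": "shipped",
    "_links": {
        "self":     {"href": "/orders/1001"},
        "customer": {"href": "/customers/42"},
        "cancel":   {"href": "/orders/1001/cancellation", "method": "POST"},
    },
}

# A generic client follows link relations instead of hard-coding URIs:
next_step = order_representation["_links"]["customer"]["href"]
print(next_step)
```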
Virtualization
3. Virtualization technology
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a
server, a desktop, a storage device, an operating system or network resources".
In other words, virtualization is a technique that allows sharing a single physical instance of a
resource or an application among multiple customers and organizations. It does so by assigning a
logical name to a physical resource and providing a pointer to that physical resource when
demanded.
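A toy sketch of that idea: a virtualization layer keeps a table mapping logical names to physical resources and returns a pointer to the physical resource on demand (the names and device paths below are invented).

```python
# Illustrative only: a tiny "virtualization layer" mapping logical names to physical storage.
physical_resources = {
    "disk-A": "/dev/sda1",   # hypothetical physical devices
    "disk-B": "/dev/sdb1",
}

logical_to_physical = {
    "customer-data": "disk-A",   # logical names seen by tenants
    "billing-data":  "disk-B",
}

def resolve(logical_name: str) -> str:
    """Return a pointer to the physical resource backing a logical name."""
    return physical_resources[logical_to_physical[logical_name]]

print(resolve("customer-data"))   # -> /dev/sda1
```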
Creation of a virtual machine on top of an existing operating system and hardware is known as
hardware virtualization. A virtual machine provides an environment that is logically separated
from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and that
virtual machine is referred to as the guest machine.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on
the hardware system, it is known as hardware virtualization. The main job of the hypervisor is to
control and monitor the processor, memory, and other hardware resources. After the hardware
system is virtualized, we can install different operating systems on it and run different
applications on those operating systems.
Usage:
Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on the host
operating system instead of directly on the hardware system, it is known as operating system
virtualization.
Usage:
Operating system virtualization is mainly used for testing applications on different operating
system platforms.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on
the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple servers
on an on-demand basis and to balance the load.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple network
storage devices so that it looks like a single storage device. Storage virtualization is also
implemented by using software applications.
Usage:
Virtualization plays a very important role in cloud computing. Normally, in cloud computing,
users share the data present in the cloud, such as applications, but with virtualization users
actually share the underlying infrastructure.
The main use of virtualization technology is to provide standard versions of applications to
cloud users; when the next version of an application is released, the cloud provider has to supply
the latest version to its users, which is practically difficult because it is more expensive.
To overcome this problem, virtualization technology is used: the servers and the software
applications required by cloud providers are maintained by third parties, and the cloud providers
pay for them on a monthly or annual basis.
Data Virtualization
Data virtualization is the process of retrieving data from various sources without knowing its
type or the physical location where it is stored. It collects heterogeneous data from different
sources and allows data users across the organization to access this data according to their
work requirements. This heterogeneous data can be accessed through any application, such as web
portals, web services, e-commerce, Software as a Service (SaaS), and mobile applications.
We can use Data Virtualization in the field of data integration, business intelligence, and cloud
computing.
o It allows users to access the data without worrying about where it resides on the memory.
o It offers better customer satisfaction, retention, and revenue growth.
o It provides various security mechanisms that allow users to safely store their personal and
professional information.
o It reduces costs by removing data replication.
o It provides a user-friendly interface to develop customized views.
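A minimal sketch of the idea, assuming two hypothetical sources (a CSV file of customers and a SQLite table of orders): a thin virtual layer exposes one unified view, and its callers never see where or how the underlying data is stored.

```python
import csv
import sqlite3

def read_customers(csv_path):
    """Customers live in a flat CSV file with (assumed) columns 'id' and 'name'."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

def read_orders(db_path):
    """Orders live in a relational SQLite table orders(customer_id, amount)."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    rows = conn.execute("SELECT customer_id, amount FROM orders").fetchall()
    conn.close()
    return [dict(r) for r in rows]

def customer_order_view(csv_path, db_path):
    """The 'virtual' layer: one unified view, regardless of where the data really lives."""
    totals = {}
    for order in read_orders(db_path):
        key = str(order["customer_id"])
        totals[key] = totals.get(key, 0) + order["amount"]
    return [
        {"name": c["name"], "total_spent": totals.get(str(c["id"]), 0)}
        for c in read_customers(csv_path)
    ]
```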
1. Analyze performance
Data virtualization is used to analyze the performance of the organization compared to previous
years.
2. Search and discover interrelated data
Data virtualization (DV) provides a mechanism to easily search for data that is similar and
internally related to other data.
3. Agile Business Intelligence
This is one of the most common uses of data virtualization. It is used in agile reporting and
real-time dashboards that require timely aggregation, analysis, and presentation of relevant data
from multiple sources. Both individuals and managers use this to monitor performance, which
helps with daily operational decisions in areas such as sales, support, finance, logistics, legal, and
compliance.
4. Data Management
Data virtualization provides a secure centralized layer to search, discover, and govern the unified
data and its relationships.
Data Virtualization Tools
1. Red Hat JBoss Data Virtualization
Red Hat data virtualization is a good choice for developers and for those who are using
microservices and containers. It is written in Java.
2. TIBCO Data Virtualization
TIBCO helps administrators and users create a data virtualization platform for accessing
multiple data sources and data sets. It provides a built-in transformation engine to combine
non-relational and unstructured data sources.
3. Oracle Data Service Integrator
It is a very popular and powerful data integration tool which mainly works with Oracle
products. It allows organizations to quickly develop and manage data services to access a single
view of data.
4. SAS Federation Server
SAS Federation Server provides various technologies, such as scalable, multi-user, and
standards-based data access, for accessing data from multiple data services. It mainly focuses on
securing data.
5. Denodo
Denodo is one of the best data virtualization tools; it allows organizations to minimize the
network traffic load and improve response times for large data sets. It is suitable for both small
and large organizations.
Hardware Virtualization
Previously, there was a one-to-one relationship between physical servers and operating
systems, even though each workload's CPU, memory, and networking requirements were often
low. Under this model, the costs of doing business kept increasing: the physical space, the
amount of power, and the hardware required all added up.
The hypervisor manages the shared physical hardware resources between the guest operating
systems and the host operating system. The physical resources become abstracted versions in
standard formats regardless of the hardware platform. The abstracted hardware is presented as
if it were actual hardware, so the virtualized operating system treats these resources as physical
entities.
Virtualization means abstraction. Hardware virtualization is accomplished by
abstracting the physical hardware layer by use of a hypervisor or VMM (Virtual Machine
Monitor).
When the virtual machine software, virtual machine manager (VMM), or hypervisor is installed
directly on the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and
other hardware resources.
After the hardware system is virtualized, we can install different operating systems on it
and run different applications on those operating systems.
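A toy model of that role (not a real hypervisor): the host's CPUs and memory are carved out and handed to guest VMs, each of which can then run its own operating system; all names and sizes below are invented.

```python
# Illustrative only: a tiny model of a hypervisor handing host resources to guests.
class ToyHypervisor:
    def __init__(self, cpus: int, memory_gb: int):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.guests = {}

    def create_guest(self, name: str, cpus: int, memory_gb: int) -> bool:
        """Allocate a slice of the physical resources to a new guest VM."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            return False                      # host capacity exhausted
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.guests[name] = {"cpus": cpus, "memory_gb": memory_gb, "os": None}
        return True

    def install_os(self, name: str, os_name: str):
        """Different guests can run different operating systems on the same host."""
        self.guests[name]["os"] = os_name

host = ToyHypervisor(cpus=16, memory_gb=64)
host.create_guest("vm1", cpus=4, memory_gb=16)
host.install_os("vm1", "Linux")
host.create_guest("vm2", cpus=4, memory_gb=16)
host.install_os("vm2", "Windows")
```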
Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.
The main benefits of hardware virtualization are more efficient resource utilization, lower overall
costs as well as increased uptime and IT flexibility.
1) More Efficient Resource Utilization:
Physical resources can be shared among virtual machines; the resources left unused by one
virtual machine can be allocated to other virtual machines if the need exists.
2) Lower Overall Costs:
It is now possible for multiple operating systems to co-exist on a single hardware platform, so
the number of servers, the rack space, and the power consumption drop significantly.
3) Increased Uptime:
Modern hypervisors provide highly orchestrated operations that maximize the abstraction of
the hardware and help to ensure maximum uptime. These functions help to migrate a running
virtual machine from one host to another dynamically, as well as maintain a running copy of a
virtual machine on another physical host in case the primary host fails.
4) Increased IT Flexibility:
Hardware virtualization enables quick deployment of server resources in a managed and
consistent way, so IT can adapt quickly and provide the business with the resources it needs in
good time.
Software Virtualization
1) Client Deployments Become Easier:
By simply copying a file to a workstation or linking to a file on the network, we can easily
install the virtual software.
2) Easy to manage:
Managing updates becomes a simpler task: you update in one place and deploy the updated
virtual application to all clients.
3) Software Migration:
Without software virtualization, moving from one software platform to another takes a long
time to deploy and impacts end-user systems. With a virtualized software environment, the
migration becomes easier.
Server Virtualization
Server Virtualization is the process of dividing a physical server into several virtual
servers, called virtual private servers. Each virtual private server can run independently.
The concept of server virtualization is widely used in IT infrastructure to minimize costs by
increasing the utilization of existing resources.
1. Hypervisor
The hypervisor is mainly used to perform tasks such as allocating physical hardware
resources (CPU, RAM, etc.) to several smaller independent virtual machines, called "guests", on
the host machine.
2. Full Virtualization
Full Virtualization uses a hypervisor to directly communicate with the CPU and physical server.
It provides the best isolation and security mechanism to the virtual machines.
The biggest disadvantage of using hypervisor in full virtualization is that a hypervisor has its
own processing needs, so it can slow down the application and server performance.
3. Para Virtualization
Para virtualization is quite similar to full virtualization. Its advantages are that it is easier to
use, offers enhanced performance, and does not require emulation overhead. Xen and UML
(User Mode Linux) primarily use para virtualization.
The difference between full and para virtualization is that, in para virtualization, the hypervisor
does not need much processing power to manage the guest OS.
4. Operating System Virtualization
Linux OS virtualization and Windows OS virtualization are the types of operating system
virtualization. FreeVPS, OpenVZ, and Linux-VServer are some examples of this system-level
virtualization.
5. Hardware Assisted Virtualization
Hardware assisted virtualization was introduced by AMD and Intel. It is also known
as hardware virtualization, AMD virtualization, and Intel virtualization. It is designed to
increase the performance of the processor. The advantage of using hardware assisted
virtualization is that it requires less hypervisor overhead.
6. Kernel-Level Virtualization
User Mode Linux (UML) and the Kernel-based Virtual Machine (KVM) are some examples of
kernel-level virtualization.
Advantages of Server Virtualization
1. Independent Restart
In server virtualization, each virtual server can be restarted independently without affecting the
working of the other virtual servers.
2. Low Cost
Server Virtualization can divide a single server into multiple virtual private servers, so it reduces
the cost of hardware components.
3. Disaster Recovery
Disaster recovery becomes simpler, because a virtual server can quickly be moved to and
restored on another physical host.
4. Faster Deployment of Resources
Server virtualization allows us to deploy our resources in a simpler and faster way.
5. Security
It allows users to store their sensitive data inside the data centers.
Disadvantages of Server Virtualization
1. The biggest disadvantage of server virtualization is that when the server goes offline, all
the websites hosted by that server will also go down.
2. There is no easy way to measure the performance of virtualized environments.
3. It consumes a large amount of RAM.
4. It is difficult to set up and maintain.
5. Some core applications and databases do not support virtualization.
6. It requires extra hardware resources.
Storage Virtualization
As we know, there has traditionally been a strong link between the physical host and its locally
installed storage devices. However, that paradigm has been changing drastically; local storage is
almost no longer needed.
As technology progresses, more advanced storage devices are coming to the market that provide
more functionality and make local storage obsolete.
Storage virtualization is a major component of storage servers, in the form of functional
RAID levels and controllers.
Operating systems and applications that access the disks believe they are writing to them
directly.
In reality, the controllers configure the local storage in RAID groups and present the storage to
the operating system depending upon the configuration. The storage is abstracted, and the
controller determines how to write the data or retrieve the requested data for the operating
system.
Storage virtualization is becoming more and more important in various other forms:
File servers: The operating system writes the data to a remote location with no need to
understand how to write to the physical media.
WAN Accelerators: Instead of sending multiple copies of the same data over the WAN
environment, WAN accelerators will cache the data locally and present the re-requested
blocks at LAN speed, while not impacting the WAN performance.
SAN and NAS: Storage is presented over the network to the operating system.
NAS presents the storage as file operations (like NFS). SAN technologies present the
storage as block-level storage (like Fibre Channel). SAN technologies receive the
operating instructions as if the storage were a locally attached device.
Storage Tiering: Utilizing the storage pool concept as a stepping stone, storage tiering
analyzes the most commonly used data and places it on the highest-performing storage
pool, while the least-used data is placed on the lowest-performing storage pool.
This operation is done automatically without any interruption of service to the data consumer.
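A rough sketch of the tiering decision, assuming hypothetical per-block access counts and two pools; real arrays use far more sophisticated policies.

```python
# Illustrative only: place each block of data on a storage pool by how often it is accessed.
access_counts = {            # hypothetical access statistics collected by the array
    "block-1": 920,
    "block-2": 3,
    "block-3": 145,
}

HOT_THRESHOLD = 100          # assumed cut-off between "hot" and "cold" data

def tier_for(block: str) -> str:
    return "ssd-pool" if access_counts[block] >= HOT_THRESHOLD else "hdd-pool"

placement = {block: tier_for(block) for block in access_counts}
print(placement)   # {'block-1': 'ssd-pool', 'block-2': 'hdd-pool', 'block-3': 'ssd-pool'}
```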
CPU Virtualization
In CPU virtualization, most of a virtual machine's unprivileged instructions run directly on the
host processor in native mode for efficiency, while privileged and sensitive instructions are
trapped and handled by the VMM (hypervisor).
Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by
modern operating systems.
In a traditional execution environment, the operating system maintains mappings
of virtual memory to machine memory using page tables, which is a one-stage mapping
from virtual memory to machine memory.
All modern x86 CPUs include a memory management unit (MMU) and a translation look
aside buffer (TLB) to optimize virtual memory performance.
However, in a virtual execution environment, virtual memory virtualization involves
sharing the physical system memory in RAM and dynamically allocating it to
the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the
VMM, respectively: virtual memory to physical memory and physical memory to
machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest
OS. The guest OS continues to control the mapping of virtual addresses to the physical
memory addresses of VMs. But the guest OS cannot directly access the actual machine
memory. The VMM is responsible for mapping the guest physical memory to the actual
machine memory.
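A toy illustration of the two-stage mapping with invented page numbers: the guest OS maps virtual pages to guest-physical pages, the VMM maps guest-physical pages to machine pages, and composing the two gives the real location.

```python
# Illustrative only, with made-up page numbers.
guest_page_table = {   # maintained by the guest OS: virtual page -> guest-physical page
    0: 7,
    1: 3,
}
vmm_page_table = {     # maintained by the VMM: guest-physical page -> machine page
    7: 42,
    3: 19,
}

def translate(virtual_page: int) -> int:
    """Two-stage translation: virtual -> guest-physical -> machine page."""
    guest_physical = guest_page_table[virtual_page]   # stage 1 (guest OS)
    return vmm_page_table[guest_physical]             # stage 2 (VMM)

print(translate(0))   # virtual page 0 actually lives in machine page 42
```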
I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and
the shared physical hardware. At the time of this writing, there are three ways to implement
I/O virtualization: full device emulation, para-virtualization, and direct I/O. Full device
emulation is the first approach to I/O virtualization; generally, this approach emulates
well-known, real-world devices.
A virtual machine is effectively a single file that contains everything, including your
operating systems, programs, settings, and files. At the same time, you’ll be able to
use your virtual machine the same way you use a local desktop.
Virtualization greatly simplifies disaster recovery, since it does not require rebuilding
a physical server environment. Instead, you can move your virtual machines over to
another system and access them as normal.
Factor in cloud computing, and you have the complete flexibility of not having to
depend on in-house hardware at all. Instead, all you’ll need is a device with internet
access and a remote desktop application to get straight back to work as though
nothing happened.
Decide which systems and data are the most critical for recovery, and document them.
Get management support for the DR (disaster recovery) plan.
Complete a risk assessment and business impact analysis to outline possible risks and
their potential impacts.