
CLOUD COMPUTING

UNIT II

Parallel Computing Vs Distributed Computing

There are two main types of computation: parallel computing and distributed computing. A computer system performs tasks according to the instructions it is given. With a single processor, only one task can be executed at a time, which is inefficient. Parallel computing addresses this by allowing several processors to work on tasks simultaneously, and modern computers support parallel processing to improve system performance. In contrast, distributed computing enables several computers to communicate with one another over a network and collaborate toward a common goal. Organizations such as Facebook and Google rely on distributed computing to let people share resources at scale.

In this article, you will learn about the differences between parallel computing and distributed computing. Before discussing the differences, let us first look at each of them.

What is Parallel Computing?

Parallel computing is also known as parallel processing. It uses several processors, each of which completes the portion of the overall task allocated to it. In other words, parallel computing involves performing numerous tasks simultaneously. Either a shared-memory or a distributed-memory system can be used for parallel computing. In shared-memory systems, all CPUs share a common memory; in distributed-memory systems, each processor has its own local memory and the processors exchange data over an interconnect.

Parallel computing provides numerous advantages. It increases CPU utilization and improves performance because several processors work simultaneously, and the failure of one CPU does not affect the functionality of the others. However, if one processor needs data or instructions from another, the communication between processors can introduce latency.

Advantages and Disadvantages of Parallel Computing

There are various advantages and disadvantages of parallel computing. Some of


the advantages and disadvantages are as follows:
Advantages

1. It saves time and money because many resources working together cut down on time and costs.
2. It can solve larger problems that are difficult or impractical to handle with serial computing.
3. You can do many things at once using many computing resources.
4. Parallel computing is much better than serial computing for modeling, simulating, and comprehending complicated real-world events.

Disadvantages

1. Multi-core architectures consume a lot of power.
2. Parallel solutions are more difficult to implement, debug, and prove correct because of the complexity of communication and coordination, and poorly designed parallel programs can even perform worse than their serial equivalents.

What is Distributed Computing?

It comprises several software components that reside on different systems but


operate as a single system. A distributed system's computers can be physically
close together and linked by a local network or geographically distant and
linked by a wide area network (WAN). A distributed system can be made up of
any number of different configurations, such as mainframes, PCs, workstations,
and minicomputers. The main aim of distributed computing is to make a
network work as a single computer.

There are various benefits of using distributed computing. It enables scalability


and makes it simpler to share resources. It also aids in the efficiency of
computation processes.

Advantages and Disadvantages of Distributed Computing

There are various advantages and disadvantages of distributed computing. Some


of the advantages and disadvantages are as follows:

Advantages

1. It is flexible, making it simple to install, use, and debug new services.


2. In distributed computing, you may add multiple machines as required.
3. If the system crashes on one server, that doesn't affect other servers.
4. A distributed computer system may combine the computational capacity
of several computers, making it faster than traditional systems.

Disadvantages

1. Data security and sharing are the main issues in distributed systems because of their open nature.
2. Because of the distribution across multiple servers, troubleshooting and
diagnostics are more challenging.
3. The main disadvantage of distributed computer systems is the lack of
software support.

Key differences between the Parallel Computing and Distributed


Computing

Here, you will learn the various key differences between parallel computing and distributed computing. Some of the key differences are as follows:

1. Parallel computing is a sort of computation in which various tasks or


processes are run at the same time. In contrast, distributed computing is
that type of computing in which the components are located on various
networked systems that interact and coordinate their actions by passing
messages to one another.
2. In parallel computing, processors communicate with one another via a bus. On the other hand, computer systems in distributed computing connect with one another via a network.
3. Parallel computing takes place on a single computer. In contrast,
distributed computing takes place on several computers.
4. Parallel computing aids in improving system performance. On the other
hand, distributed computing allows for scalability, resource sharing, and
the efficient completion of computation tasks.
5. The computer in parallel computing can have shared or distributed
memory. In contrast, every system in distributed computing has its
memory.
6. Multiple processors execute multiple tasks simultaneously in parallel
computing. In contrast, many computer systems execute tasks
simultaneously in distributed computing.
Head-to-head Comparison between Parallel Computing and Distributed Computing

Definition
Parallel Computing: It is a type of computation in which various processes run simultaneously.
Distributed Computing: It is a type of computing in which the components are located on various networked systems that interact and coordinate their actions by passing messages to one another.

Communication
Parallel Computing: The processors communicate with one another via a bus.
Distributed Computing: The computer systems connect with one another via a network.

Functionality
Parallel Computing: Several processors execute various tasks simultaneously.
Distributed Computing: Several computers execute tasks simultaneously.

Number of Computers
Parallel Computing: It occurs in a single computer system.
Distributed Computing: It involves multiple computers.

Memory
Parallel Computing: The system may have distributed or shared memory.
Distributed Computing: Each computer system has its own memory.

Usage
Parallel Computing: It helps to improve system performance.
Distributed Computing: It allows for scalability, resource sharing, and the efficient completion of computation tasks.

Elements of parallel computing


Silicon-based processor chips are reaching their physical limits. Processing speed is constrained by the speed of light, and the density of transistors packaged in a processor is constrained by thermodynamic limitations.
• A viable solution to overcome this limitation is to connect multiple processors working in coordination with each other to solve “Grand Challenge” problems.
• The first step in this direction led to the development of parallel computing, which encompasses techniques, architectures, and systems for performing multiple activities in parallel.
Parallel Processing
 The primary goal of parallel computing is to increase the computational power available to your essential applications.
 Typically, the infrastructure consists of a set of processors located on a single server, or of separate servers connected to each other, working together to solve a computational problem.
 The earliest computer software was written for serial computation: a computer with a single Central Processing Unit (CPU) executes one instruction at a time.
 A problem is broken down into a series of instructions, which are executed one after another; only one instruction completes at a time.
The main reasons to use parallel computing, illustrated by the short sketch after this list, are:
1. Save time and money.
2. Solve larger problems.
3. Provide concurrency.
4. Multiple execution units
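
The idea of serial versus parallel execution can be made concrete with a small sketch (not part of the original text; the function name and workload below are chosen purely for illustration, and Python's standard multiprocessing module is assumed). The same set of tasks is run first one at a time and then on a pool of four worker processes:

# Minimal sketch: the same work done serially and then in parallel.
from multiprocessing import Pool
import time

def slow_square(n):
    """Stand-in for a CPU- or I/O-bound task."""
    time.sleep(0.1)                  # simulate real work
    return n * n

if __name__ == "__main__":
    data = list(range(20))

    start = time.time()
    serial = [slow_square(n) for n in data]      # one task at a time
    print("serial:  ", round(time.time() - start, 2), "s")

    start = time.time()
    with Pool(processes=4) as pool:              # four workers in parallel
        parallel = pool.map(slow_square, data)
    print("parallel:", round(time.time() - start, 2), "s")

    assert serial == parallel                    # same results, less wall-clock time

On a machine with several cores, the parallel run finishes in roughly a quarter of the serial time, which is exactly the time-saving argument made in the list above.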
What Is Parallel Processing?
Parallel processing is a computing technique in which multiple streams of calculations or data processing tasks occur simultaneously through numerous central processing units (CPUs) working concurrently.

[Figure: Pictorial representation of parallel processing and its inner workings]


Parallel processing uses two or more processors or CPUs simultaneously to
handle various components of a single activity. Systems can slash a program’s
execution time by dividing a task’s many parts among several processors.
Multi-core processors, frequently found in modern computers, and any system
with more than one CPU are capable of performing parallel processing.
For improved speed, lower power consumption, and more effective handling of
several activities, multi-core processors are integrated circuit (IC) chips with
two or more CPUs. Most computers can have two to four cores, while others
can have up to twelve. Complex operations and computations are frequently
completed in parallel processing.
Massively parallel processors (MPPs) demonstrated that high performance could be attained with microprocessors available off the shelf in the general market. When the ASCI Red supercomputer broke the threshold of one trillion floating-point operations per second in 1997, MPPs came to dominate the upper end of computing, and they have since expanded in number and influence.

Hardware architecture for parallel processing


 The core elements of parallel processing are CPUs.
 The classification below is based on the number of instruction streams and data streams that can be processed simultaneously.
Computing systems are classified into four categories
1. Single Instruction Single Data (SISD) System
2. Single Instruction Multiple Data  (SIMD) System
3. Multiple Instructions Single Data  (MISD) System
4. Multiple Instructions Multiple Data  (MIMD) System

SISD - Single Instruction Single Data


 It is a uni-processor machine capable of executing a single instruction which
operates on a single data stream
 Machine instructions are processed sequentially
 All instructions and data to be processed have to be stored in primary memory. 
 This is limited by the rate at which the computer can transfer information
internally.

SIMD - Single Instruction Multiple Data


 It is a multiprocessor machine capable of executing the same instruction on all its CPUs while operating on different data streams.
 This model is well suited for scientific computing, which involves lots of vector and matrix operations.
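
As a loose, software-level illustration of the SIMD idea (this example is not from the original text and assumes the NumPy library is installed; the array sizes are arbitrary), the sketch below applies a single arithmetic operation across an entire array instead of looping over elements one by one; on most modern hardware such array operations are ultimately executed with SIMD instructions:

# One operation applied element-wise to many data items, in the spirit of
# SIMD (single instruction, multiple data). Illustrative only.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)   # 0, 1, 2, ...
b = np.ones(1_000_000, dtype=np.float64)     # 1, 1, 1, ...

# A single vectorized expression over a million elements,
# instead of a Python loop handling one element at a time.
c = 2.0 * a + b

print(c[:5])   # [1. 3. 5. 7. 9.]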
MISD - Multiple Instructions Single Data
 An MISD computing system is also a multiprocessor machine; it is capable of executing different instructions on different processors while they all operate on the same data stream.
 A few such machines have been built, but none of them are available commercially.

MIMD - Multiple Instructions Multiple Data


 It is a multiprocessor machine capable of executing multiple instructions on
multiple data sets.
 Each processor in an MIMD machine works on its own instruction stream and its own data stream.
MIMD machines are broadly classified into two types:
1. Shared-memory MIMD machines
2. Distributed-memory MIMD machines
1. Shared memory MIMD machines
 All processors are connected to a single global memory.
 This is also called a tightly coupled multiprocessor.
 Processors communicate with each other through the shared memory.
 Modification of the data stored in the global memory by one processor is visible
to all other processors.

2. Distributed Memory MIMD Machines


 All processors have their own local memory.
 Systems based on this model are also called loosely coupled multiprocessor systems.
 Processors communicate with each other through an interconnection network.
 Each processor operates asynchronously, and the processors coordinate by exchanging messages over the network.
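
The contrast between the two MIMD styles can be sketched with Python's multiprocessing module (this example is not from the original text; the worker functions and values are invented for illustration). In the shared-memory version the workers all update one common counter, while in the message-passing version each worker keeps its own state and communicates only by sending a message:

# Shared-memory style versus message-passing style.
from multiprocessing import Process, Value, Queue

def shared_worker(counter, amount):
    with counter.get_lock():              # tightly coupled: one global memory
        counter.value += amount

def message_worker(queue, amount):
    queue.put(amount)                     # loosely coupled: send a message instead

if __name__ == "__main__":
    # Shared-memory MIMD style: every process sees the same counter.
    counter = Value("i", 0)
    procs = [Process(target=shared_worker, args=(counter, n)) for n in (1, 2, 3)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("shared-memory total:", counter.value)                     # 6

    # Distributed-memory MIMD style: no common memory, only messages.
    queue = Queue()
    procs = [Process(target=message_worker, args=(queue, n)) for n in (1, 2, 3)]
    for p in procs: p.start()
    for p in procs: p.join()
    print("message-passing total:", sum(queue.get() for _ in range(3)))  # 6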
Approaches to Parallelism:

Levels of Parallelism
There are different level of parallelism which are as follows −
 Instruction Level − At the instruction level, a typical grain contains fewer than 20 instructions and is called fine grain. Depending on the individual program, fine-grain parallelism at this level may range from two to thousands; the average parallelism at instruction level is around five, rarely exceeding seven, in an ordinary program.
For scientific applications, the average parallelism is in the range of 500 to 3000 Fortran statements executing concurrently in an idealized environment.
 Loop Level − This embraces iterative loop operations. A typical loop contains fewer than 500 instructions. Some loop-independent operations can be vectorized for pipelined execution or for lock-step execution on SIMD machines.
Loop-level parallelism is the most optimized program construct to execute on a parallel or vector computer, but recursive loops are difficult to parallelize. Vector processing is mostly exploited at the loop level by vectorizing compilers.
 Procedural Level − This corresponds to medium grain size at the task, procedure, and subroutine levels. A grain at this level contains fewer than 2000 instructions. Detection of parallelism at this level is much more difficult than at the finer-grain levels.
The communication requirement is much lower than in the MIMD execution model, but major effort is required from the programmer to restructure a program at this level.
 Subprogram Level − The subprogram level corresponds to job steps and related subprograms. The grain size here typically contains thousands of instructions. Job steps can overlap across different jobs. Multiprogramming on a uniprocessor or a multiprocessor is conducted at this level.
 Job Level − This corresponds to the parallel execution of essentially independent jobs on a parallel computer. The grain size here can be tens of thousands of instructions. It is handled by the program loader and by the operating system. Time-sharing and space-sharing multiprocessors exploit this level of parallelism.
What Is Distributed Computing?
Distributed computing is the method of making multiple computers work
together to solve a common problem. It makes a computer network appear as a
powerful single computer that provides large-scale resources to deal with
complex challenges.
For example, distributed computing can encrypt large volumes of data; solve
physics and chemical equations with many variables; and render high-quality,
three-dimensional video animation. Distributed systems, distributed
programming, and distributed algorithms are some other terms that all refer to
distributed computing. 
What are the advantages of distributed computing?
Distributed systems bring many advantages over single system computing. The
following are some of them.
Scalability
Distributed systems can grow with your workload and requirements. You can
add new nodes, that is, more computing devices, to the distributed computing
network when they are needed.
Availability
Your distributed computing system will not crash if one of the computers goes
down. The design shows fault tolerance because it can continue to operate even
if individual computers fail.
Consistency
Computers in a distributed system share information and duplicate data between
them, but the system automatically manages data consistency across all the
different computers. Thus, you get the benefit of fault tolerance without
compromising data consistency.
Transparency
Distributed computing systems provide logical separation between the user and
the physical devices. You can interact with the system as if it is a single
computer without worrying about the setup and configuration of individual
machines. You can have different hardware, middleware, software, and
operating systems that work together to make your system function smoothly.
Efficiency
Distributed systems offer faster performance with optimum resource use of the
underlying hardware. As a result, you can manage any workload without
worrying about system failure due to volume spikes or underuse of expensive
hardware.
What are some distributed computing use cases?
Distributed computing is everywhere today. Mobile and web applications are
examples of distributed computing because several machines work together in
the backend for the application to give you the correct information. However,
when distributed systems are scaled up, they can solve more complex
challenges. Let’s explore some ways in which different industries use high-
performing distributed applications.
Healthcare and life sciences
Healthcare and life sciences use distributed computing to model and simulate
complex life science data. Image analysis, medical drug research, and gene
structure analysis all become faster with distributed systems. These are some
examples:

 Accelerate structure-based drug design by visualizing molecular models in


three dimensions.
 Reduce genomic data processing times to get early insights into cancer, cystic
fibrosis, and Alzheimer’s.
 Develop intelligent systems that help doctors diagnose patients by processing a
large volume of complex images like MRIs, X-rays, and CT scans.
Engineering research
Engineers can simulate complex physics and mechanics concepts on distributed
systems. They use this research to improve product design, build complex
structures, and design faster vehicles. Here are some examples:

 Computational fluid dynamics research studies the behavior of fluids and applies those concepts to aircraft design and car racing.
 Computer-aided engineering requires compute-intensive simulation tools to test
new plant engineering, electronics, and consumer goods. 
Financial services 
Financial services firms use distributed systems to perform high-speed
economic simulations that assess portfolio risks, predict market movements, and
support financial decision-making. They can create web applications that use
the power of distributed systems to do the following:

 Deliver low-cost, personalized premiums


 Use distributed databases to securely support a very high volume of financial
transactions.
 Authenticate users and protect customers from fraud 
Energy and environment 
Energy companies need to analyze large volumes of data to improve operations
and transition to sustainable and climate-friendly solutions. They use distributed
systems to analyze high-volume data streams from a vast network of sensors
and other intelligent devices. These are some tasks they might do:
 Streaming and consolidating seismic data for the structural design of power
plants
 Real-time oil well monitoring for proactive risk management
What are the types of distributed computing architecture?
In distributed computing, you design applications that can run on several
computers instead of on just one computer. You achieve this by designing the
software so that different computers perform different functions and
communicate to develop the final solution. There are four main types of
distributed architecture.
Client-server architecture
Client-server is the most common method of software organization on a
distributed system. The functions are separated into two categories: clients and
servers.
Clients
Clients have limited information and processing ability. Instead, they make
requests to the servers, which manage most of the data and other resources. You
can make requests to the client, and it communicates with the server on your
behalf.
Servers
Server computers synchronize and manage access to resources. They respond to
client requests with data or status information. Typically, one server can handle
requests from several machines.
Benefits and limitations
Client-server architecture gives the benefits of security and ease of ongoing
management. You have only to focus on securing the server computers.
Similarly, any changes to the database systems require changes to the server
only.
The limitation of client-server architecture is that servers can cause
communication bottlenecks, especially when several machines make requests
simultaneously.
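
A minimal client-server sketch over plain TCP sockets is given below (not from the original text; the port number and the simple echo protocol are arbitrary choices, and in a real deployment the client and server would run on different machines rather than in two threads of one program):

# One client, one server, one request/response exchange.
import socket
import time
from threading import Thread

HOST, PORT = "127.0.0.1", 5050

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()                        # wait for one client
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"echo: {request}".encode()) # server answers the request

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello server")                  # client makes the request
        print(cli.recv(1024).decode())                # and receives the response

if __name__ == "__main__":
    t = Thread(target=server, daemon=True)
    t.start()
    time.sleep(0.5)                                   # give the server time to start
    client()
    t.join()

Note how the client holds no data of its own; it only knows how to ask, which is why securing and updating the system largely reduces to securing and updating the server.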
Three-tier architecture
In three-tier distributed systems, client machines remain as the first tier you
access. Server machines, on the other hand, are further divided into two
categories:
Application servers
Application servers act as the middle tier for communication. They contain the
application logic or the core functions that you design the distributed system
for.
Database servers
Database servers act as the third tier to store and manage the data. They are
responsible for data retrieval and data consistency.
By dividing server responsibility, three-tier distributed systems reduce
communication bottlenecks and improve distributed computing performance.
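
Very roughly, the three tiers can be sketched in a single Python file (not from the original text; SQLite stands in for the database tier, ordinary function calls stand in for the network hops between tiers, and the table and function names are invented for illustration):

# Three tiers collapsed into one file for illustration only.
import sqlite3

# --- Database tier: stores and retrieves data ---------------------------------
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('Ada'), ('Grace')")

# --- Application tier: holds the business logic -------------------------------
def get_user(user_id):
    """Validates the request, then queries the database tier."""
    if user_id <= 0:
        raise ValueError("user_id must be positive")
    row = db.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return {"id": user_id, "name": row[0]} if row else None

# --- Client tier: only ever talks to the application tier ---------------------
print(get_user(1))   # the client never touches the database directly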
N-tier architecture
N-tier models include several different client-server systems communicating
with each other to solve the same problem. Most modern distributed systems
use an n-tier architecture with different enterprise applications working together
as one system behind the scenes.
Peer-to-peer architecture
Peer-to-peer distributed systems assign equal responsibilities to all networked
computers. There is no separation between client and server computers, and any
computer can perform all responsibilities. Peer-to-peer architecture has become
popular for content sharing, file streaming, and blockchain networks.

Key Components of a Distributed System


The three basic components of a distributed system include primary system
controller, system data store, and database. In a non-clustered environment,
optional components consist of user interfaces and secondary controllers.
Main Components of a Distributed System
1. Primary system controller
The primary system controller is the only controller in a distributed system and
keeps track of everything. It’s also responsible for controlling the dispatch and
management of server requests throughout the system. The executive and
mailbox services are installed automatically on the primary system controller. In
a non-clustered environment, optional components consist of a user interface
and secondary controllers.
2. Secondary controller
The secondary controller is a process controller or a communications controller.
It’s responsible for regulating the flow of server processing requests and
managing the system’s translation load. It also governs communication between
the system and VANs or trading partners.
3. User-interface client
The user interface client is an additional element in the system that provides
users with important system information. This is not a part of the clustered
environment, and it does not operate on the same machines as the controller. It
provides functions that are necessary to monitor and control the system.
4. System datastore
Each system has only one data store for all shared data. The data store is usually
on the disk vault, whether clustered or not. For non-clustered systems, this can
be on one machine or distributed across several devices, but all of these
computers must have access to this datastore.
5. Database
In a distributed system, a relational database stores all data. Once the data store
locates the data, it shares it among multiple users. Relational databases can be
found in all data systems and allow multiple users to use the same information
simultaneously.
Architecture Styles:
Architecture styles are proposed to show the different ways in which the computers of a distributed system can be arranged and how they interact.

1. Layered Architecture:

In layered architecture, the different components are organised into layers. Each layer communicates with its adjacent layer by sending requests and receiving responses. The layered architecture separates components into units and is an efficient way of communication. A layer cannot communicate directly with an arbitrary layer; it can only communicate with its neighbouring layer, which in turn passes the information on to the next layer, and so the process goes on.

In some cases, layered architecture allows cross-layer coordination, in which an adjacent layer can be skipped if doing so fulfils the request and gives better performance. Requests flow from top to bottom (downwards) and responses flow from bottom to top (upwards). The advantage of layered architecture is that each layer can be modified independently without affecting the whole system. This type of architecture is used in the Open Systems Interconnection (OSI) model.

The layers at the bottom offer a service to the layers on top. Because calls always follow a predetermined path, each layer is simple to replace or modify without affecting the architecture as a whole.
 

2. Object-Oriented Architecture:

In this type of architecture, components are treated as objects which convey information to each other. Object-oriented architecture contains an arrangement of loosely coupled objects. Objects interact with each other through method invocations, typically implemented by the Remote Procedure Call (RPC) mechanism or the Remote Method Invocation (RMI) mechanism. REST API calls, Web Services, and Java RMI are a few well-known examples of this architecture.

3. Data Centred Architecture:

Data-centred architecture is a type of architecture in which a common data space is present at the centre; it contains all the required data in one place, a shared data space. All the components are connected to this data space and they follow a publish/subscribe style of communication. Required data is then delivered from the central repository to the components. Distributed file systems, producer-consumer systems, and web-based data services are a few well-known examples.
For example, in a producer-consumer system the producer places data in the common data space and consumers request data from it.
 
 

4. Event-Based Architecture:

Event-based architecture is similar to data-centred architecture, except that events are present instead of data. Events are placed on a central event bus and delivered to the required components whenever needed. In this architecture, the entire communication is done through events: when an event occurs, the system as well as the receivers get notified, and data, URLs, and so on are transmitted through the events. The components of this system are loosely coupled, which is why it is easy to add, remove, and modify them. Heterogeneous components can communicate through the bus, and one significant benefit is that they can do so using any protocol, since an enterprise service bus (ESB) can handle any kind of incoming request and respond appropriately.
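
A toy in-process event bus illustrates the idea (not from the original text; the class, event name, and handlers are invented for illustration, and a real system would use a dedicated message broker or ESB). Components subscribe to named events and are notified when another component publishes one, without knowing anything about each other:

# Minimal publish/subscribe event bus.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        for handler in self._subscribers[event_name]:   # notify every subscriber
            handler(payload)

bus = EventBus()

# Two loosely coupled components: neither knows about the other;
# both only know the bus and the name of the event.
bus.subscribe("order_placed", lambda order: print("billing handles", order["id"]))
bus.subscribe("order_placed", lambda order: print("shipping handles", order["id"]))

bus.publish("order_placed", {"id": 42})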
What is Interprocess Communication?

Interprocess communication is the mechanism provided by the operating


system that allows processes to communicate with each other. This
communication could involve a process letting another process know that some
event has occurred or the transferring of data from one process to another.
[Figure: interprocess communication between two processes]

Synchronization in Interprocess Communication


Synchronization is a necessary part of interprocess communication. It is either
provided by the interprocess control mechanism or handled by the
communicating processes. Some of the methods to provide synchronization are
as follows −
 Semaphore
A semaphore is a variable that controls the access to a common resource
by multiple processes. The two types of semaphores are binary
semaphores and counting semaphores.
 Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the
critical section at a time. This is useful for synchronization and also
prevents race conditions.
 Barrier
A barrier does not allow individual processes to proceed until all the
processes reach it. Many parallel languages and collective routines
impose barriers.
 Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a
loop while checking if the lock is available or not. This is known as busy
waiting because the process is not doing any useful operation even though
it is active.
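
The sketch below (not from the original text; the worker logic is invented for illustration) shows three of these primitives in action using Python's threading module: a lock enforces mutual exclusion around a critical section, a counting semaphore limits how many threads run the guarded block at once, and a barrier holds every thread until all of them have arrived:

# Lock, semaphore, and barrier in one small example.
import threading

lock = threading.Lock()           # mutual exclusion for the critical section
sem = threading.Semaphore(2)      # at most two threads inside the guarded block
barrier = threading.Barrier(3)    # all three threads must arrive before any proceeds

counter = 0

def worker(name):
    global counter
    with sem:                     # counting semaphore limits concurrency
        with lock:                # only one thread updates the shared counter at a time
            counter += 1
    barrier.wait()                # wait here until every worker reaches this point
    print(name, "passed the barrier")

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print("counter =", counter)       # 3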
Approaches to Interprocess Communication
The different approaches to implementing interprocess communication are given as follows (a sketch of the pipe and message-queue approaches appears after this list) −
 Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to
create a two-way data channel between two processes. This uses standard
input and output methods. Pipes are used in all POSIX systems as well as
Windows operating systems.
 Socket
The socket is the endpoint for sending or receiving data in a network. This
is true for data sent between processes on the same computer or data sent
between different computers on the same network. Most of the operating
systems use sockets for interprocess communication.
 File
A file is a data record that may be stored on a disk or acquired on demand
by a file server. Multiple processes can access a file as required. All
operating systems use files for data storage.
 Signal
Signals are useful in interprocess communication in a limited way. They
are system messages that are sent from one process to another. Normally,
signals are not used to transfer data but are used for remote commands
between processes.
 Shared Memory
Shared memory is the memory that can be simultaneously accessed by
multiple processes. This is done so that the processes can communicate
with each other. All POSIX systems, as well as Windows operating
systems use shared memory.
 Message Queue
Multiple processes can read and write data to the message queue without
being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
[Figure: message queue and shared memory methods of interprocess communication]
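
Two of the approaches above, the pipe and the message queue, can be sketched with Python's multiprocessing module (not from the original text; the messages and process counts are arbitrary choices):

# A one-way pipe between two processes, then a message queue shared by several.
from multiprocessing import Process, Pipe, Queue

def pipe_sender(conn):
    conn.send("hello over a pipe")     # unidirectional use of the channel
    conn.close()

def queue_worker(q, n):
    q.put(f"result from worker {n}")   # writers need not know who will read

if __name__ == "__main__":
    # Pipe: two connected endpoints.
    parent_end, child_end = Pipe()
    p = Process(target=pipe_sender, args=(child_end,))
    p.start()
    print(parent_end.recv())
    p.join()

    # Message queue: messages wait in the queue until the recipient retrieves them.
    q = Queue()
    workers = [Process(target=queue_worker, args=(q, n)) for n in range(3)]
    for w in workers: w.start()
    for w in workers: w.join()
    for _ in range(3):
        print(q.get())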
Major Distributed Computing Technologies

Distributed Computing:
Distributed computing is a model in which components of a software system are
shared among multiple computers to improve efficiency and performance. It is a
field of computer science that studies distributed systems. In a distributed system, components are located on different networked computers.

[Figure: Distributed computing]


Three Major Distributed Computing Technologies:
There are three major distributed computing technologies, which are given below:

Mainframes:
Mainframes were the first example of large computing facilities which leverage
multiple processing units. They are powerful, highly reliable computers
specialized for large data movement and large I/O operations. Mainframes are
mostly used by large organizations for bulk data processing such as online
transactions, enterprise resource planning and other big data operations. They
are not considered as a distributed system; however they can perform big data
processing and operations due to their high computational power by multiple
processors. One of the most attractive features of mainframes was their high reliability: they were always on and capable of tolerating failures transparently, and components could be replaced without shutting the system down. Batch processing is the most important application of mainframes, although their popularity has declined in recent years.

[Figure: Mainframe]
Clusters:
Clusters started as a low-cost alternative to mainframes and supercomputers. As commodity hardware became cheap, ordinary machines could be connected by high-bandwidth networks and controlled by specific software tools that manage the messaging between them. Since the 1980s, clusters have become the standard technology for parallel and high-performance computing. Because of their low investment cost, research institutions, companies, and universities now widely use clusters. This technology contributed to the evolution of tools and frameworks for distributed computing such as Condor, PVM, and MPI. One of the attractive features of clusters is that they combine cheap machines into high computational power to solve a problem, and clusters are scalable. An example of a cluster is an Amazon EC2 cluster processing data with Hadoop, which has multiple nodes (machines) organised as master nodes and data nodes and can be scaled out when there is a large volume of data.
[Figure: Cluster]
Grids:
Grids appeared in the early 1990s as an evolution of cluster computing. Grid computing is analogous to the electric power grid: it is an approach to offering high computational power, storage services, and a variety of other services, which users consume in the same way they consume utilities such as power, gas, and water. Grids were initially developed as aggregations of geographically dispersed clusters connected over the Internet; the clusters belong to different organizations, and arrangements are made to share computational power among those organizations. A grid is a dynamic aggregation of heterogeneous computing nodes, and its scale can be nationwide or even worldwide. Several developments in technology made the diffusion of computing grids possible:

 Clusters became common resources.
 Clusters were often underutilized.
 Some problems require more computational power than a single cluster can provide.
 High-bandwidth, long-distance network connectivity became available.

[Figure: Grid]
Distributed computing technology has led to the development of cloud computing.
What is RPC
Remote Procedure Call (RPC) is a communication technology that is used by
one program to make a request to another program for utilizing its service on a
network without even knowing the network’s details. A function call or a
subroutine call are other terms for a procedure call.
It is based on the client-server concept. The client is the program that makes
the request, and the server is the program that gives the service. An RPC, like
a local procedure call, is based on the synchronous operation that requires the
requesting application to be stopped until the remote process returns its results.
Multiple RPCs can be executed concurrently by utilizing lightweight processes or threads that share the same address space. RPC programs frequently use an Interface Definition Language (IDL), a specification language for describing a software component's Application Programming Interface (API). In this situation, the IDL acts as an interface between machines at either end of the connection, which may be running different operating systems and programming languages.
Working Procedure for RPC Model:
 The caller places the procedure's arguments in a precise location when the procedure needs to be called.
 Control then passes to the body of the procedure, which consists of a series of instructions.
 The procedure body runs in a newly created execution environment that holds copies of the caller's arguments.
 When the operation completes, control returns to the calling point together with the result.
 In RPC, the called procedure is not within the caller's address space: the caller and callee are distinct processes with distinct address spaces, and the remote procedure has no direct access to the data and variables of the caller's environment.
 The caller and callee processes communicate and exchange information via a message-passing scheme.
 On the server side, the first task when a request message arrives is to extract the procedure's parameters, then compute the result, send a reply message, and finally wait for the next call message.
 Only one of the two processes is active at a certain point in time.
 The caller is not always required to be blocked.
 An asynchronous mechanism can be employed in RPC that permits the client to keep working even if the server has not responded yet.
 In order to handle incoming requests, the server may create a thread that frees the server to handle subsequent requests.
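
To make the request/reply flow above concrete, here is a minimal sketch using Python's built-in XML-RPC modules, which are just one concrete RPC implementation among many (not from the original text; the port number and the add procedure are arbitrary choices, and a production RPC system would add an IDL, error handling, and security):

# The server registers a procedure; the client calls it as if it were local.
import time
from threading import Thread
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(x, y):
    return x + y                                   # body of the remote procedure

def run_server():
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(add, "add")           # publish the procedure
    server.handle_request()                        # serve exactly one call, then return

if __name__ == "__main__":
    t = Thread(target=run_server)
    t.start()
    time.sleep(0.5)                                # give the server time to bind

    proxy = ServerProxy("http://127.0.0.1:8000/")
    print(proxy.add(2, 3))                         # looks like a local call, runs remotely -> 5

    t.join()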
Types of RPC:
Callback RPC: In Callback RPC, a Peer-to-Peer (P2P) paradigm is adopted between the participating processes, so a process can provide both client and server functions, which is quite helpful. Callback RPC's features include:
 It addresses the problems encountered with interactive applications that are handled remotely.
 It provides a server for clients to use.
 Due to the callback mechanism, the client process may be delayed.
 Deadlocks need to be managed in callbacks.
 It promotes a Peer-to-Peer (P2P) paradigm among the processes involved.
RPC for Broadcast: A client’s request that is broadcast all through the
network and handled by all servers that possess the method for handling that
request is known as a broadcast RPC. Broadcast RPC’s features include:
 You have an option of selecting whether or not the client’s request message
ought to be broadcast.
 It also gives you the option of declaring broadcast ports.
 It helps in diminishing physical network load.
Batch-mode RPC: Batch-mode RPC enables the client to queue separate RPC requests in a transmission buffer before sending them to the server in a single batch over the network. Batch-mode RPC's features include:
 It diminishes the overhead of requesting the server by sending them all at
once using the network.
 It is used for applications that require low call rates.
 It necessitates the use of a reliable transmission protocol.
Local Procedure Call vs Remote Procedure Call:
 Remote Procedure Calls have disjoint address space i.e. different address
space, unlike Local Procedure Calls.
 Remote Procedure Calls are more prone to failures due to possible
processor failure or communication issues of a network than Local
Procedure Calls.
 Because of the communication network, remote procedure calls take longer
than local procedure calls.
Advantages of Remote Procedure Calls:

 The technique of using procedure calls in RPC permits high-level


languages to provide communication between clients and servers.
 This method is like a local procedure call but with the difference that the
called procedure is executed on another process and a different computer.
 The thread-oriented model is also supported by RPC in addition to the
process model.
 The RPC mechanism is employed to conceal the core message passing
method.
 The amount of time and effort required to rewrite and develop the code is
minimal.
 The distributed and local environments can both benefit from remote
procedure calls.
 To increase performance, it omits several of the protocol layers.
 Abstraction is provided via RPC. For example, the user is unaware of the nature of the message passing that underlies the network communication.
 RPC empowers the utilization of applications in a distributed environment.
Disadvantages of Remote Procedure Calls:
 In Remote Procedure Calls, parameters are passed by value only; passing pointer values is not allowed.
 It involves a communication system with another machine and another
process, so this mechanism is extremely prone to failure.
 The RPC concept can be implemented in a variety of ways, hence there is
no standard.
 Due to the interaction-based nature, there is no flexibility for hardware
architecture in RPC.
 Due to a remote procedure call, the process’s cost has increased.

Distributed Object Framework


The acronym DOF (Distributed Object Framework) refers to a technology that
allows many different products, using many different standards, to work
together and share information effortlessly across many different networks (e.g.,
LAN, WAN, Intranet, Internet—any type of network or mesh). At its core, DOF
technology was designed to network embedded devices, whether simple or
complex. However, to support advanced networking functions for those devices,
DOF technology has also evolved into a server technology, appropriate for
services that expand the functionality of networked devices, whether those
services reside on your own physical servers, or you are taking advantage of
advanced cloud technology, such as Amazon Web Services. Ultimately, DOF
technology has the flexibility to enhance all products, from the simplest
resource-constrained device to the most powerful of computer networks.
How Does DOF Technology Work?
 DOF technology includes object-oriented networking. This means devices of
almost any type can be networked—securely, with little configuration. We
accomplish this through a few simple concepts and “tools.”
 Actor: An actor is a universal primitive that can send or receive messages, make
local decisions, create more actors, and determine how to respond to
consecutive messages.
 DOF Object: In the DOF Object Model, a DOF object can be anything that
provides the functionality defined in a DOF interface. All DOF objects are
providers.
 Requestor: A requestor is a node on a DOF network that requests functionality
from a provider.
 Provider: A provider is a node on a DOF network that provides functionality
defined in one or more DOF interfaces.
 DOF Interface: A DOF interface defines the items of functionality that a DOF
object must provide. Interface items of functionality are properties, methods,
events, and exceptions.
 Node: A node is simply a connection point: either redistributing communication
(data) or acting as a communication endpoint.
 Service: A service is a type of object that can provide centralized functionality
to other objects.
 Security: By leveraging common libraries and components, using DOF
technology makes security easier to understand, manage, and integrate into
products.
Consider this example: if a DOF-enabled device needs a service (or
information), it simply sends a request across the network. A provider will
respond, even if it exists in another node or has to cross networks. The two
devices can create a path for as long as they require, transferring secure
information across any distance (to them, it will seem as if they are sitting right
next to each other talking. They could even be in a noisy room and would be
able to hear each other perfectly). The required data flows between requestor
and provider, regardless of any other network activity. The inherent security of
DOF protects the communication, and its multiple protocols allow many
different types of communication to take place.
How Can DOF Technology Benefit My Business?
DOF technology is perfect for companies that produce different products in
different divisions and want to unify those products so they work together. DOF
technology can help eliminate obstacles to product unification by doing the
following:
 Reducing cost. When all products adopt DOF solutions, future implementations
are already prepared because current products are now more flexible and can
accept future product roll-outs easily.
 Supporting multiple standards. You can continue to work with existing industry
standards with the assurance that you’ll be ready to work with future standards.
DOF technology is not proprietary.
 Future-proofing products. As technology advances, so does DOF technology. It
was designed to keep pace with the latest product offerings from cutting-edge
industries. This benefits your business because existing installations can easily
adopt new installations without costly equipment replacements or upgrades.

Service-Oriented Architecture
Service-Oriented Architecture (SOA) is a stage in the evolution of application
development and/or integration. It defines a way to make software components
reusable using the interfaces. 
Formally, SOA is an architectural approach in which applications make use of
services available in the network. In this architecture, services are provided to
form applications, through a network call over the internet. It uses common
communication standards to speed up and streamline the service integrations
in applications. Each service in SOA is a complete business function in itself.
The services are published in such a way that it makes it easy for the
developers to assemble their apps using those services. Note that SOA is
different from microservice architecture.
 SOA allows users to combine a large number of facilities from existing
services to form applications.
 SOA encompasses a set of design principles that structure system
development and provide means for integrating components into a coherent
and decentralized system.
 SOA-based computing packages functionalities into a set of interoperable
services, which can be integrated into different software systems belonging
to separate business domains.
The different characteristics of SOA are as follows:
 Provides interoperability between the services.
 Provides methods for service encapsulation, service discovery, service composition, service reusability, and service integration.
 Facilitates QoS (Quality of Service) through service contracts based on Service Level Agreements (SLAs).
 Provides loosely coupled services.
 Provides location transparency with better scalability and availability.
 Ease of maintenance with reduced cost of application development and deployment.
There are two major roles within Service-oriented Architecture: 
1. Service provider: The service provider is the maintainer of the service and
the organization that makes available one or more services for others to
use. To advertise services, the provider can publish them in a registry,
together with a service contract that specifies the nature of the service, how
to use it, the requirements for the service, and the fees charged.
2. Service consumer: The service consumer can locate the service metadata
in the registry and develop the required client components to bind and use
the service.
 

Services might aggregate information and data retrieved from other services or create workflows of services to satisfy the request of a given service consumer. This practice is known as service orchestration. Another important interaction pattern is service choreography, which is the coordinated interaction of services without a single point of control.
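
A rough sketch of orchestration follows (not from the original text; the two "services" below are ordinary local functions standing in for separately deployed services that would normally be reached over the network, and all names and data are invented). The orchestrator coordinates them from a single point of control:

# Two stand-in services composed by one orchestrator.
def inventory_service(item_id):
    # hypothetical service; in a real SOA this would be a network call
    return {"item_id": item_id, "in_stock": True}

def pricing_service(item_id):
    # hypothetical service; in a real SOA this would be a network call
    return {"item_id": item_id, "price": 19.99}

def order_quote_orchestrator(item_id):
    """Coordinates the two services from a single point of control."""
    stock = inventory_service(item_id)
    if not stock["in_stock"]:
        return {"item_id": item_id, "available": False}
    price = pricing_service(item_id)
    return {"item_id": item_id, "available": True, "price": price["price"]}

print(order_quote_orchestrator("sku-123"))

In choreography, by contrast, there would be no order_quote_orchestrator: each service would react to the others' messages or events on its own.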
Components of SOA: 
 
Guiding Principles of SOA: 
1. Standardized service contract: Specified through one or more service
description documents.
2. Loose coupling: Services are designed as self-contained components,
maintain relationships that minimize dependencies on other services.
3. Abstraction: A service is completely defined by service contracts and
description documents. They hide their logic, which is encapsulated within
their implementation.
4. Reusability: Designed as components, services can be reused more
effectively, thus reducing development time and the associated costs.
5. Autonomy: Services have control over the logic they encapsulate and,
from a service consumer point of view, there is no need to know about their
implementation.
6. Discoverability: Services are defined by description documents that
constitute supplemental metadata through which they can be effectively
discovered. Service discovery provides an effective means for utilizing
third-party resources.
7. Composability: Using services as building blocks, sophisticated and
complex operations can be implemented. Service orchestration and
choreography provide a solid support for composing services and achieving
business goals.
Advantages of SOA: 
 Service reusability: In SOA, applications are made from existing services.
Thus, services can be reused to make many applications.
 Easy maintenance: As services are independent of each other they can be
updated and modified easily without affecting other services.
 Platform independent: SOA allows making a complex application by
combining services picked from different sources, independent of the
platform.
 Availability: SOA facilities are easily available to anyone on request.
 Reliability: SOA applications are more reliable because it is easier to debug small services than huge blocks of code.
 Scalability: Services can run on different servers within an environment, which increases scalability.
Disadvantages of SOA: 
 High overhead: Input parameters are validated whenever services interact, which decreases performance because it increases load and response time.
 High investment: A huge initial investment is required for SOA.
 Complex service management: When services interact, they exchange messages to perform tasks; the number of messages may run into the millions, and handling such a large number of messages becomes a cumbersome task.
Virtualization in Cloud Computing
Virtualization is the "creation of a virtual (rather than actual) version of
something, such as a server, a desktop, a storage device, an operating system or
network resources".

In other words, virtualization is a technique which allows a single physical instance of a resource or an application to be shared among multiple customers and organizations. It does so by assigning a logical name to physical storage and providing a pointer to that physical resource when it is demanded.

Characteristics of Virtualization

1. Increased Security – 
The ability to control the execution of a guest program in a completely
transparent manner opens new possibilities for delivering a secure, controlled
execution environment. All the operations of the guest programs are generally
performed against the virtual machine, which then translates and applies them to
the host programs. 
A virtual machine manager can control and filter the activity of the guest
programs, thus preventing some harmful operations from being performed.
Resources exposed by the host can then be hidden or simply protected from the
guest. Increased security is a requirement when dealing with untrusted code. 
For example, untrusted code can be analyzed in a Cuckoo sandbox environment.
The term sandbox identifies an isolated execution environment where
instructions can be filtered and blocked before being translated and executed in
the real execution environment. 
2. Managed Execution – 
Virtualization allows several functions to be enabled on the execution environment; in particular, sharing, aggregation, emulation, and isolation are the most relevant features.
[Figure: Functions enabled by managed execution]
3. Sharing – 
Virtualization allows the creation of a separate computing environment within
the same host. This basic feature is used to reduce the number of active servers
and limit power consumption. 
4. Aggregation – 
It is possible to share physical resources among several guests, but virtualization
also allows aggregation, which is the opposite process. A group of separate
hosts can be tied together and represented to guests as a single virtual host. This
functionality is implemented with cluster management software, which
harnesses the physical resources of a homogeneous group of machines and
represents them as a single resource. 
5. Emulation – 
Guest programs are executed within an environment that is controlled by the
virtualization layer, which ultimately is a program. Also, a completely different
environment with respect to the host can be emulated, thus allowing the
execution of guest programs requiring specific characteristics that are not
present in the physical host. 
6. Isolation – 
Virtualization allows providing guests—whether they are operating systems,
applications, or other entities—with a completely separate environment, in
which they are executed. The guest program performs its activity by interacting
with an abstraction layer, which provides access to the underlying resources.
The virtual machine can filter the activity of the guest and prevent harmful
operations against the host. 
Besides these characteristics, another important capability enabled by
virtualization is performance tuning. This feature is a reality at present, given
the considerable advances in hardware and software supporting virtualization. It
becomes easier to control the performance of the guest by finely tuning the
properties of the resources exposed through the virtual environment. This
capability provides a means to effectively implement a quality-of-service (QoS)
infrastructure. 
7. Portability – 
The concept of portability applies in different ways according to the specific
type of virtualization considered.
In the case of a hardware virtualization solution, the guest is packaged into a
virtual image that, in most cases, can be safely moved and executed on top of
different virtual machines. 
Taxonomy of virtualization

 Virtual machines are broadly classified into two types: System Virtual
Machines (also known as Virtual Machines) and Process Virtual
Machines (also known as Application Virtual Machines). The
classification is based on their usage and degree of similarity to the linked
physical machine. The system VM mimics the whole system hardware
stack and allows for the execution of the whole operating system Process
VM, on the other hand, provides a layer to an operating system that is
used to replicate the programming environment for the execution of
specific processes. 

 A Process Virtual Machine, also known as an application virtual machine,


operates as a regular program within a host OS and supports a single
process. It is formed when the process begins and deleted when it
terminates. Its goal is to create a platform-independent programming
environment that abstracts away features of the underlying hardware or
operating system, allowing a program to run on any platform. With
Linux, for example, Wine software aids in the execution of Windows
applications.

 A System Virtual Machine, such as VirtualBox, offers a full system


platform that allows the operation of a whole operating system (OS).

 Virtual Machines are used to distribute and designate suitable system


resources to software (which might be several operating systems or an
application), and the software is restricted to the resources provided by
the VM. The actual software layer that allows virtualization is the Virtual
Machine Monitor (also known as Hypervisor). Hypervisors are classified
into two groups based on their relationship to the underlying hardware.
Native VM is a hypervisor that takes direct control of the underlying
hardware, whereas hosted VM is a different software layer that runs
within the operating system and so has an indirect link with the
underlying hardware.

 The system VM abstracts the Instruction Set Architecture, which differs


slightly from that of the actual hardware platform. The primary benefits
of system VM include consolidation (it allows multiple operating systems
to coexist on a single computer system with strong isolation from each
other), application provisioning, maintenance, high availability, and
disaster recovery, as well as sandboxing, faster reboot, and improved
debugging access.

 The process VM supports conventional application execution inside the
underlying operating system for a single process. To support the
execution of numerous applications associated with numerous processes,
we can create numerous instances of the process VM. The process VM is
created when the process starts and is destroyed when the process
terminates. The primary goal of a process VM is to provide platform
independence (in terms of the development environment), which means
that applications can be executed in the same way on any underlying
hardware and software platform. A process VM, as opposed to a system
VM, abstracts a high-level programming language. Although a process
VM is built around an interpreter, it can achieve performance comparable
to compiled programming languages by using a just-in-time compilation
mechanism.

 The Java Virtual Machine (JVM) and the Common Language Runtime
(CLR) are two popular examples of process VMs, used to virtualize the
Java programming language and the .NET Framework programming
environment, respectively.
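As a side note, the CPython interpreter is itself a small process VM: Python source is compiled to platform-independent bytecode, which the interpreter then executes on whatever host it runs on. A minimal sketch using the standard dis module (the function name add is only an illustration) makes that bytecode visible:

```python
import dis

def add(a, b):
    # Python source is compiled to platform-independent bytecode, which the
    # interpreter's virtual machine executes on any host.
    return a + b

# Print the bytecode instructions the process VM will execute.
dis.dis(add)
```

The same bytecode is produced regardless of the underlying hardware platform, which is exactly the platform independence a process VM is meant to provide.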

Execution Virtualization
When an execution environment is virtualized at different levels of the
computation stack, a reference model is needed that defines the interfaces
between the levels of abstraction, where each level of abstraction hides the
details of the implementation beneath it.
This suggests that virtualization techniques can replace any one layer and
intercept the calls directed to it. A clear separation between the layers therefore
simplifies their implementation, which only needs to emulate the exposed
interfaces and interact properly with the underlying layer.
At the base layer, the model of the hardware is expressed in terms of an
architecture, i.e., the Instruction Set Architecture (ISA).
Figure – A machine reference model
Instruction Set Architecture (ISA) defines the instruction set of the
processor, registers, memory, and interrupt management. It is the interface
between hardware and software, and it is of concern to operating system (OS)
developers (system ISA) and to developers of applications that directly manage
the core hardware (user ISA). The operating system layer is separated from the
applications and libraries it manages by the application binary interface (ABI).
Application Binary Interface (ABI) covers details such as low-level data types
and calling conventions, and it also defines the format of executable programs.
System calls are defined at this level. This type of interface enables the
portability of applications and libraries across operating systems that
implement the same ABI. The highest level of abstraction is represented by the
application programming interface (API), which interfaces applications to
libraries and/or the underlying OS. For any operation performed at the
application-level API, the ABI and the ISA are responsible for making it
happen. Mainly, the CPU runs at two
privilege levels:
1. User Mode: In this mode, memory access is restricted to a certain range,
and direct access to peripherals is denied.
2. Kernel Mode: In this mode, the CPU can execute instructions that manage
memory and how it is accessed, as well as instructions that access
peripherals such as disks and network cards. The CPU switches
automatically from one running program to another. This layered approach
simplifies the development and extension of computing systems, and it also
simplifies multitasking and the coexistence of multiple executing
environments.
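For instance, even a simple I/O request made through a high-level API ultimately resolves to a system call defined at the ABI level, which switches the CPU into kernel mode and back. A small Python illustration (file descriptor 1 is standard output):

```python
import os

# os.write on file descriptor 1 (standard output) ultimately issues the
# write() system call defined at the ABI level; the CPU switches to kernel
# mode to perform the I/O, then returns to user mode with the result.
bytes_written = os.write(1, b"hello from user mode\n")
print("system call returned:", bytes_written)
```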
A first distinction can be made between privileged and non-privileged
instructions. Non-privileged instructions are those that can be used without
interfering with other tasks, because they do not access shared resources;
examples include all fixed-point, floating-point, and arithmetic instructions.
Privileged instructions are those that are executed under specific restrictions
and are mostly used for sensitive operations (which expose behavior-sensitive
state or modify control-sensitive state).

Figure – Security Rings and Privileged Mode


In a hypervisor-managed environment, the code of the guest OS is expected to
run in user mode to prevent it from directly accessing the status of the
underlying OS and hardware. When sensitive instructions are not a subset of
the privileged instructions, however, it is no longer possible to completely
isolate the guest OS, because such instructions can execute without being
trapped by the hypervisor.
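To make the trap-and-emulate idea concrete, the following is a simplified, hypothetical sketch (none of these class or operation names belong to a real hypervisor): privileged operations attempted by a guest running in non-privileged mode are modeled as traps that a VMM handler emulates before returning control to the guest.

```python
# Minimal conceptual sketch of trap-and-emulate (hypothetical names, not a real API).

class TrapToVMM(Exception):
    """Raised when the guest attempts a privileged operation."""
    def __init__(self, op, args):
        self.op, self.args = op, args

class GuestOS:
    def execute(self, op, *args):
        if op in {"set_page_table", "disable_interrupts", "io_write"}:
            # Sensitive/privileged operation: in user mode this traps.
            raise TrapToVMM(op, args)
        return f"guest executed non-privileged op: {op}"

class Hypervisor:
    def run(self, guest, op, *args):
        try:
            return guest.execute(op, *args)
        except TrapToVMM as trap:
            # The VMM emulates the operation on the guest's behalf,
            # then resumes the guest.
            return f"VMM emulated privileged op: {trap.op}{trap.args}"

vmm = Hypervisor()
print(vmm.run(GuestOS(), "add", 1, 2))
print(vmm.run(GuestOS(), "disable_interrupts"))
```

A sensitive instruction that did not trap would bypass the hypervisor entirely, which is precisely why untrapped sensitive instructions break isolation.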

Types of Virtualization:

1. Application Virtualization: 
Application virtualization helps a user to have remote access to an application
from a server. The server stores all personal information and other
characteristics of the application, yet the application can still run on a local
workstation through the internet. An example would be a user who needs to
run two different versions of the same software. Technologies that use
application virtualization include hosted applications and packaged
applications. 
2. Network Virtualization: 
This is the ability to run multiple virtual networks, each with a separate
control and data plane, co-existing on top of one physical network. They can
be managed by individual parties that are potentially not trusted by one
another. Network virtualization provides the facility to create and provision
virtual networks (logical switches, routers, firewalls, load balancers, Virtual
Private Networks (VPNs), and workload security) within days or even weeks. 
3. Desktop Virtualization: 
Desktop virtualization allows the users' OS to be stored remotely on a server
in the data centre. It allows users to access their desktops virtually, from any
location, on a different machine. Users who want specific operating systems
other than Windows Server will need to have a virtual desktop. The main
benefits of desktop virtualization are user mobility, portability, and easy
management of software installation, updates, and patches. 
4. Storage Virtualization: 
Storage virtualization is an array of servers that are managed by a virtual
storage system. The servers aren't aware of exactly where their data is stored
and instead function more like worker bees in a hive. It allows storage from
multiple sources to be managed and utilized as a single repository. Storage
virtualization software maintains smooth operations, consistent performance,
and a continuous suite of advanced functions despite changes, breakdowns,
and differences in the underlying equipment. 
5. Server Virtualization: 
This is a kind of virtualization in which the masking of server resources takes
place. Here, the central (physical) server is divided into multiple different
virtual servers by changing the identity number and processors, so each
system can run its own operating system in an isolated manner, while each
sub-server knows the identity of the central server. This increases
performance and reduces operating cost by dividing the main server's
resources among the sub-servers. It is beneficial for virtual migration,
reducing energy consumption, reducing infrastructure cost, etc.
6. Data Virtualization: 
This is the kind of virtualization in which data is collected from various
sources and managed in a single place, without users needing to know
technical details such as how the data is collected, stored, and formatted. The
data is then arranged logically so that its virtual view can be accessed
remotely by interested people, stakeholders, and users through various cloud
services. Many large companies, such as Oracle, IBM, AtScale, and CData,
provide data virtualization services. A rough sketch of the idea follows below.
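As an illustration of the idea only (the sources and field names below are made up), a data virtualization layer exposes a single logical view over differently formatted sources without physically moving the data:

```python
# Hypothetical sketch: one logical view over two differently formatted sources.

sql_like_source = [
    {"cust_id": 1, "cust_name": "Asha"},
    {"cust_id": 2, "cust_name": "Ravi"},
]
csv_like_source = [
    "3,Meena",
    "4,Arjun",
]

def virtual_customer_view():
    """Yield records in one unified shape, hiding each source's format."""
    for row in sql_like_source:
        yield {"id": row["cust_id"], "name": row["cust_name"]}
    for line in csv_like_source:
        cid, name = line.split(",")
        yield {"id": int(cid), "name": name}

for record in virtual_customer_view():
    print(record)
```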
 
Cloud Computing and Virtualization
1. Cloud Computing: 
Cloud computing is a client-server computing architecture. In cloud
computing, resources are used in a centralized pattern, and the service is
highly accessible. Cloud computing is a pay-per-use business model: users
pay for what they use. 

 
2. Virtualization: 
Virtualization is the foundation of cloud computing. It is the technology that
enables multiple virtual resources to be generated from a single physical
device or system. The role of the hypervisor is essential here: it interfaces
directly with the hardware and creates several virtual machines from it. These
virtual machines operate distinctly and independently and do not interfere
with one another. For disaster recovery, virtualization relies on a single
peripheral device, since a single piece of dedicated hardware can do the job
well. 
Virtualization exists in the different classes described in the types of
virtualization above.
 

Let's see the differences between cloud computing and virtualization:

S.NO | Cloud Computing | Virtualization
1. | Cloud computing is used to provide pools of automated resources that can be accessed on demand. | Virtualization is used to create various simulated environments from a physical hardware system.
2. | Cloud computing setup is tedious and complicated. | Virtualization setup is simple compared to cloud computing.
3. | Cloud computing is highly scalable. | Virtualization is less scalable than cloud computing.
4. | Cloud computing is very flexible. | Virtualization is less flexible than cloud computing.
5. | For disaster recovery, cloud computing relies on multiple machines. | Virtualization relies on a single peripheral device.
6. | In cloud computing, the workload is stateless. | In virtualization, the workload is stateful.
7. | The total cost of cloud computing is higher than that of virtualization. | The total cost of virtualization is lower than that of cloud computing.
8. | Cloud computing requires many pieces of dedicated hardware. | In virtualization, a single dedicated hardware unit can do a great job.
9. | Cloud computing provides unlimited storage space. | In virtualization, storage space depends on the physical server capacity.
10. | Cloud computing is of two types: public cloud and private cloud. | Virtualization is of two types: hardware virtualization and application virtualization.

Pros and cons of Virtualization in Cloud Computing


Virtualization is the creation of a virtual version of something such as a server,
desktop, storage device, or operating system.
Thus, virtualization is a technique that allows us to share a single physical
instance of a resource or an application among multiple customers and
organizations. Virtualization often creates many virtual resources from one
physical resource.
 Host Machine –
The machine on which the virtual machines are created is known as the Host
Machine. 
 Guest Machine – 
The virtual machines that are created on the Host Machine are called Guest
Machines. 
Pros of Virtualization in Cloud Computing:
 Utilization of Hardware Efficiently –
With the help of virtualization, hardware is used efficiently by both the user
and the cloud service provider. The user needs less physical hardware, which
lowers cost. From the service provider's point of view, the hardware is
virtualized using hardware virtualization, which reduces the amount of
hardware that has to be provisioned for users. Before virtualization,
companies and organizations had to set up their own servers, which required
extra space, engineers to monitor performance, and additional hardware cost.
With virtualization, all these limitations are removed by cloud vendors, who
provide the equivalent of physical services without the customer setting up
any physical hardware system.
 Availability Increases with Virtualization –
One of the main benefits of virtualization is that it provides advanced
features that allow virtual instances to be available at all times. It also has
the capability to move a virtual instance from one virtual server to another,
which is a very tedious and risky task in a server-based system. During the
migration of data from one server to another, its safety is ensured. Also,
information can be accessed from any location, at any time, from any device.
 Disaster Recovery Is Efficient and Easy –
With the help of virtualization, data recovery, backup, and duplication
become very easy. In the traditional approach, if a server system is damaged
by some disaster, the chances of recovering the data are low. With
virtualization tools, real-time data backup, recovery, and mirroring become
easy tasks and provide much stronger assurance against data loss.
 Virtualization Saves Energy –
Virtualization helps save energy because moving from physical servers to
virtual servers reduces the number of servers, which in turn lowers monthly
power and cooling costs and saves money. Lower cooling requirements also
mean less carbon production by the devices, resulting in a cleaner, less
polluted environment.
 Quick and Easy Setup –
With traditional methods, setting up physical systems and servers is very
time-consuming: the hardware must be purchased in bulk, shipped, set up,
and then the required software installed, all of which takes a long time. With
virtualization, the entire process is completed in much less time, resulting in
a more productive setup.
 Cloud Migration Becomes Easy –
Many companies that have already invested heavily in their own servers
hesitate to shift to the cloud. However, it is usually more cost-effective to
move to cloud services, because all the data present on their servers can
easily be migrated to a cloud server, saving on maintenance charges, power
consumption, cooling costs, the cost of server maintenance engineers, and so
on.
Cons of Virtualization:
 Data Can Be at Risk –
Working with virtual instances on shared resources means that our data is
hosted on third-party infrastructure, which puts it in a more vulnerable
position. A hacker can attack the data or attempt unauthorized access;
without a proper security solution, the data is under threat.
 Learning New Infrastructure –
As organizations shift from servers to the cloud, they require skilled staff
who can work with the cloud easily. They must either hire new IT staff with
the relevant skills or train existing staff, which increases the company's
costs.
 High Initial Investment –
While virtualization does reduce long-term costs for companies, the cloud
can require a high initial investment. It provides numerous services that may
not be required, and when an inexperienced organization tries to set up in the
cloud, it may purchase unnecessary services.
Virtualization | Xen: Paravirtualization

Xen is an open-source hypervisor based on paravirtualization and is the most
popular application of paravirtualization. Xen has also been extended to be
compatible with full virtualization using hardware-assisted virtualization. It
enables high-performance execution of guest operating systems. This is made
possible by eliminating the performance loss incurred while executing
instructions that require significant handling, and by modifying the portions of
the guest operating system executed by Xen with reference to the execution of
such instructions. Xen therefore primarily supports x86, which is the most
widely used architecture on commodity machines and servers. 
 

Figure – Xen Architecture and Guest OS Management 


The above figure describes the Xen architecture and its mapping onto the
classic x86 privilege model. A Xen-based system is managed by the Xen
hypervisor, which is executed in the most privileged mode and controls the
access of the guest operating systems to the underlying hardware. Guest
operating systems run within domains, which represent virtual machine
instances. 
In addition, specific control software, which has privileged access to the host
and manages all the other guest operating systems, runs in a special domain
called Domain 0. This is the only domain loaded once the virtual machine
manager has fully booted, and it hosts an HTTP server that serves requests for
virtual machine creation, configuration, and termination. This component
establishes the primary version of a shared virtual machine manager (VMM),
which is a necessary part of a cloud computing system delivering
Infrastructure-as-a-Service (IaaS) solutions. 
Most x86 implementations support four distinct security levels, termed rings,
i.e., 

Ring 0,
Ring 1,
Ring 2,
Ring 3
Here, Ring 0 represents the level with the most privilege and Ring 3 the level
with the least privilege. Almost all of the most frequently used operating
systems, except for OS/2, use only two levels: Ring 0 for kernel code and
Ring 3 for user applications and non-privileged OS programs. This gives Xen
the opportunity to implement paravirtualization. It enables Xen to keep the
Application Binary Interface (ABI) unchanged, thus allowing a simple shift
to Xen-virtualized solutions from an application perspective. 
Due to the structure of the x86 instruction set, some instructions allow code
executing in Ring 3 to switch to Ring 0 (kernel mode). Such an operation is
performed at the hardware level, and hence within a virtualized environment
it will result in a trap or a silent fault, thereby preventing the normal operation
of the guest OS, which is now running in Ring 1. 
This situation is typically triggered by a subset of the system calls. To
eliminate it, the operating system implementation requires modification, and
all the sensitive system calls need to be re-implemented with hypercalls.
Hypercalls are the specific calls exposed by the virtual machine (VM)
interface of Xen; by using them, the Xen hypervisor is able to catch the
execution of all the sensitive instructions, manage them, and return control to
the guest OS with the help of a supplied handler. 
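As a rough conceptual sketch (hypothetical names, not Xen's actual interface), the paravirtualized flow differs from trap-and-emulate in that the guest kernel is modified to invoke the hypervisor explicitly rather than issuing the sensitive instruction and relying on a trap:

```python
# Hypothetical sketch of the paravirtualized control flow (not Xen's real API).

class XenLikeHypervisor:
    def hypercall(self, name, *args):
        # The hypervisor validates and performs the sensitive operation
        # on behalf of the guest, then returns control to it.
        return f"hypervisor handled {name}{args}"

class ParavirtualizedGuestKernel:
    def __init__(self, hv):
        self.hv = hv

    def update_page_table(self, entry, value):
        # Modified guest code: instead of executing a sensitive instruction
        # (which would fault when running in Ring 1), it calls the hypervisor
        # directly through a hypercall.
        return self.hv.hypercall("update_page_table", entry, value)

guest = ParavirtualizedGuestKernel(XenLikeHypervisor())
print(guest.update_page_table(0x10, 0xABCD))
```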
Paravirtualization requires the OS codebase to be modified, and hence not
every operating system can be used as a guest in a Xen-based environment.
This limitation applies where hardware-assisted virtualization is not
available, since hardware assistance allows the hypervisor to run in a mode
more privileged than Ring 0 while the guest OS remains in Ring 0. Hence,
Xen shows some limitations in terms of legacy hardware and legacy
operating systems. 
In fact, such systems cannot be modified to run safely in Ring 1, since their
codebase is not accessible, and at the same time the underlying hardware
provides no support for executing them in a mode more privileged than
Ring 0. Open-source operating systems like Linux can easily be modified,
since their code is openly available, and Xen delivers full support for their
virtualization, whereas Windows components are generally not compatible
with Xen unless hardware-assisted virtualization is available. As new releases
of operating systems are designed to be virtualized and new hardware
supports x86 virtualization, the problem is gradually being resolved. 
Pros:
 a) Xen Server is developed on top of the open-source Xen hypervisor, and it
uses a combination of hardware-assisted virtualization and
paravirtualization. This tightly coupled collaboration between the operating
system and the virtualized platform enables the development of a lighter
and more flexible hypervisor that delivers its functionality in an optimized
manner.
 b) Xen supports the balancing of large workloads efficiently across CPU,
memory, disk input-output, and network input-output. It offers two modes
to handle these workloads: one for performance enhancement and one for
handling data density.
 c) It also comes equipped with a special storage feature called Citrix
StorageLink, which allows a system administrator to use the features of
storage arrays from vendors such as HP, NetApp, and Dell EqualLogic.
 d) It also supports multiple processors, live migration from one machine to
another, physical-server-to-virtual-machine and virtual-server-to-virtual-
machine conversion tools, centralized multi-server management, and real-
time performance monitoring on Windows and Linux.
Cons:
 a) Xen is more reliable on Linux than on Windows.
 b) Xen relies on third-party components to manage resources such as drivers,
storage, backup, recovery, and fault tolerance.
 c) Xen deployment can become a burden on your Linux kernel system as
time passes.
 d) Xen may sometimes increase the load on your resources through high
input-output rates and can cause starvation of other VMs.
VMware: Full Virtualization
 
In full virtualization, the underlying hardware is replicated and made available
to the guest operating system, which runs unaware of such abstraction and
requires no modification. VMware's technology is based on the key concept of
full virtualization. Either in the desktop environment, by means of a type-II
hypervisor, or in the server environment, through a type-I hypervisor, VMware
implements full virtualization. In both cases, full virtualization is made possible
through the direct execution of non-sensitive instructions and the binary
translation of sensitive instructions or hardware traps, thus enabling the
virtualization of architectures such as x86. 
Full Virtualization and Binary Translation – 
VMware is widely used because it can virtualize x86 architectures, running
guest operating systems unmodified on top of its hypervisors. With the
introduction of hardware-assisted virtualization, full virtualization can be
achieved with hardware support; earlier, however, unmodified x86 guest
operating systems could be executed in a virtualized environment only by
means of dynamic binary translation. 
Since the set of sensitive instructions is not a subset of the privileged
instructions, the x86 architecture does not satisfy the first theorem of
virtualization. Because of this, such instructions behave differently when they
are not run in Ring 0, which is the normal case in a virtualized environment
where the guest OS runs in Ring 1. In general, a trap is generated, and the way
it is managed differentiates the solutions by which virtualization is applied to
x86. In dynamic binary translation, the trap triggers the translation of the
interrupting or offending instructions into an equivalent set of instructions that
achieves the same goal without generating exceptions. In addition, to improve
performance, the translated set of instructions is cached, so the translation is no
longer necessary on subsequent encounters of the same instructions. The figure
below demonstrates this. 
 

Figure – Full Virtualization Reference Model 
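To illustrate only the caching aspect (this is a toy sketch with made-up "instructions", not how a real binary translator works internally), translated code can be memoized so that repeated sensitive instructions skip re-translation:

```python
# Toy sketch of a translation cache for sensitive instructions (illustrative only).

translation_cache = {}

def translate(instruction):
    """Pretend to rewrite a sensitive instruction into a safe equivalent."""
    return f"safe({instruction})"

def execute(instruction, sensitive_set):
    if instruction not in sensitive_set:
        return f"direct execution of {instruction}"           # runs natively
    if instruction not in translation_cache:
        translation_cache[instruction] = translate(instruction)  # translate once
    return f"run cached {translation_cache[instruction]}"        # reuse thereafter

sensitive = {"POPF", "SGDT"}
for instr in ["ADD", "POPF", "POPF", "SGDT"]:
    print(execute(instr, sensitive))
```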


The major benefit of this approach is that guests can run unmodified in a
virtualized environment, which is an important feature for operating systems
whose source code is not available. Binary translation is a portable approach to
full virtualization. However, translating instructions at runtime introduces an
additional overhead that is not present in other approaches such as
paravirtualization or hardware-assisted virtualization. On the other hand, binary
translation is applied only to a subset of the instruction set, while the rest is
managed through direct execution on the underlying hardware. This somewhat
reduces the performance impact of binary translation. 
Advantages of Binary Translation – 
 
1. This kind of virtualization delivers the best isolation and security for virtual
machines.
2. Multiple guest operating systems, truly isolated from one another, can
execute concurrently on the same hardware.
3. It is the only implementation that requires no hardware assistance or
operating system assistance to virtualize sensitive and privileged instructions. 
 
Disadvantages of Binary Translation – 
 
1. It is time-consuming at run time.
2. It incurs a large performance overhead.
3. It employs a code cache to store the most frequently used translated
instructions to improve performance, but this increases memory usage along
with hardware cost.
4. The performance of full virtualization on the x86 architecture is typically 80
to 95 percent that of the host machine. 
 
What Is Hyper-V Software and Hardware Virtualization Technology?
Hyper-V is a virtualization software created by Microsoft. In this Hyper-V
virtualization tutorial, we’ll look at some of the major Hyper-V virtualization
concepts. In addition to covering the basics of Hyper-V virtualization and
hardware virtualization technology, we’ll consider the benefits of virtualization,
how virtual machines are managed, how to use Hyper-V virtualization
technology, and Hyper-V considerations you should be aware of. This should
provide you with a robust understanding of the nuances of Hyper-V
virtualization technology.
I’ll also give my recommendation for the best Hyper-V virtualization software
on the market: SolarWinds® Virtualization Manager (VMAN). This tool
outshines competitors by combining an impressive range of advanced
functionalities with optimal user-friendliness.
What Is Hyper-V and Hardware Virtualization Technology?
Virtualization Benefits
Virtualization Security
How to Use Hyper-V Virtualization Technology
Hyper-V Considerations
Managing Virtual Machines
Best Virtualization Software: SolarWinds Virtualization Manager
Getting Started With Hyper-V Virtualization Software
What Is Hyper-V and Hardware Virtualization Technology?

Hyper-V is virtualization software from Microsoft, initially released in 2008
with Windows Server 2008. It's built into Windows and is widely recognized
as a major competitor to VMware Fusion and Oracle VM VirtualBox. Hyper-V
can be used to virtualize hardware components and operating systems.
Additionally, it's not limited to the user's device and can be used to facilitate
server virtualization.
There are three versions of Hyper-V available:

1. Hyper-V Server: a stand-alone product created for managing dedicated
and virtual server instances
2. Hyper-V for Windows 10: a product you can run on your laptop or
desktop
3. Hyper-V for Windows Server: an add-on for Windows Server
Hardware virtualization involves creating a virtual version of an operating
system or computer as opposed to a physical version. Hardware virtualization
technology was developed by AMD and Intel for their server platforms and was
intended to improve processor performance. This technology was also designed
to overcome basic virtualization challenges, like translating instructions and
memory addresses.
Hardware virtualization is also referred to as hardware-assisted virtualization. It
involves embedding virtual machine software into a server’s hardware
component. This software goes by several names, but “virtual machine monitor”
and “hypervisor” are the ones most commonly used.
Hardware virtualization technology is constantly evolving and continues to gain
popularity in server platforms. The purpose of this technology is to consolidate
numerous small, physical servers into one large physical server, thereby
allowing the processor to be used more effectively. The physical server’s
operating system gets converted into a distinct operating system running within
the virtual machine.

Virtualization Benefits

There are numerous advantages associated with hardware virtualization
technology, as managing and controlling virtual machines is easier than
managing and controlling a physical server.
Virtualizing resources allows administrators to pool their physical resources.
This means hardware can be fully commoditized. In other words, legacy
infrastructure that supports critical applications but is costly to maintain can be
virtualized to achieve optimal usage.
With virtualization technology, administrators don't have to wait for every
application to be certified on new hardware. They can simply set up the
environment and migrate the virtual machine, and everything works exactly as
before. When you conduct regression tests, you can create or copy a testbed,
eliminating the need for redundant development servers or testing hardware.
One of the most notable benefits of virtualization is that, with the right
training, these environments can be continuously optimized to achieve greater
density and more extensive capabilities.
Virtualization Security

If you're new to the game, you may be wondering whether hardware
virtualization technology is secure. It's widely accepted that security should be
integrated and continuous. Fortunately, virtualization provides a solution to
many common security issues.
In environments where security policies require systems to be separated by a
firewall, two systems can securely reside within a single physical box. In a
development environment, every developer can have their own sandbox, and
these sandboxes are immune to other developers' runaway or rogue code.

How to Use Hyper-V Virtualization Technology

Here's a step-by-step guide to activating and using Hyper-V virtualization
technology.
Activating Hyper-V Virtualization
1. Go to the control panel and select "uninstall a program."
2. Click on "turn Windows features on or off" in the sidebar on the left side
of the screen.
3. Locate "Hyper-V management tools" and "Hyper-V platform" under the
subtitle "Hyper-V." Activate these options by putting a tick in each box.
4. You'll be asked to restart your computer so the appropriate changes can
be made. Accept and allow your computer to restart.
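If you prefer to script this step instead of using the GUI, the same feature can be enabled through PowerShell's documented Enable-WindowsOptionalFeature cmdlet. A minimal Python wrapper is sketched below for illustration only; it assumes an elevated (administrator) session on Windows and a reboot afterwards, as in the GUI steps above.

```python
# Illustrative only: enable the Hyper-V optional feature by shelling out to
# PowerShell's documented Enable-WindowsOptionalFeature cmdlet.
# Requires an elevated (administrator) session on Windows; a reboot is
# typically required afterwards, as in the GUI steps above.
import subprocess

def enable_hyper_v():
    cmd = [
        "powershell", "-Command",
        "Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    enable_hyper_v()
```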
Using Hyper-V Virtualization

1. Launch the Hyper-V Manager.
2. Click "connect to server" followed by "local computer" to connect to a
default Hyper-V server. This is a requirement.
3. You should see your PC name displayed as a local server on the left side
of your screen. In the middle of your screen, you'll see an overview of the
virtual machines that currently exist on the device. On the right side, you
should see a list of commands under the subtitle "Actions." To create a
new virtual machine, select "quick create."
4. You'll be given two installation options in a pop-up window. Choose
whichever installation option you'd prefer.
5. Choose the appropriate operating system and click "create virtual
machine." This will initiate the download process, which might take a
while.
6. To make other configuration changes, choose "new" in the main menu
instead of "create virtual machine."
To extend the capabilities of your Hyper-V hardware virtualization, you should
use Hyper-V virtualization software. Effective software makes Hyper-V
virtualization management significantly easier, more productive, and less error-
prone.
