CC Unit-1
UNIT - I
Computing Paradigms: High-Performance Computing, Parallel Computing, Distributed
Computing, Cluster Computing, Grid Computing, Cloud Computing, Biocomputing,
Mobile Computing, Quantum Computing, Optical Computing, Nanocomputing.
UNIT - II
Cloud Computing Fundamentals: Motivation for Cloud Computing, The Need for Cloud
Computing, Defining Cloud Computing, Definition of Cloud Computing, Cloud Computing
Is a Service, Cloud Computing Is a Platform, Principles of Cloud Computing, Five
Essential Characteristics, Four Cloud Deployment Models
UNIT - III
Cloud Computing Architecture and Management: Cloud Architecture, Layer, Anatomy
of the Cloud, Network Connectivity in Cloud Computing, Applications on the Cloud,
Managing the Cloud, Managing the Cloud Infrastructure, Managing the Cloud Application,
Migrating Application to Cloud, Phases of Cloud Migration, Approaches for Cloud
Migration.
UNIT - IV
Cloud Service Models: Infrastructure as a Service, Characteristics of IaaS, Suitability of
IaaS, Pros and Cons of IaaS, Summary of IaaS Providers, Platform as a Service,
Characteristics of PaaS, Suitability of PaaS, Pros and Cons of PaaS, Summary of PaaS
Providers, Software as a Service, Characteristics of SaaS, Suitability of SaaS, Pros and
Cons of SaaS, Summary of SaaS Providers, Other Cloud Service Models.
UNIT - V
Cloud Service Providers: EMC, EMC IT, Captiva Cloud Toolkit, Google, Cloud Platform,
Cloud Storage, Google Cloud Connect, Google Cloud Print, Google App Engine, Amazon
Web Services, Amazon Elastic Compute Cloud, Amazon Simple Storage Service,
Amazon Simple Queue Service, Microsoft, Windows Azure, Microsoft Assessment and
Planning Toolkit, SharePoint, IBM, Cloud Models, IBM SmartCloud, SAP Labs, SAP
HANA Cloud Platform, Virtualization Services Provided by SAP, Salesforce, Sales Cloud,
Service Cloud: Knowledge as a Service, Rackspace, VMware, Manjrasoft, Aneka
Platform
CLOUD COMPUTING
Computing Paradigms
High-Performance Computing
High-performance computing (HPC) refers to the use of aggregated computing power,
such as supercomputers and parallel processing techniques, to solve complex
computational problems efficiently and quickly. HPC is therefore also used as a
common name for the other computing paradigms discussed in the forthcoming
sections.
HPC systems are normally found in applications that require solving large scientific
problems. Most of the time, the challenge in working with these kinds of problems is
to perform suitable simulation studies, and this can be accomplished by HPC without
difficulty.
Parallel Computing
Parallel computing is one of the facets of HPC. Here, a set of processors work
cooperatively to solve a computational problem. This definition is essentially the
same as that of HPC and is broad enough to include supercomputers that have
hundreds or thousands of processors interconnected with other resources.
One can distinguish between conventional (also known as serial, sequential, or von
Neumann) computers and parallel computers by the way applications are executed.
In parallel computing:
• It is run using multiple processors (multiple CPUs).
• A problem is broken down into discrete parts that can be solved concurrently.
• Each part is further broken down into a series of instructions.
• Instructions from each part are executed simultaneously on different processors.
• An overall control/coordination mechanism is employed.
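As a concrete illustration of this decomposition, here is a minimal Python sketch that
breaks a large summation into discrete parts and solves them concurrently on
multiple CPUs; the chunking scheme and the partial_sum helper are illustrative
choices, not part of any standard:

    from multiprocessing import Pool

    def partial_sum(bounds):
        # Solve one discrete part: sum the integers in [start, end).
        start, end = bounds
        return sum(range(start, end))

    if __name__ == "__main__":
        n, parts = 10_000_000, 4
        step = n // parts
        # Break the problem into discrete parts that can be solved concurrently.
        chunks = [(i * step, (i + 1) * step) for i in range(parts)]
        # Each part runs simultaneously on a different processor; the Pool
        # acts as the overall control/coordination mechanism.
        with Pool(processes=parts) as pool:
            results = pool.map(partial_sum, chunks)
        print(sum(results))  # combine the partial results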
Distributed Computing
A distributed computing system consists of multiple autonomous computers (CPUs)
that communicate over a network and coordinate to achieve a common goal. The
connectivity can be such that the CPUs in a distributed system are physically close
together and connected by a local network, or they are geographically distant and
connected by a wide area network.
Redundancy or replication is an important benefit of distribution: several machines
can provide the same services, so that even if one is unavailable (or has failed), work
does not stop, because other similar computing support remains available.
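A minimal Python sketch of this failover idea follows; the replica host names and the
fetch() helper are hypothetical placeholders for a real networked service:

    REPLICAS = ["server-a.example.com", "server-b.example.com", "server-c.example.com"]

    def fetch(host):
        # Placeholder for a real network call; here the first two replicas
        # are simulated as failed to demonstrate failover.
        if host != "server-c.example.com":
            raise ConnectionError(host + " unavailable")
        return "response from " + host

    def request_with_failover(replicas):
        # Even if one machine has failed, work does not stop: the client
        # simply moves on to the next replica offering the same service.
        for host in replicas:
            try:
                return fetch(host)
            except ConnectionError:
                continue
        raise RuntimeError("all replicas unavailable")

    print(request_with_failover(REPLICAS))  # -> response from server-c.example.com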
Cluster Computing
A cluster computing system consists of a set of processor machines of the same or
similar type, connected using a dedicated network infrastructure.
All processor machines share resources such as a common home directory and have
software such as a message passing interface (MPI) implementation installed that
allows programs to be run across all nodes simultaneously.
The nodes need to communicate with one another in order to work cooperatively and
meaningfully together to solve the problem at hand. Clusters built from processor
machines of heterogeneous types form a subtype of their own and are still mostly at
the experimental or research stage.
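As a hedged illustration of such node cooperation, the sketch below uses the mpi4py
Python bindings for MPI (assuming mpi4py and an MPI implementation are installed;
the script name in the launch command is illustrative). Each process computes a
partial sum based on its rank, and the nodes communicate to combine the results:

    # Launch across the cluster with, e.g.:  mpiexec -n 4 python cluster_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()  # this process's id (0 .. size-1)
    size = comm.Get_size()  # total number of cooperating processes

    # Each node computes a partial sum of 0..999 based on its rank ...
    local = sum(range(rank, 1000, size))
    # ... and the nodes communicate to combine their partial results.
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print("total =", total)  # 499500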
Grid Computing
In most organizations, the computing resources are underutilized much of the time,
yet they are necessary for certain operations.
The idea of grid computing is to make such unused computing power available to
organizations that need it, thereby increasing the return on investment (ROI) on
computing infrastructure.
The activity of managing grid resources through the middleware is called grid
services.
Grid services provide access control, security, access to data including digital libraries
and databases, and access to large-scale interactive and long-term storage facilities.
A key advantage of grid computing is its ability to make use of unused computing
power, which makes it a cost-effective solution (reducing investments, with only
recurring costs). In this context, the difference between the electrical power grid and
grid computing is worth noting (Table 1.1).
Cloud Computing
The computing trend moved toward the cloud from the concept of grid computing,
particularly when large computing resources were required to solve a single problem,
using the idea of computing power as a utility and other allied concepts.
However, the key difference between grid and cloud is that grid computing supports
leveraging several computers in parallel to solve a particular application, while cloud
computing supports leveraging multiple resources, including computing resources, to
deliver a unified service to the end user.
In addition, while a cloud can provision and support a grid, a cloud can also support
nongrid environments, such as a three-tier web architecture running traditional or
Web 2.0 applications.
We will be looking at the details of cloud computing in different chapters of this book.
Biocomputing
Biocomputing systems use the concepts of biologically derived or simulated
molecules (or models) that perform computational processes in order to solve a
problem. The biologically derived models aid in structuring the computer programs
that become part of the application. Biocomputing provides the theoretical
background and practical tools for scientists to explore proteins and DNA. DNA and
proteins are nature's building blocks, but these building blocks are not used exactly
like bricks; the function of the final molecule depends strongly on the order of these
blocks. Thus, biocomputing scientists work on devising orderings of these blocks
suitable for various applications that mimic biology. Biocomputing should therefore
lead to a better understanding of life and of the molecular causes of certain diseases.
Mobile Computing
In mobile computing, the processing (or computing) elements are small (i.e.,
handheld devices), and communication between the various resources takes place
over wireless media. Mobile communication for voice applications (e.g., the cellular
phone) is widely established throughout the world and has seen very rapid growth in
all its dimensions, including the number of subscribers to the various cellular
networks. An extension of this technology is the ability to send and receive data
across cellular networks using small devices such as smartphones. Numerous
applications build on this capability; for example, video calling and conferencing are
important applications that people prefer over voice-only communication on mobile
phones. Mobile computing-based applications are becoming very important and are
evolving rapidly with various technological advancements, as mobile computing
allows users to transmit data from remote locations to other remote or fixed
locations.
Quantum Computing
Manufacturers of computing systems acknowledge that there is a limit to cramming
more and more transistors into smaller and smaller spaces on integrated circuits
(ICs) and thereby doubling processing power about every 18 months (Moore's law).
This barrier may eventually be overcome by quantum computing-based solutions,
which depend on quantum information, the rules that govern the subatomic world.
Quantum computers have the potential to be vastly faster than even our most
powerful supercomputers today for certain classes of problems. However, because
quantum computing works differently from current technology at the most
fundamental level, and although working prototypes exist, these systems have not so
far proved to be practical alternatives to today's silicon-based machines.
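To make "quantum information" slightly more concrete, here is a minimal numerical
sketch (a classical NumPy simulation, not a real quantum device) of a single qubit
placed into superposition, a state a classical bit cannot occupy:

    import numpy as np

    # A single qubit starts in the basis state |0>.
    ket0 = np.array([1.0, 0.0])

    # The Hadamard gate places it in an equal superposition of |0> and |1>.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = H @ ket0

    # Measurement probabilities are the squared amplitudes: 50% / 50%.
    print(np.abs(psi) ** 2)  # [0.5 0.5]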
Optical Computing
An optical computing system uses the photons in visible light or infrared beams, rather
than electric current, to perform digital computations. An electric current flows at
only about 10% of the speed of light. This limits the rate at which data can be
exchanged over long distances and is one of the factors that led to the evolution of
optical fiber. By applying some of the advantages of visible and/or IR networks at the
device and component scale, a computer can be developed that can perform
operations 10 or more times faster than a conventional electronic computer.
Nanocomputing