CC Unit-1

The document outlines the syllabus for a Cloud Computing course as part of a B.Tech program, detailing various computing paradigms including High-Performance, Parallel, Distributed, Cluster, Grid, and Cloud Computing. It covers fundamental concepts of cloud computing, its architecture, service models, and notable cloud service providers. Additionally, it introduces advanced computing concepts such as Biocomputing, Mobile Computing, Quantum Computing, Optical Computing, and Nanocomputing.

CLOUD COMPUTING

(PROFESSIONAL ELECTIVE – IV)


B.Tech. IV Year I Sem.
Course Code: CS742PE

R18 B.Tech. CSE Syllabus JNTU HYDERABAD


CS714PE: CLOUD COMPUTING (Professional Elective - IV)
IV Year B.Tech. CSE I-Sem    L T P C: 3 0 0 3

UNIT - I
Computing Paradigms: High-Performance Computing, Parallel Computing, Distributed Computing, Cluster Computing, Grid Computing, Cloud Computing, Biocomputing, Mobile Computing, Quantum Computing, Optical Computing, Nanocomputing.

UNIT - II
Cloud Computing Fundamentals: Motivation for Cloud Computing, The Need for Cloud Computing, Defining Cloud Computing, Definition of Cloud Computing, Cloud Computing Is a Service, Cloud Computing Is a Platform, Principles of Cloud Computing, Five Essential Characteristics, Four Cloud Deployment Models.

UNIT - III
Cloud Computing Architecture and Management: Cloud Architecture, Layers, Anatomy of the Cloud, Network Connectivity in Cloud Computing, Applications on the Cloud, Managing the Cloud, Managing the Cloud Infrastructure, Managing the Cloud Application, Migrating Application to Cloud, Phases of Cloud Migration, Approaches for Cloud Migration.

UNIT - IV
Cloud Service Models: Infrastructure as a Service, Characteristics of IaaS. Suitability of
IaaS, Pros and Cons of IaaS, Summary of IaaS Providers, Platform as a Service,
Characteristics of PaaS, Suitability of PaaS, Pros and Cons of PaaS, Summary of PaaS
Providers, Software as a Service, Characteristics of SaaS, Suitability of SaaS, Pros and
Cons of SaaS, Summary of SaaS Providers, Other Cloud Service Models.

UNIT - V
Cloud Service Providers: EMC, EMC IT, Captiva Cloud Toolkit, Google, Cloud Platform, Cloud Storage, Google Cloud Connect, Google Cloud Print, Google App Engine, Amazon Web Services, Amazon Elastic Compute Cloud, Amazon Simple Storage Service, Amazon Simple Queue Service, Microsoft, Windows Azure, Microsoft Assessment and Planning Toolkit, SharePoint, IBM, Cloud Models, IBM SmartCloud, SAP Labs, SAP HANA Cloud Platform, Virtualization Services Provided by SAP, Salesforce, Sales Cloud, Service Cloud: Knowledge as a Service, Rackspace, VMware, Manjrasoft, Aneka Platform.
CLOUD COMPUTING

UNIT - I
Computing Paradigms: High-Performance Computing, Parallel Computing, Distributed Computing, Cluster Computing, Grid Computing, Cloud Computing, Biocomputing, Mobile Computing, Quantum Computing, Optical Computing, Nanocomputing.

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Computing Paradigms

High-Performance Computing

In high-performance computing systems, a pool of processors (processor machines or central processing units [CPUs]) is networked with other resources such as memory, storage, and input/output devices, and the deployed software runs across the entire system of connected components.

The processor machines can be of homogeneous or heterogeneous types. Traditionally, high-performance computing (HPC) meant supercomputers; however, that is no longer the case in present-day computing. HPC has instead become a common name that also covers the other computing paradigms discussed in the forthcoming sections.

Thus, examples of HPC range from a small cluster of desktop or personal computers (PCs) to the fastest supercomputers.

HPC systems are normally found in applications that need to solve scientific problems.

Most of the time, the challenge in working with these kinds of problems is to perform a suitable simulation study, and HPC makes this feasible.

Scientific examples such as protein folding in molecular biology and models and applications based on nuclear fusion are worth noting as potential applications for HPC.
Parallel Computing

Parallel computing is one facet of HPC. Here, a set of processors works cooperatively to solve a computational problem.

These processor machines or CPUs are mostly of homogeneous type.

Therefore, this definition is the same as that of HPC and is broad enough to include
supercomputers that have hundreds or thousands of processors interconnected with
other resources.

One can distinguish between conventional (also known as serial or sequential or Von
Neumann) computers and parallel computers in the way the applications are
executed.

In serial or sequential computers, the following apply:

• It runs on a single computer/processor machine having a single CPU.
• A problem is broken down into a discrete series of instructions.
• Instructions are executed one after another.
In parallel computing, since there is simultaneous use of multiple processor
machines, the following apply:

• It is run using multiple processors (multiple CPUs).
• A problem is broken down into discrete parts that can be solved concurrently.
• Each part is further broken down into a series of instructions.
• Instructions from each part are executed simultaneously on different processors.
• An overall control/coordination mechanism is employed.

Distributed Computing

Distributed computing is also a computing system that consists of multiple computers or processor machines connected through a network, which can be homogeneous or heterogeneous, but run as a single system.

The connectivity can be such that the CPUs in a distributed system can be physically
close together and connected by a local network, or they can be geographically
distant and connected by a wide area network.

The heterogeneity in a distributed system supports any number of possible configurations in the processor machines, such as mainframes, PCs, workstations, and minicomputers.

The goal of distributed computing is to make such a network work as a single computer.

Distributed computing systems are advantageous over centralized systems because they support the following characteristic features:

1. Scalability: The ability of the system to be easily expanded by adding more machines as needed (and vice versa) without affecting the existing setup.

2. Redundancy or replication: Here, several machines can provide the same services,
so that even if one is unavailable (or failed), work does not stop because other similar
computing supports will be available.
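The redundancy idea can be illustrated with a minimal Python sketch; the replica list, the simulated node failure, and the function names are invented for illustration. A client tries each machine offering the same service and moves on when one is unavailable:

```python
def call_with_failover(replicas, request):
    """Try each replica in turn; work does not stop if one machine fails."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as err:   # simulated "machine unavailable"
            last_error = err             # remember the failure, try the next one
    raise RuntimeError("all replicas failed") from last_error

# Two machines providing the same service; the first one has failed.
def broken_node(req):
    raise ConnectionError("node down")

def healthy_node(req):
    return f"handled: {req}"

print(call_with_failover([broken_node, healthy_node], "lookup"))  # handled: lookup
```

Real distributed systems layer retries, timeouts, and load balancing on top of this basic pattern, but the principle is the same: several machines provide the same service, so a single failure is not fatal.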

Cluster Computing

A cluster computing system consists of a set of the same or similar type of processor
machines connected using a dedicated network infrastructure.

All processor machines share resources such as a common home directory and have software such as a message passing interface (MPI) implementation installed to allow programs to be run across all nodes simultaneously.

This is also a kind of HPC category. The individual computers in a cluster are referred to as nodes. A cluster is regarded as HPC because its nodes can work together to solve a problem larger than any single computer can easily solve.

The nodes need to communicate with one another in order to work cooperatively and meaningfully together on the problem in hand. Clusters built from processor machines of heterogeneous types form a subtype of cluster and are still mostly at the experimental or research stage.
Grid Computing

The computing resources in most organizations are underutilized yet are necessary for certain operations.

The idea of grid computing is to let organizations that need computing power make use of this unutilized capacity, thereby increasing the return on investment (ROI) on computing infrastructure.

Thus, grid computing is a network of computing or processor machines managed with a kind of software, such as middleware, in order to access and use the resources remotely.

The managing activity of grid resources through the middleware is called grid
services.

Grid services provide access control, security, access to data including digital libraries
and databases, and access to large-scale interactive and long-term storage facilities.

Grid computing is popular for the following reasons:

• Its ability to make use of unused computing power makes it a cost-effective solution (reducing investments to only recurring costs).
• It is a way to solve problems in line with any HPC-based application.
• It enables heterogeneous resources of computers to work cooperatively and collaboratively to solve a scientific problem.

Researchers liken the term grid to the way electricity is distributed in municipal areas for the common consumer.

In this context, the difference between electrical power grid and grid computing is
worth noting (Table 1.1).
Cloud Computing

The computing trend moved toward cloud from the concept of grid computing,
particularly when large computing resources are required to solve a single problem,
using the ideas of computing power as a utility and other allied concepts.

However, the key difference between grid and cloud is that grid computing supports leveraging several computers in parallel to solve a particular application, while cloud computing supports leveraging multiple resources, including computing resources, to deliver a unified service to the end user.

In cloud computing, the IT and business resources, such as servers, storage, network, applications, and processes, can be dynamically provisioned to match user needs and workload.

In addition, while a cloud can provision and support a grid, a cloud can also support
nongrid environments, such as a three-tier web architecture running on traditional or
Web 2.0 applications.
We will be looking at the details of cloud computing in different chapters of this book.

Biocomputing

Biocomputing systems use the concepts of biologically derived or simulated molecules (or models) that perform computational processes in order to solve a problem. The biologically derived models aid in structuring the computer programs that become part of the application. Biocomputing provides the theoretical background and practical tools for scientists to explore proteins and DNA. DNA and proteins are nature's building blocks, but these building blocks are not used exactly like bricks; the function of the final molecule depends strongly on the order of these blocks. Thus, the biocomputing scientist works on inventing orderings suitable for various applications mimicking biology. Biocomputing should therefore lead to a better understanding of life and of the molecular causes of certain diseases.

Mobile Computing

In mobile computing, the processing (or computing) elements are small (i.e., handheld devices), and the communication between various resources takes place over wireless media. Mobile communication for voice applications (e.g., cellular phones) is widely established throughout the world and is witnessing very rapid growth in all its dimensions, including the number of subscribers to the various cellular networks. An extension of this technology is the ability to send and receive data across cellular networks using small devices such as smartphones. There can be numerous applications based on this technology; for example, video calling or conferencing is one important application that people prefer in place of existing voice-only communication on mobile phones. Mobile computing-based applications are becoming very important and are rapidly evolving with various technological advancements, as they allow users to transmit data from remote locations to other remote or fixed locations.

Quantum Computing

Quantum computing is the use of quantum phenomena, such as superposition and entanglement, to perform computation. Computers that perform quantum computations are known as quantum computers.

Manufacturers of computing systems say there is a limit to cramming more and more transistors into smaller and smaller spaces on integrated circuits (ICs) and thereby doubling processing power about every 18 months. This problem may have to be overcome by a quantum computing-based solution, which depends on quantum information and the rules that govern the subatomic world. For certain problems, quantum computers promise to be vastly faster than even our most powerful supercomputers today. However, since quantum computing works differently at the most fundamental level than current technology, and although there are working prototypes, these systems have not so far proved to be alternatives to today's silicon-based machines.
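Superposition can be illustrated with a tiny state-vector simulation in plain Python. This is only a classical sketch of the mathematics, not how real quantum hardware is programmed; the function names are invented for illustration. A qubit starting in state |0⟩ passes through a Hadamard gate and ends up equally likely to be measured as 0 or 1:

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state [amp0, amp1]."""
    a0, a1 = state
    h = 1 / math.sqrt(2)
    return [h * (a0 + a1), h * (a0 - a1)]

def probabilities(state):
    # Measurement probabilities are the squared magnitudes of the amplitudes.
    return [abs(a) ** 2 for a in state]

qubit = [1.0, 0.0]           # starts definitely in |0>
qubit = hadamard(qubit)      # now in an equal superposition of |0> and |1>
print(probabilities(qubit))  # approximately [0.5, 0.5]
```

Simulating n qubits this way requires a state vector of 2^n amplitudes, which is exactly why classical machines cannot efficiently mimic large quantum computations.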

Optical Computing

Optical or photonic computing uses photons, produced by lasers or diodes, for computation. For decades, photons have promised to allow a higher bandwidth than the electrons used in conventional computers.

An optical computing system uses photons in visible light or infrared beams, rather than electric current, to perform digital computations. An electric current flows at only about 10% of the speed of light. This limits the rate at which data can be exchanged over long distances and is one of the factors that led to the evolution of optical fiber. By applying some of the advantages of visible and/or IR networks at the device and component scale, a computer could be developed that performs operations 10 or more times faster than a conventional electronic computer.

Nanocomputing

Nanocomputing describes computing that uses extremely small, or nanoscale, devices (one nanometer [nm] is one billionth of a meter). In 2001, state-of-the-art electronic devices could be as small as about 100 nm, which is about the size of a virus. The integrated circuit (IC) industry, however, looks to the future to determine the smallest electronic devices possible within the limits of computing technology.

Nanocomputing refers to computing systems that are constructed from nanoscale components. The silicon transistors in traditional computers may be replaced by transistors based on carbon nanotubes. The successful realization of nanocomputers depends on the scale and integration of these nanotubes or components. The issues of scale relate to the dimensions of the components: they are, at most, a few nanometers in at least two dimensions. The issues of integration are twofold: first, the manufacture of complex arbitrary patterns may be economically infeasible, and second, nanocomputers may include massive quantities of devices. Researchers are working on all of these issues to make nanocomputing a reality.
