Unit I
Cloud computing
Definition:
The cloud is a large group of interconnected computers. These computers can be personal
computers or network servers; they can be public or private.
Cloud computing is a technology that uses the internet and central remote servers to maintain
data and applications.
E.g., Yahoo Mail, Gmail, etc.
Cloud carriers provide the connectivity and transport of cloud services from cloud providers
to cloud consumers.
A cloud provider participates in and arranges for two unique service level agreements
(SLAs), one with a cloud carrier (e.g. SLA2) and one with a cloud consumer (e.g. SLA1).
A cloud provider may arrange with the cloud carrier for dedicated and encrypted connections to
ensure the cloud services are consumed at a consistent level, according to the contractual
obligations with the cloud consumers.
In this case, the provider may specify its requirements on capability, flexibility and
functionality in SLA2 in order to provide essential requirements in SLA1.
For example, data stored on one computer in the cloud must be replicated on other computers in
the cloud. If that one computer goes offline, the cloud's programming automatically redistributes
that computer's data to a new computer in the cloud.
Examples of cloud computing applications: Google Docs & Spreadsheets, Google Calendar,
Gmail, Picasa.
HISTORICAL DEVELOPMENTS
In the client/server model, all the software applications, data, and control resided on huge
mainframe computers, known as servers.
If a user wanted to access specific data or run a program, he had to connect to the
mainframe, gain appropriate access, and then do his business.
Users connected to the server via a computer terminal, called a workstation or client.
Access was not immediate, nor could two users access the same data at the same time;
when multiple people share a single computer, each has to wait for their turn.
So the client/server model, while providing similar centralized storage, differed from cloud
computing in that it did not have a user-centric focus. It was not a user-enabling
environment.
Peer-to-Peer Computing: Sharing Resources
P2P computing defines a network architecture in which each computer has equivalent
capabilities and responsibilities.
In the P2P environment, every computer is a client and a server; there are no masters and
slaves.
There is no need for a central server, because any computer can function in that capacity
when called on to do so.
P2P was a decentralizing concept. Control is decentralized, with all computers functioning
as equals. Content is also dispersed among the various peer computers.
A related development was distributed computing, where idle PCs across a network or the Internet
are tapped to provide computing power for large, processor-intensive projects.
Collaborative computing went a step further: the goal was to enable multiple users to collaborate on group projects online, in real time.
To collaborate on any project, users must first be able to talk to one another.
Most collaboration systems offer the complete range of audio/video options, for full-
featured multiple-user video conferencing.
In addition, users must be able to share files and have multiple users work on the same
document simultaneously.
Users from multiple locations within a corporation, and from multiple organizations,
desired to collaborate on projects that crossed company and geographic boundaries.
To do this, projects had to be housed in the “cloud” of the Internet, and accessed from any
Internet-enabled location.
DISTRIBUTED SYSTEM
A distributed system contains multiple nodes that are physically separate but linked together
using the network.
All the nodes in this system communicate with each other and handle processes in
tandem. Each of these nodes contains a small part of the distributed operating system
software.
The nodes in the distributed systems can be arranged in the form of client/server systems or
peer-to-peer systems. Details about these are as follows:
Client/Server Systems
In client server systems, the client requests a resource and the server provides that resource.
A server may serve multiple clients at the same time while a client is in contact with only
one server.
Both the client and server usually communicate via a computer network and so they are a
part of distributed systems.
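To make the request/response pattern described above concrete, here is a minimal Python sketch of a client and a server using the standard socket module. The host, port, and resource name are assumptions chosen for the example, and the server handles only one client at a time for brevity.

# Minimal client/server sketch using Python's standard socket module.
import socket

HOST, PORT = "127.0.0.1", 9000   # illustrative address only

def run_server():
    """The server waits for a client request and returns the requested resource."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, addr = srv.accept()          # one client at a time in this sketch
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"resource for: {request}".encode())

def run_client():
    """The client connects to the server and requests a resource."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"report.txt")         # hypothetical resource name
        print(cli.recv(1024).decode())

In a real distributed system the server would serve many clients concurrently (for example by handling each connection in a separate thread or process), while each client stays in contact with only one server.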
VIRTUALIZATION
Hypervisor
The hypervisor is firmware or a low-level program that acts as a Virtual Machine Manager (VMM), allowing multiple virtual machines to run on a single physical host.
Google AppEngine: Launched in 2008, it is a scalable runtime environment (PaaS) for web
applications.
Microsoft Azure: It is also a scalable runtime environment for web & distributed
applications.
It provides additional services such as support for storage (relational data & blobs),
networking, caching, content delivery, and others.
Cloud application, or cloud app, is a software program where cloud-based and local
components work together. This model relies on remote servers for processing logic that is
accessed through a web browser with a continual internet connection.
Cloud application servers typically are located in a remote data centre operated by a third
party cloud services infrastructure provider. Cloud-based application tasks may encompass
email, file storage and sharing, order entry, inventory management, word processing,
customer relationship management (CRM), data collection, or financial accounting features.
Benefits of cloud apps
Fast response to business needs. Cloud applications can be updated, tested and deployed
quickly, providing enterprises with fast time to market and agility. This speed can lead to
culture shifts in business operations.
Simplified operation. Infrastructure management can be outsourced to third-party cloud
providers.
Instant scalability. As demand rises or falls, available capacity can be adjusted.
API use. Third-party data sources and storage services can be accessed with an application
programming interface (API). Cloud applications can be kept smaller by using APIs to hand
data to applications or API-based back-end services for processing or analytics
computations, with the results handed back to the cloud application (see the sketch after this
list). Vetted APIs impose passive consistency that can speed development and yield predictable results.
Gradual adoption. Refactoring legacy, on-premises applications to a cloud architecture in
steps allows components to be implemented on a gradual basis.
Reduced costs. The size and scale of data centres run by major cloud infrastructure and
service providers, along with competition among providers, has led to lower prices. Cloud-
based applications can be less expensive to operate and maintain than equivalent on-premises
installations.
Improved data sharing and security. Data stored on cloud services is instantly available to
authorized users. Due to their massive scale, cloud providers can hire world-class security
experts and implement infrastructure security measures that typically only large enterprises
can obtain. Centralized data managed by IT operations personnel is more easily backed up on a
regular schedule and restored should disaster recovery become necessary.
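As a sketch of the API-use benefit above, the code below shows a cloud application handing its data to a back-end analytics service over an API and receiving the computed result. The URL, endpoint, and field names are hypothetical placeholders, and the third-party requests library is assumed only for brevity.

# Hedged sketch: delegating analytics to a back-end service via an API.
import requests

def summarize_orders(orders):
    """Send raw order records to an (assumed) analytics API and return its summary."""
    response = requests.post(
        "https://analytics.example.com/v1/summaries",  # hypothetical endpoint
        json={"orders": orders},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()   # e.g. {"total": ..., "average": ...}

if __name__ == "__main__":
    print(summarize_orders([{"id": 1, "amount": 25.0}, {"id": 2, "amount": 40.0}]))

The cloud application itself stays small: the heavy processing lives behind the API, and only the results come back.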
Google AppEngine
● Google AppEngine is a scalable runtime environment frequently dedicated to
executing web applications.
● These utilize the benefits of the large computing infrastructure of Google to
dynamically scale as per the demand.
● AppEngine offers both a secure execution environment and a collection of services that
simplifies the development of scalable and high-performance Web applications.
● These services include: in-memory caching, scalable data store, job queues,
messaging, and cron tasks (a minimal sketch of an AppEngine-style application follows below).
● Currently, the supported programming languages are Python, Java, and Go.
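A minimal sketch of the kind of web application deployed to AppEngine's Python standard environment. It assumes the usual layout of a main.py containing a WSGI app (Flask is used here and would be declared in requirements.txt) alongside an app.yaml declaring the runtime; the route and messages are illustrative.

# main.py -- minimal Flask app of the style deployed to Google AppEngine
# (assumed layout: main.py plus an app.yaml such as "runtime: python39").
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # AppEngine scales instances of this app up and down with demand.
    return "Hello from AppEngine"

if __name__ == "__main__":
    # Local testing only; on AppEngine the platform serves the WSGI app itself.
    app.run(host="127.0.0.1", port=8080, debug=True)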
Microsoft Azure–
● Microsoft Azure is a Cloud operating system and a platform on which users can
develop applications in the cloud.
● Azure provides a set of services that support storage, networking, caching, content
delivery, and others.
Hadoop
● Apache Hadoop is an open source framework that is appropriate for processing large
data sets on commodity hardware.
● Hadoop is an implementation of MapReduce, an application programming model
developed by Google.
● This model provides two fundamental operations for data processing: map and
reduce (a word-count sketch of both operations follows below).
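To make the map and reduce operations concrete, here is a hedged word-count sketch in the style of Hadoop Streaming, where the mapper and reducer are plain scripts reading from standard input. The script names and the streaming invocation are assumptions for illustration; the map step emits (word, 1) pairs and the reduce step sums the counts per word.

# mapper.py -- "map" step: emit (word, 1) for every word on standard input.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")

# reducer.py -- "reduce" step: sum the counts for each word.
# Hadoop Streaming delivers mapper output sorted by key, so equal words arrive adjacent.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, 0
    current_count += int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")

The same two scripts can be tested locally with an ordinary shell pipeline (cat input.txt | python mapper.py | sort | python reducer.py), which mirrors what the Hadoop framework does at scale across many commodity machines.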
Force.com and Salesforce.com –
● Force.com is a Cloud computing platform on which users can develop social
enterprise applications.
● The platform is the basis of SalesForce.com – a Software-as-a-Service solution for
customer relationship management.
● Force.com allows creating applications by composing ready-to-use blocks: a
complete set of components supporting all the activities of an enterprise is
available.
● Everything from the design of the data layout to the definition of business rules and the user
interface is supported by Force.com.
● This platform is completely hosted in the Cloud, and provides complete access to its
functionalities, and to those implemented in the hosted applications, through Web
services technologies.
The primary goal of parallel computing is to increase the computational power available to
your essential applications.
Typically, this infrastructure consists of a set of processors located on a single server, or of
separate servers connected to each other, to solve a computational problem.
The earliest computer software was written for serial computation: it executed one instruction at
a time on a single Central Processing Unit (CPU). A problem was broken down into a series of
instructions, and those instructions were executed one after another; only one computational
instruction could complete at a time.
The main reasons to use parallel computing are:
1. Save time and money.
2. Solve larger problems.
3. Provide concurrency.
4. Multiple execution units.
Types of parallel computing
Bit-level parallelism
In bit-level parallelism, every task runs at the processor level and depends on the processor
word size (32-bit, 64-bit, etc.); instructions wider than the word size must be divided into
multiple smaller instructions within a task. For example, to perform an operation on 16-bit
numbers on an 8-bit processor, the operation must be divided into two 8-bit operations (as
sketched below).
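As a worked illustration of the 16-bit example above, the following Python sketch adds two 16-bit numbers using only 8-bit operations and a carry, the way an 8-bit processor would; the input values are arbitrary.

# Add two 16-bit numbers using only 8-bit operations, as an 8-bit processor would.
def add16_with_8bit_ops(a, b):
    lo = (a & 0xFF) + (b & 0xFF)                          # low-byte addition (8-bit)
    carry = lo >> 8                                       # carry out of the low byte
    hi = ((a >> 8) & 0xFF) + ((b >> 8) & 0xFF) + carry    # high-byte addition (8-bit)
    return ((hi & 0xFF) << 8) | (lo & 0xFF)               # recombine the two bytes

# The two-step result matches an ordinary 16-bit addition.
assert add16_with_8bit_ops(0x12F0, 0x0130) == (0x12F0 + 0x0130) & 0xFFFF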
Instruction-level parallelism (ILP)
Instruction-level parallelism (ILP) operates at the hardware level (dynamic parallelism) and
refers to how many instructions can be executed simultaneously in a single CPU clock cycle.
Data Parallelism
In a multiprocessor system executing a single set of instructions (SIMD), data parallelism is
achieved when several processors simultaneously perform the same task on separate sections of
the distributed data.
Task Parallelism
Task parallelism is the parallelism in which tasks are split up between the processors and
performed at once (a sketch of both forms follows below).
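A minimal sketch of both ideas using Python's multiprocessing module: the Pool.map call spreads the same operation over separate pieces of data (data parallelism), while the two Process objects run different tasks at the same time (task parallelism). The functions and data are illustrative only.

# Data parallelism vs. task parallelism with Python's multiprocessing module.
from multiprocessing import Pool, Process

def square(x):               # the same operation applied to separate pieces of data
    return x * x

def count_words(text):       # one distinct task
    print("words:", len(text.split()))

def count_chars(text):       # another distinct task
    print("chars:", len(text))

if __name__ == "__main__":
    # Data parallelism: several workers perform the same task on separate data sections.
    with Pool(processes=4) as pool:
        print(pool.map(square, range(8)))

    # Task parallelism: different tasks are split between processors and run at once.
    doc = "parallel computing divides work among processors"
    tasks = [Process(target=count_words, args=(doc,)),
             Process(target=count_chars, args=(doc,))]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()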
Hardware architecture of parallel computing –
The hardware architecture of parallel computing is divided into the following categories
(Flynn's taxonomy), as given below:
1. Single-instruction, single-data (SISD) systems
2. Single-instruction, multiple-data (SIMD) systems
3. Multiple-instruction, single-data (MISD) systems
4. Multiple-instruction, multiple-data (MIMD) systems