

Unit-1

Cloud Computing

Cloud computing is a computing paradigm in which a large pool of systems is connected in private
or public networks to provide dynamically scalable infrastructure for application, data and file
storage. With the advent of this technology, the cost of computation, application hosting, content
storage and delivery is reduced significantly.
Cloud computing is a practical approach to realizing direct cost benefits, and it has the potential to
transform a data center from a capital-intensive setup into a variable-priced environment.
The idea of cloud computing is based on the fundamental principle of 'reusability of IT
capabilities'. What cloud computing adds to traditional concepts such as "grid computing",
"distributed computing", "utility computing" and "autonomic computing" is that it broadens these
horizons across organizational boundaries.

Characteristics of cloud computing:

1. On-demand self-service: A consumer can unilaterally provision computing capabilities, such as
server time and network storage, as needed automatically, without requiring human interaction
with each service provider.
2. Broad network access: Capabilities are available over the network and accessed through
standard mechanisms that promote use by heterogeneous thin or thick client platforms
(e.g., mobile phones, tablets, laptops and workstations).
3. Resource pooling: The provider's computing resources are pooled to serve multiple
consumers using a multi-tenant model, with different physical and virtual resources
dynamically assigned and reassigned according to consumer demand. There is a sense of
location independence in that the customer generally has no control over, or knowledge of, the
exact location of the provided resources but may be able to specify location at a higher level
of abstraction (e.g., country, state or datacenter). Examples of resources include storage,
processing, memory and network bandwidth.
4. Rapid elasticity: Capabilities can be elastically provisioned and released, in some cases
automatically, to scale rapidly outward and inward commensurate with demand. To the
consumer, the capabilities available for provisioning often appear to be unlimited and can be
appropriated in any quantity at any time.
5. Measured service: Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some level of abstraction appropriate to the type of
service (e.g., storage, processing, bandwidth and active user accounts). Resource usage can
be monitored, controlled and reported, providing transparency for both the provider and the
consumer.
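
As a concrete illustration of the measured-service idea, here is a minimal Python sketch; the resource names, rates and usage events are made-up assumptions, not any provider's API. It aggregates per-tenant usage and turns it into a transparent bill:

```python
# Minimal metering sketch: aggregate usage events per tenant and price them.
# Resource names, rates and events below are illustrative assumptions.
from collections import defaultdict

RATES = {"cpu_hours": 0.05, "storage_gb_hours": 0.0001, "gb_transferred": 0.02}

def bill(usage_events):
    """Turn (tenant, resource, amount) events into a per-tenant charge."""
    totals = defaultdict(lambda: defaultdict(float))
    for tenant, resource, amount in usage_events:
        totals[tenant][resource] += amount          # metering step
    return {tenant: sum(RATES[r] * qty for r, qty in res.items())
            for tenant, res in totals.items()}      # pricing step

events = [("acme", "cpu_hours", 120.0),
          ("acme", "storage_gb_hours", 5000.0),
          ("globex", "gb_transferred", 42.0)]
print(bill(events))  # {'acme': 6.5, 'globex': 0.84}
```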
Advantages of Cloud Computing

Enterprises need to align their applications to exploit the architecture models that cloud
computing offers. Some of the typical benefits are listed below:

1. Reduced Cost
There are several reasons why cloud technology lowers costs. The billing model is pay-per-use, and
the infrastructure is not purchased, which lowers maintenance costs. Both initial and recurring
expenses are much lower than in traditional computing.
2. Increased Storage
With the massive infrastructure offered by cloud providers today, storing and maintaining
large volumes of data is a reality. Sudden workload spikes are also managed effectively and
efficiently, since the cloud can scale dynamically.
3. Flexibility
This is an extremely important characteristic. With enterprises having to adapt ever more rapidly
to changing business conditions, speed of delivery is critical. Cloud computing stresses getting
applications to market very quickly, using the most appropriate building blocks necessary for
deployment.

Cloud Computing Challenges


Despite its growing influence, concerns regarding cloud computing still remain. In our opinion, the
benefits outweigh the drawbacks and the model is worth exploring. Some common challenges are:

1. Data Protection
Data security is a crucial element that warrants scrutiny. Enterprises are reluctant to rely on a
vendor's assurances about business data security. They fear losing data to competitors and
compromising the confidentiality of consumer data. In many instances, the actual storage location
is not disclosed, adding to the security concerns of enterprises. In existing models, firewalls
across data centers (owned by enterprises) protect this sensitive information. In the cloud model,
service providers are responsible for maintaining data security, and enterprises have to rely on them.
2. Data Recovery and Availability
All business applications have service level agreements (SLAs) that are stringently followed.
Operational teams play a key role in managing SLAs and the runtime governance of
applications. In production environments, operational teams support:
• Appropriate clustering and failover
• Data replication
• System monitoring (transaction monitoring, log monitoring and others)
• Maintenance (runtime governance)
• Disaster recovery
• Capacity and performance management
3. Management Capabilities
Despite there being multiple cloud providers, the management of platform and infrastructure is still
in its infancy. Features like 'auto-scaling', for example, are a crucial requirement for many
enterprises. There is huge potential to improve on the scalability and load balancing features
provided today.
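
To make the auto-scaling idea concrete, here is a minimal sketch of a threshold-based scaling decision. The thresholds and node limits are arbitrary assumptions, and real providers expose this through managed services rather than hand-rolled loops:

```python
# Toy threshold-based autoscaler: scale out when average CPU is high,
# scale in when it is low. Thresholds and node limits are arbitrary
# assumptions; real clouds offer this as a managed service.
def desired_capacity(current_nodes, avg_cpu,
                     low=0.30, high=0.75, min_nodes=2, max_nodes=20):
    """Return the node count this simple policy would target."""
    if avg_cpu > high and current_nodes < max_nodes:
        return current_nodes + 1        # overloaded: add a node
    if avg_cpu < low and current_nodes > min_nodes:
        return current_nodes - 1        # mostly idle: remove a node
    return current_nodes                # within band: hold steady

print(desired_capacity(4, 0.82))  # 5 -> scale out
print(desired_capacity(4, 0.12))  # 3 -> scale in
```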
4. Regulatory and Compliance Restrictions
In some European countries, government regulations do not allow customers' personal
information and other sensitive information to be physically located outside the state or country.
To meet such requirements, cloud providers need to set up a data center or a storage site
exclusively within the country to comply with regulations. Having such an infrastructure may not
always be feasible and is a big challenge for cloud providers.

Cloud Computing Applications:

Cloud computing has many applications, which can be subdivided across all the cloud service
models, but its major applications are in:
• business, telecommunications, health care, education, banking, IT companies, etc.
Big Data Analytics
Businesses create a huge amount of data in various formats; structured as in SQL databases, semi-
structured often stored in data warehouses, and unstructured usually stored in data lakes.
Unstructured data includes documents, emails, images. All of this data needs to be analyzed for
reporting, metrics and business predictions. Cloud computing is flexible and companies analyze
their big data in the cloud because they do not have to buy large computing systems to do the work.
Less cost, more flexibility.
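
As a toy example of the kind of analysis involved, the sketch below counts words across unstructured documents in a map/reduce style using only the Python standard library; in the cloud, managed frameworks run this same pattern over data lakes at scale:

```python
# Word counting across unstructured documents in a map/reduce style,
# standard library only; managed cloud frameworks run this same pattern
# over data lakes at scale.
from collections import Counter

documents = ["cloud computing lowers cost",
             "cloud storage scales with demand"]

def mapper(doc):
    return Counter(doc.split())          # map: per-document counts

def reducer(partials):
    total = Counter()
    for part in partials:
        total += part                    # reduce: merge partial counts
    return total

print(reducer(map(mapper, documents)).most_common(3))
```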
File Storage
Cloud offers you the possibility of storing, accessing and retrieving your files anywhere anytime
from various web interfaces. With cloud computing, you get high speed, availability, and scalability
for your business environment. Cloud storage comes in several forms depending on the use case:
long term storage, stable storage, or the need for flexible storage amounts to handle computing
peaks. In addition, for distributed companies, having file access available anywhere lowers the cost
of company networks and improves security.
Backup
Backing up data has traditionally been a risky operation. You can back up data in-house, but there
is always a risk of inadequate storage space, corrupted data or long restore times. Cloud backup
services provide off-site storage, easily configured backup/replication processes and easily
increased space, so there is less risk of "disk full" errors. In addition, backups are available
from multiple locations thanks to the cloud.
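
A hedged sketch of such an off-site backup using AWS S3 through the boto3 library follows; it assumes boto3 is installed, AWS credentials are configured, and the bucket name is a placeholder:

```python
# Hedged off-site backup sketch using AWS S3 via boto3. Assumes boto3 is
# installed, AWS credentials are configured, and a bucket named
# "my-backups" (a placeholder) already exists.
import boto3

s3 = boto3.client("s3")

def backup(local_path, bucket="my-backups", key=None):
    """Upload one file off-site; bucket lifecycle rules handle retention."""
    s3.upload_file(local_path, bucket, key or local_path)

backup("accounts.db", key="nightly/accounts.db")
```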
Cost Effective Computing
Because cloud computing companies run large server farms, their cost per GB of storage or per
application is very low. This lower cost is passed on to customers (with the usual markup), along
with management services, 24x7 availability and upgrades to the latest technology. For smaller
companies especially, this is a boon: they do not have to incur the costs of building and
maintaining servers, including hiring more IT personnel, and they can upgrade their services at any
time at a fraction of the in-house cost.
Unit-2

Cloud service models

Cloud service models can be broadly defined in three categories – SaaS (Software as a Service),
PaaS (Platform as a Service) and IaaS (Infrastructure as a Service).

Infrastructure as a service (IaaS) is a cloud computing offering in which a vendor provides users
access to computing resources such as servers, storage and networking. Organizations use their own
platforms and applications within a service provider’s infrastructure.
Key features
• Instead of purchasing hardware outright, users pay for IaaS on demand.
• Infrastructure is scalable depending on processing and storage needs.
• Saves enterprises the costs of buying and maintaining their own hardware.
• Because data is replicated in the cloud, the risk of a single point of failure is reduced.
• Enables the virtualization of administrative tasks, freeing up time for other work.
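
As an illustrative example of the on-demand provisioning described above, the sketch below launches a pay-per-use virtual server via the AWS EC2 API with boto3; the AMI ID, region and instance type are placeholder assumptions:

```python
# Illustrative IaaS provisioning: launch a pay-per-use virtual server via
# the AWS EC2 API with boto3. The AMI ID, region and instance type are
# placeholder assumptions; billing starts at launch, no hardware purchased.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(resp["Instances"][0]["InstanceId"])
```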

Platform as a service (PaaS) is a cloud computing offering that provides users with a cloud
environment in which they can develop, manage and deliver applications. In addition to storage and
other computing resources, users are able to use a suite of prebuilt tools to develop, customize and
test their own applications.
Key features
• PaaS provides a platform with tools to test, develop and host applications in the same environment.
• Enables organizations to focus on development without having to worry about underlying
infrastructure.
• Providers manage security, operating systems, server software and backups.
• Facilitates collaborative work even if teams work remotely.

Software as a service (SaaS) is a cloud computing offering that provides users with access to a
vendor’s cloud-based software. Users do not install applications on their local devices. Instead, the
applications reside on a remote cloud network accessed through the web or an API. Through the
application, users can store and analyze data and collaborate on projects.
Key features
• SaaS vendors provide users with software and applications via a subscription model.
• Users do not have to manage, install or upgrade software; SaaS providers manage this.
• Data is stored in the cloud, so local equipment failure does not result in loss of data.
• Use of resources can be scaled depending on service needs.
• Applications are accessible from almost any internet-connected device, from virtually anywhere in
the world.
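
Since SaaS is consumed through the web or an API, here is a minimal hypothetical sketch using the Python requests library; the endpoint, token and JSON shape are invented for illustration, not a real vendor's API:

```python
# SaaS is consumed over the web or an API rather than installed locally.
# Hypothetical sketch with the requests library: the endpoint, token and
# JSON shape are invented for illustration, not a real vendor's API.
import requests

API = "https://api.example-saas.com/v1"
HEADERS = {"Authorization": "Bearer <token>"}   # subscription credential

# Store a record in the vendor's cloud and read it back; nothing installed.
requests.post(f"{API}/projects", json={"name": "Q3 report"}, headers=HEADERS)
print(requests.get(f"{API}/projects", headers=HEADERS).json())
```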
Figure: Cloud models
Cloud Deployment Models

Cloud computing refers to the use of a network of remote servers hosted on the Internet,
and there are several cloud deployment models.
One of the most distinctive characteristics of cloud computing is that services, from data storage
to the creation of software applications, can be used on a pay-per-use basis.
There are four basic cloud deployment models, which are:

1) Private cloud model


In this model, the cloud infrastructure is set up on premises for the exclusive use of an
organization and its customers. In terms of cost efficiency, this deployment model does not bring
many benefits; however, many large enterprises choose it because of the security it offers.
The advantages of a private model:
• Individual development
• Storage and network components are customizable
• High control over the corporate information
• High security, privacy and reliability
The major disadvantage of the private cloud deployment model is its cost, as it entails
considerable expenses for hardware, software and staff training.

2) Public cloud model


A public cloud is hosted on the premises of the service provider, which then provides cloud
services to all of its customers. This deployment model is generally adopted by many small
to mid-sized organizations for their non-core functions and some of their core functions.
The pros of a public cloud are:
• Unsophisticated setup and use
• Easy access to data
• Flexibility to add and reduce capacity
• Cost-effectiveness
• Continuous uptime
• 24/7 upkeep by the provider
• Scalability
• No need to purchase and maintain software in-house
The cons of a public model:
• Data security and privacy
• Compromised reliability
• The lack of individual approach

3) Community cloud
The community cloud model is a cloud infrastructure shared by a group of organizations from
similar industries and backgrounds with similar requirements, i.e. mission, security, compliance
and IT policies. It may exist on or off premises and can be managed by the community of these
organizations.
The strengths of the community cloud model include the following:
• Cost reduction
• Improved security, privacy and reliability
• Ease of data sharing and collaboration
The shortcomings are:
• Higher cost than that of a public one
• Sharing of fixed storage and bandwidth capacity
• It is not widespread so far

4) Hybrid cloud model


A hybrid cloud is a combination of two or more models: private, public or community cloud.
Though these models remain separate entities, they are combined through standard technology
that enables the portability of data and applications.
The benefits of a hybrid model are:
• Improved security and privacy
• Enhanced scalability and flexibility
• Reasonable price
Unit-3

Grid computing

Grid computing is a group of networked computers that work together as a virtual supercomputer to
perform large tasks, such as analyzing huge data sets or modeling the weather. Through the cloud,
you can assemble and use vast computer grids for specific time periods and purposes, paying only
for what you use and saving both the time and expense of purchasing and deploying the necessary
resources yourself. By splitting tasks over multiple machines, processing time is also significantly
reduced, which increases efficiency and minimizes wasted resources.
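
The task-splitting idea can be sketched in a few lines. Below, the "grid" is simulated with local worker processes from the Python standard library; a real grid would ship the same chunks to remote nodes:

```python
# Splitting one large task across many workers. The "grid" here is
# simulated with local processes from the standard library; a real grid
# ships the same chunks to remote nodes.
from concurrent.futures import ProcessPoolExecutor

def analyze(chunk):
    return sum(x * x for x in chunk)        # stand-in for heavy analysis

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::8] for i in range(8)]  # split the dataset 8 ways
    with ProcessPoolExecutor() as pool:
        print(sum(pool.map(analyze, chunks)))
```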
Unlike parallel computing, grid computing projects typically have no tight time dependency
associated with them. They use computers that are part of the grid only when those machines are
idle, and operators can perform tasks unrelated to the grid at any time. Security must be considered
when using computer grids, as controls on member nodes are usually very loose. Redundancy should
also be built in, since many computers may disconnect or fail during processing.

Pros of Grid Computing

Cheaper Servers
There is no need to buy large SMP servers: applications can be broken apart and run across
smaller servers, which cost far less than SMP servers.
More Efficient
Grids make much more efficient use of idle resources. Idle servers and desktops can accept jobs,
and many resources sit idle, especially during off-business hours; with a grid computing setup,
that capacity is no longer wasted.
Fail-safe
Grid computing environments are modular and have no single point of failure, so if one of the
machines within the grid fails, there are plenty of others able to pick up the load. Jobs can
restart automatically if a failure occurs (a toy resubmission sketch follows).
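
```python
# Toy resubmission sketch: if a node fails mid-job, try another node.
# The failure rate and random node choice are simplified assumptions.
import random

def run_on_grid(job, nodes, max_attempts=3):
    for _ in range(max_attempts):
        node = random.choice(nodes)      # pick any available node
        try:
            return node(job)             # may raise if that node fails
        except ConnectionError:
            continue                     # restart the job elsewhere
    raise RuntimeError("job failed on every attempt")

def flaky_node(job):
    if random.random() < 0.3:            # simulated 30% node failure
        raise ConnectionError("node dropped out mid-job")
    return job()

print(run_on_grid(lambda: 2 + 2, [flaky_node, flaky_node, flaky_node]))
```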

Disadvantages

• Grid software and standards are still evolving


• Learning curve to get started
• Non-interactive job submission

Applications of Grid Computing


Currently, there are five general application areas for grid computing:
• Super distributed computing: applications whose needs cannot be met by a single node. These
needs arise at specific times and consume many resources.
• Real-time distributed systems: applications that generate a high-speed flow of data that must
be analyzed and processed in real time.
• Specific services: here the focus is not on computing power and storage capacity but on
resources that an organization considers surplus; the grid presents these resources to the
organization.
• Data-intensive processing: applications that make heavy use of storage space. These
applications exceed the storage capacity of a single node, so the data is distributed
throughout the grid. In addition to the benefit of increased space, distributing the data
across the grid allows it to be accessed in a distributed manner.
• Virtual collaboration environments: an area associated with the concept of tele-immersion,
in which the substantial computational resources of the grid and its distributed nature are
used to generate distributed 3D virtual environments.

Virtual organization

A virtual organization (VO) is an important abstraction for designing large-scale distributed
applications involving extensive resource sharing. Existing work on VOs mostly assumes that the
VO already exists or is created by mechanisms outside the system model. VO construction is
challenging and critical because of its dynamic and distributed nature. One proposed VO
construction model and implementation algorithm is based on a threshold approach and is secure
and robust, in that events such as member admission, member revocation, and VO splitting and
merging can be handled without centralized administration. Authentication and communication
among VO members are also efficient, without the tedious key exchange and management usually
needed in VOs built upon the Grid Security Infrastructure (GSI).

Unit-4

Cluster computing

Cluster computing, the foundation of high-performance computing frameworks, is a form of
computing in which a group of computers (often called nodes) are connected through a LAN (local
area network) so that they behave like a single machine. A computer cluster helps solve complex
problems more efficiently, with much faster processing speed and better data integrity than a
single computer, and clusters are typically used for mission-critical applications.

Cloud clustering offerings are identified as HPC IaaS and HPC PaaS; these are more expensive and
more difficult to set up and maintain than a single computer.
A computer cluster can be defined as the pooling of processors to deliver large-scale processing
with reduced downtime and larger storage capacity than a desktop workstation or single computer.
Some of the critical Applications of Cluster Computers are Google Search Engine, Petroleum
Reservoir Simulation, Earthquake Simulation, Weather Forecasting.

Clusters can be classified into two categories: open and closed.
Open cluster: every node needs its own IP address and is accessible over the Internet/web, which
raises more security concerns.
Closed cluster: the nodes are hidden behind a gateway node, which provides better security.
Types of Cluster computing
1. Load-balancing clusters: as the name implies, this type of system distributes the workload as
evenly as possible across multiple computers in the cluster (a round-robin sketch follows this
list).
2. High-availability (HA) clusters: a group of computers that can be relied upon for redundant
operation in the event of node failure.
3. High-performance (HP) clusters: this methodology uses supercomputers and cluster computing
to solve advanced computation problems.
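
```python
# Minimal round-robin dispatcher for a load-balancing cluster (sketch
# only; real balancers also weigh node load and health-check nodes).
import itertools

class RoundRobinBalancer:
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)

    def route(self, request):
        return f"{request} -> {next(self._cycle)}"  # next node in rotation

lb = RoundRobinBalancer(["node1", "node2", "node3"])
for req in ["req-a", "req-b", "req-c", "req-d"]:
    print(lb.route(req))   # node1, node2, node3, then back to node1
```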

Advantages of using Cluster computing


1. Cost efficiency: cost efficiency is the ratio of cost to output, and a group of computers
connected as a cluster is much cheaper than a mainframe computer of comparable power.
2. Processing speed: the processing speed of a computer cluster is comparable to that of a
mainframe computer.
3. Expandability: the best benefit of cluster computing is that it can be expanded easily
by adding desktop workstations to the system.
4. High availability of resources: if any node fails in a computer cluster, another node
within the cluster continues to provide uninterrupted processing (see the failover sketch
below); when a mainframe system fails, the entire system fails.
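
```python
# Failover sketch for point 4: try the primary, fall back to standbys.
# Health checks are simulated by a set of known-down nodes.
def serve(request, nodes, healthy):
    for node in nodes:                   # primary first, then standbys
        if healthy(node):
            return f"{node} handled {request}"
    raise RuntimeError("whole cluster down")

nodes = ["primary", "standby-1", "standby-2"]
down = {"primary"}                       # simulate a failed primary
print(serve("req-1", nodes, healthy=lambda n: n not in down))
# -> "standby-1 handled req-1": processing continues uninterrupted
```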

Peer-to-Peer Networks
1. In the peer-to-peer network model, we simply use the same workgroup for all the computers and
a unique name for each computer in the network.
2. There is no master, controller or central server in this network; computers join hands to
share files, printers and Internet access.
3. It is practical for workgroups of a dozen or fewer computers, making it a common environment
in which each PC acts as an independent workstation that maintains its own security and stores
data on its own disk, but can share that data with all other PCs on the network.
4. Software for peer-to-peer networking is included with most modern desktop operating systems
such as Windows and macOS.
5. The peer-to-peer relationship is suitable for small networks having fewer than 10 computers
on a single LAN.
6. In a peer-to-peer network, each computer can act as both a server and a client (see the peer
sketch after this list).
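
```python
# One peer playing both roles from point 6: it serves a file (server role)
# and can fetch from other peers (client role). Port number and payload
# are illustrative assumptions; everything runs on localhost.
import socket
import threading

def serve(port, payload):
    srv = socket.create_server(("127.0.0.1", port))
    def loop():
        while True:
            conn, _ = srv.accept()
            conn.sendall(payload)        # server role: share our data
            conn.close()
    threading.Thread(target=loop, daemon=True).start()

def fetch(port):
    with socket.create_connection(("127.0.0.1", port)) as s:
        return s.recv(4096)              # client role: download from a peer

serve(9001, b"report.pdf from peer A")   # this peer shares a file...
print(fetch(9001))                       # ...and also downloads from peers
```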
Advantages of Peer to Peer Networks
1. Such networks are easy to set up and maintain, as each computer manages itself.
2. They eliminate the extra cost required to set up a server.
3. Since each device is its own master, devices are not dependent on other computers for their
operation.

Disadvantages of Peer to Peer Networks


1. In a peer-to-peer network, the absence of a centralized server makes it difficult to back up
data, as data is located on different workstations.
2. Security is weak, as each system manages only itself.
3. There is no central point of data storage for file archiving.
Utility computing

Utility computing is a facility offered by providers to a variety of users on demand, who are
charged in return for their specific usage of the service. It also provides infrastructure to
users or customers. Utility computing is thus a model for providing capabilities on customer
demand, much as grid computing fulfils other kinds of demand. The utility model offers the
benefits of maximum on-demand use of tools, better usage of resources and minimized costs. The
word 'utility' is an analogy: customers pay for the quantity they use, just as electricity is
billed by the amount consumed, unlike school or college fees, which must be paid in full whether
or not you attend class; utility computing provides this pay-per-use facility. Utility computing
now appears in society in many forms, such as enterprise computing, website access, file sharing
and other applications used by consumers in the market. Another variant, adopted within
enterprises and known as the shared-pool utility model, centralizes an organization's computing
resources to serve a larger number of users without unnecessary redundancy.

Utility computing gives all companies the ability to access computing services, business
processes and applications from a utility-like service over a network. Because capacity is billed
on a pay-per-use basis, companies save money.

Importance of utility computing:

Utility computing often calls for a cloud strategy because it is, at heart, a business model for
delivering computing services. In utility computing the customer receives computing resources,
whether hardware or software, as a metered service, just as you would for your electric service
at home. This is often described as paying 'by the drink', an analogy from 'The Big Switch' by
Nicholas Carr.

Features of Utility Computing

The major benefit of utility computing is better economics. Corporate data center resources are
often idle as much as 85 percent of the time, mainly because companies buy more hardware than
they need on average in order to handle expected future peaks. Utility computing addresses this
by letting companies pay only for the computing resources they actually need.
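
A back-of-the-envelope comparison of the two models follows, using made-up prices purely to illustrate the 85-percent-idle argument:

```python
# Back-of-the-envelope comparison behind the argument above. All prices
# are made-up assumptions purely to illustrate the economics.
OWNED_MONTHLY = 1000.00   # amortized cost of an owned server, busy or idle
UTILITY_RATE = 0.50       # charge per hour actually consumed

HOURS_IN_MONTH = 730
utilization = 0.15        # machine is busy only ~15% of the time

utility_bill = HOURS_IN_MONTH * utilization * UTILITY_RATE
print(f"owned: ${OWNED_MONTHLY:.2f}/month, utility: ${utility_bill:.2f}/month")
# owned: $1000.00/month, utility: $54.75/month -- pay only 'by the drink'
```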

Pervasive computing, also known as ubiquitous computing, allows us to use modern technologies
together to create an interconnected system of devices. It goes beyond the concept of personal
computing because everyday home and kitchen appliances can be embedded with microchips and
controlled from anywhere. Pervasive computing is a modern field in which many computational
devices are used to process information. It is generally present in devices and sensors, and most
Internet of Things (IoT) devices are based on ubiquitous computing.

Some of the examples are:

• Apple Watch
• Amazon Echo Speaker
• Amazon Echo Dot
• Fitbit
• Electronic Toll Systems
• Smart Traffic Lights
• Self Driving Cars
• Home Automation
• Smart Locks
Applications

1. Traffic Control System – In India we use a traditional signal system to manage traffic
on busy roads. Many automobile companies provide smart features that assist the driver
of a vehicle. In addition, we can provide networking to connect such systems with a
city traffic control system; if all such systems are interconnected, we can provide a
better solution. This is the real aim of pervasive computing.
2. Internet Commerce – Pervasive computing systems allow products to be sold and bought
smartly over the Internet. Location-based ads, quality shipping services and smart
systems can assist in delivering products on time.
3. Communication – Pervasive computing can be used in data transmission and
communication. All traditional networking devices communicate through networks,
which become smart with the use of pervasive computing.
4. Defense Sector – Pervasive computing systems can be used for the security of people
and to protect public life. In India, providing security to the public is the state's
responsibility: internal security, law and order, flood management and disaster
management are state subjects, while the Indian Army provides security to the entire
nation. A pervasive system can combine sensor, monitoring and identification systems
to provide better security by using more resources together.
5. Home Pervasive System – A smart home pervasive system consists of a network of home
equipment such as air conditioners, the electrical system and the home Wi-Fi network
(a hedged MQTT sketch follows this list). Many day-to-day tasks can be automated using
a pervasive system.
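
```python
# Hedged smart-home sketch using MQTT, a common IoT messaging protocol.
# Assumes the paho-mqtt 1.x package and a broker running on localhost:1883;
# the topic names are invented for illustration.
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # A temperature sensor publishes readings; the AC controller reacts.
    if msg.topic == "home/livingroom/temperature" and float(msg.payload) > 28:
        client.publish("home/livingroom/ac", "on")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("home/livingroom/temperature")
client.loop_forever()   # react to sensor events indefinitely
```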
Comparison Chart

BASIS FOR COMPARISON        CLOUD COMPUTING                             GRID COMPUTING

Application focus           Business and web-based applications         Collaborative purposes
Architecture used           Client-server                               Distributed computing
Management                  Centralized                                 Decentralized
Business model              Pay per use                                 No defined business model
Accessibility of services   High, because it is real-time               Low, because of scheduled services
Programming models          Eucalyptus, OpenNebula, OpenStack etc.      Different middlewares are available,
                            for IaaS, but no middleware exists          such as Globus, gLite, Unicore, etc.
Resource usage patterns     Centralized manner                          Collaborative manner
Flexibility                 High                                        Low
Interoperability            Vendor lock-in and integration              Easily deals with interoperability
                            are some issues                             between providers

There are many differences between grids and clusters. The following table shows a comparison of
the two.

CHARACTERISTIC               CLUSTER                        GRID

Population                   Commodity computers            Commodity and high-end computers
Ownership                    Single                         Multiple
Discovery                    Membership services            Centralized index and decentralized info
User management              Centralized                    Decentralized
Resource management          Centralized                    Distributed
Allocation/Scheduling        Centralized                    Decentralized
Inter-operability            VIA and proprietary            No standards being developed
Single system image          Yes                            No
Scalability                  100s                           1000s
Capacity                     Guaranteed                     Varies, but high
Throughput                   Medium                         High
Speed (latency, bandwidth)   Low latency, high bandwidth    High latency, low bandwidth
