
SCHOOL OF COMPUTING

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

SIT1304 – CLOUD COMPUTING

UNIT – I – Cloud Computing – SIT1304


UNIT 1
INTRODUCTION TO CLOUD COMPUTING

History of Cloud computing - Cloud Computing Architectural Framework - Types of Clouds - pros and
cons of cloud computing - difference between web 2.0 and cloud - key challenges in cloud computing -
Major Cloud players - Cloud Deployment Models - Virtualization in Cloud Computing - types of
virtualization - Parallelization in Cloud Computing - cloud resource management - dynamic resource
allocation - Optimal allocation of cloud models.
1.0 Introduction
Cloud computing is the on-demand availability of computer system resources,
especially data storage (cloud storage) and computing power, without direct active
management by the user. The term is generally used to describe data centers available to
many users over the Internet. Large clouds, predominant today, often have functions distributed
over multiple locations; a server whose connection to the user is relatively close may be
designated an edge server.
1.0.1 Difference between Cloud Computing and Web 2.0

Cloud Computing | Web 2.0
It is more specific and definite. | It covers programming and business models.
It is a way of searching through data. | It is sharing entire pieces of data between different websites.
Cloud computing is about computers. | Web 2.0 is about people.
It treats the internet as a computing platform. | It attempts to explore and explain the business rules of that platform.
Google Apps is considered cloud computing. | A web-based application is considered Web 2.0.
It is a business model for hosting these services. | It is a technology which allows web pages to act as more responsive applications.

1.1. Key challenges in cloud computing

 Bandwidth cost
 Cloud computing reduces hardware acquisition costs, but expenditure on bandwidth rises
considerably.
 Sufficient bandwidth is required to deliver intensive and complex data over the
network.
 Continuous monitoring and supervision
 It is important to monitor the cloud service continuously, as well as to supervise its
performance, business dependency and robustness.
 Security concerns
To prevent damage to cloud infrastructure, measures include tracking unusual behaviour
across servers, buying security hardware and using security applications.
1.2 Data access and integration
Where is data stored in the cloud, how is it accessed, who owns it and how is it controlled?
Companies are often concerned about data ownership and loss of control over their data while
moving to the cloud.
The integration of existing applications with the cloud so that they run smoothly is another challenge.

1.2.1 Proper usability


Enterprises need a good and clear view of how to use the technology to add value to
their unique businesses.
1.2.2 Migration issues
Migrating data from existing systems to the cloud can pose major risks if it is not handled properly.
Enterprises need to develop a migration strategy that integrates well with their current IT infrastructure.
1.2.3 Cost assessment
The scalable and on-demand nature of cloud services makes cost assessment difficult.
Heavy use of a service for a few days may consume the budget of several months.
Current Cloud Computing Challenges
1.3 Requirement of Cloud
In January 2016, RightScale conducted its fifth annual State of the Cloud Survey on
the latest cloud computing trends. They questioned 1,060 technical professionals across a
broad cross-section of organizations about their adoption of cloud infrastructure.

1.3.1 Lack of resources/expertise


Organizations are increasingly placing more workloads in the cloud while cloud
technologies continue to advance rapidly.
Due to these factors, organizations have a hard time keeping up with the tools, and the
need for expertise continues to grow.
These challenges can be mitigated through additional training of IT and development staff.
1.3.2. Security issues
With cloud computing technology, you are unable to see the exact location where your
data is stored or being processed.

1.3.3. Compliance
The organization needs to be able to comply with regulations and standards, no matter where
your data is stored.
Speaking of storage, also ensure the provider has strict data recovery policies in place.
1.3.4. Cost management and containment
For the most part, cloud computing can save businesses money by avoiding the purchase of
new hardware and by using pay-as-you-go models.
1.3.5. Governance / Control
Proper IT governance should ensure that IT assets are implemented and used according to
agreed upon policies and procedures.
In today’s cloud-based world, IT does not always have full control over the provisioning, de-
provisioning and operations of infrastructure.
This has increased the difficulty for IT to provide the governance, compliance and risk
management required.
1.3.6. Performance
When a business moves to the cloud it becomes dependent on the service providers.
When your provider is down, you are also down.
Make sure your SaaS provider has real time monitoring policies in place to help mitigate
these issues.
So, organizations need a robust cloud adoption strategy in place when they start to move to the cloud.
The Top 5 Cloud-Computing Vendors:
#1 Microsoft,
#2 Amazon,
#3 IBM,
#4 Salesforce,
#5 SAP
#1 Microsoft remains an absolute lock at the top due to four factors: its deep involvement at
all three layers of the cloud (IaaS, PaaS and SaaS); its unmatched commitment to developing
and helping customers deploy AI, ML and Blockchain in innovative production
environments; its market-leading cloud revenue, which I estimate at about $16.7 billion for
the trailing 12 months (not to be confused with the forward-projected $20.4 billion
annualized run rate the company released on Oct. 26); and the extraordinary vision and
leadership of CEO Satya Nadella.

#2 Amazon might not have the end-to-end software chops of the others in the Top 5 but it
was and continues to be the poster-child for the cloud-computing movement: the first-moving
paradigm-buster and category creator. I believe Amazon will make some big moves to bolster
its position in software, and no matter how you slice it, the $16 billion in trailing-12-month
cloud revenue from AWS is awfully impressive.
#3 IBM has leapfrogged both Salesforce.com (formerly tied with Amazon for #2 and now in
the #4 spot) and SAP (formerly #4) on the strength of its un-trendy but highly successful
emphasis on transforming its vast array of software expertise and technology from the on-
premises world to the cloud. In so doing, IBM has quietly created a $15.8-billion cloud
business (again on trailing-12-month basis) that includes revenue of $7 billion from helping
big global corporations convert legacy systems to cloud or cloud-enabled environments. And
like #1 Microsoft, IBM plays in all three layers of the cloud—IaaS, PaaS and SaaS—which is
hugely important for the elite cloud vendors because it allows them to give customers more
choices, more seamless integration, better cybersecurity, and more reasons for third-party
developers to rally to the IBM Cloud. Plus, its relentless pairing of "cloud and cognitive" is
an excellent approach toward weaving AI and ML deeply into customer-facing solutions.
#4 Salesforce.com falls a couple of spots from its long-time tie with Amazon at #2 but—and
this will be the case as long as founder Marc Benioff is CEO—remains a powerful source of
digital innovation and disruptive strategy. However, to remain in the rarified air near the top
of the Cloud Wars Top 10, Benioff and Salesforce must find a way to extend their market
impact beyond their enormously successful SaaS business and become more of a high-impact
player in the platform or PaaS space. At this stage, it's simply not possible for Salesforce to
become a player in IaaS, so Benioff needs to crank up the genius machine and hammer his
way into top contention as a platform powerhouse.
#5 SAP has what all of the other cloud vendors would kill for: unmatched incumbency within
all of the world's leading corporations as the supplier of mission-critical business applications
that run those companies. It's also fashioned, under CEO Bill McDermott, powerful new
partnerships with Amazon and Google to complement its long-standing relationships with
IBM and Microsoft, all of which give customers a heightened sense of confidence that SAP
will be willing and able to play nice in heterogeneous environments. Plus, SAP's HANA
technology is now in full deployment across thousands of businesses, and as it takes root and
SAP continues to rationalize its massive product portfolio around HANA in the cloud, SAP
has a very bright future ahead of it in the cloud.
So the Cloud Wars are clearly intensifying—particularly among the five most-powerful
players at the top—as business customers are fully embracing the cloud as the best and
most-capable approach toward digital transformation. And in such an environment, the
long-term winners in the Cloud Wars will be those tech companies that choose to view the
world by and create their strategies around what business customers want and need, rather
than by the distorted and distorting tech-centric perspectives of the Silicon Valley bubble.

1.4 Cloud Deployment Models

The four types of cloud deployment models identified by NIST are:


 Private cloud
 Community cloud
 Public cloud
 Hybrid cloud

1.4.1 Private Cloud


Cloud environments are defined based on hardware location and ownership. A private cloud is
accessible only to a single customer; its infrastructure may reside on-site or be outsourced to a
third party.
The cloud infrastructure is operated solely for an organization.
Contrary to popular belief, private cloud may exist off premises and can be managed by a
third party.
Thus, two private cloud scenarios exist, as follows:
 On-site Private Cloud
 Applies to private clouds implemented at a customer’s premises.
 Outsourced Private Cloud
 Applies to private clouds where the server side is outsourced to a hosting company.
 Examples of Private Cloud:
 Eucalyptus
 Ubuntu Enterprise Cloud - UEC (powered by Eucalyptus)
 Amazon VPC (Virtual Private Cloud)
 VMware Cloud Infrastructure Suite
 Microsoft ECI data center.

Figure 1.1 Schematic Sketch of Private Cloud
1.4.2 Community Cloud
The cloud infrastructure is shared by several organizations and supports a specific
community that has shared concerns (e.g., mission, security requirements, policy, and
compliance considerations). Government departments, universities, central banks etc. often
find this type of cloud useful.
Community cloud also has two possible scenarios:
On-site Community Cloud Scenario
Applies to community clouds implemented on the premises of the customers composing a
community cloud
Outsourced Community Cloud
 Applies to community clouds where the server side is
outsourced to a hosting company.
 Examples of Community Cloud:
 Google Apps for Government
 Microsoft Government Community Cloud

Figure 1.2. Schematic Sketch of Public Cloud

1.4.3 Public Cloud


The most ubiquitous, and almost a synonym for, cloud computing. The cloud infrastructure is
made available to the general public or a large industry group and is owned by an
organization selling cloud services.
Examples of Public Cloud:
 Google App Engine
 Microsoft Windows Azure
 IBM Smart Cloud
 Amazon EC2
1.4.4 Hybrid Cloud
The cloud infrastructure is a composition of two or more clouds (private, community, or
public) that remain unique entities but are bound together by standardized or proprietary
technology that enables data and application portability (e.g., cloud bursting for load-
balancing between clouds).
Examples of Hybrid Cloud:
 Windows Azure (capable of Hybrid Cloud)
 VMware vCloud (Hybrid Cloud Services)

Figure 1.3. Schematic Sketch of Hybrid Cloud

Figure 1.4. Cloud Computing Life Cycle

1.5 Cloud Components
It has three components
 Client computers
 Distributed Servers
 Datacenters

Figure 1.4. Schematic Sketch of Interconnection of Cloud Components

1. Clients
Clients are the devices that end users use to interact with the cloud.
There are three types of clients: 1) Mobile, 2) Thick, 3) Thin (most popular).
2. Datacenter
A datacenter is a collection of servers where the application is placed; it is accessed via the internet.
3. Distributed servers
Servers are often in geographically different places, but each server acts as if it is working
right next to the others.
Who Benefits from Cloud Computing
 Collaborators
 Road Warriors
 Cost-Conscious Users
 Cost-Conscious IT Departments
 Users with Increasing Needs
Who Should not be using Cloud Computing
 The internet impaired
 Offline workers

 The security conscious
 Anyone married to existing applications
Dark Clouds: Barriers to use web-based applications
 Technical issues
 Business model issues
 Internet issues
 Security issues
 Compatibility issues
 Social issues
Cloud Computing Applications
 Clients would be able to access their applications and data from anywhere at any time.
 It could bring hardware costs down.
 The right software available in place to achieve goals.
 Servers and digital storage devices take up space.
 Corporations might save money on IT support.
 The client takes advantage of the entire network's processing power.
Virtualization in cloud computing

Figure 1.5. Concept of Virtualization

 Virtualization is the creation of a virtual version of something; in computing terms it
can be an OS, a storage device or network resources.
 Virtualization is the ability to run multiple operating systems on a single physical
system and share the underlying hardware resources.
 It is the process by which one computer hosts the appearance of many computers.
 Virtualization is used to improve IT throughput and costs by using physical resources
as a pool from which virtual resources can be allocated.
 With VMware virtualization solutions you can reduce IT costs while increasing the
efficiency, utilization and flexibility of your existing computer hardware.
 You don’t need to own the hardware
Resources are rented as needed from a cloud
You get billed only for what you used
Virtualization Architecture
A Virtual machine (VM) is an isolated runtime environment (guest OS and applications)
Multiple virtual systems (VMs) can run on a single physical system

Figure 1.6. Concept of Virtualization

Before Virtualization
 Single OS image per machine
 Software and hardware tightly coupled
 Running multiple applications on same machine often creates conflict
 Inflexible and costly infrastructure
After virtualization
 Hardware-independence of operating system and applications
 Virtual machines can be provisioned to any system
 Can manage OS and application as a single unit by encapsulating them into virtual
Machines
Benefits of Virtualization
 Sharing of resources helps cost reduction.
 Isolation: virtual machines are isolated from each other as if they are physically separated.
 Encapsulation: virtual machines encapsulate a complete computing environment.
 Hardware independence: virtual machines run independently of the underlying hardware.
 Portability: virtual machines can be migrated between different hosts.
What makes virtualization possible?
Virtualization is made possible by a piece of software known as a hypervisor, also called a
virtualization manager.

It sits between the hardware and the operating system and assigns the amount of access that
the applications and operating systems have with the processor and other hardware resources.
1.6 Hypervisor

Figure 1.7. Schematic Sketch of Hypervisor


The term hypervisor was first coined at IBM in the 1960s.
Hypervisor acts as a link between the hardware and the virtual environment and distributes
the hardware resources such as CPU usage, memory allotment between the different virtual
environments.
A hypervisor or virtual machine monitor (VMM) is computer software, firmware or hardware
that creates and runs virtual machines. A computer on which a hypervisor runs one or more
virtual machines is called a host machine, and each virtual machine is called a guest machine.
A hypervisor is a hardware virtualization technique that allows multiple guest operating
systems (OS) to run on a single host system at the same time. The guest OS shares the
hardware of the host computer, such that each OS appears to have its own processor, memory
and other hardware resources. A hypervisor is also known as a virtual machine manager
(VMM).
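
To make this concrete, the following minimal sketch (an addition to these notes, assuming the libvirt Python bindings and a locally running KVM/QEMU hypervisor) connects to the hypervisor and lists the guest machines it manages:

import libvirt  # libvirt-python bindings; assumes a local KVM/QEMU hypervisor is installed

# Connect to the local system hypervisor; read-only access is enough for listing guests.
conn = libvirt.openReadOnly('qemu:///system')
if conn is None:
    raise RuntimeError('Failed to connect to the hypervisor')

# Each "domain" is a guest virtual machine managed by the hypervisor.
for domain in conn.listAllDomains():
    state, _reason = domain.state()
    status = 'running' if state == libvirt.VIR_DOMAIN_RUNNING else 'not running'
    print(domain.name(), status)

conn.close()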

1.6.1 Types of Virtualization
The 7 Types of Virtualization
 Hardware Virtualization.
 Software Virtualization.
 Network Virtualization.
 Storage Virtualization.
 Memory Virtualization.
 Data Virtualization.
 Desktop Virtualization.
1.6.1.1 Hardware Virtualization
Hardware or platform virtualization means the creation of a virtual machine that acts like a real
computer.
E.g., a computer running Microsoft Windows 7 may host a virtual machine that looks like an
Ubuntu machine.
Hardware virtualization is also known as hardware-assisted virtualization or server
virtualization.
The basic idea of the technology is to combine many small physical servers into one large
physical server, so that the processor can be used more effectively and efficiently.
Each small server can host a virtual machine, but the entire cluster of servers is treated as a
single device by any process requesting the hardware.
The hardware resource allotment is done by the hypervisor.
The advantages are increased processing power as a result of maximized hardware utilization
and application uptime.
Hardware virtualization is further subdivided into the following types:
1.6.1.2 Full Virtualization – Guest software does not require any modifications, since the
underlying hardware is fully simulated.
Para Virtualization – The hardware is not simulated; guests run in their own isolated domains
and the guest software must be modified to do so.
Partial Virtualization – Only part of the hardware environment is simulated, so some guest
software may require modifications.

1.6.1.3 Software Virtualization
 The ability of a computer to create and run one or more virtual environments.
 It is used to enable a computer system to run a guest OS.
 E.g., running Linux as a guest on a machine that is natively running a Microsoft Windows OS.
Subtypes:
Operating System Virtualization – Hosting multiple OS on the native OS
Application Virtualization – Hosting individual applications in a virtual environment
separate from the native OS
Service Virtualization – Hosting specific processes and services related to a particular
application
1.6.2 Network Virtualization
 It refers to the management and monitoring of a computer network as a single
managerial entity from a single software-based administrator’s console.
 Multiple sub-networks can be created on the same physical network; they may or may
not be authorized to communicate with each other.
 It allows network optimization of data transfer rates, scalability, reliability, flexibility,
and security
 Subtypes:
 Internal network: Enables a single system to function like a network.
1.6.3 Storage Virtualization
 Multiple physical storage devices are grouped together, which look like a single
storage device.
 Ex. Partitioning your hard drive into multiple partitions
 Advantages
 Improved storage management in a heterogeneous IT environment
 Easy updates, better availability
 Reduced downtime
 Better storage utilization
 Automated management
 Two types
 Block- Multiple storage devices are consolidated into one
 File- Storage system grants access to files that are stored over multiple hosts

1.6.4 Memory Virtualization
 The way to decouple memory from the server to provide a shared, distributed or
networked function.
 It enhances performance by providing greater memory capacity without any addition
to the main memory.
 Implementations
 Application-level integration – Applications access the memory pool directly
 Operating System Level Integration – Access to the memory pool is provided through
an operating system.
1.6.5 Data Virtualization
Data virtualization lets you easily manipulate data without needing technical details such as
how it is formatted or where it is physically located.
It decreases data errors and workload.
The data is presented as an abstract layer completely independent of data structure and
database systems
1.6.6 Desktop Virtualization
The user’s desktop is stored on a remote server, allowing the user to access his/her desktop
from any device or location.
It provides work convenience and security.
It provides a lot of flexibility for employees to work from home or on the go
Since the data transfer takes place over secure protocols, any risk of data theft is minimized
Which Technology to use?
Virtualization is possible through a wide range of Technologies which are available to use
and are also Open Source.
They are,
 XEN
 KVM
 OpenVZ
1.7 Parallelization in cloud computing
 Parallel computing is a type of computing architecture in which several processors
execute or process an application or computation simultaneously.
 Parallel computing helps in performing large computations by dividing the workload
between more than one processor, all of which work through the computation at the
same time.

 Most supercomputers employ parallel computing principles to operate.
 Parallel computing is also known as parallel processing.
 Parallel processing is generally implemented in operational environments/scenarios
that require massive computation or processing power.
 The primary objective of parallel computing is to increase the available computation
power for faster application processing.
 Typically, parallel computing infrastructure is housed within a single facility where
many processors are installed in a server rack or separate servers are connected
together.
 The application server sends a processing request that is distributed in small
components, which are concurrently executed on each processor/server.
 Parallel computation can be classified as bit-level, instruction-level, data-level and
task-level parallelism (a data-parallel sketch follows this list).
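
As a minimal illustration of data parallelism (the workload and process count are hypothetical, not part of the original notes), the Python sketch below divides a computation across several worker processes:

from multiprocessing import Pool

def square(n):
    # Placeholder "work" performed on each element of the input data
    return n * n

if __name__ == "__main__":
    numbers = range(1000)
    # Distribute the computation across 4 worker processes; each worker
    # handles a chunk of the data concurrently.
    with Pool(processes=4) as pool:
        results = pool.map(square, numbers)
    print(sum(results))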
1.8 Cloud Resource Management
 Critical function of any man-made system.
 It affects the three basic criteria for the evaluation of a system:
 Functionality.
 Performance.
 Cost.
 Scheduling in a computing system means deciding how to allocate the resources of the
system, such as CPU cycles, memory, secondary storage space, I/O and network bandwidth,
between users and tasks.
 Policies and mechanisms for resource allocation.
 Policy: principles guiding decisions.
 Mechanisms: the means to implement policies
 Cloud resources
 Require complex policies and decisions for multi-objective optimization.
 It is challenging - the complexity of the system makes it impossible to have accurate
global state information.
 Affected by unpredictable interactions with the environment, e.g., system failures and attacks.
 Cloud service providers are faced with large fluctuating loads which challenge the
claim of cloud elasticity

 The strategies for resource management for IaaS, PaaS, and SaaS are different.
Cloud resource management (CRM) policies
 Admission control: prevent the system from accepting workload in violation of high-
level system policies.
 Capacity allocation: allocate resources for individual activations of a service.
 Load balancing: distribute the workload evenly among the servers (a simple round-robin sketch follows this list).
 Energy optimization: minimization of energy consumption
 Quality of service (QoS) guarantees: ability to satisfy timing or other conditions
specified by a Service Level Agreement
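
As a rough sketch of the load-balancing policy above (the server names are hypothetical and the dispatch logic is deliberately simplified), a round-robin balancer can be expressed as:

from itertools import cycle

servers = ["server-a", "server-b", "server-c"]   # hypothetical back-end pool
rotation = cycle(servers)                         # endless round-robin iterator

def dispatch(request_id):
    # Assign each incoming request to the next server in the rotation,
    # spreading the workload evenly across the pool.
    target = next(rotation)
    print(f"request {request_id} -> {target}")
    return target

for i in range(7):
    dispatch(i)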
1.9 Dynamic resource allocation
 A cloud computing environment can supply computing resources on demand, when needed.
 Managing customer demand creates the challenge of on-demand resource allocation.
 Effective and dynamic utilization of cloud resources helps to balance the load and avoid
situations such as slow-running systems.
 Cloud computing allows businesses to scale their resources up and down based on need.
 Virtual machines (VMs) are allocated to users based on their jobs, in order to reduce the
number of physical servers in the cloud environment.
 If a VM is available, the job is allowed to run on that VM.
 If no VM is available, the algorithm finds a low-priority job, taking the job's lease type
into account.
 The low-priority job's execution is paused by pre-empting its resources.
 The high-priority job is allowed to run on the resources pre-empted from the low-priority job.
 When any other job running on the VMs completes, the job that was paused earlier can be
resumed if its lease type is suspendable.
 If not, the suspended job has to wait for the completion of the high-priority job running on
its resources, so that it can be resumed (a simplified scheduling sketch follows this list).
 There are three lease types:
 Cancellable: These requests can be scheduled at any time after their arrival time
 Suspendable: Suspendable leases are flexible in start time and can be scheduled at any
time after their ready time

 Non-Preemptable: The leases associated with such requests cannot be pre-empted at
all.
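
The Python sketch below illustrates the pre-emption logic described above under simplifying assumptions (a single VM, invented job names, and only the lease behaviours listed); it is an illustration, not an actual cloud scheduler:

from collections import deque

class Job:
    def __init__(self, name, priority, lease):
        self.name = name
        self.priority = priority      # higher number = higher priority
        self.lease = lease            # 'cancellable', 'suspendable' or 'non-preemptable'

running = None                        # job currently holding the (single) VM
suspended = deque()                   # suspendable jobs waiting to resume

def submit(job):
    global running
    if running is None:
        running = job
    elif job.priority > running.priority and running.lease != "non-preemptable":
        # Pre-empt the lower-priority job; a suspendable job is parked for later.
        if running.lease == "suspendable":
            suspended.append(running)
        running = job
    else:
        print(f"{job.name} must wait: VM busy with {running.name}")
    print(f"VM now running: {running.name}")

def finish():
    # When the running job completes, resume a previously suspended job, if any.
    global running
    running = suspended.popleft() if suspended else None

submit(Job("batch-report", priority=1, lease="suspendable"))
submit(Job("web-request", priority=5, lease="non-preemptable"))
finish()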
1.10 Optimal allocation of cloud models
The optimal allocation of computing resources is a core part for implementing cloud
computing.
High heterogeneity, high dynamism and virtualization make the optimal allocation problem
more complex than the traditional scheduling problems in grid computing systems.

SCHOOL OF COMPUTING

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

UNIT – II – Cloud computing – SIT1304


UNIT 2
CLOUD SERVICE MODELS
Software as a Service (SaaS) - Infrastructure as a Service (IaaS)- Platform as a
Service (PaaS)- Service Oriented Architecture (SoA) - Elastic Computing - On
Demand Computing.

2.1 Types of Services provided by Cloud


 Software as a Service (SaaS)
 Infrastructure as a Service (IaaS)
 Platform as a Service (PaaS)
 Service Oriented Architecture (SOA)
 Elastic Computing
 On Demand Computing
2.2 Cloud Services
Software as a Service
Introduction
SaaS (software as a service) refers to WAN-enabled application services (e.g., Google Apps,
Salesforce.com, WebEx). This is a public cloud service model where the application is 100%
managed by the cloud provider. SaaS removes the need for organizations to install and run
applications on their own computers or in their own data centers. This eliminates the expense
of hardware acquisition, provisioning and maintenance, as well as software licensing,
installation and support. Software as a Service has evolved from limited online software
delivery in the 1990s to a fully matured "direct-sourcing" business model for enterprise
applications.
SaaS is one of the fastest growing concepts: more than 10 million companies will be
using SaaS in the next 5-10 years, and more than 50% of all Fortune 500 companies are already
using SaaS. According to influential IT institutes, SaaS is the leading business model of
choice for 2008/2009. Virtually all big software and service vendors (IBM, Microsoft, Oracle,
Cisco) are investing heavily in SaaS. With the continuously increasing bandwidth and
reliability of the internet, using web services over the (public) internet has become a viable
option. Microsoft Office 365 is available with the Azure cloud platform. The architecture uses
an application instance instead of server instances; there is no actual migration of company
servers to the cloud.
The SaaS model provides single-tenant and multi-tenant services. The single-tenant model
dedicates the application instance to the assigned tenant, while a multi-tenant application is
shared by multiple tenants. The company can manage the security and storage with the single-
tenant model. The SaaS application is well suited to internet connectivity: employees, along
with their partners and customers, can access the application with a variety of network access
devices. The SaaS billing model is based on either per-usage charging or a monthly
subscription. Security compliance requirements for some applications have prevented
deployment to the SaaS cloud.
Some SaaS providers offer Virtual Private SaaS (VPS) services for additional
application security. It is a hybrid deployment model that allows peering with an enterprise or
VPC database server. The peering is for storage purposes only and is used for security
compliance. Salesforce.com is a leading SaaS provider offering a CRM application to customers.
2.2.1 Benefits of SaaS
 Flexible payments
 Scalable usage
 Automatic updates
 Accessibility and persistence
 On demand computing
Opportunities of SaaS
Software provided as a service by a software vendor to multiple customers has the following
main characteristics:
 Standardization of software
 Service including maintenance, support and upgrades
 Web based – usage over the (public) internet
 SaaS offers potential for lowering the Total Cost of Ownership
 Lower operational costs
 No large scale, costly, high risk implementations of applications
 Need few operational resources for application management
 No platform and hardware (maintenance) costs for application servers
 Reduced operational complexity: software delivered as a transparent service through
the web
 Minimized software development costs – No lengthy software development and
testing cycles
 Lower costs for software use
 No software license and annual maintenance fees
 No expensive software upgrades
 Lower application consultancy and support costs

 SaaS allows corporations to focus on core business activities and responsibilities
 Transparent overview and usage of electronic data and information
 Automation of iterative, manual tasks
 Faster Time to Market – easy to scale software
 More flexibility in changing and modifying application services for business needs –
Full-scale integration of business processes
 Control over IT
 Minimized IT Service Management efforts mainly focused on availability –
 Well-defined SLAs between the corporation and the IT vendor
 More predictable cash flow – easier licensing based on access/usage of software
 Increased productivity and improved user satisfaction
 Automatic software upgrades with minimal outage
2.2.2 Limitations
 Businesses must rely on outside vendors to provide the software, keep that software
up and running, track and report accurate billing and facilitate a secure environment
for the business' data.
2.3 Platform as a Service
The Platform as a Service (PaaS) is a way to rent hardware, operating systems, storage and
network capacity over the Internet.
PaaS services are,
 Data services
 Application runtime
 Messaging & queueing
 Application management.
 The PaaS is a computing platform that abstracts the infrastructure, OS and middleware
to drive developer productivity.
 The PaaS is foundational elements to develop new applications
 E.g., Google Application Engine, Microsoft Azure, Coghead.
 Microsoft Azure
 Pay per role instance
 Add and remove instances based on demand
 Elastic computing!
 Load balancing is part of the Azure fabric and automatically allocated.
 The PaaS is the delivery of a computing platform and solution stack as a service.
 The solution stack is an integrated set of software that provides everything a developer
needs to build an application, for both software development and runtime.
2.3.1 PaaS offers the following
Facilities for application design
 Application development
 Application testing, deployment
 Application services are,
 Operating system
 Server-side scripting environment
 Database management system
 Server Software
 Support
 Storage
 Network access
 Tools for design and development
 Hosting
All these services may be provisioned as an integrated solution over the web
2.3.2 Properties and characteristics of PaaS
 Scalability
 Availability
 Manageability
 Performance
 Accessibility
2.3.3 PaaS Features
 It delivers the computing platform as service
 The capacities to abstract and control all the underlying resources
 It helps providers to provision even the smallest unit of resources
 To provide a reliable environment for running applications and services
 Act as a bridge between consumer and hardware
 Consumers do not need to care about how to build, configure, manage and maintain the
backend environment
 It provides a development and testing platform for running developed applications
 Reduce the responsibility of managing the development and runtime environment
2.3.4 Advantages of PaaS

It enables the deployment of applications without the cost and complexity of buying and
managing the underlying hardware and software.
It provides everything required to support the complete life cycle of building and delivering web
applications and services entirely from the internet.
2.3.5 Disadvantages of PaaS
 Less flexible than IaaS
 Dependency on provider
 Adoption of software / system architecture required
 PaaS offerings are evolving from different directions:
 Evolving “upwards” from IaaS
 Amazon (Mail, Notification, Events, Databases, Workflow, etc.)
 Evolving “downwards” from SaaS
 Force.com – a place to host additional per-tenant logic.
 Google App Engine
 Evolving “sideways” from middleware platforms
 WSO2, Tibco, vmWare, Oracle, IBM
 Generic PaaS Model
Infrastructure as a Service (IaaS)
 This service offers the computing architecture and infrastructure, i.e., all types of
computing resources.
All resources are offered in a virtual environment, so that multiple users can access them.
The resources are including,
Data storage
 Virtualization
 Servers
 Networking
 The vendors are responsible for managing all the computing resources which they
provided.
 It allows existing applications to be run on a supplier’s hardware.
 User tasks in the IaaS cloud
 Multiple users can access virtual instances
The user is responsible for handling other resources such as:
 Applications
 Data

 Runtime
 Middleware
Example IaaS service providers (a launch sketch follows this list):
 AWS EC2 / S3 / RDS
 GoGrid
 RackSpace
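
As a hedged illustration of renting IaaS resources programmatically, the sketch below uses the boto3 SDK for AWS EC2 (credentials are assumed to be configured; the AMI ID shown is a placeholder, not a real image):

import boto3  # AWS SDK for Python

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Request a single small virtual server; the AMI ID below is a placeholder.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", instances[0].id)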
2.3.7 Pros
 The cloud provides the infrastructure with enhanced scalability i.e. dynamic
workloads are supported
 It is flexible
 Cons
 Security issues
 Network and service delay
2.3.8 Comparison of cloud services
 Blue indicates the levels owned and operated by the organization / Customer
 White levels are run and operated by the service provider / Operator
 Cloud Computing Services Pros
 Lower computer costs
 Improved performance:
 Reduced software costs
 Instant software updates
 Improved document format compatibility
 Unlimited storage capacity
 Increased data reliability
 Universal document access
 Latest version availability
 Easier group collaboration
 Device independence
2.3.9 Cons
 Requires a constant Internet connection
 Does not work well with low-speed connections
 Features might be limited
 Can be slow
 Stored data can be lost

 Stored data might not be secure
 Service Oriented Architecture
2.4 Service
A service is a program you interact with via message exchanges. A system is a set of deployed
services cooperating in a given task.
2.5 Architecture
It serves as the blueprint for the system and drives:
 Team structure
 Documentation organization
 Work breakdown structure
 Scheduling, planning, budgeting
 Unit testing, integration
 Architecture establishes the communication and coordination mechanisms among
components
2.5.1 Software Architecture
 It is a collection of the fundamental decisions about a software product/solution,
designed to meet the project's quality attributes (i.e., requirements).
 The architecture includes the main components, their main attributes, and their
collaboration (i.e. interactions and behavior) to meet the quality attributes.
 Architecture can and usually should be expressed in several levels of abstraction
(depending on the project's size).
 Architecture is communicated from multiple viewpoints
2.5.2 Service Oriented Architecture (SOA)
 It is a design pattern or software architecture which provides application functionality
as a service to other applications.
 The basic principles of service-oriented architecture are independent of vendors,
products and technologies.
 The services are provided to the other components through a communication protocol
over a network.
 Every service has its own business logic. A typical SOA is organized into the following layers (a toy service sketch follows this list):
 Consumer interface layer – this layer is used by the customer
 Business process layer – it provides the business process flow
 Service layer – this layer comprises of all the services in the enterprises
 Component layer – this layer has the actual service to be provided

 Operational system layer – this layer contains the data model
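
As a toy sketch of a service in the component layer (plain HTTP with JSON using only the Python standard library; the port and payload are invented for the example, not part of these notes):

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class InventoryService(BaseHTTPRequestHandler):
    # A stateless service: every request is self-contained, so no session
    # state is kept between calls (service statelessness).
    def do_GET(self):
        body = json.dumps({"item": "tyre", "in_stock": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Any consumer (another application or the business-process layer) can call
    # http://localhost:8080/ without vendor- or platform-specific coupling.
    HTTPServer(("localhost", 8080), InventoryService).serve_forever()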
SOA – Architecture in details
2.5.3 Principles of SOA
 Service loose coupling – a service does not depend heavily on the implementation
details of other services or the outside world
 Service reusability – services can be used again and again instead of rewriting them
 Service statelessness – they usually do not maintain the state to reduce the resource
consumption
 Service discoverability – services are registered in registry, so that the client can
discover them in the service registry.
 Applications
 Manufacturing – E.g. Inventory management
 Insurance – Take up the insurance of the employees in companies
Companies using SOA
 Banking Sector
 ICICI Bank
 HDFC Bank
 UTI Bank etc..
 Manufacturing Sector
 Apollo Tyres
 Maruthi
 Hyundai
2.5.4 Advantages
 Interoperability
 Programs run across different vendors / locations
 To interact with different networks
 Different operating systems
 Solution: XML
 Scalability
 To extend the processing power of the servers
 Reusability
 If any new systems are introduced, there is no need to create a new service every time.
 Parallel application development
 Modular approach

 Easy maintenance
 Greater Reliability
 Improved Software Quality
 Platform Independence
 Increased Productivity
2.5.5 Disadvantages
SOA is not well suited to the following kinds of applications:
 Stand-alone, non-distributed applications
 Homogenous application environments
 GUI based applications
 Short lived applications
 Real time applications
 One-way asynchronous communication applications

ELASTIC COMPUTING:
Elastic computing is the ability to quickly expand or decrease computer processing, memory and storage
resources to meet changing demands without worrying about capacity planning and engineering for peak
usage. Typically controlled by system monitoring tools, elastic computing matches the amount of
resources allocated to the amount of resources actually needed without disrupting operations. With cloud
elasticity, a company avoids paying for unused capacity or idle resources and does not have to worry
about investing in the purchase or maintenance of additional resources and equipment.
While security and limited control are concerns to take into account when considering elastic cloud
computing, it has many benefits. Elastic computing is more efficient than your typical IT infrastructure,
is typically automated so it does not have to rely on human administrators around the clock and offers
continuous availability of services by avoiding unnecessary slowdowns or service interruptions.
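
As a small illustration of programmatic elasticity (assuming the boto3 Auto Scaling API and a hypothetical scaling group named "web-tier"), capacity can be adjusted with an API call:

import boto3  # AWS SDK for Python; credentials and the scaling group are assumed to exist

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Grow the hypothetical "web-tier" group to 6 instances ahead of a traffic peak...
autoscaling.set_desired_capacity(AutoScalingGroupName="web-tier", DesiredCapacity=6)

# ...and shrink it back to 2 instances when demand subsides, paying only for what is used.
autoscaling.set_desired_capacity(AutoScalingGroupName="web-tier", DesiredCapacity=2)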

On-Demand Computing:
On-demand computing is a business computing model in which computing resources are made available
to the user on an “as needed” basis. Rather than all at once, on-demand computing allows cloud hosting
companies to provide their clients with access to computing resources as they become necessary.

On-demand computing is a delivery model in which computing resources are made available to the user
as needed. The resources may be maintained within the user's enterprise, or made available by a cloud
service provider. When the services are provided by a third-party, the term cloud computing is often used
as a synonym for on-demand computing.
The on-demand model was developed to overcome the common challenge enterprises face in meeting
fluctuating demands efficiently. Because an enterprise's demand on computing resources can
vary drastically from one time to another, maintaining sufficient resources to meet peak requirements can
be costly. Conversely, if an enterprise tried to cut costs by only maintaining minimal computing
resources, it is likely there will not be sufficient resources to meet peak requirements.
The on-demand model provides an enterprise with the ability to scale computing resources up or down
with the click of a button, an API call or a business rule. The model is characterized by three attributes:
scalability, pay-per-use and self-service. Whether the resource is an application program that helps team
members collaborate or additional storage for archiving images, the computing resources are elastic,
metered and easy to obtain.
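
As a small, hypothetical illustration of the pay-per-use attribute, the snippet below meters consumption and bills it at made-up rates:

# Hypothetical pay-per-use metering: the rates and usage figures are illustrative only.
RATE_PER_INSTANCE_HOUR = 0.05   # dollars
RATE_PER_GB_MONTH = 0.02        # dollars

usage = {"instance_hours": 730, "storage_gb_months": 250}

bill = (usage["instance_hours"] * RATE_PER_INSTANCE_HOUR
        + usage["storage_gb_months"] * RATE_PER_GB_MONTH)

# The customer pays only for metered consumption, not for idle capacity.
print(f"Monthly charge: ${bill:.2f}")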
Many on-demand computing services in the cloud are so user-friendly that non-technical end users can
easily acquire computing resources without any help from the organization's information technology (IT)
department. This has advantages because it can improve business agility, but it also has disadvantages
because shadow IT can pose security risks. For this reason, many IT departments carry out periodic cloud
audits to identify greynet on-demand applications and other rogue IT.
Advantages of On-Demand Computing:
The on-demand computing model was developed to overcome the common challenge that enterprises
encountered of not being able to meet unpredictable, fluctuating computing demands in an efficient
manner.
Businesses today need to be agile and need the ability to scale resources easily and quickly based on
rapidly changing market needs.
Because an enterprise’s demand for computing resources can vary dramatically from one period of time
to another, maintaining sufficient resources to meet peak requirements can be costly.
However, with on-demand computing, companies can cut costs by maintaining minimal computing
resources until they run into the need to increase them, meanwhile only paying for what they use.
Industry experts predict on-demand computing to soon be the most widely used computing model for
enterprises.
In fact, IBM’s vice-president of technology and strategy stated, “The technology is at a point where we
can start to move into an era of on-demand computing. I give it between two and four years to reach a
level of maturity.”

SCHOOL OF COMPUTING

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

UNIT – III – Cloud Computing– SIT1304


UNIT III
Deployment of applications on cloud, Hypervisor, Case Studies, Xen, VMware, Eucalyptus,
Amazon EC2, KVM, VirtualBox, Hyper-V

Deployment of Applications on cloud:


Step 1: Prepare to Deploy

Before you deploy your app to Cloud Foundry, make sure that:

 Your app is cloud-ready. Cloud Foundry behaviors related to file storage, HTTP sessions, and port usage
may require modifications to your app.
 All required app resources are uploaded. For example, you may need to include a database driver.
 Extraneous files and artifacts are excluded from upload. You should explicitly exclude extraneous files
that reside within your app directory structure, particularly if your app is large.
 An instance of every service that your app needs has been created.
 Your Cloud Foundry instance supports the type of app you are going to deploy, or you have the URL of
an externally available buildpack that can stage the app.

Step 2: Know Your Credentials and Target

Before you can push your app to Cloud Foundry you need to know:

 The API endpoint for your Cloud Foundry instance. Your username and password for your Cloud
Foundry instance.
 The organization and space where you want to deploy your app. A Cloud Foundry workspace is
organized into organizations, and within them, spaces. As a Cloud Foundry user, you have access to one
or more organizations and spaces.

Step 3: (Optional) Configure Domains

Cloud Foundry directs requests to an app using a route, which is a URL made up of a host and a domain.

 The name of an app is the default host for that app, unless you specify the host name with the -n flag.
 Every app is deployed to an app space that belongs to a domain. Every Cloud Foundry instance has a
default domain defined. You can specify a non-default, or custom, domain when deploying, provided that
the domain is registered and is mapped to the organization which contains the target app space.

Step 4: Determine Deployment Options

Before you deploy, you need to decide on the following:

 Name: You can use any series of alpha-numeric characters as the name of your app.
 Instances: Generally speaking, the more instances you run, the less downtime your app will experience.
If your app is still in development, running a single instance can simplify troubleshooting. For any
production app, we recommend a minimum of two instances.
 Memory Limit: The maximum amount of memory that each instance of your app can consume. If an
instance exceeds this limit, Cloud Foundry restarts the instance.
 Start Command: This is the command that Cloud Foundry uses to start each instance of your app. This
start command varies by app framework.
 Subdomain (host) and Domain: The route, which is the combination of subdomain and domain, must
be globally unique. This is true whether you specify a portion of the route or allow Cloud Foundry to use
defaults.
 Services: Apps can bind to services such as databases, messaging, and key-value stores. Apps are
deployed into app spaces. An app can only bind to a service that has an existing instance in the target app
space.

Define Deployment Options

You can define deployment options on the command line, in a manifest file, or both together.
See Deploying with App Manifests to learn how app settings change from push to push, and how
command-line options, manifests, and commands like cf scale interact.

When you deploy an app while it is running, Cloud Foundry stops all instances of that app and then
deploys. Users who try to run the app get a “404 not found” message while cf push runs. Stopping all
instances is necessary to prevent two versions of your code from running at the same time. A worst-case
example would be deploying an update that involved a database schema migration, because instances
running the old code would not work and users could lose data.

Cloud Foundry uploads all app files except version control files and folders with names such
as .svn, .git, and _darcs. To exclude other files from upload, specify them in a .cfignore file in the
directory where you run the push command. For more information, see the Ignore Unnecessary Files
When Pushing section of the Considerations for Designing and Running an App in the Cloud topic.

For more information about the manifest file, see the Deploying with App Manifests topic.

# Set the default LANG for your apps


export LANG=en_US.UTF-8
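
A minimal manifest might look like the following sketch (the application name, host and values are hypothetical; the env block supplies the LANG variable shown above):

---
applications:
- name: my-app            # hypothetical app name
  memory: 512M            # memory limit per instance
  instances: 2            # run two instances for availability
  host: my-app-demo       # subdomain portion of the route
  env:
    LANG: en_US.UTF-8     # default locale for the app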

Step 5: Push the App

Run the following command to deploy an app without a manifest:

cf push APP-NAME

Hypervisor

A hypervisor is a form of virtualization software used in cloud hosting to divide and allocate resources
on various pieces of hardware. The program which provides partitioning, isolation or abstraction is called
a virtualization hypervisor. A hypervisor is a hardware virtualization technique that allows multiple guest
operating systems (OS) to run on a single host system at the same time. A hypervisor is sometimes also
called a virtual machine manager (VMM).

Types of Hypervisor –

TYPE-1 Hypervisor:
A Type 1 hypervisor runs directly on the underlying host system. It is also known as a "Native Hypervisor"
or "Bare-metal Hypervisor". It does not require any base server operating system and has direct access to
hardware resources. Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer and the
Microsoft Hyper-V hypervisor.

TYPE-2 Hypervisor:
A Type 2 hypervisor runs on a host operating system that in turn runs on the underlying host system. It is
also known as a "Hosted Hypervisor". It is basically software installed on an operating system; the
hypervisor asks the operating system to make hardware calls. Examples of Type 2 hypervisors include
VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs.

Choosing the right hypervisor


Type 1 hypervisors offer much better performance than Type 2 ones because there’s no middle layer,
making them the logical choice for mission-critical applications and workloads. But that’s not to say that
hosted hypervisors don’t have their place – they’re much simpler to set up, so they’re a good bet if, say,
you need to deploy a test environment quickly. One of the best ways to determine which hypervisor meets
your needs is to compare their performance metrics. These include CPU overhead, amount of maximum
host and guest memory, and support for virtual processors. The following factors should be examined
before choosing a suitable hypervisor:

1. Understand your needs: The company and its applications are the reason for the data center (and
your job). Besides your company's needs, you (and your co-workers in IT) also have your own
needs. Needs for a virtualization hypervisor are:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support
2. The cost of a hypervisor: For many buyers, the toughest part of choosing a hypervisor is striking the
right balance between cost and functionality. While a number of entry-level solutions are free, or
practically free, the prices at the opposite end of the market can be staggering. Licensing frameworks also
vary, so it’s important to be aware of exactly what you’re getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the performance of their
physical counterparts, at least in relation to the applications within each server. Everything beyond
meeting this benchmark is profit.
4. Ecosystem: It’s tempting to overlook the role of a hypervisor’s ecosystem – that is, the availability of
documentation, support, training, third-party developers and consultancies, and so on – in determining
whether or not a solution is cost-effective in the long term.
5. Test for yourself: You can gain basic experience from your existing desktop or laptop. You can run
both VMware vSphere and Microsoft Hyper-V in either VMware Workstation or VMware Fusion to
create a nice virtual learning and testing environment.

HYPERVISOR REFERENCE MODEL

There are three main modules that coordinate in order to emulate the underlying hardware:
1. Dispatcher
2. Allocator
3. Interpreter
DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the instructions of the virtual
machine instance to one of the other two modules.
ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided to the virtual machine
instance. This means that whenever the virtual machine tries to execute an instruction that results in
changing the machine resources associated with it, the allocator is invoked by the dispatcher.
INTERPRETER:
The interpreter module consists of interpreter routines. These are executed whenever the virtual machine
executes a privileged instruction.
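
A highly simplified, purely illustrative Python sketch of how these three modules cooperate (the instruction names are invented for the example):

class Allocator:
    def allocate(self, vm, instruction):
        # Decide which system resources to hand to the VM instance.
        print(f"allocating resources for {vm} to execute '{instruction}'")

class Interpreter:
    def execute(self, vm, instruction):
        # Interpreter routines run whenever the VM issues a privileged instruction.
        print(f"emulating privileged instruction '{instruction}' for {vm}")

class Dispatcher:
    # Entry point of the monitor: reroutes each trapped instruction
    # to the allocator or to the interpreter.
    def __init__(self):
        self.allocator = Allocator()
        self.interpreter = Interpreter()

    def dispatch(self, vm, instruction):
        if instruction.startswith("set_"):        # would change machine resources
            self.allocator.allocate(vm, instruction)
        else:                                     # other privileged instructions
            self.interpreter.execute(vm, instruction)

monitor = Dispatcher()
monitor.dispatch("vm-1", "set_page_table")
monitor.dispatch("vm-1", "read_clock")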

Virtualization Case Studies:
KVM:
SCHOOL OF COMPUTING

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

UNIT – IV – Cloud computing – SIT1304


UNIT 4
CLOUD COMPUTING FOR EVERYONE
Cloud data center - Energy efficiency in data center - Mobile Cloud Computing Service Model -
Collaboration with services and applications - CRM Management - Project Management - Email -
Online Database - Calendar Schedule - Word Processing - Presentation - Spreadsheet - Databases -
Network - Social Network and Groupware

CLOUD DATA CENTERS:


A data center (or datacenter) is a facility composed of networked computers and storage that
businesses or other organizations use to organize, process, store and disseminate large amounts of
data. A business typically relies heavily upon the applications, services and data contained within a
data center, making it a focal point and critical asset for everyday operations.

Data centers are not a single thing, but rather, a conglomeration of elements. At a minimum, data
centers serve as the principal repositories for all manner of IT equipment, including servers, storage
subsystems, networking switches, routers and firewalls, as well as the cabling and physical racks
used to organize and interconnect the IT equipment. A data center must also contain an adequate
infrastructure, such as power distribution and supplemental power subsystems, including electrical
switching; uninterruptable power supplies; backup generators and so on; ventilation and data center
cooling systems, such as computer room air conditioners; and adequate provisioning for network
carrier (telco) connectivity. All of this demands a physical facility with physical security and
sufficient physical space to house the entire collection of infrastructure and equipment.

Data center consolidation and colocation

There is no requirement for a single data center, and modern businesses may use two or more data
center installations across multiple locations for greater resilience and better application
performance, which lowers latency by locating workloads closer to users.

Conversely, a business with multiple data centers may opt to consolidate data centers, reducing the
number of locations in order to minimize the costs of IT operations. Consolidation typically occurs
during mergers and acquisitions when the majority business doesn't need the data centers owned by
the subordinate business.

Alternatively, data center operators can pay a fee to rent server space and other hardware in
a colocation facility. Colocation is an appealing option for organizations that want to avoid the
large capital expenditures associated with building and maintaining their own data centers. Today,
colocation providers are expanding their offerings to include managed services, such as
interconnectivity, allowing customers to connect to the public cloud.

Data center tiers

Data centers are not defined by their physical size or style. Small businesses may operate
successfully with several servers and storage arrays networked within a convenient closet or small
room, while major computing organizations, such as Facebook, Amazon or Google, may fill an
enormous warehouse space with data center equipment and infrastructure. In other cases, data
centers can be assembled in mobile installations, such as shipping containers, also known as data
centers in a box, which can be moved and deployed as required.

However, data centers can be defined by various levels of reliability or resilience, sometimes
referred to as data center tiers. In 2005, the American National Standards Institute (ANSI) and the
Telecommunications Industry Association (TIA) published standard ANSI/TIA-942,
"Telecommunications Infrastructure Standard for Data Centers," which defined four tiers of data
center design and implementation guidelines. Each subsequent tier is intended to provide more
resilience, security and reliability than the previous tier. For example, a tier 1 data center is little
more than a server room, while a tier 4 data center offers redundant subsystems and high security.

Data center architecture and design

Although almost any suitable space could conceivably serve as a "data center," the deliberate
design and implementation of a data center requires careful consideration. Beyond the basic issues
of cost and taxes, sites are selected based on a multitude of criteria, such as geographic location,
seismic and meteorological stability, access to roads and airports, availability of energy and
telecommunications and even the prevailing political environment.

Once a site is secured, the data center architecture can be designed with attention to the mechanical
and electrical infrastructure, as well as the composition and layout of the IT equipment. All of these
issues are guided by the availability and efficiency goals of the desired data center tier.

Energy consumption and efficiency

Data center designs also recognize the importance of energy efficiency. A simple data center may
need only a few kilowatts of energy, but an enterprise-scale data center installation can demand
tens of megawatts or more. Today, the green data center, which is designed for minimum
environmental impact through the use of low-emission building materials, catalytic converters and
alternative energy technologies, is growing in popularity.

Organizations often measure data center energy efficiency through a metric called power usage
effectiveness (PUE), which represents the ratio of total power entering the data center divided by
the power used by IT equipment. However, the subsequent rise of virtualization has allowed for
much more productive use of IT equipment, resulting in much higher efficiency, lower energy use
and energy cost mitigation. Metrics such as PUE are no longer central to energy efficiency goals,
but organizations may still gauge PUE and employ comprehensive power and cooling analyses to
better understand and manage energy efficiency.
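
As a quick, hedged illustration of how the PUE ratio is computed, the sketch below uses purely hypothetical power figures; real facilities derive these values from metered measurements.

# Power usage effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt reaches the IT gear; real values are higher.
total_facility_power_kw = 1500.0   # hypothetical: IT load plus cooling, lighting, losses
it_equipment_power_kw = 1000.0     # hypothetical IT load

pue = total_facility_power_kw / it_equipment_power_kw
print(pue)   # 1.5 -> for every watt of compute, half a watt goes to facility overhead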

Data center security and safety

Data center designs must also implement sound safety and security practices. For example, safety
is often reflected in the layout of doorways and access corridors, which must accommodate the
movement of large, unwieldy IT equipment, as well as permit employees to access and repair the
infrastructure. Fire suppression is another key safety area, and the extensive use of sensitive, high-
energy electrical and electronic equipment precludes common sprinklers. Instead, data centers
often use environmentally friendly chemical fire suppression systems, which effectively starve a
fire of oxygen while mitigating collateral damage to the equipment. Since the data center is also a
core business asset, comprehensive security measures, like badge access and video surveillance,
help to detect and prevent malfeasance by employees, contractors and intruders.

Data center infrastructure management and monitoring

Modern data centers make extensive use of monitoring and management software. Software such
as data center infrastructure management tools allow remote IT administrators to oversee the
facility and equipment, measure performance, detect failures and implement a wide array of
corrective actions, without ever physically entering the data center room.
The growth of virtualization has added another important dimension to data center infrastructure
management. Virtualization now supports the abstraction of servers, networks and storage,
allowing every computing resource to be organized into pools without regard to their physical
location. Administrators can then provision workloads, storage instances and even network
configuration from those common resource pools. When administrators no longer need those
resources, they can return them to the pool for reuse. All of these actions can be implemented
through software, giving traction to the term software-defined data center.
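
The pooling and provisioning workflow described above can be pictured with a minimal sketch. The class, capacity figures and workload names below are illustrative assumptions, not the API of any particular data center management product.

class ResourcePool:
    """Abstracted capacity (vCPUs, GB of RAM) pooled without regard to physical location."""
    def __init__(self, vcpus, ram_gb):
        self.vcpus = vcpus
        self.ram_gb = ram_gb

    def provision(self, vcpus, ram_gb):
        # Carve a workload's resources out of the shared pool.
        if vcpus > self.vcpus or ram_gb > self.ram_gb:
            raise RuntimeError("insufficient capacity in the pool")
        self.vcpus -= vcpus
        self.ram_gb -= ram_gb
        return {"vcpus": vcpus, "ram_gb": ram_gb}

    def release(self, allocation):
        # Return resources to the pool for reuse when no longer needed.
        self.vcpus += allocation["vcpus"]
        self.ram_gb += allocation["ram_gb"]

pool = ResourcePool(vcpus=256, ram_gb=1024)    # aggregated across many physical hosts
web_vm = pool.provision(vcpus=4, ram_gb=16)    # an administrator provisions a workload
pool.release(web_vm)                           # ...and later returns it to the pool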

Data center vs. cloud

Data centers are increasingly implementing private cloud software, which builds on virtualization
to add a level of automation, user self-service and billing/chargeback to data center administration.
The goal is to allow individual users to provision workloads and other computing resources on-
demand, without IT administrative intervention.

It is also increasingly possible for data centers to interface with public cloud providers. Platforms
such as Microsoft Azure emphasize the hybrid use of local data centers with Azure or other public
cloud resources. The result is not an elimination of data centers, but rather, the creation of a
dynamic environment that allows organizations to run workloads locally or in the cloud or to move
those instances to or from the cloud as desired.

Mobile Cloud application:
Mobile Cloud Computing, or MCC, merges the fast-growing Cloud Computing
Applications market with the ubiquitous smartphone. One of the most ground-breaking blends of
modern-day technologies, MCC has proved itself to be highly beneficial to all the mobile users and
cloud-based service-providers as well.
In this technique, user-friendly mobile applications are developed, which are powered by and
hosted using the cloud computing technology. The ‘mobile cloud’ approach enables the apps
developers to build applications designed especially for mobile-users, which can be used without
being bound to the operating system of the device or its capacity to store data. Here, the tasks of
data-processing and data storage are performed outside the mobile devices.

The ability of MCC to allow the device to run cloud-based web-applications unlike other native
apps differentiates it from the concept of ‘Mobile Computing’. Here the users can remotely access
the stored applications and their associated data anytime over the Internet by subscribing to the cloud
services. Although most devices already run a mix of web-based and native apps, the trend these
days seems to be shifting more toward the services and convenience that are offered by a mobile
cloud.
Researchers are putting in serious efforts in forming a strong and symbiotic platform, coined
the ‘Third Platform,’ that would bring together the mobile and the cloud. Experts predict this
platform to revolutionize further the uprising of MCC which has enabled its users a better means to
access and store their data along with latest data synchronization techniques, improved reliability
and better performance. All these beneficial aspects have inspired a lot of people to consider MCC
for their smart-phones.
Mobile Cloud Computing confirms the impact of certain trends and factors. Here are the factors
that have had an astounding impact as far as MCC is concerned.
 Enhanced broadband coverage: Better connectivity is being rendered to our mobile devices
via 4G, WiFi, femto-cells, fixed wireless etc.
 Abundant Storage: Cloud-based mobile apps have proved themselves to be more capable than
any smart-phone, especially in terms of the storage space that is offered. Cloud apps’ server-
based computing infrastructure that is accessible through mobile interface of an app, is quite a
contrast to the limited data-storage space and processing power in a mobile device.
 Budding Technologies: Advanced technologies like HTML5, CSS3, hypervisor virtual
machines for smart-phones, cloudlets and Web 4.0 etc are contributing a lot toward MCC’s
rising popularity.
 Latest Trends: Smart-phones have given us 24/7 access to business applications and
other collaborative services, increasing the scope for productivity from anywhere, at any given
time.

Cloud Computing & The Future Scope of Android

Today, the popularity of this Linux-based operating system is quite apparent after looking at the
massive chunk of smart-phone users relying on Android. It has a large community of developers on
its platform that develop applications to increase the devices’ functionality for their users.
Introduction of cloud-computing on this platform has taken the user experience of Android
applications to another level altogether. In fact, both, the Android app-developers and smart-phone
users are benefiting from the power of cloud computing.
Various layers of Android programming model have smoothly accommodated the scope of creating
secure applications that are specially developed for the cloud environment. Also, its open-source
policy allows the complex cloud-computing applications to be run by the users anywhere.
For Android app-developers, developing applications in the traditional environment is quite
different from developing them in the cloud computing environment. In the traditional environment, the need to
maintain complete infrastructure at the back-end shifts the focus on maintaining the environment
instead of making innovative applications. Whereas, in case of apps for the cloud environment, it’s
the cloud-service providers who manage the infrastructure, software stack and hardware
maintenance. This allows developers to write mobile cloud applications that profit from cloud
computing and can deliver cost-benefits and other such advantages to the users.
Most of us just consider games and other daily-life simplifying apps as the only inspiration for the
developers to create Android applications, but a quick reality-check on the app-market reveals that
enterprise apps are catching up and reaching a market share that attracts significant interest. In fact,
research analysts have found mobile-centric applications and interfaces to be among the top 10
technological trends in 2018 and 2019.
Here are two well-known examples of cloud-based Android applications:
 Dropbox: Operated by Dropbox Inc., this application is a file-hosting service which offers
cloud storage. It lets the users access their files in the ‘Dropbox’ from their Android devices,
which can be synced to other computers or mobile devices.
 Amazon Cloud Player: One of the most popular applications on Android platform, Amazon
Cloud Player is used to store and play MP3 files. Here the ‘Cloud Drive’ acts as a hard drive set

in the cloud. Users can play their MP3 files via the web or they can conveniently stream them
on their Android devices using Amazon ‘Cloud’ MP3 application.
Android -Mobile Cloud Computing-Robotics – A surreal combination

Organizations and companies have changed their approach towards designing and conceptualizing
new products after including cloud-computing in their calculations. Users and developers’ newly-
earned ability to access the immensely flexible and cost-effective power of cloud computing has
helped develop services that must have seemed simply infeasible just a few years back. A perfect
example of this is Voice Search by Google for mobile devices. ‘Voice Search’ has enabled users to
convey a voice query and have it transcribed accurately on their devices in real time. The credit for
this goes to Google’s ability to use the vast amount of search data to refine and define such voice
queries with cloud infrastructure. Ever since its introduction, smart voice search services have grown
steadily in popularity; today, almost 25% of queries on Android devices use voice search.
Robotics and cloud computing can also be a powerful combination: offloading computation to the
cloud gives robots new capabilities while using less battery power and memory, and adding mobile
connectivity extends those capabilities even further.
Reasons why cloud computing is the future of mobile devices

The interface of MCC has undeniably enabled us to accommodate videos, music files, digital
images and more, right into our petite smart-phones. Here are a few reasons that explain why MCC
is considered to be the future for mobile devices:
1. Extended Battery Life: As the major role of processing is handled by the cloud, mobile devices’
battery usage is reduced automatically.
2. Abundant Storage Space: Enormous storage capacity that a mobile user can access happens to be
the most highlighted USP of the cloud service. Mobile users shall no longer need to worry about
their devices’ limited storage capacity and spend money on memory cards.
3. Improved data-synching techniques: Cloud storage enables the user to store and manage their
data by speedy data synchronization between the device and any other desktop or device chosen by

the user. This instantly benefits the users by eliminating their problems of storing all their data files
and maintaining a back-up.
4. Enhanced processing facilities: The processor of any mobile device determines its speed and
performance. However, in the case of mobile cloud computing, most of the processing is performed
at the cloud level. This takes the load off the device and thereby enhances its overall performance.
5. Superior user-experience: In case of MCC it is always the user who benefits the most by using
this platform. The wide range of benefits offered by this platform makes for an optimum
productivity and an enhanced user experience.
6. Scope to embrace new technologies: MCC can easily adjust to the ever-evolving nature of
technologies. It is capable enough to perform efficiently with all the upgrades in cloud computing
methods and changes in the smart-phones’ designs and features.

CRM MANAGEMENT:

Customer relationship management (CRM) describes all aspects of sales, marketing and service-
related interactions that a company has with its customers or potential customers. Both business-to-
consumer (B2C) and business-to-business (B2B) companies often use CRM systems to track and
manage communications through the Web, email, telephone, mobile apps, chat, social media and
marketing materials.

CRM Information, Tracking and Analytics

Information tracked in a CRM system might include contacts, sales leads, clients, demographic or
firmographic data, sales history, technical support and service requests, and more. CRM systems
can also automate many marketing, sales and support processes, helping companies provide a
consistent experience to customers and prospects, while also lowering their costs.

Some CRM solutions also offer advanced analytics that offer suggested next steps for staff when
dealing with a particular customer or contact. Business leaders can also use these analytics to
measure the effectiveness of their current marketing, sales and support efforts and to optimize their
various business processes.

The Customer Relationship Management Strategy


Customer relationship management is a business strategy that enables companies to improve in the
following areas:

Understanding existing customers' needs
Obtaining a 360-degree view of customers and prospects
Retaining customers through better customer experience and loyalty programs
Attracting new customers
Winning new clients and contracts
Increasing profitability
Decreasing customer management costs
Today's CRM Solutions
Many of today's most popular CRM solutions are delivered as cloud-based solutions. Because they
have Web-based interfaces, these tools allow sales teams to access customer and lead information
from any device in any location at any time of day. These software as a service (SaaS) solutions
tend to be more user-friendly than older CRM applications, and some include artificial intelligence
or machine learning features that can help organizations make better business decisions and
provide enhanced support and service to their customers.

The data captured by CRM solutions helps companies target the right prospects with the right
products, offer better customer service, cross-sell and up-sell more effectively, close deals, retain
current customers and better understand exactly who their customers are.

The Business Benefits of CRM Systems:


The biggest benefit most businesses realize when moving to a CRM system comes directly from
having all their business data stored and accessed from a single location. Before CRM systems
became commonplace in the 1990s and 2000s, customer data was spread out over office
productivity suite documents, email systems, mobile phone data and even paper note cards and
Rolodex entries.

Storing all the data from all departments (e.g., sales, marketing, customer service and HR) in a
central location gives management and employees immediate access to the most recent data when
they need it. Departments can collaborate with ease, and CRM systems help organizations to
develop efficient automated processes that improve business operations.

Other benefits include a 360-degree view of all customer information, knowledge of what
customers and the general market want, and integration with your existing applications to
consolidate all business information.

PROJECT MANAGEMENT:
The truly portable office is not only possible, but is quickly becoming the norm for many
businesses. Handheld devices were a big step toward the reality of working anywhere, and now
cloud computing has removed the final barrier. With project management software hosted in the
cloud, you can have everything you need at your fingertips, anywhere you happen to be.

Of course, every technology has its advantages and drawbacks. In this article, we'll take a look at
how companies can move their project management into the cloud, who can benefit, and what
pitfalls still remain for this lofty software solution. (For some background reading, check out
Project Management 101.)

Project Management Software, Meet the Cloud


Project management software coordinates and automates many of the more tedious functions of
managing projects. Most of these software programs include graphs and Gantt chart generation,
task assignment, time sheets and milestones. More advanced versions can also regulate resources,
monitor budgets, track expenses and calculate costs.
This method of working involves using programs and storing data through an internet connection.
Cloud software is hosted by the provider on remote servers, rather than being physically installed
on a company's machines. Microsoft OneDrive, Dropbox and your Amazon Kindle digital library
are examples of cloud storage. (To learn more about the overall benefits of cloud computing, read
A Beginner's Guide to the Cloud: What It Means for Small Business.)

The Benefits of Cloud Software


Cloud software comes with plenty of advantages. One of the primary benefits for project
management is super-easy sharing. Because the software is hosted in the cloud, an entire business
team can access the most recent tasks, schedules and progress updates anytime, which makes cloud
software ideal for travel and real-time collaboration.
Speaking of access, another great advantage to cloud software is portability. An internet connection
is all it takes to access the program anytime, anywhere, from any device, including a desktop
computer, laptop, smartphone or tablet.

There's also cost and time to consider. Traditional installed PM software can cost thousands, and
may require hardware upgrades. Installation and upgrades are not only expensive, they also slow
deployment. Cloud software, on the other hand, doesn't require installation, just a login and
password to access the latest version of the software. The pricing structure is also much easier on a
company's budget, as most cloud project management software is sold as subscription-based
software as a service (SaaS). This means low monthly payments instead of hundreds or thousands
upfront. In addition, most project management software doesn't require a long-term commitment.

In a nutshell, using the cloud deals with the following challenges, which all kinds of companies
face, whether they work in architecture, construction or telecommunications:
Distribution delays, time-zone problems
With cloud service, everyone can gain access to company data from a laptop, PC, smartphone or
tablet from anywhere there's a reliable internet connection.
Project collaboration
Forget cumbersome emails, employees can work on a project over the cloud and submit changes
that will be available to the rest of the group in minutes. This allows disparate employees to work
in near real time, just as if they were in the same room.
Backup
By having documents in the cloud, companies are protected against hardware and software failure.
Unlimited storage space
Cloud storage scales effectively without limit and is accessible from almost anywhere. This allows companies
to archive files, allowing team members to continue to access them in the future, even remotely.
What's the Risk?
There's always a catch, right? Despite its strengths, cloud project management software does have a
few disadvantages. Chief among them is security, which is an inherent risk for any online
transaction. (Read more about security risks in The Dark Side of the Cloud.)

Ever since cloud software began to grow in popularity, companies have cited security as a primary
concern. After all, using the cloud means storing all of a company's data – which may include trade
secrets, sensitive customer data, and company information – on someone else's servers. Those
servers could be vulnerable to hackers, viruses, or even natural disasters or physical theft.

Fortunately, cloud software vendors are aware of these risks, and most use the best available
security to protect their servers and their customers' data. After all, without satisfied customers,
cloud providers wouldn't be in business. In addition, cloud security has improved over time, and is
likely to continue to do so as cloud providers come up with new solutions to protect their clients'
data. Even so, companies should find out what security procedures and protocols are in place
before subscribing to any cloud software service.

Downtime is another potential problem. If the cloud provider runs into technical problems, its
clients will not be able to access their data. According to the International Working Group on
Cloud Computing Resiliency, the uptime of cloud providers ranged between 99.6 percent and 99.9
percent, for an average of 7.5 hours of unavailable time per year. This sounds pretty good, but
IWGCR states that it's a long way from the 99.99 percent reliability that's required of a mission-
critical system. Many large cloud software vendors – and even some smaller ones – have uptime
guarantees, so companies should look for one that carries the least risk. And of course, the most
crucial data should also be backed up in-house.

Where Cloud PM Software Works


While large corporations tend to stick to the more traditional installed PM software, cloud versions
are being embraced by companies that lack million-dollar IT budgets. The low initial costs,
minimal or non-existent IT infrastructure investment, pay-as-you-go rates and anywhere access of
cloud software are ideal for:

EMAIL:

WORD PROCESSING:

PRESENTATIONS:

SPREADSHEETS:

SOCIAL NETWORKS:

SCHOOL OF COMPUTING
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

UNIT – V – Cloud Computing – SIT1304


UNIT 5
CLOUD SECURITY
Cloud security - Security threats and solutions in cloud - Auditing protocol - Dynamic auditing -
Storage security - Privacy preserving - Fully homomorphic encryption - Big data security -
Cloud availability - DoS attack - Fault tolerance management in cloud computing -
Cloud computing in India
Cloud security is the protection of data, applications, and infrastructures involved in cloud
computing. Many aspects of security for cloud environments (whether it’s a public, private, or
hybrid cloud) are the same as for any on-premise IT architecture.

High-level security concerns—like unauthorized data exposure and leaks, weak access controls,
susceptibility to attacks, and availability disruptions—affect traditional IT and cloud systems alike.
Like any computing environment, cloud security involves maintaining adequate preventative
protections so you:

Know that the data and systems are safe.


Can see the current state of security.
Know immediately if anything unusual happens.
Can trace and respond to unexpected events.

SECURITY THREATS AND SOLUTIONS:

Audit Protocols
Audit protocols assist the regulated community in developing programs at individual facilities to
evaluate their compliance with environmental requirements under federal law. The protocols are

intended solely as guidance in this effort. The regulated community's legal obligations are
determined by the terms of applicable environmental facility-specific permits, as well as
underlying statutes and applicable federal, state and local law.
Environmental audit reports are useful to a variety of businesses and industries, local, state and
federal government facilities, as well as financial lenders and insurance companies that need to assess
environmental performance. The audit protocols are designed for use by persons with various
backgrounds, including scientists, engineers, lawyers and business owners or operators. The following
protocols provide detailed regulatory checklists and are provided in an easy to understand question
format for evaluating compliance:
Comprehensive Environmental Response, Compensation and Liability (CERCLA)

Protocol for Conducting Environmental Compliance Audits and the Comprehensive


Environmental Response, Compensation and Liability Act (12/1/98)

Clean Water Act (CWA)


Protocol for Conducting Environmental Compliance Audits under the Stormwater
Program (1/15/05) Guidance including detailed regulatory checklists to assess environmental
performance in the stormwater program.
Protocol for Conducting Environmental Compliance Audits for Municipal Facilities under US
EPA's Wastewater Regulations (12/1/00).

Emergency Planning and Community Right-to-Know Act (EPCRA)

Protocol for Conducting Environmental Compliance Audits under the Emergency Planning and
Community Right-to-Know Act and CERCLA Section 103 (3/1/01)

Federal Insecticide, Fungicide and Rodenticide Act (FIFRA)

Protocol for Conducting Environmental Compliance Audits under the Federal Insecticide,
Fungicide, and Rodenticide Act (FIFRA) (9/1/00)

Resource Conservation and Recovery Act (RCRA)


Protocol for Conducting Environmental Compliance, Audits of Facilities Regulated under
Subtitle D of RCRA (3/1/00)
Protocol for Conducting Environmental Compliance, Audits of Storage Tanks under the
Resource Conservation and Recovery Act (3/1/00)
Protocol for Conducting Environmental Compliance, Audits of Treatment, Storage and Disposal
Facilities under the Resource Conservation and Recovery Act (12/1/98)

Safe Drinking Water Act (SWDA)

Protocols for Conducting Environmental Compliance, Audits of Public Water Systems under the
Safe Drinking Water Act (3/1/00)

Toxic Substances Control Act (TSCA)

Protocol for Conducting Environmental Compliance Audits of Facilities with PCBs, Asbestos,
and Lead-based Paint Regulated under TSCA (3/1/00)

Federal Facilities

Environmental Audit Program Design Guidelines for Federal Agencies (EPA 300-B-96-
011) (4/30/97)

In addition, there is a "how to" manual on designing and implementing environmental


compliance auditing programs for federal agencies and facilities.

DYNAMIC AUDITING:
In today's complex business world of technology, disruption and globalisation, stakeholders,
regulators and the public are scrutinising the stability and accountability of organisations with
intensity.
While financial assurance will always be vital, this challenging new environment means there is room
for Audit to play a bigger role. With its access to every corner of a company, and its deep knowledge
of a company’s processes, finances and strategy, Audit is in a unique position to help organisations
see the potential for improvements – which in turn can bring confidence to markets and stakeholders.
Drawing on the deep expertise of our people, along with advanced data and analytics technology, our
Dynamic Audit offering can be enhanced with tailored procedures into risk, culture and processes, as
well as shining a light on where a company’s data and analytics could be harnessed.
As each organisation is unique, each KPMG Dynamic Audit is tailored to what will deliver the best
audit quality and independent insights. In addition to providing a high quality audit opinion, our audit
approach can assist with minimising risk, finding opportunities, making better decisions, and helping
the organisation stay ahead of the curve in a challenging environment.

STORAGE SECURITY:

Storage security is the collective processes, tools and technologies that ensure that only authorized
and legitimate users store, access and use storage resources. It enables better security of any storage
resource through the implementation of required technologies and policies on storage access and
consumption and the denial of access to all unidentified and potentially malicious users.

Storage security is a broad term that encompasses the implementation and management of security
across all layers of a storage environment. This includes storage hardware, software, networks
and/or the physical security of storage resources. Typically, storage security primarily deals with
implementation at the software or logical layer. This is achieved through several

techniques such as encrypting/encoding data at rest and in motion, firewalling storage servers and
implementing enterprise-wide identity and access management (IAM). Besides individuals, storage
security also encompasses the management and protection of storage resources from unverified
applications and services.

Fully Homomorphic Encryption

Homomorphic encryption is a form of encryption that allows computation on ciphertexts, generating


an encrypted result which, when decrypted, matches the result of the operations as if they had been
performed on the plaintext.
Homomorphic encryption can be used for privacy-preserving
outsourced storage and computation. This allows data to be encrypted and out-sourced to
commercial cloud environments for processing, all while encrypted. In highly regulated industries,
such as health care, homomorphic encryption can be used to enable new services by removing
privacy barriers inhibiting data sharing. For example, predictive analytics in health care can be hard
to apply due to medical data privacy concerns, but if the predictive analytics service provider can
operate on encrypted data instead, these privacy concerns are diminished.
Homomorphic encryption is a form of encryption with an additional evaluation capability for
computing over encrypted data without access to the secret key. The result of such a computation
remains encrypted. Homomorphic encryption can be viewed as an extension of either symmetric-key
or public-key cryptography. Homomorphic refers to homomorphism in algebra: the encryption and
decryption functions can be thought as homomorphisms between plaintext and ciphertext spaces.
Homomorphic encryption includes multiple types of encryption schemes that can perform different
classes of computations over encrypted data.[1] Some common types of homomorphic encryption are
partially homomorphic, somewhat homomorphic, leveled fully homomorphic, and fully homomorphic
encryption. The computations are represented as either Boolean or arithmetic circuits. Partially
homomorphic encryption encompasses schemes that support the evaluation of circuits consisting of
only one type of gate, e.g., addition or multiplication. Somewhat homomorphic encryption schemes
can evaluate two types of gates, but only for a subset of circuits. Leveled fully homomorphic
encryption supports the evaluation of arbitrary circuits of bounded (pre-determined) depth. Fully
homomorphic encryption (FHE) allows the evaluation of arbitrary circuits of unbounded depth, and is
the strongest notion of homomorphic encryption.
For the majority of homomorphic encryption schemes, the multiplicative depth of circuits is the
main practical limitation in performing computations over encrypted data.
Homomorphic encryption schemes are inherently malleable. In terms of malleability,
homomorphic encryption schemes have weaker security properties than non-homomorphic
schemes.
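
A minimal sketch of the idea is given below, using the multiplicative homomorphic property of unpadded ("textbook") RSA, which is a partially homomorphic scheme. The tiny key values are purely illustrative and insecure; real applications use padded RSA (which is not homomorphic) or a dedicated FHE library.

p, q = 61, 53            # tiny primes, for illustration only
n = p * q                # RSA modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(m): return pow(m, e, n)
def decrypt(c): return pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = encrypt(m1), encrypt(m2)

# Multiply the ciphertexts without ever decrypting them...
c_product = (c1 * c2) % n

# ...and decrypting the result yields the product of the plaintexts.
assert decrypt(c_product) == (m1 * m2) % n
print(decrypt(c_product))   # 42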

Big data security:

Big Data security is the processing of guarding data and analytics processes, both in the cloud and
on-premise, from any number of factors that could compromise their confidentiality.
Big Data Security Overview
Big data security’s mission is clear enough: keep out unauthorized users and intrusions with
firewalls, strong user authentication, end-user training, and intrusion protection systems (IPS) and
intrusion detection systems (IDS). In case someone does gain access, encrypt your data in-transit
and at-rest.

This sounds like any network security strategy. However, big data environments add another level of
security because security tools must operate during three data stages that are not all present in the
network. These are 1) data ingress (what’s coming in), 2) stored data (what’s stored), and 3) data
output (what’s going out to applications and reports).

Stage 1: Data Sources. Big data comes from a wide variety of sources and data types. User-
generated data alone can include CRM or ERM data, transactional and database data, and vast
amounts of unstructured data such as email messages or social media posts. In addition to this, you
have the whole world of machine generated data including logs and sensors. You need to secure
this data in-transit from sources to the platform.

Stage 2: Stored Data. Protecting stored data takes mature security toolsets including encryption at
rest, strong user authentication, and intrusion protection and planning. You will also need to run your
security toolsets across a distributed cluster platform with many servers and nodes. In addition, your
security tools must protect log files and analytics tools as they operate inside the platform.

Stage 3: Output Data. The entire reason for the complexity and expense of the big data platform is
being able to run meaningful analytics across massive data volumes and different types of data.
These analytics output results to applications, reports, and dashboards. This extremely valuable
intelligence makes for a rich target for intrusion, and it is critical to encrypt output as well as ingress.
Also, secure compliance at this stage: make certain that results going out to end-users do not contain
regulated data.
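
As a hedged illustration of encrypting data at rest before it lands on the storage cluster, the sketch below uses the Fernet recipe from the Python cryptography package; the record contents and the simplified key handling are assumptions for the example only (real deployments keep keys in a KMS or HSM).

from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()              # in practice, managed by a KMS/HSM
f = Fernet(key)

record = b'{"user": "alice", "purchase": 42.50}'   # hypothetical ingested record
token = f.encrypt(record)                # this ciphertext is what gets stored

# Later, an authorized analytics job decrypts the stored token.
assert f.decrypt(token) == record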

One of the challenges of Big Data security is that data is routed through a circuitous path, and in
theory could be vulnerable at more than one point.

High-availability is, ultimately, the holy grail of the cloud. It embodies the idea of anywhere and
anytime access to services, tools and data and is the enabler of visions of a future with companies with
no physical offices or of global companies with completely integrated and unified IT systems.
Availability is also related to reliability: a service that is nominally on 24x7 but constantly drops offline is useless.
For a service to have true high-availability, it needs not only to be always-on, but also to have several
"nines" (99.999[...]) of reliability.

It has long been the case that building systems with this kind of reliability and availability means
large costs for companies. For something like this, it's not enough to simply have a failover cluster of
servers in a data center: you must also have multiple redundant energy sources for the data center and
even to have replication between multiple geographical locations in case of disasters. With the
exception of very large, multinational companies, almost no-one could afford such a setup.

With the advent of infrastructure-as-a-service and platform-as-a-service providers, however, the


costs of building such a service have decreased dramatically. It is now possible for most cloud-
based service providers, especially for software-based services, to offer very aggressive service level
agreements. Before getting there, however, it is necessary to understand what it means.

Understanding high availability
If you want a high-availability service, as a buyer or a seller, the first step is to understand what
exactly it means. Let's take a 99.99% SLA, for instance. In practice, this means that in any given
month (assuming a 30-day month), the service can only be offline for about 4 minutes and a few
seconds, or only about 50 minutes per year. If we look at most cloud service providers today, how
many actually deliver on this promise?
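
The arithmetic behind these figures is straightforward; the sketch below simply converts an availability target into allowed downtime (the 99.95% line reflects the Amazon-style SLA mentioned later).

# Allowed downtime = total time * (1 - availability).
def allowed_downtime_minutes(availability, days):
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability)

print(allowed_downtime_minutes(0.9999, 30))    # ~4.3 minutes in a 30-day month
print(allowed_downtime_minutes(0.9999, 365))   # ~52.6 minutes per year
print(allowed_downtime_minutes(0.9995, 30))    # ~21.6 minutes in a 30-day month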

There are a few questions to be considered here. As a buyer of the service, do I really need this
service level? Am I willing to pay the extra cost that will be associated with this? Are the guarantees
being offered enough to cover my expenses in case of failure? The last one is perhaps the most
important and most difficult one, because several factors have to be taken into consideration. If your
company is going to rely on cloud services to function, what happens when they go offline?

Earlier this year, several high-profile tech companies had troubles when Amazon's EC2 service
suffered an outage. The outage, according to Amazon itself, lasted for almost 11 hours. This is much
larger than what would be acceptable according to their 99.95% SLA, and they offered a 10-day
credit for all affected customers, but does this credit cover the true cost of the outage? As more and
more services and applications go to the cloud, this question becomes increasingly important.

Standing on the shoulders of others


As cloud services and applications become more complex and more reliant on the underlying cloud
platform, it becomes harder and harder to quickly identify and solve problems. Troubles can arise
not only from an individual service, but from the interaction of multiple components and automated
systems over distributed networks and data centers, resulting in issues that take a long time to be
resolved. Regardless of the quality of a service provider, of their underlying hardware or platform,
the chance of failure increases with complexity.

Understanding this complexity and the reliance on the platform is fundamental when defining the
required availability level for a service. If you are building a service that needs 99.99% availability,
you cannot simply rely on Amazon's EC2, since they only offer 99.95%. It would be necessary to have
a different host for that service, or even multiple hosts, to be able to achieve that.

The same thing goes to buyers: if your vendor offers 99.99% availability look at how well they
have maintained this service level in the past, and look at the availability of the underlying
platform. If you know that it will not be enough, make it clear. I've had clients who demanded that I
have replicated servers on geographically distinct locations in case of natural disasters. It might
sound crazy, but it may make sense, and if it does, I must be ready to do this.

This is where the existence of multiple providers on the cloud comes in handy. Services can be
hosted and replicated to multiple providers, on multiple locations, to greatly reduce the chances of
failure. Even downtime related to maintenance can be reduced by spreading a service over

multiple providers: the chance of planned maintenance windows overlapping between providers is
very small.
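
Back-of-the-envelope arithmetic shows why both points matter: a service is capped by the availability of the platforms it depends on, while replication across independent providers multiplies out the failure probabilities. The figures below are illustrative, not actual SLA values.

def serial(*availabilities):
    # A service that depends on ALL of its components is only as available as their product.
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(*availabilities):
    # A service replicated across independent providers fails only if ALL replicas fail at once.
    down = 1.0
    for a in availabilities:
        down *= (1 - a)
    return 1 - down

print(serial(0.9999, 0.9995))     # ~0.9994 -> capped below the underlying 99.95% platform
print(parallel(0.9995, 0.9995))   # ~0.9999998 -> two independent 99.95% providers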

Putting it all together


As we've seen, there are several important factors that need to be considered when discussing
availability on the cloud. Several large cloud providers, such as Rackspace and Amazon, offer very
aggressive service levels - 100% uptime for Rackspace, 99.95% for Amazon - that are almost
impossible to maintain, so we must ask ourselves what happens when the services fail? Are the
credits we are entitled to in case of failure enough to cover the true costs of said failure? Is my
service provider really capable of delivering on the promised availability level? What is its track
record?

As cloud services mature, these questions become as important as price or other factors in choosing
the right service provider. Answering them and understanding what is behind them becomes crucial
so that everyone can trust and use cloud services without restrictions.

DOS ATTACKS:
A denial-of-service (DoS) is any type of attack where the attackers (hackers) attempt to prevent
legitimate users from accessing the service. In a DoS attack, the attacker usually sends excessive
messages asking the network or server to authenticate requests that have invalid return addresses. The
network or server will not be able to find the return address of the attacker when sending the
authentication approval, causing the server to wait before closing the connection. When the server
closes the connection, the attacker sends more authentication messages with invalid return addresses.
Hence, the process of authentication and server wait will begin again, keeping the network or server
busy.
A DoS attack can be done in a several ways. The basic types of DoS attack include:

1. Flooding the network to prevent legitimate network traffic


2. Disrupting the connections between two machines, thus preventing access to a service
3. Preventing a particular individual from accessing a service.
4. Disrupting a service to a specific system or individual
5. Disrupting the state of information, such resetting of TCP sessions

Another variant of DoS is an amplification attack using email auto-responders (the classic
network-layer analogue is the smurf attack, which amplifies spoofed ICMP traffic via broadcast
addresses). If someone sends hundreds of email messages with a fake return address to hundreds of
people in an organization whose mailboxes have auto-responders enabled, the initial messages can
become thousands of replies sent to the fake address. If that fake address actually belongs to
someone, this can overwhelm that person's account.
DoS attacks can cause the following problems:

1. Ineffective services
2. Inaccessible services
3. Interruption of network traffic
4. Connection interference

FAULT TOLERANCE IN CLOUD COMPUTING:
Fault tolerance in cloud computing is largely the same (conceptually) as in private or hosted
environments: it is the ability of your infrastructure to continue providing
service to underlying applications even after the failure of one or more component pieces in any layer.

Explicating Fault Tolerance in Cloud Computing


Fault tolerance in cloud computing is about designing a blueprint for continuing the ongoing work
whenever a few parts are down or unavailable. This helps the enterprises to evaluate their
infrastructure needs and requirements, and provide services when the associated devices are
unavailable due to some cause. It doesn’t mean that the alternate arrangement can provide 100% of
the full service, but this concept keeps the system in running mode at a usable, and most importantly,
at a reasonable level. This is important if the enterprises are to keep growing in a continuous mode
and increase their productivity levels.

Main Concepts behind Fault Tolerance in Cloud Computing System

Replication: A fault tolerant system works on the concept of running several
replicas of each and every service. Thus, if one part of the system goes wrong, other
instances can be placed in its stead to keep it running. Take, for example, a
database cluster that has 3 servers with the same information on each of them. All
actions like data insertion, updates, and deletion get written on each of them. The
redundant servers remain in inactive mode until the fault tolerance system demands
their availability.
Redundancy: When any part of the system fails or moves towards a downstate, it is
important to have backup systems. For example, a website that uses MS SQL as its
database may fail due to a hardware fault. Under the redundancy concept, a new
database instance has to be made available when the original goes offline. The server
then operates against this emergency database, which comprises several redundant
services (see the failover sketch below).
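
A minimal sketch of replication-based failover is given below: requests go to the primary replica and fall back to a standby when the primary is down. The replica names and the simulated failure are assumptions for illustration, not a production clustering setup.

class Replica:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle(self, query):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} answered: {query}"

replicas = [Replica("db-primary"), Replica("db-standby-1"), Replica("db-standby-2")]

def execute(query):
    # Try each replica in priority order; the first healthy one serves the request.
    for replica in replicas:
        try:
            return replica.handle(query)
        except ConnectionError:
            continue                     # fail over to the next replica
    raise RuntimeError("all replicas are unavailable")

replicas[0].healthy = False              # simulate a primary failure
print(execute("SELECT 1"))               # served by db-standby-1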

Techniques for Fault Tolerance in Cloud Computing

All the services have to be given priority when designing a fault tolerance system. The
database has to be given special preference because it powers several other units.
After deciding the priorities, the enterprise has to work on the mock test. Take for
example, the enterprise has a forum website that enables users to log in and posts
comments. When the authentication services fail due to some problem, the users will not
be able to log in. Then, the forum becomes a read-only one and does not serve the

purpose. But with the fault tolerant systems, remediation will be ensured and the user can
search for information with minimal impact.

Major Attributes of Fault Tolerance in Cloud Computing

No Single Point of Failure: The concepts of redundancy and replication mean that fault
tolerance can be achieved, though with some minor impact. If even a single point of
failure exists, the system is not a fault tolerant one.
Fault Isolation: A fault has to be handled separately from other systems. This
helps the enterprise isolate it so that it does not cascade into a wider system failure.

Existence of Fault Tolerance in Cloud Computing

System Failure: This may be either a software or a hardware issue. A software failure
results in a system crash or hang that may be due to stack overflow or other
reasons. Any improper maintenance of the physical hardware machines will result in
hardware system failure.
Security Breach Occurrences: There are several reasons why fault tolerance is needed
in the face of security failures. The hacking of a server negatively impacts it and results
in a data breach. Other reasons for the necessity of fault tolerance in the form of security
breaches include ransomware, phishing, virus attack etc.

CLOUD COMPUTING IN INDIA:

Digital transformation has been recognised as being vital to the growth of our nation.
This transformation has enjoyed the unanimous approval and contribution from all
stakeholders including enterprises, MSMEs, government bodies, and citizens.
But this level of adoption in a country with a population of over a billion people would
need a robust technology base that is capable of collecting and distributing vital data
seamlessly.
Digital India envisions creating high-speed digital highways, that will impact commerce
and create a digital footprint for every individual. Technologies based on mobility,
analytics, Internet of Things and, most importantly, cloud technologies are the building
blocks for the digital India mission.
There is a growing need to manage huge volumes of data, and making them readily
available to the public through digital cloud services.
While Data centers have become crucial to this transformation, IT leaders increasingly
recognise that current data centers have reached their limits for supporting how state and
local governments need to work and provide services.
Lightening the load to adapt to increasing demand is the principle many government IT
managers have in mind as they look to make data centers more efficient, flexible and

capable of delivering new services. Government IT departments are also prioritising
investments in data center consolidation and new technologies to enable higher IT service
levels.
There are three trends that are impacting government IT today:

Virtualization and cloud


Infrastructure as a service (IaaS)
Flexible infrastructure for application development and delivery

Evolving The Data Center To Hyper-Convergence For Virtualisation And Cloud

The traditional image of a data center is a cavernous room filled with rows of equipment racks and
whirring, blinking boxes. This reality is fast disappearing as advances in virtualisation technology
pack more capabilities into smaller devices.

Government data centers are evolving to take advantage of virtualisation, especially for servers and
storage. The goal is to capture the associated benefits of higher data center efficiency and
optimization, as well as reduced capital and operational expenses.

The data center model is also evolving to support private cloud and IT as a service to accomplish IT
projects faster and more effectively.

Virtualisation enables a hyper-converged infrastructure that integrates servers and storage in a single
appliance. The systems leverage industry-standard hardware and software-defined storage, enabling
easy scalability and management.

A well-designed hyper-converged infrastructure in the data center offers several additional
advantages for IT operations and service delivery, such as:

Cost reductions for infrastructure, software licenses, cabling and other elements, with
predictable budgeting for data center growth
Easier, on-demand and linear scalability of compute and storage resources, which reduces
the need to overprovision resources in anticipation of potential performance demands
Flexibility to support new IT offerings, such as analytics, that help government
employees improve services to constituents
Simpler management with fewer server and storage silos

Delivering Private Cloud And Infrastructure As A Service

One key to agility — in both government and IT — is having the right resources ready to go at a
moment’s notice but to use them only when they are truly needed. That agility is behind the idea of
IaaS on hyper-converged infrastructure: delivering computing, storage and network resources on
demand to application developers and users.

This environment operates like a private cloud, where the IT infrastructure can serve more
applications and users without the need to add more staff. By creating a private internal cloud, IT

managers also can reduce concerns that come with using untrusted or shared cloud services, including
security, compliance and audit trails.

IT can automate many operational tasks around provisioning and orchestration, which makes it easier
to activate or repurpose servers as needed. Additionally, automated configuration and management of IT
resources means IT staff can focus on strategic, high-value activities.

Any Application At Any Scale

From a smart phone app used by one employee to a complex information system used by hundreds, the
ability to deliver any application at any scale is essential for government IT. This scalability requires an
infrastructure that can quickly deliver the right resources for computing requirements, storage capacity
and application performance.

Yet different applications commonly used for government functions require different types of resources
and performance levels. For example, a GIS (Geographic Information Systems) application needs more
storage capacity than compute capability, while transaction-oriented applications are often compute-
intensive and don’t require as much data storage.

An agile government is one that has the information, applications and computing capabilities that keep pace
with the fast-paced changes in citizen and employee expectations for services.

By considering the data center trends discussed, IT can make the infrastructure simpler while also
delivering services that make government better.

Cloud has demonstrated the capability to digitize the governance system while proving to be cost-
effective. The world is eager to see India embrace the cloud computing frontier, led by technology's
capability to improve people's lives.
