Unit - II (Cloud Computing)

The document discusses the three main cloud computing service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). SaaS provides software applications via the internet, removing the need for installation and maintenance. PaaS provides platforms for developers to build applications over the internet without managing underlying infrastructure. IaaS provides basic computing and storage infrastructure as a service over the internet. Each model offers different levels of abstraction and control over the cloud computing stack.


Cloud Computing Service Models

Simply put, Cloud Computing is a model that offers convenient, on-demand network access to
a pool of shared resources. This may include data storage, databases, servers, networking, tools,
or any other resources that can be accessed through the internet. Cloud computing services come
mainly in three types of service models: SaaS (Software as a Service), IaaS (Infrastructure as a
Service), and PaaS (Platform as a Service). Each of the cloud models has its own set of benefits
that could serve the needs of various businesses.
Choosing between them requires an understanding of these cloud models, evaluating your
requirements, and finding out how the chosen model can deliver your intended set of workflows.

Most cloud computing services fall into four broad categories:


1. Software as a service (SaaS)
2. Platform as a service (PaaS)
3. Infrastructure as a service (IaaS)
4. Anything as a service (XaaS)
These are sometimes called the cloud computing stack because they are built on top of one
another. Knowing what they are and how they differ makes it easier to accomplish your
goals. These abstraction layers can also be viewed as a layered architecture where services of a
higher layer can be composed from services of the underlying layer; for example, a SaaS
application can be built on a PaaS platform, which in turn runs on IaaS infrastructure.

Software as a Service (SaaS)
Software-as-a-Service (SaaS) is a way of delivering services and applications over the Internet.
Instead of installing and maintaining software, we simply access it via the Internet, freeing
ourselves from complex software and hardware management. It removes the need to install
and run applications on our own computers or in data centers, eliminating the expense of
hardware as well as software maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a
cloud service provider. Most SaaS applications can be run directly from a web browser without
any downloads or installations required. The SaaS applications are sometimes called Web-based
software, on-demand software, or hosted software.
Advantages of SaaS
● Cost-Effective: Pay only for what you use.
● Reduced time: Users can run most SaaS apps directly from their web browser without
needing to download and install any software. This reduces the time spent in installation
and configuration and can reduce the issues that can get in the way of the software
deployment.
● Accessibility: Access app data from anywhere.
● Automatic updates: Rather than purchasing new software, customers rely on a SaaS
provider to automatically perform the updates.
● Scalability: It allows the users to access the services and features on-demand.

Disadvantages of SaaS

● SaaS applications are entirely dependent on an Internet connection and are unusable
without one.
● It is difficult to switch between SaaS vendors (vendor lock-in).

The various companies providing Software as a Service are Cloud9 Analytics,
Salesforce.com, CloudSwitch, Microsoft Office 365, BigCommerce, Eloqua, Dropbox, and
CloudTran.

Why Should One Opt for SaaS?


With SaaS, communication, transferring content, and scheduling meetings are made easy.
SaaS is the ideal choice for small-scale businesses that do not have the necessary budget and
resources to deploy on-premises hardware. Besides, companies that require frequent
collaboration on their projects will find SaaS platforms useful.

Studies reveal that Supply Chain Management, Business Intelligence, Enterprise Resource
Planning (ERP), and Project and Portfolio Management will see the fastest growth in end-user
spending on SaaS applications through 2022.

Things to Consider Before SaaS Implementation


● Opt for configuration over customization within a SaaS-based delivery model.
Configuration allows you to tailor the product without changing its core, whereas
customization makes it challenging to keep pace with constant updates and
documentation.
● Understand the adoption and usage rates carefully, and set clear objectives to be achieved
with the SaaS adoption.
● Complement your SaaS solution with integrations and security options to make it more
secure and user-friendly.
Platform as a Service (PaaS)
PaaS is a category of cloud computing that provides a platform and environment to allow
developers to build applications and services over the internet. PaaS services are hosted in the
cloud and accessed by users simply via their web browser.
A PaaS provider hosts the hardware and software on its own infrastructure. As a result, PaaS
frees users from having to install in-house hardware and software to develop or run a new
application. Thus, the development and deployment of the application take place independent of
the hardware.
The consumer does not manage or control the underlying cloud infrastructure including network,
servers, operating systems, or storage, but has control over the deployed applications and
possibly configuration settings for the application-hosting environment. To make it simple, take
the example of an annual day function: you have two options, either to build a venue or to
rent one, but the function itself is the same.

Advantages of PaaS:
● Simple and convenient for users: It provides much of the infrastructure and other IT
services, which users can access anywhere via a web browser.
● Cost-Effective: It charges for the services provided on a per-use basis thus eliminating
the expenses one may have for on-premises hardware and software.
● Efficiently managing the lifecycle: It is designed to support the complete web
application lifecycle: building, testing, deploying, managing, and updating.
● Efficiency: It allows for higher-level programming with reduced complexity thus, the
overall development of the application can be more effective.

Disadvantages of PaaS

● Applications are written for the specific platform provided by a PaaS vendor, so moving
an application to another PaaS vendor is a problem (vendor lock-in).

The various companies providing Platform as a Service are AWS Elastic
Beanstalk, Salesforce, Windows Azure, Google App Engine, CloudBees, and IBM SmartCloud.

Why Should One Opt for PaaS?


PaaS is the preferred option if your project involves multiple developers and vendors. With PaaS,
it is easy to create customized applications, as it leases all the essential computing and
networking resources. PaaS also simplifies the app development process, which
minimizes your organizational costs.

Besides, it is flexible and delivers the necessary speed in the process, which will rapidly improve
your development times. A typical disadvantage with PaaS is that, since it is built on virtualized
technology, you will have less control over the data processing. It is also less flexible
compared to the IaaS cloud model.
A study by Market Reports World estimates that the global PaaS market will grow at a CAGR of
24.17% during 2019-2023 and will be valued at 28.4 billion USD by the end of 2023.

Things to Consider Before PaaS Implementation


● Critically analyze your business needs and decide on the automation level: whether it
needs to be self-service or fully automated.
● Clearly determine whether to deploy on a private or public cloud.
● Plan the customization and efficiency levels.

Infrastructure as a Service (IaaS)
Infrastructure as a service (IaaS) is a service model that delivers computer infrastructure on an
outsourced basis to support various operations. Typically, IaaS provides outsourced infrastructure
to enterprises, such as networking equipment, devices, databases, and web servers.
It is also known as Hardware as a Service (HaaS). IaaS customers pay on a per-use basis,
typically by the hour, week, or month. Some providers also charge customers based on the
amount of virtual machine space they use.
It simply provides the underlying operating systems, security, networking, and servers for
developing applications and services, and for deploying development tools, databases, etc.

Advantages of IaaS:
● Cost-Effective: Eliminates capital expense and reduces ongoing cost; IaaS customers
pay on a per-use basis, typically by the hour, week, or month.
● Website hosting: Running websites using IaaS can be less expensive than traditional web
hosting.
● Security: The IaaS Cloud Provider may provide better security than your existing
software.
● Maintenance: There is no need to manage the underlying data center or the introduction
of new releases of the development or underlying software. This is all handled by the
IaaS Cloud Provider.

Disadvantages of IaaS

● The IaaS cloud computing platform model is dependent on the availability of Internet
connectivity and virtualization services.

The various companies providing Infrastructure as a Service are Amazon Web Services,
Bluestack, IBM, OpenStack, Rackspace, and VMware.

Why Should One Opt for IaaS?


IaaS, being the most flexible of the cloud models, is the best option when it comes to IT hardware
infrastructure. IaaS is the right choice if you need control over the hardware infrastructure,
managing and customizing it according to your requirements.

Whether you are running a startup or a large enterprise, IaaS gives access to computing resources
without the need to invest in them separately. However, the only downside with IaaS is that it is
much costlier than SaaS or PaaS cloud models.

According to Gartner’s latest report, the worldwide infrastructure-as-a-service (IaaS) market
grew 31.3% in 2018 to total $32.4 billion, and in 2019 it’s projected to be worth $38.9 billion.
This growth will continue well into 2022, where it’s expected to be worth $76.6 billion.

Things to Consider Before IaaS Implementation


● Clearly define your access needs and the bandwidth of your network to facilitate smooth
implementation and function.
● Plan out a thorough data storage and security strategy to streamline the process.
● Ensure a disaster recovery plan so that your data remains safe and accessible at all times.

Anything as a Service
Most cloud service providers nowadays offer Anything as a Service (XaaS), which is a
compilation of all of the above services along with some additional ones.

Advantages of XaaS: As this is a combined service, it has all the advantages of every type of
cloud service.

Virtualization In Cloud Computing


Virtualization is a technique for separating a service from the underlying physical delivery
of that service. It is the process of creating a virtual version of something, such as computer
hardware. It was initially developed during the mainframe era. It involves using specialized
software to create a virtual or software-created version of a computing resource rather than the
actual version of the same resource. With the help of virtualization, multiple operating systems
and applications can run on the same machine and hardware at the same time, increasing
the utilization and flexibility of hardware.
In other words, one of the main cost effective, hardware reducing, and energy saving techniques
used by cloud providers is virtualization. Virtualization allows sharing a single physical instance
of a resource or an application among multiple customers and organizations at one time. It does
this by assigning a logical name to a physical storage and providing a pointer to that physical
resource on demand. The term virtualization is often synonymous with hardware virtualization,
which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS)
solutions for cloud computing. Moreover, virtualization technologies provide a virtual
environment for not only executing applications but also for storage, memory, and networking.

The machine on which the virtual machine is built is known as the Host Machine, and
the virtual machine itself is referred to as the Guest Machine.

Hypervisor
The hypervisor is a firmware or low-level program that acts as a Virtual Machine Manager.
There are two types of hypervisor:
Type 1 hypervisors execute on the bare system. LynxSecure, RTS Hypervisor, Oracle VM, Sun xVM
Server, and VirtualLogic VLX are examples of Type 1 hypervisors.

A Type 1 hypervisor does not have any host operating system because it is installed directly on the
bare system.
Type 2 hypervisors are software interfaces that emulate the devices with which a system normally
interacts. Containers, KVM, Microsoft Hyper-V, VMware Fusion, Virtual Server 2005 R2,
Windows Virtual PC, and VMware Workstation 6.0 are examples of Type 2 hypervisors.

Types of Hardware Virtualization


Here are the three types of hardware virtualization:
● Full Virtualization
● Emulation Virtualization
● Paravirtualization

Full Virtualization
In full virtualization, the underlying hardware is completely simulated. Guest software does not
require any modification to run.

Emulation Virtualization
In emulation, the virtual machine simulates the hardware and hence becomes independent of it.
The guest operating system does not require modification.

Paravirtualization
In paravirtualization, the hardware is not simulated. The guest software runs in its own isolated
domain.

VMware vSphere is a highly developed infrastructure that offers a management
framework for virtualization. It virtualizes the system, storage, and networking hardware.

BENEFITS OF VIRTUALIZATION
1. More flexible and efficient allocation of resources.
2. Enhance development productivity.
3. It lowers the cost of IT infrastructure.
4. Remote access and rapid scalability.
5. High availability and disaster recovery.
6. Pay-per-use of the IT infrastructure, on demand.
7. Enables running multiple operating systems.

Types of Virtualization:
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data Virtualization

1. Application Virtualization:
Application virtualization helps a user to have remote access to an application from a server. The
server stores all personal information and other characteristics of the application, but the
application can still run on a local workstation through the internet. An example would be a user
who needs to run two different versions of the same software. Technologies that use application
virtualization are hosted applications and packaged applications.

2. Network Virtualization:
Network virtualization is the ability to run multiple virtual networks, each with a separate control
and data plane, co-existing on top of one physical network. The virtual networks can be managed
by individual parties that are potentially confidential to each other.
Network virtualization provides a facility to create and provision virtual networks—logical
switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload
security—within days or even weeks.

3. Desktop Virtualization:
Desktop virtualization allows the users’ OS to be stored remotely on a server in the data centre. It
allows the user to access their desktop virtually, from any location, on a different machine. Users
who want specific operating systems other than Windows Server will need to have a virtual
desktop. The main benefits of desktop virtualization are user mobility, portability, and easy
management of software installation, updates, and patches.

4. Storage Virtualization:
Storage virtualization is an array of servers that are managed by a virtual storage system. The
servers aren’t aware of exactly where their data is stored, and instead function more like worker
bees in a hive. It allows storage from multiple sources to be managed and utilized as a
single repository. Storage virtualization software maintains smooth operations, consistent
performance, and a continuous suite of advanced functions despite changes, breakdowns, and
differences in the underlying equipment.

5. Server Virtualization:
This is a kind of virtualization in which masking of server resources takes place. Here, the
central server (physical server) is divided into multiple virtual servers by changing the identity
numbers and processors, so each sub-server can run its own operating system in an isolated
manner, while still knowing the identity of the central server. This increases performance and
reduces operating cost by redeploying main server resources into sub-server resources. It is
beneficial for virtual migration, reduced energy consumption, reduced infrastructure cost, etc.

6. Data virtualization:
This is the kind of virtualization in which data is collected from various sources and managed
in a single place, without the user needing to know technical details such as how the data is
collected, stored, and formatted. The data is then arranged logically so that its virtual view can be
accessed remotely by interested parties, stakeholders, and users through various cloud services.
Many large companies provide such services, including Oracle, IBM, AtScale, and CData.

It can be used to perform various kinds of tasks, such as:


● Data-integration
● Business-integration
● Service-oriented architecture data-services
● Searching organizational data

Containers in Cloud Computing


Containers are a common option for deploying and managing software in the cloud. Containers
are used to abstract applications from the physical environment in which they are running. A
container packages all dependencies related to a software component, and runs them in an
isolated environment.
With containers, commonly running the Docker container engine, applications deploy
consistently in any environment, whether a public cloud, a private cloud, or a bare metal
machine. Containerized applications are easier to migrate to the cloud. Containers also make it
easier to leverage the extensive automation capabilities of the cloud—they can easily be
deployed, cloned or modified using APIs provided by the container engine or orchestrator.
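To make that last point concrete, here is a minimal sketch using the Docker SDK for Python (pip install docker); it assumes a local Docker engine is running, and the image, ports, and container names are illustrative choices rather than anything prescribed above.

```python
# Minimal sketch: deploying, cloning, and removing containers through an
# API, using the Docker SDK for Python (pip install docker). Assumes a
# local Docker engine is running; image, ports, and names are examples.
import docker

client = docker.from_env()  # connect to the local Docker engine

# Deploy: run an nginx container in the background, mapping port 80 to 8080
web = client.containers.run("nginx:latest", detach=True,
                            ports={"80/tcp": 8080}, name="web-1")

# Clone: start a second container from the same image on another port
clone = client.containers.run("nginx:latest", detach=True,
                              ports={"80/tcp": 8081}, name="web-2")

# Inspect and modify: list running containers, then stop and remove the clone
for c in client.containers.list():
    print(c.name, c.status)
clone.stop()
clone.remove()
```

An orchestrator such as Kubernetes exposes similar operations declaratively and at cluster scale.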

Use Cases of Containers in the Cloud


Containers are becoming increasingly important in cloud environments. Many organizations are
considering containers as an alternative to virtual machines (VMs), which were traditionally the
preferred option for large-scale enterprise workloads.

The following use cases are especially suitable for running containers in the cloud:
● Microservices: containers are lightweight, making them well suited for applications with
microservices architectures consisting of a large number of loosely coupled,
independently deployable services.
● DevOps: many DevOps teams build applications using a microservices architecture, and
deploy services using containers. Containers can also be used to deploy and scale the
DevOps infrastructure itself, such as CI/CD tools.
● Hybrid and multi-cloud: for organizations operating in two or more cloud environments,
containers are highly useful for migrating workloads. They are a standardized unit that
can be flexibly moved between on-premise data centers and any public cloud.
● Application modernization: A common way to modernize a legacy application is to
containerize it and move it as-is to the cloud (a model known as “lift and shift”).

How Do Cloud Containers Work?


Container technology began with the separation of partitions and chroot processes, introduced as
part of Linux. Modern container engines take the form of application containerization (such as
Docker) and system containerization (such as Linux containers).
Containers rely on isolation, controlled at the operating system kernel level, to deploy and run
applications. Containers share the operating system kernel, and do not need to run a full
operating system—they only need to run the necessary files, libraries and configuration to run
workloads. The host operating system limits the container’s ability to consume physical
resources.
In the cloud, a common pattern is to use containers to run an application instance. This can be an
individual microservice, or a backend application such as a database or middleware component.
Containers make it possible to run multiple applications on the same cloud VM, while ensuring
that problems with one container do not affect other containers, or the entire VM.
Cloud providers offer several types of services you can use to run containers in the cloud:
● Hosted container instances—let you run containers directly on public cloud
infrastructure, without the intermediary of a cloud VM. An example is Azure Container
Instances (ACI).
● Containers as a Service (CaaS)—manages containers at scale, typically with limited
orchestration capabilities. Examples are Amazon Elastic Container Service (ECS) and
AWS Fargate.
● Kubernetes as a Service (KaaS)—provides Kubernetes, the most popular container
orchestrator, as a managed service. Lets you deploy clusters of containers on the public
cloud. An example is Google Kubernetes Engine (GKE).

What Is Containerization?
Hopefully, you understand the containers concept pretty clearly now. You may have already
guessed that the applications hosted in containers have different coding standards than regular
applications. But what is containerization, and how do you create containerized applications?

Containerization in cloud computing is the process of building software applications for
containers. The final product of packaging and designing a container app is a container image.

A typical container image, or application container, consists of:


● The application code
● Configuration files
● Software dependencies
● Libraries
● Environment variables
In fact, it holds everything that is needed to run containerized applications irrespective of the
infrastructure that hosts them.

Container Orchestration
Container orchestration is the process of creating an environment that automates most of the
maintenance tasks for containerized workloads, applications, and services. Most companies rely
on container orchestration platforms that provide fully managed container services to their users.

Containers vs Virtual Machines – What’s the Difference?

Containers virtualize at the operating-system level: as described above, they share the host kernel,
start quickly, and have a small footprint, at the cost of somewhat weaker isolation. Virtual
machines virtualize at the hardware level: each runs a full guest operating system on a hypervisor,
which gives stronger isolation but larger images, more memory use, and slower startup.

Pricing on the cloud


Many things are taken into consideration when talking about pricing on the cloud. First, service
providers aim to maximize profit, while customers look for a higher quality of service at a lower
price. Second, selling services on the cloud is very competitive due to the high number of
providers selling the same services. In addition, prices are influenced by:
● The lease period, which can be considered the contract time between the provider and the
customer
● The initial cost of the resources
● The rate of depreciation, i.e., how many times the resources have been used
● The quality of service
● The age of the resources
● The cost of maintenance

Pricing Models
There are many pricing models used on the cloud, and they can be classified into two main types
according to how the price changes over time: fixed and dynamic.

Fixed Pricing Models


Fixed pricing models are also called static pricing models, because the price remains stable for a
long time. The most famous service providers on the cloud, such as Google, Amazon Web
Services, Oracle, and Azure, use fixed pricing models.
Fixed pricing makes users aware of the cost of doing business and consuming a resource.
On the other hand, this type of pricing is often unfair to customers, because they
can overpay or underpay for their needs. In addition, it is not affected by demand.
There are many fixed pricing models, such as pay-per-use, subscription, and price lists. This part
briefly describes these pricing models:

1. Pay-per-use Model
In this model, users only have to pay for what they use. The customer pays as a function of the
time or quantity consumed on a specific service. Amazon Web Services (AWS) and Salesforce, as
shown in Table 1 and Table 2, use this model.

Table 1 — Amazon S3 storage pricing


Table 2 — Salesforce Cloud pricing
2. Subscription
In this model, users pay on a recurring basis to access software as an online service. The
customer subscribes to a preselected combination of service units for a fixed,
longer time frame, usually monthly or yearly.

Dropbox — like shown in Table 3 — uses this model.

Table 3 — Dropbox pricing


3. Hybrid
This model is a combination of the pay-per-use and subscription pricing models: all
services are priced using the subscription model, but when the usage limit is exceeded,
pay-per-use pricing applies.
This model is used by Google App Engine, as shown in Table 4.
Table 4 — Google App Engine pricing
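As a rough illustration of how the hybrid model combines the two schemes just described, the Python sketch below charges a flat subscription that covers a usage quota, then bills the overage pay-per-use; the fee, quota, and rate are invented example values, not any provider's actual prices.

```python
# Illustrative sketch of the hybrid pricing model: a flat subscription
# covers a usage quota, and consumption beyond it is billed pay-per-use.
# All rates and quotas below are made-up example values.

SUBSCRIPTION_FEE = 20.0   # fixed monthly fee (USD)
INCLUDED_GB = 100         # storage quota included in the subscription
OVERAGE_RATE = 0.05       # USD per GB beyond the quota (pay-per-use)

def hybrid_monthly_bill(used_gb: float) -> float:
    overage = max(0.0, used_gb - INCLUDED_GB)
    return SUBSCRIPTION_FEE + overage * OVERAGE_RATE

print(hybrid_monthly_bill(80))   # 20.0 -> within quota, subscription only
print(hybrid_monthly_bill(150))  # 22.5 -> quota exceeded, overage added
```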
4. Pay for resources
In this model, a customer pays for the resources utilized. This model is used by Microsoft Azure
for Infrastructure as a Service pricing, as shown in Table 5.

Table 5 — Windows Azure IaaS pricing

Dynamic pricing
Dynamic pricing models are also known as real-time pricing. These models are very flexible, and
they can be considered the result of a function that takes as parameters the cost, the time, and
other factors such as the location and the user’s perceived value.
In dynamic pricing, the price is calculated by a pricing mechanism whenever there is a
request. Compared to fixed prices, dynamic pricing that reflects the real-time supply-demand
relationship represents a more promising charging strategy that can better exploit users’
willingness to pay and thus yield larger profit gains for the cloud provider.
For example, Amazon changes its prices every 10 minutes, and Best Buy and Walmart
implement price changes over 50,000 times per month.
There are many dynamic pricing models (a small illustrative pricing function follows this list):
● Cost-based, which merges profit with the level of cost.
● Value-based, which takes into consideration the user’s perceived value.
● Competition-based, which takes into consideration competitors’ prices for the services.
● Customer-based, which considers what the customer is prepared to pay.
● Location-based, where the price is set according to the customer’s location.
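The toy Python function below combines a few of these factors, a cost-based core, a supply-demand factor, and a location premium, into a single price; the weights are arbitrary illustrations, not any real provider's formula.

```python
# A toy dynamic-pricing function combining several of the factors above:
# base cost, current supply/demand ratio, and a location premium. The
# weights are illustrative, not any provider's actual pricing mechanism.

def dynamic_price(base_cost: float, demand: int, supply: int,
                  location_premium: float = 0.0) -> float:
    demand_factor = demand / max(supply, 1)        # >1 when demand outstrips supply
    price = base_cost * (1 + 0.5 * demand_factor)  # cost-based core with a markup
    return round(price * (1 + location_premium), 4)

# Price per VM-hour rises as spare capacity shrinks:
print(dynamic_price(0.10, demand=50, supply=100))    # 0.125
print(dynamic_price(0.10, demand=150, supply=100))   # 0.175
print(dynamic_price(0.10, demand=150, supply=100,
                    location_premium=0.10))          # 0.1925
```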

Fixed Pricing VS Dynamic Pricing


In Table 6, the main differences between fixed and dynamic pricing are listed.
Table 6 — Fixed pricing VS dynamic pricing

Service Level Agreements


A service-level agreement (SLA) is a commitment between a service provider and a client.
Particular aspects of the service, such as quality, availability, and responsibilities, are agreed upon
between the service provider and the service user. It defines:
● The metrics used to measure the level of service provided.
● Remedies or penalties resulting from failure to meet the promised service level
expectations.
The most common component of an SLA is that the services should be provided to the customer
as agreed upon in the contract. It is a critical component of any technology vendor contract. For
example, Internet service providers will commonly include service level agreements within the
terms of their contracts with customers to define the level of service being sold in plain language
terms. Usually, SLAs are between companies and external suppliers, but they may also be
between two departments within a company.
In this case, the SLA will typically have a technical definition in mean time between failures
(MTBF), mean time to repair or mean time to recovery (MTTR), identifying which party is
responsible for reporting faults or paying fees, responsibility for various data rates, throughput,
jitter, or similar measurable details. The Service Level Agreement includes:
● Detailed service overview
● Speed of service delivery
● Plan for performance monitoring
● Description of the reporting procedure
● List of penalties that will be applied in case of agreement violations
● Constraints
Types of SLA
The selection of the types of SLA in an organization depends on many significant aspects.

While some are targeted at individual customer groups, others discuss issues relevant to entire
companies. This is because the needs of one user differ from another. Here are some types of
SLAs used by businesses today and how each one is utilized for specific situations:
1. Customer-based SLA: This type of agreement is used for individual customers and
comprises all relevant services that a client may need while leveraging only one contract. It
contains details regarding the type and quality of service that has been agreed upon.
For example, a telecommunication service includes voice calls, messaging, and internet services,
but all exist under a single contract.
2. Service-based SLA: This SLA is a contract that includes one identical type of service for all
of its customers. Because the service is limited to one unchanging standard, it is more
straightforward and convenient for vendors.
For example, using a service-based agreement regarding an IT helpdesk would mean that the
same service is valid for all end-users that sign the service-based SLA.
3. Multi-level SLA: This agreement is customized according to the needs of the end-user
company. It allows the user to integrate several conditions into the same system to create a more
convenient service. This type of SLA can be divided into the following subcategories:
● Corporate level: This SLA does not require frequent updates since its issues are
typically unchanging. It includes a comprehensive discussion of all the relevant aspects
of the agreement and applies to all customers in the end-user organization.
● Customer level: This contract discusses all service issues that are associated with a
specific group of customers. However, it does not take into consideration the type of user
services.
For example, an organization may request that the security level in one of its
departments be strengthened. In this situation, the entire company is secured by one
security agency, but one of its departments requires stronger security for specific reasons.
● Service level: In this agreement, all aspects attributed to a particular service regarding a
customer group are included.
Components of SLA
An SLA highlights what the client and the service provider want to achieve with their
cooperation and outlines the obligations of the participants, the expected performance level, and
the results of cooperation.
An SLA usually has a defined duration time that is provided in the document. The services that
the provider agrees to deliver are often described in detail to avoid misunderstanding, including
procedures of performance monitoring, assessment, and troubleshooting. The following
components are necessary for a good agreement:
● Document overview: This first section sets forth the basics of the agreement, including
the parties involved, the start date, and a general introduction of the services provided.
● Strategic goals: Description of the agreed purpose and objectives.
● Description of services: The SLA needs detailed descriptions of every service offered
under all possible circumstances, including the turnaround times. Service definitions
should include how the services are delivered, whether maintenance service is offered,
what the hours of operation are, where dependencies exist, an outline of the processes,
and a list of all technology and applications used.
● Exclusions: Specific services that are not offered should also be clearly defined to avoid
confusion and eliminate room for assumptions from other parties.
● Service performance: Performance measurement metrics and performance levels are
defined. The client and service provider should agree on a list of all the metrics they will
use to measure the provider's service levels.
● Redressing: Compensation or payment should be defined if a provider cannot properly
fulfill their SLA.
● Stakeholders: Clearly defines the parties involved in the agreement and establishes their
responsibilities.
● Security: All security measures that the service provider will take are defined. Typically,
this includes the drafting of and consensus on anti-poaching, IT security, and nondisclosure
agreements.
● Risk management and disaster recovery: Risk management processes and a disaster
recovery plan are established and communicated.
● Service tracking and reporting: This section defines the reporting structure, tracking
intervals, and stakeholders involved in the agreement.
● Periodic review and change processes: The SLA and all established key performance
indicators (KPIs) should be regularly reviewed. This process is defined, as well as the
appropriate process for making changes.
● Termination process: The SLA should define the circumstances under which the
agreement can be terminated or will expire. The notice period from either side should
also be established.
● Finally, all stakeholders and authorized participants from both parties must sign the
document to approve every detail and process.
Common Metrics of SLA
Service-level agreements can contain numerous service-performance metrics with corresponding
service-level objectives. A common case in IT-service management is a call center or service
desk. Metrics commonly agreed to in these cases include:
● Abandonment Rate: Percentage of calls abandoned while waiting to be answered.
● ASA (Average Speed to Answer): Average time (usually in seconds) it takes for a call to
be answered by the service desk.
● Resolution time: The time it takes for an issue to be resolved once logged by the service
provider.
● Error rate: The percentage of errors in a service, such as coding errors and missed
deadlines.
● TSF (Time Service Factor): Percentage of calls answered within a definite timeframe,
e.g., 80% in 20 seconds.
● FCR (First-Call Resolution): A metric that measures a contact center's ability to resolve a
customer's inquiry or problem on the first call or contact.
● TAT (Turn-Around Time): Time taken to complete a particular task.
● TRT (Total Resolution Time): Total time taken to complete a particular task.
● MTTR (Mean Time To Recover): Time taken to recover after an outage of service.
● Security: The number of undisclosed vulnerabilities, for example. If an incident occurs,
service providers should demonstrate that they've taken preventive measures.
Uptime is also a common metric used for data services such as shared hosting, virtual private
servers, and dedicated servers. Standard agreements include the percentage of network uptime,
power uptime, number of scheduled maintenance windows, etc. Many SLAs track to the ITIL
specifications when applied to IT services.
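Uptime percentages are easier to reason about when converted into the downtime they permit. The short Python calculation below does that conversion; the 30-day month is an assumption chosen for round numbers.

```python
# Converting an SLA uptime percentage into the downtime it permits.
# A quick way to sanity-check what "two nines" or "four nines" means
# over a 30-day month (plain arithmetic, not any provider's terms).

def allowed_downtime_minutes(uptime_pct: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} min/month")
# 99.0%  -> 432.0 min/month (7.2 hours)
# 99.9%  -> 43.2 min/month
# 99.99% -> 4.3 min/month
```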
Types of SLA Penalties
A natural response to any violation is a penalty. An SLA penalty depends on the industry and
business. Here are the two most common SLA penalty types.

1. Financial penalty: This kind of penalty requires a vendor to pay the customer compensation
for damages equal to the amount written in the agreement. The amount will depend on the extent
of the violation and damage and may not fully reimburse what a customer paid for the eCommerce
service or eCommerce support.
● License extension or support: It requires the vendor to extend the license term or offer
additional customer support without charge. This could include development and
maintenance.
2. Service credit: In this case, a service provider will have to provide a customer with
complimentary services for a specific time. To avoid any confusion or misunderstanding between
the two parties in case of an SLA violation, such penalties must be clearly articulated in the agreement.
Otherwise, they won't be legitimate.
● Service availability: It includes factors such as network uptime, data center resources,
and database availability. Penalties should be added as deterrents against service
downtime, which could negatively affect the business.
● Service quality: It involves performance guarantee, the number of errors allowed in a
product or service, process gaps, and other issues that relate to quality.
These penalties must be specified in the language of the SLA, or they won't be enforceable. In
addition, some customers may not think the service credit or license extension penalties are
adequate compensation. They may question the value of continuing to receive a vendor's services
that cannot meet its quality levels.
Consequently, it may be worth considering a combination of penalties and including an
incentive, such as a monetary bonus, for more than satisfactory work.
Revising and Changing an SLA
Since business requirements are subject to change, it's important to revise an SLA regularly. It
will help to always keep the agreement in line with the business's service level objectives. The
SLA should be revised when changes in any of the following occur:
● A company's requirements
● Workload volume
● Customer's needs
● Processes and tools
The contract should have a detailed plan for its modification, including change frequency,
change procedures, and changelog.
1. SLA Calculation: SLA assessment and calculation determine a level of compliance with the
agreement. There are many tools for SLA calculation available on the internet.
2. SLA uptime: Uptime is the amount of time the service is available. Depending on the type of
service, a vendor should provide minimum uptime relevant to the average customer's demand.
Usually, a high uptime is critical for websites, online services, or web-based providers as their
business relies on its accessibility.
3. Incident and SLA violations: This calculation helps determine the extent of an SLA breach
and the penalty level foreseen by the contract. The tools usually calculate a downtime period
during which service wasn't available, compare it to SLA terms and identify the extent of the
violation.
4. SLA credit: If a service provider fails to meet the customer's expectations outlined in the
SLA, a service credit or other type of penalty must be given as a form of compensation. A
percentage of credit depends directly on the downtime period, which exceeded its norm indicated
in a contract.
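As an illustration of how such a credit might be computed, here is a hedged Python sketch with invented credit tiers; real agreements define their own tiers, measurement windows, and terms.

```python
# Sketch of an SLA credit calculation: the credit percentage grows with
# how far measured uptime fell below the promised level. The tiers are
# illustrative, not taken from any real provider's agreement.

CREDIT_TIERS = [   # (minimum measured uptime %, credit % of monthly bill)
    (99.9, 0),     # SLA met: no credit
    (99.0, 10),
    (95.0, 25),
    (0.0, 100),
]

def service_credit(measured_uptime_pct: float, monthly_bill: float) -> float:
    for floor, credit_pct in CREDIT_TIERS:   # tiers ordered high to low
        if measured_uptime_pct >= floor:
            return monthly_bill * credit_pct / 100
    return monthly_bill

print(service_credit(99.95, 500.0))  # 0.0   -> SLA met
print(service_credit(99.5, 500.0))   # 50.0  -> 10% credit
print(service_credit(97.0, 500.0))   # 125.0 -> 25% credit
```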

Service Level Management

Service level management is the process of managing SLAs that helps companies to define,
document, monitor, measure, report, and review the performance of the provided services. The
professional SLA management services should include:
● Setting realistic conditions that a service provider can ensure.
● Meeting the needs and requirements of the clients.
● Establishing the right metrics for evaluating the performance of the services.
● Ensuring compliance with the terms and conditions agreed with the clients.
● Avoiding any violations of SLA terms and conditions.
An SLA is a preventive means to establish a transparent relationship between both parties
involved and to build trust in the cooperation. Such a document is fundamental to a
successful collaboration between a client and a service provider.
IaaS Networking Options
Networking is one of the fundamental elements of cloud computing and also one of the hazards
to users of cloud computing. Network performance degradation and instability can greatly affect
the consumption of cloud resources. Applications that are relatively isolated or are specially
designed to deal with network disruptions have an advantage running in the cloud.

From a different perspective, network resources can be virtualized and used in cloud computing
just as other resources are. In this section, we first discuss basic use of IP addresses in a cloud
context and then cover virtual networks.

Delivery of cloud services takes place over networks at different levels using different protocols.
This is one of the key differences in cloud models. In PaaS and SaaS clouds, delivery of services
is via an application protocol, typically HTTP. In IaaS, cloud services can be delivered over
multiple layers and protocols—for example, IPSec for VPN access and SSH for command-line
access.

Management of the different layers of the network system is also the responsibility of either the
cloud provider or the cloud consumer, depending on the type of cloud. In a SaaS model, the
cloud provider manages all the network layers. In an IaaS model, the cloud consumer manages
the network levels, except for the physical and data link layers. However, this is a simplification
because, in some cases, the network services relate to the cloud infrastructure and some services
relate to the images. The PaaS model is intermediate between IaaS and SaaS.
Table 1.5 summarizes the management of network layers in different cloud scenarios.
This table is a simplification of the many models on the market. However, it shows that an IaaS
gives cloud consumers considerably more flexibility in network topology and services than PaaS
and SaaS clouds (but at the expense of managing the tools that provide the flexibility).

IP Addresses
One of the first tasks in cloud computing is determining how to connect to the virtual machine.
Several options exist when creating a virtual machine: system-generated, reserved, and VLAN IP
address solutions. System-generated IP addresses are analogous to Dynamic Host Configuration
Protocol (DHCP)–assigned addresses. They are actually static IP addresses, but the IaaS cloud
assigns them. This is the easiest option if all you need is a virtual machine that you can log into
and use.

Reserved IP addresses are addresses that can be provisioned and managed independently of a
virtual machine. Reserved IP addresses are useful if you want to assign multiple IP addresses to a
virtual machine.

IPv6 is an Internet protocol intended to supersede IPv4. The Internet needs more IP addresses
than IPv4 can support, which is one of the primary motivations for IPv6. The last top-level
block of IPv4 addresses was assigned in February 2011. The Internet Engineering Task Force
(IETF) published Request for Comments (RFC) 2460, Internet Protocol, Version 6 (IPv6), the
specification for IPv6, in 1998. IPv6 also provides other features not present in IPv4. Network
security is integrated into the design of IPv6, which makes IPSec a mandatory part of the
implementation. IPv6 does not specify interoperability with IPv4 and essentially creates an
independent network. Today usage rates of IPv6 are very low, and most providers operate in
compatibility/tolerance mode. However, that could change.

Network Virtualization
When dealing with systems of virtual machines and considering network security, you need to
manage networks. Network resources can be virtualized just like other cloud resources. To do
this, a cloud uses virtual switches to separate a physical network into logical partitions. Figure
1.13 shows this concept.
Figure 1.13. Physical and virtual networks in a cloud

VLANs can act as an extension of your enterprise’s private network. You can connect to a VLAN
via an encrypted VPN connection.

A hypervisor can share a single physical network interface with multiple virtual machines. Each
virtual machine has one or more virtual network interfaces. The hypervisor can provide
networking services to virtual machines in three ways:
● Bridging
● Routing
● Network address translation (NAT)
Bridging is usually the default mode. In this mode, the hypervisor works at the data link layer
and makes the virtual network interface externally visible at the Ethernet level. In routing mode,
the hypervisor works at the network layer and makes the virtual network interface externally
visible at the IP level.

In network address translation, the virtual network interface is not visible externally. Instead, it
enables the virtual machine to send network data out to the Internet, but the virtual machine is
not visible on the Internet. Network address translation is typically used to hide virtualization
network interfaces with private IP addresses behind a public IP address used by a host or router.
The NAT software changes the IP address information in the network packets based on
information in a routing table. The checksum values in the packet must be changed as well.

NAT can be used to put more servers on the network than the number of public IP addresses you
have. It does this by port translation. This is one reason IPv6 is still not in wide use: Even though
the number of computers exceeds the number of IP addresses, you can do some tricks to share
them. For example, suppose that you have a router and three servers handling HTTP, FTP, and
mail, respectively. You can assign a public IP address to the router and private IP addresses to the
HTTP, FTP, and mail servers, and forward incoming traffic (see Table 1.6).
Table 1.6. Example of Network Address Translation
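To make the port-translation idea concrete, here is a toy Python model of the forwarding table just described; the public address uses a documentation range, the private addresses are common RFC 1918 examples, and the whole thing is an illustration rather than working NAT software.

```python
# Toy illustration of NAT port forwarding as in Table 1.6: one public IP
# on the router, with incoming traffic forwarded to private servers by
# destination port. All addresses and ports are example values.

PUBLIC_IP = "203.0.113.10"   # router's public address (documentation range)

FORWARDING_TABLE = {         # destination port -> (private IP, port)
    80: ("192.168.0.2", 80),    # HTTP server
    21: ("192.168.0.3", 21),    # FTP server
    25: ("192.168.0.4", 25),    # mail server
}

def forward(dst_ip: str, dst_port: int):
    """Rewrite the destination of an incoming packet, NAT-style."""
    if dst_ip == PUBLIC_IP and dst_port in FORWARDING_TABLE:
        return FORWARDING_TABLE[dst_port]   # new (IP, port) for the packet
    return None                             # no rule: drop or handle locally

print(forward("203.0.113.10", 80))  # ('192.168.0.2', 80)
```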

Virtual private cloud (VPC)


A VPC is a public cloud offering that lets an enterprise establish its own private cloud-like
computing environment on shared public cloud infrastructure. A VPC gives an enterprise the
ability to define and control a virtual network that is logically isolated from all other public cloud
tenants, creating a private, secure place on the public cloud.

Imagine that a cloud provider’s infrastructure is a residential apartment building with multiple
families living inside. Being a public cloud tenant is akin to sharing an apartment with a few
roommates. In contrast, having a VPC is like having your own private condominium—no one
else has the key, and no one can enter the space without your permission.

A VPC’s logical isolation is implemented using virtual network functions and security features
that give an enterprise customer granular control over which IP addresses or applications can
access particular resources. It is analogous to the “friends-only” or “public/private” controls on
social media accounts used to restrict who can or can’t see your otherwise public posts.

Features
VPCs are a “best of both worlds” approach to cloud computing. They give customers many of
the advantages of private clouds, while leveraging public cloud resources and savings. The
following are some key features of the VPC model:

● Agility: Control the size of your virtual network and deploy cloud resources whenever
your business needs them. You can scale these resources dynamically and in real-time.
● Availability: Redundant resources and highly fault-tolerant availability zone
architectures mean your applications and workloads are highly available.
● Security: Because the VPC is a logically isolated network, your data and applications
won’t share space or mix with those of the cloud provider’s other customers. You have
full control over how resources and workloads are accessed, and by whom.
● Affordability: VPC customers can take advantage of the public cloud’s
cost-effectiveness, such as saving on hardware costs, labor times, and other resources.

Benefits
Each VPC’s main features readily translate into a benefit to help your business achieve agility,
increased innovation, and faster growth.
● Flexible business growth: Because cloud infrastructure resources—including virtual
servers, storage, and networking—can be deployed dynamically, VPC customers can
easily adapt to changes in business needs.
● Satisfied customers: In today’s “always-on” digital business environments, customers
expect uptime ratios of nearly 100%. The high availability of VPC environments enables
reliable online experiences that build customer loyalty and increase trust in your brand.
● Reduced risk across the entire data lifecycle: VPCs enjoy high levels of security at the
instance or subnet level, or both. This gives you peace of mind and further increases the
trust of your customers.
● More resources to channel toward business innovation: With reduced costs and fewer
demands on your internal IT team, you can focus your efforts on achieving key business
goals and exercising core competencies.

Architecture
In a VPC, you can deploy cloud resources into your own isolated virtual network. These cloud
resources—also known as logical instances—fall into three categories.

Compute: Virtual server instances (VSIs, also known as virtual servers) are presented to the user
as virtual CPUs (vCPUs) with a predetermined amount of computing power, memory, etc.
Storage: VPC customers are typically allocated a certain block storage quota per account, with
the ability to purchase more. It is akin to purchasing additional hard drive space.
Recommendations for storage are based on the nature of your workload.
Networking: You can deploy virtual versions of various networking functions into your virtual
private cloud account to enable or restrict access to its resources. These include public gateways,
which are deployed so that all or some areas of your VPC environment can be made available on
the public-facing Internet; load balancers, which distribute traffic across multiple VSIs to
optimize availability and performance; and routers, which direct traffic and enable
communication between network segments. Direct or dedicated links enable rapid and secure
communications between your on-premises enterprise IT environment or your private cloud and
your VPC resources on public cloud.
Three-tier architecture in a VPC
The majority of today’s applications are designed with a three-tier architecture comprised of the
following interconnected tiers:
● The web or presentation tier, which takes requests from web browsers and presents
information created by, or stored within, the other layers to end users.
● The application tier, which houses the business logic and is where most processing takes
place.
● The database tier, comprised of database servers that store the data processed in the
application tier.
To create a three-tier application architecture on a VPC, you assign each tier its own
subnet, which gives it its own IP address range. Each layer is automatically assigned
its own unique ACL. A short subnetting sketch follows.
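Here is that subnetting sketched with Python's standard ipaddress module; the 10.0.0.0/16 VPC block and the /24 tier subnets are illustrative choices, not provider requirements.

```python
# Carving a VPC address block into one subnet per tier with Python's
# standard ipaddress module. The 10.0.0.0/16 block and /24 subnet size
# are illustrative choices.
import ipaddress

vpc_block = ipaddress.ip_network("10.0.0.0/16")
web, app, db = list(vpc_block.subnets(new_prefix=24))[:3]

for name, subnet in [("web", web), ("application", app), ("database", db)]:
    print(f"{name:12s} tier: {subnet}  ({subnet.num_addresses} addresses)")
# web          tier: 10.0.0.0/24  (256 addresses)
# application  tier: 10.0.1.0/24  (256 addresses)
# database     tier: 10.0.2.0/24  (256 addresses)
```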

Security
VPCs achieve high levels of security by creating virtualized replicas of the security features used
to control access to resources housed in traditional data centers. These security features enable
customers to define virtual networks in logically isolated parts of the public cloud and control
which IP addresses have access to which resources.

Two types of network access controls comprise the layers of VPC security:

● Access control lists (ACLs): An ACL is a list of rules that limit who can access a
particular subnet within your VPC. A subnet is a portion or subdivision of your VPC; the
ACL defines the set of IP addresses or applications granted access to it.
● Security group: With a security group, you can create groups of resources (which may
be situated in more than one subnet) and assign uniform access rules to them. For
example, if you have three applications in three different subnets, and you want them all
to be public Internet-facing, you can place them in the same security group. Security
groups act like virtual firewalls, controlling the flow of traffic to your virtual servers, no
matter which subnet they are in (a short sketch follows the list).
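As a sketch of the security-group idea, the following uses boto3 (the AWS SDK for Python) to create a group and open HTTP to the Internet; it assumes AWS credentials are already configured, and the VPC ID is a placeholder to be replaced.

```python
# Hedged sketch with boto3 (pip install boto3): create a security group
# in a VPC and allow inbound HTTP from anywhere. Assumes configured AWS
# credentials; the VPC ID below is a placeholder, not a real resource.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="web-public",
    Description="Allow public HTTP to web-tier instances",
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],   # any IPv4 source
    }],
)
```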

VPC vs. …
VPC vs. virtual private network (VPN)
A virtual private network (VPN) makes a connection to the public Internet as secure as a
connection to a private network by creating an encrypted tunnel through which the information
travels. You can deploy a VPN-as-a-Service (VPNaaS) on your VPC to establish a secure
site-to-site communication channel between your VPC and your on-premises environment or
other location. Using a VPN, you can connect subnets in multiple VPCs so that they function as
if they were on a single network.
VPC vs. private cloud
Private cloud and virtual private cloud are sometimes—and mistakenly—used interchangeably.
In fact, a virtual private cloud is actually a public cloud offering. A private cloud is a
single-tenant cloud environment owned, operated, and managed by the enterprise, and hosted
most commonly on-premises or in a dedicated space or facility. By contrast, a VPC is hosted on
multi-tenant architecture, but each customer’s data and workloads are logically separate from
those of all other tenants. The cloud provider is responsible for ensuring this logical isolation.

VPC vs. public cloud


A virtual private cloud is a single-tenant concept that gives you the opportunity to create a
private space within the public cloud’s architecture. A VPC offers greater security than
traditional multi-tenant public cloud offerings but still lets customers take advantage of the high
availability, flexibility, and cost-effectiveness of the public cloud. In some cases, a VPC and a
public cloud account may scale in different ways. For instance, additional
storage volumes may only be available in blocks of a certain size for VPCs. Not all public cloud
features are supported in all VPC offerings.

IaaS - storage

IaaS is an abbreviation that stands for Infrastructure as a Service.

In this model, a cloud provider supplies the client with the necessary amount of
computing resources - virtual servers, remote workstations, data storage - with or without the
provision of software, and software deployment within the infrastructure remains the client's
prerogative. In essence, IaaS is an alternative to renting physical servers, racks in the data center,
and operating systems; instead, the necessary resources are purchased with the ability to quickly
scale them if necessary. In many cases, this model may be more profitable than the traditional
purchase and installation of equipment. Here are just a few examples:

● if the need for computing resources is not constant and can vary greatly depending on the
period, and there is no desire to overpay for unused capacity;
● when a company is just starting out on the market and does not have the working capital
to buy all the necessary infrastructure (a frequent option among startups);
● there is a rapid growth in business, and the network infrastructure must keep pace with it;
● if you need to reduce the cost of purchasing and maintaining equipment;
● when a new direction is launched, and it is necessary to test it without investing
significant funds in resources.
Difference between object storage, file storage and block storage

Object storage
Object storage is a system that divides data into separate, self-contained units that are stored in
a flat environment, with all objects at the same level. There are no folders or sub-directories like
those used with file storage. Additionally, object storage does not store all data together in a
single file. Objects also contain metadata, which is information about the file that helps with
processing and usability. Users can set the value for fixed-key metadata with object storage, or
they can create both the key and value for custom metadata associated with an object.

Instead of using a file name and path to access an object, each object has a unique identifier.
Objects can be stored locally on computer hard drives and cloud servers. However, unlike with
file storage, you must use an Application Programming Interface (API) to access and manage
objects.
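For instance, with Amazon S3 (one common object store), storing and retrieving an object through an API looks roughly like the boto3 sketch below; the bucket name, key, and metadata values are placeholders, and credentials are assumed to be configured.

```python
# Accessing objects through an API rather than a file path: a minimal
# sketch with boto3 against Amazon S3. Bucket, key, body, and metadata
# are placeholder values.
import boto3

s3 = boto3.client("s3")

# Store an object under a flat key, attaching custom metadata
s3.put_object(
    Bucket="example-bucket",
    Key="reports/2023-q1.pdf",          # a key, not a directory path
    Body=b"...pdf bytes...",
    Metadata={"department": "finance", "retention": "7y"},
)

# Retrieve the object and its metadata by key
obj = s3.get_object(Bucket="example-bucket", Key="reports/2023-q1.pdf")
print(obj["Metadata"])     # {'department': 'finance', 'retention': '7y'}
data = obj["Body"].read()  # the stored bytes
```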

Pros and cons of object storage


In recent years, many organizations replaced their on-premises tape storage drives with object
storage, which increased usability and security. Examples of object storage systems include
Amazon S3, Azure Blob Storage, and IBM Cloud Object Storage.

Pros
● Handles large amounts of unstructured data: The format of object storage allows for easily
storing and managing a high volume of unstructured data, which is becoming increasingly
important with artificial intelligence, machine learning and big data analytics.
● Affordable consumption model: Instead of paying in advance for a set amount of storage
space, as is common with file storage, you pay only for the object storage you use.
● Unlimited scalability: Because object storage uses a consumption model, you can add as
much additional storage as you need — even petabytes or more.
● Uses metadata: Because the metadata is stored with the objects, users can quickly gain
value from data and more easily retrieve the object they need.
● Advanced search capabilities: Object storage enables users to search for metadata, object
contents and other properties.

Cons
● Cannot lock files: All users with access to the cloud, network or hardware device can
access the objects stored there.
● Slower performance than other storage types: The object format requires more processing
time than file storage and block storage.
● Cannot modify a single portion of a file: Once an object is created, you cannot change it;
you can only replace it with a new object.
Use cases for object storage
● IoT data management: The ability to quickly scale and easily retrieve data makes object
storage a good choice for the rapidly increasing amounts of IoT data being gathered and
managed, especially in the manufacturing and healthcare industries.
● Email: Organizations that are required to store large volumes of emails for historical and
compliance purposes often turn to object storage as their primary repository for both
scalability and price.
● Backup/recovery: Organizations often turn to object storage for their backup and
recovery storage because performance is less of an issue for this use case.
● Video surveillance: Object storage provides an affordable option for organizations that
need to store many video recordings and keep the footage for several years.

What is file storage?


File storage is when all the data is saved together in a single file with a file extension type that’s
determined by the application used to create the file or file type, such as .jpg, .docx or .txt. For
example, when you save a document on a corporate network or your computer’s hard drive, you
are using file storage. Files may also be stored on a network-attached storage (NAS) device.
These devices are specific to file storage, making it a faster option than general network servers.
Other examples of file storage devices include cloud-based file storage systems, network drives,
computer hard drives and flash drives.

File storage uses a hierarchical structure where files are organized by the user in folders and
subfolders, which makes it easier to find and manage files. To access a file, the user selects or
enters the path for the file, which includes the sub-directories and file name. Most users manage
file storage through a simple interface such as a file manager.
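
A small sketch of this hierarchical, path-based access using only Python's standard library; the
folder and file names are hypothetical.

    from pathlib import Path

    # Build (and later walk) a folder/subfolder hierarchy to reach a file.
    reports = Path("corporate-share") / "finance" / "2023" / "q4"
    reports.mkdir(parents=True, exist_ok=True)

    # Save a document at a specific point in the hierarchy...
    (reports / "summary.txt").write_text("Quarterly summary goes here.")

    # ...and find it again by following the same folder path.
    for f in reports.glob("*.txt"):
        print(f, f.stat().st_size, "bytes")
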

Pros and cons of file storage


Examples of data typically saved using file storage include presentations, reports, spreadsheets,
graphics, photos, etc. File storage is familiar to most users and allows access rights and limits to
be set by the user, but managing large numbers of files, and the hardware costs involved, can
become a challenge.

Pros
● Easy to access on a small scale: With a small-to-moderate number of files, users can
easily locate and click on a desired file, and the file with the data opens. Users then save
the file to the same or a different location when they’re finished with it.
● Familiar to most users: As the most common storage type for end users, most people with
basic computer skills can easily navigate file storage with little assistance or additional
training.
● Users can manage their own files: Using a simple interface, end users can create, move
and delete their files.
● Allows access rights/file sharing/file locking to be set at user level: Users and
administrators can set a file as write (meaning users can make changes to the file),
read-only (users can only view the data) or locked (specific users cannot access the file
even as read only). Files can also be password-protected.

Cons
● Challenging to manage and retrieve large numbers of files: While hierarchical storage works
well for, say, 20 folders with 10 subfolders each, file management becomes increasingly
complicated as the number of folders, subfolders and files increases. As the volume grows, the
search feature takes longer to find a desired file, and that lost time becomes significant when
spread over employees throughout an organization.
● Hard to work with unstructured data: While it’s possible to save unstructured data like
text, mobile activity, social media posts and Internet of Things (IoT) sensor data in file
storage, it is typically not the best option for unstructured data storage, especially in large
amounts.
● Becomes expensive at large scales: When the amount of storage space on devices and
networks reaches capacity, additional hardware devices must be purchased.
Use cases for file storage
● Collaboration of documents: While it’s easy to collaborate on a single document with
cloud storage or Local Area Network (LAN) file storage, users must create a versioning
system or use versioning software to prevent overwriting each other’s changes.
● Backup and recovery: Cloud backup and external backup devices typically use file
storage for creating copies of the latest versions of files.
● Archiving: Because of the ability to set permissions at a file level for sensitive data and
the simplicity of management, many organizations use file storage for archiving
documents for compliance or historical reasons.

What is block storage?


Block storage is when the data is split into fixed-size blocks and then stored separately with
unique identifiers. The blocks can be stored in different environments, such as one block in
Windows and the rest in Linux. When a user retrieves a block, the storage system reassembles
the blocks into a single unit. Block storage is the default for hard disk drives and for
frequently updated data. You can store blocks on Storage Area Networks (SANs) or in cloud
storage environments.
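
A toy sketch of that split-and-reassemble model in plain Python; real block devices do this at the
storage layer, so this only illustrates the idea, not an actual implementation.

    BLOCK_SIZE = 4096  # a typical fixed block size, in bytes

    def split_into_blocks(data):
        """Store each fixed-size chunk under a unique block identifier."""
        return {
            block_id: data[offset:offset + BLOCK_SIZE]
            for block_id, offset in enumerate(range(0, len(data), BLOCK_SIZE))
        }

    def reassemble(blocks):
        """On retrieval, the storage system rejoins the blocks into a single unit."""
        return b"".join(blocks[block_id] for block_id in sorted(blocks))

    data = b"x" * 10000                   # stand-in for any stored file
    blocks = split_into_blocks(data)      # three blocks: 4096 + 4096 + 1808 bytes
    assert reassemble(blocks) == data     # round-trips losslessly
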
Pros and cons of block storage
Block storage systems have been a mainstay in the tech industry for decades. However, many
organizations are transitioning away from block because of the limited scale and lack of
metadata.

Pros
● Fast: When all blocks are stored locally or close together, block storage has a high
performance with low latency for data retrieval, making it a common choice for
business-critical data.
● Reliable: Because blocks are stored in self-contained units, block storage has a low fail
rate.
● Easy to modify: Changing a block does not require creating a new block; instead, a new
version is created.

Cons
● Lack of metadata: Block storage does not contain metadata, making it less usable for
unstructured data storage.
● Not searchable: Large volumes of block data quickly become unmanageable because of
limited search capabilities.
● High cost: Purchasing additional block storage is expensive and often cost-prohibitive at
a high scale.

Use cases for block storage


● Databases: Because block storage has a high performance and is easily updatable, many
organizations use it for transactional databases.
● Email servers: High performance and reliability make block storage a common solution
for storing emails.
● Virtual machine file system (VMFS) volumes: Organizations often use block storage for
deploying VMFS across the enterprise. Take, for example, the deployment of virtual
machines across an enterprise. With block storage, you can easily create and format a
block-based storage volume to store the VMFS. A physical server can then attach to that
block, creating multiple virtual machines. What’s more, creating a block-based volume,
installing an operating system and attaching to that volume enables users to share files
using that native operating system.

What are the key differences between object storage, block storage and file storage?
When determining which type of storage to use for different types of data, consider the
following:
● Cost: Because the costs involved with block and file storage are higher, many
organizations choose object storage for high volumes of data.
● Management ease: The metadata and searchability make object storage a top choice for
high volumes of data. File storage, with its hierarchical organization system, is more
appropriate for lower volumes of data.
● Volume: Organizations with high volumes of data often choose object or block storage.
● Retrievability: Data can be retrieved from all three types of storage, though file
and object storage are typically easier to access.
● Handling of metadata: Although file storage contains very basic metadata, information
with extensive metadata is typically best served by object storage.
● Data protection: While the data is stored, it's essential that it is protected from breaches
and cybersecurity threats.
● Storage use cases: Each type of storage is most effective for different use cases and
workflows. By understanding their specific needs, organizations can select the type that
fits the majority of their storage use cases.

What is Data Protection in the Cloud?


Cloud data protection practices leverage tools and techniques for the purpose of protecting
cloud-based corporate data. These practices should provide coverage for all data, including data
at rest and data in transit, whether the data is located in an in-house private store or managed by
a third-party contractor. Data protection for cloud environments is becoming critical as more
businesses transition from building and managing their own data centers to storing applications
and data in the cloud.

Why Companies Need Cloud Data Protection


Today’s organizations are generating, collecting, and storing huge amounts of data with a wide
range of sensitivity, from corporate trade secrets and private consumer financial information to
less important data.

Additionally, organizations are moving data to the cloud, storing it in diverse locations that
introduce complexities, from simple public cloud and private cloud repositories, to complicated
architectures like multiclouds, hybrid clouds, and Software as a Service (SaaS) platforms.

The complexity of cloud architectures, coupled with increasingly demanding data protection and
privacy regulations and with vendors' shared responsibility models, creates many security
challenges. Here are key challenges you might encounter:

● Visibility—it’s difficult to keep an accurate inventory of all applications and data.


● Access—there are fewer controls for data and applications hosted on third-party
infrastructure than on-premises. It is not always possible to gain visibility into user
activity and learn how devices or data are being used.
● Controls—cloud vendors offer a “shared responsibility” model. This means that while
cloud users gain controls for some security aspects, others remain within the scope of the
vendor and users cannot ensure security.
● Inconsistencies—different cloud providers offer different capabilities, which can lead to
inconsistent cloud data protection and security.
All of these challenges can be exploited by threat actors, and may result in security breaches, loss
or theft of trade secrets and private or financial information, and malware or ransom infections.

Another major factor is compliance. Organizations are required to comply with data protection
and privacy laws and regulations, such as the European Union's General Data Protection
Regulation (GDPR), as well as the Health Insurance Portability and Accountability Act (HIPAA)
of 1996 (US).It can be very difficult for companies to consistently set and implement security
policies across multiple cloud environments, as well as prove auditors' compliance.

Data Protection in the Cloud: Key Best Practices

Know your Responsibilities in the Cloud: Using a cloud service does not mean that the cloud
provider is responsible for your data. You and your provider share responsibilities. This shared
responsibility model allows cloud providers to ensure that the hardware and software services
they provide are secure while cloud consumers remain responsible for the security of their data
assets. Cloud providers often offer better security than many companies can achieve on their own.
On the other hand, cloud consumers lose visibility because the cloud vendor is in charge of
infrastructure operations. Challenges can arise when development, operations, hosting, and
security responsibilities get mixed up between in-house teams and cloud vendors.

Ask About the Provider’s Processes in Case of a Breach: Cloud vendors should provide transparent
and well-documented plans outlining mitigation and support during breaches. In a well-prepared
setup, multiple fail-safes and alarms are immediately triggered during a breach,
alerting all relevant parties that an attack is happening. Another emergency measure is triggering
a lockdown that secures data before it can be transferred or decrypted.

Identify Security Gaps Between Systems: Cloud environments are typically integrated with other
services, some in-house and some third-party. The more systems and vendors you add to
the stack, the more gaps are created. Organizations need to identify each security gap and take
measures that ensure the security of the data and assets shared and used by these systems. While
some measures are implemented by third-party vendors, organizations also need to implement
their own measures to ensure compliance and security. Each industry is required to uphold
certain security practices, and third-party vendors do not always offer the same level of compliance.

Utilize File-Level Encryption: Organizations should implement comprehensive file-level


encryption measures, even if cloud vendors provide basic encryption. File-level encryption can
serve as the basis of your cloud security, adding a layer of protection before uploading data to the
cloud. You can also "shard" data into fragments. Storing shards in different locations makes it
difficult for threat actors to assemble the whole file, even if they manage to breach the system.
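
A minimal sketch of encrypting a file before upload, assuming the third-party "cryptography"
package; the file name is a placeholder, and the key handling shown here is deliberately naive - a
real deployment would keep keys in a key management service, as discussed above.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, keep this in a KMS/HSM, never beside the data
    fernet = Fernet(key)

    # Encrypt the file locally, so only ciphertext ever leaves the premises.
    with open("report.docx", "rb") as f:          # placeholder file name
        ciphertext = fernet.encrypt(f.read())

    with open("report.docx.enc", "wb") as f:
        f.write(ciphertext)

    # Without the key, a breached cloud store yields only unreadable ciphertext.
    plaintext = fernet.decrypt(ciphertext)
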

Transfer Data Securely: You can implement point-to-point security by combining additional
encryption with SSL/TLS for all communications. You can use secure email and file protection tools,
which enable you to track and control who can see your data, who can access that data, when and
how access is revoked (all actions or specific actions like forwarding). You can restrict the types
of data that are allowed to be transferred outside your organization ecosystems. You should also
restrict certain uses of data, and ensure that users and recipients comply with data protection
regulations.
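
A hedged sketch of one such point-to-point transfer over TLS, assuming the third-party "requests"
package; the endpoint URL and file name are hypothetical.

    import requests

    # Upload the already-encrypted file over HTTPS only; never plain HTTP.
    with open("report.docx.enc", "rb") as f:
        resp = requests.post(
            "https://files.example.com/upload",  # hypothetical endpoint
            data=f,
            timeout=30,
            verify=True,  # the default: reject invalid or untrusted certificates
        )
    resp.raise_for_status()  # fail loudly rather than silently losing data
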

Back Up Data Consistently: Create data replicas regularly and store them separately from the
main repository. Consistent backups can help protect your organization from critical data losses,
especially during a data wipeout or a lockdown. Data replicas enable you to continue working
offline even when cloud assets are not available.
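
A minimal sketch of such a routine using only Python's standard library; the source and backup
paths are placeholders, and a real setup would point the backup root at a separate site or volume.

    import shutil
    from datetime import datetime, timezone
    from pathlib import Path

    SOURCE = Path("/srv/main-repository")        # placeholder paths
    BACKUP_ROOT = Path("/mnt/offsite-backups")   # a separate volume or site

    def run_backup():
        """Create a timestamped replica so older snapshots survive a wipeout."""
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        target = BACKUP_ROOT / ("snapshot-" + stamp)
        shutil.copytree(SOURCE, target)
        return target
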

Expose Shadow IT in Your Cloud Deployment: A cloud security policy does not guarantee
proper use of cloud resources. Many employees are not well versed in security policies, or are
unaware of the security risks. When employees install software and download files without
consulting with the IT team, they create a shadow IT infrastructure that introduces many security
risks. There are certain measures organizations can take to protect against shadow IT risks. One
technique is to monitor firewalls, proxies, and SIEM logs to determine IT activity throughout the
organization. Next, you can assess activities and determine the risks introduced by users. Once
you gain a relatively accurate picture of activity and usage, you need to put measures in place
that prevent the transfer of corporate data from trusted systems and devices to unauthorized
endpoints. Another measure that can prevent risks is enforcing device security verification, to
prevent downloads from and to unauthorized devices.

IaaS Security: Threats and Protection Methodologies

Cloud services are becoming the main part of the infrastructure for many companies. Enterprises
should pay maximum attention to security issues, moving away from typical approaches used in
physical infrastructures, which are often insufficient in an atmosphere of constantly changing
business requirements. Although cloud providers do all they can to guarantee infrastructure
reliability, some of them limit their services to standard security measures, which can and should
be significantly expanded.

Typical Cloud Information Security Threats


According to the Cloud Security Alliance, the list of the main cloud security threats includes the
following:
● Data Leaks: Data in the cloud is exposed to the same threats as traditional infrastructures.
Due to the large amount of data, platforms of cloud providers become an attractive target
for attackers. Data leaks can lead to a chain of unfortunate events for IT companies and
infrastructure as a service (IaaS) providers.
● Compromising Accounts And Authentication Bypass: Data leaks often result from
insufficient attention to authentication verification. More often than not, weak passwords
in conjunction with poor management of encryption keys and certificates are to blame. In
addition, IT organizations are faced with problems of managing rights and permissions
when users are assigned far greater privileges than they actually need. The problem
can also occur when a user takes another position or leaves the company: no one is in a
rush to update permissions to match the new user roles. As a result, the account has rights to
more features than necessary. The threat may also come from current or former
employees, system administrators, contractors or business partners. Insiders may have
different motives, ranging from data theft to simple revenge. In the case of IaaS, the
consequences of such actions can even take the form of full or partial infrastructure
destruction, data access or even data destruction.
● Interface And API Hacking: Today, it is impossible to imagine cloud services and
applications without friendly user interfaces (UIs) and application program interfaces
(APIs). The security and availability of cloud services depends on reliable mechanisms of
data access control and encryption. Weak interfaces become bottlenecks in matters of
availability, confidentiality, integrity and security of systems and data.
● Cyberattacks: Targeted cyberattacks are common in our times. An experienced attacker
who has established a presence in a target infrastructure is not easy to detect. Remote
network attacks may have a significant impact on the availability of infrastructure in
general. Despite the fact that denial-of-service (DoS) attacks have a long history, the
development of cloud computing has made them more common. DoS attacks can cause
business critical services to slow down or even stop. DoS attacks consume a large amount
of computing power that comes with a hefty bill. Despite the fact that the principles of
DoS attacks are simple at first glance, you need to understand their characteristics at the
application level: the focus on the vulnerability of web servers, databases and
applications.
● Permanent Data Loss: Data loss due to malicious acts or accidents at the provider’s end is
no less critical than a leak. Daily backups and their storage on external protected
alternative platforms are particularly important for cloud environments. In addition, if
you are using encryption before moving data to the cloud, it is necessary to take care of
secure storage for encryption keys. As soon as keys fall into the wrong hands, the data itself
becomes available to attackers, and its loss can wreak havoc on any organization.
● Vulnerabilities: A common mistake when using cloud-based solutions in the IaaS model
is paying too little attention to the security of applications, which are placed in the secure
infrastructure of the cloud provider. And the vulnerability of applications becomes a
bottleneck in enterprise infrastructure security.
● Lack Of Awareness: Organizations moving to the cloud without understanding the
capabilities the cloud has to offer are faced with many problems. If a team of specialists
is not very familiar with the features of cloud technologies and principles of deploying
cloud-based applications, operational and architectural issues arise that can lead not only
to downtime but also to much more serious problems.
● Abuse Of Cloud Services: The cloud can be used by legitimate and illegitimate businesses.
The purpose of the latter is to use cloud resources for criminal activity: launching DoS
attacks, sending spam, distributing malicious content, etc. It is extremely important for
suppliers and service users to be able to detect such activities. To do this, detailed traffic
inspections and cloud monitoring tools are recommended.

Protection Methodology
In order to reduce risks associated with information security, it is necessary to determine and
identify the levels of infrastructure that require attention and protection. For example, the
computing level (hypervisors), the data storage level, the network level, the UI and API level,
and so on.
Next you need to define protection methods at each level, distinguish the perimeter and cloud
infrastructure security zones, and select monitoring and audit tools.

Enterprises should develop an information security strategy that includes the following, at the
very least:
● Regular software update scheduling
● Patching procedures
● Monitoring and audit requirements
● Regular testing and vulnerability analysis

IaaS Information Security Measures

Some IaaS providers already boast advanced security features. It is necessary to carefully
examine the services and systems service providers offer at their own level, as well as conditions
and guarantees of these offerings. Alternatively, consider implementing and utilizing them on
your own.

● Data Encryption: Encryption is the main and most popular method of data
protection. Meticulous management of encryption keys and of key storage is an
essential condition of using any data encryption method. It is worth noting that the IaaS
provider must never be able to gain access to virtual machines and customer data.
● Network Encryption: It is also mandatory to encrypt network connections; this is
already a gold standard for cloud infrastructure.
● Access Control: Attention must also be paid to access control, for example, by using the
concept of the federated cloud. With the help of federated services, it’s easy to organize a
flexible and convenient authentication system for internal and external users. The use of
multi-factor authentication, including OTP, tokens, smart cards, etc., will significantly
reduce the risks of unauthorized access to the infrastructure (a one-time-password check
is sketched after this list). Be sure not to forget hardware and virtual firewalls as a means
of providing network access control.
● Cloud Access Security Broker (CASB): A CASB is a unified security tool that allows
administrators to identify potential data loss risks and ensure a high level of protection.
The solution works in conjunction with the IaaS provider’s cloud infrastructure by
enabling users to monitor shared files and prevent data leakage. This way, administrators
know where important content is stored and who has access to the data.
● Vulnerability Control: Implementing and using vulnerability control, together with
regular software updates, can significantly reduce the risks associated with information
security, and it is an absolute must for both the IaaS provider and its clients.
● Monitor, Audit And Identify Anomalies: Monitoring and auditing systems allow you to
track standard indicators of infrastructure performance and identify abnormalities related
to system and service security. Using deep packet inspection (DPI) or intrusion detection
and prevention solutions (IDS/IPS) helps detect network anomalies and attacks.
● Staff Training: Conducting specialized training and adopting a general attitude of focused
attention to the technical competence of staff that have access to the virtual infrastructure
will enhance the overall level of information security.
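
As promised under Access Control above, here is a minimal sketch of one multi-factor building
block, time-based one-time passwords (TOTP), assuming the third-party "pyotp" package; the
enrolment and login flow are simplified for illustration.

    import pyotp

    # Enrolment: generate a per-user secret and share it once (e.g. via QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Login: a password alone is not enough; the user must also supply the
    # current six-digit code from a token or authenticator app.
    submitted_code = input("One-time code: ")
    if totp.verify(submitted_code):
        print("Second factor accepted.")
    else:
        print("Access denied.")
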

Office 365 Cloud


Office 365 is a cloud-based product suite containing Microsoft programs for office use that can
be run locally and synchronized to cloud storage. With the help of Office 365, you can work
from anywhere and share the work documents with your colleagues worldwide. It helps
individuals as well as businesses to easily work on different documents.

Devices supported by Office 365:


● Desktop Computers – installed as Microsoft Office 365
● Web – Office online allows you to create and edit documents using lightweight versions
of Office applications.
● Mobile Devices – smartphones and tablets

Excel is also among the programs that ship with Office 365. You can use the powerful
features found in the desktop version of Office 365 Excel in the cloud-based version too.

Benefits of Office 365


● Improved user productivity
● Ability to access documents from multiple devices and locations
● Easily share documents with colleagues
● 1 TB storage with OneDrive
● Easily recover data from the cloud if your computer is lost or crashes

Disadvantages of Office 365


● Since this is a cloud based service, there is a possibility of outages if something happens
with the cloud services for Office 365
● Ease of access from multiple devices/locations may also pose security risks if the users
do not follow security best practices

Google G Suite

Google G Suite is one of the best apps used by businesses to increase the productivity of a team.
It is one of the leading email platforms on the market and not only provides email solutions but
also offers Google Drive storage facilities and tools like Google Sheets and Google Docs.

Although the usefulness of G Suite depends on the type of business, you need to consider the
pros and cons of using this web-based platform.

Pros:
1. Ease of use: The G suite has a user-friendly interface that enables users to access the software
suite tools, programs, and share information automatically with users in the same workgroup via
a single login.

2. Compatibility: G Suite apps are compatible with almost all devices. You can open the apps
on a mobile device, and it is easy to switch between accounts. If you have been
using a personal Gmail account, you can easily switch to a G Suite account.

3. Share information: Data within your Google app is shared with users in the same group.

4. Cost-effective: The basic G Suite package is free and can be used in small organizations with
fewer than 10 users. It offers an email solution, calendar, office, and shared website applications
for each user.

5. Reliability: Google G Suite is reliable business software with no scheduled downtime for
maintenance purposes. All of Google's data centers are built with redundant
infrastructure to enhance the apps' uptime.
6. Secure: G Suite is built on the Google Cloud Platform, making it a highly secure platform.
Google's products are among the most trusted services across the world, and Google is a leading
company with a security-first mindset.

7. Customized email domain: Google G Suite enables a business to create its own email domain,
making the company look more professional. This can contribute to more traffic and lead
generation for the company’s website.

8. Integration: There are a lot of third-party web application tools integrated with the G Suite.
These tools help businesses create an online presence and save money and time.

9. Advanced features: eDiscovery, search, email archiving, and other advanced features are
available at a low cost.

10. Cloud-based apps: All the apps in the G suite are cloud-based and this encourages more use
of cloud-based applications and collaboration among users.

Cons:
1. No forms in emailing clients: If you need built-in forms and workflows for your business, you
need to look for other third-party apps that offer that application.

2. Document conversion issues: You may experience challenges converting Google Sheets and
Docs to Microsoft documents and PDF formats. You’re forced to look for a third-party app to
help in conversion.

3. Time-consuming: It may take some time to import data and documents into the system from
other external sources.

4. Limited Hangouts features: Google Hangouts offers fewer features than Slack.

5. Internet connection: For the apps to function, you need internet connectivity 24/7. A slow
connection or no internet limits your access to some apps.

6. Storage facilities: The free package has limited storage space. Although paid packages have
higher storage capacity, they do not offer as much space as Office 365.

7. Integration of desktop email with the mobile client: The mobile version of the app is clunky,
disappointing users due to the lack of proper integration.
8. Limited recovery period: If you delete Gmail or Drive data, it can only be recovered from the
admin console within a limited period of time.

9. Single source: The service relies heavily on a single third party; should Google have any outage
or server issues, it will affect the running of your business.

10. Web-based option: G Suite is purely a web-based platform, and if you’re used to
software like Microsoft Office, you may find G Suite tools like Google Docs and Sheets not as
effective as their Office counterparts.

SaaS Evaluation Criteria

1. What are the capabilities of the system?


Some SaaS vendors do a better job of delivering on this all-in-one promise than others. Your job
is to evaluate what you need, what you’ll use, and what the software does well. Email marketing
may mean sending outgoing emails only – or it may mean advanced marketing automation. The
details can get hidden in the fine print.

It can be very easy to get caught up in the bells-and-whistles of the software, or to make
assumptions about what a feature means, only to find out later that the software you just
purchased is missing a key function needed for day-to-day functioning.

2. What’s included at each pricing tier?


We hate to see clients run into a “that costs extra” situation, where they can’t get the functionality
they need unless they spend more money to buy upper-tier licenses. Unfortunately, it’s common
for marketing information to be unclear, often because the software is being consistently
improved.

Microsoft Office 365 is a great example of the need to understand licensing limitations. You may
read online that you can integrate VOIP calling functionality into Office 365, not realizing that
capability only exists if you buy a certain license type. A thorough evaluation, along with the
advice of an IT professional, is helpful when trying to determine licensing needs.

3. How do you know your data won’t be hacked?


Major vendors like Microsoft, Google, and Amazon have multiple levels of security in place that
are continually and extensively tested to thwart cybercriminal activity. Studies have repeatedly
shown that – despite the increase in cyber security breaches – your data is actually much safer in
a cloud-based SaaS solution than it is on a physical server in your office.
If you’re working with a smaller SaaS vendor, you need to ask about their data security policies.
In addition, your company needs to take steps to create and follow IT policy and procedure best
practices. The best lock in the world won’t keep criminals out if you leave the door wide open.

4. What’s their privacy policy? How do you know they won’t sell your data?
As the saying goes, “If the software is free, you are the product.” Many social media sites are
monetized through advertising. In asking this question, your job is to understand how they will
use the information stored on their servers, either in aggregate or through targeted marketing.

5. How can you ensure the SaaS vendor won’t lose your data?
What are their backup and recovery policies? Do they offer a service level agreement with an
up-time guarantee?

Technology start-ups are particularly vulnerable to data loss. They may be cash-crunched and
cutting corners to put all their time and energy into feature enhancements to gain new customers.
Only when tragedy strikes – a hurricane, fire, flood or burglary – do they realize that their
backup process failed – or that it will take weeks to get back up and running on a new server.
There are countless stories of software companies that have vanished overnight, leaving their
customers without critical accounting, customer or sales data.

With so many readily available and affordable cloud-hosted backup solutions, data
loss is inexcusable. Don’t let a SaaS vendor’s mistake cost you your business.

6. How easy is it to setup the new system?


Moving to a new software system usually isn’t as simple as just downloading your existing
records and then re-uploading the data into the new system.
● What needs to be done to clean up existing data before it’s imported into the new system?
Migrating to a new software program presents a great opportunity to clean out old and
unnecessary information. Do you need to be able to access historical information? If so,
how will you accomplish that goal?
● What new opportunities exist with this software that weren’t possible in the past?
Where do you need to change your procedures to capitalize on your technology
investment?
● What decisions need to be made up front that will be hard (or impossible) to change
later?
● Who will do the setup? Can you set up the system on your own? Does the SaaS vendor or
its partners offer technology consulting services to help get the system properly
configured?

7. How easy is it for users to learn the system?


User adoption is critical – yet training for new software is often overlooked. Software companies
go to great lengths to make their software easy to use, and especially to look easy in a demo. The
software may in fact be easy to use – once you know where to look.

● Naming conventions may be different. What’s an account vs. a customer? What’s a lead
vs. a list?
● Functions may be hidden in unexpected places. If you’ve been using QuickBooks
forever, switching to FreshBooks or Wave may leave you scratching your head over
where to find features that you know must exist but are not on the page where you’d
expect to find them.
● The software is always evolving. As updates are published, how are new features
communicated to users? Do they send out emails, create walk-throughs, or expect you to
regularly visit their user forum?
● Is onboarding available? Many SaaS vendors offer a series of videos and walk-throughs
to orient new users to the system.
8. Is software customer support included? What types?
Software support can be free or paid; self-service or on-demand. Before you become a customer,
ask about customer support options like:
● Live chat
● Help “More Info” Icons within the software itself
● Phone support
● Support hours (Is it 24/7/365? Is it OK if it’s not?)
● Blogs and forums
● Help desk ticketing
● Facebook community pages
● Vendor or partner consulting services
9. Can we connect this software program to other software systems? Can we extend or customize
the software?
In a prior post, we shared how important it is to select small business software with API
Integration. Small business data silos create problems because it’s so easy to lose sight of the
“truth” and only see one aspect of the business. Ask if the software works natively with Zapier,
PieSync or other integration tools. Look and see if they have software partners that extend the
functionality of this solution.
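
A hedged sketch of the kind of integration check worth running before you buy: can you pull your
records out programmatically? The endpoint, token, and field names below are hypothetical
placeholders standing in for a vendor's documented API.

    import requests

    API_BASE = "https://api.example-saas.com/v1"          # hypothetical vendor API
    headers = {"Authorization": "Bearer YOUR_API_TOKEN"}  # placeholder credential

    resp = requests.get(API_BASE + "/customers", headers=headers, timeout=30)
    resp.raise_for_status()

    for customer in resp.json():
        # If this loop is possible, your records are not locked in a silo.
        print(customer.get("id"), customer.get("email"))
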

10. What happens if I leave?


Can I take my data with me? How can I download it? You don’t want to spend years and years
building your business online, only to have it disappear. Even social media sites like
Facebook, LinkedIn and Twitter provide you with options to download your history.
Cloud Delivery Models Considerations

Key considerations in determining Cloud deployment models include:

● According to the latest research from the Cloud Industry Forum, 89 per cent of
organisations surveyed have an in-house server room, although this dropped to 73 per
cent for those employing fewer than 20 people.
● The combination of committed capital and the available labour supporting it has
to be assessed to determine if a new solution is most effectively housed on-premises.
● Equally, if an organisation is considering expanding a legacy or bespoke application, or
creating an integration with such a system from a new application (e.g. a new workflow
service for an existing order management system), then the level of integration or the
nature of the legacy application may restrict the choice of deployment options to being
on-premises or hosted in a private Cloud or on dedicated infrastructure.
● Available network bandwidth to access the internet was less of a practical issue - 80
per cent of organisations (across all size ranges) already had sufficient bandwidth for
normal business tasks at a contracted rate of 10Mbit/s or greater.
● Other commercial considerations organisations have to take into account are matters such
as the urgency of the solution (i.e. Time to Market) where a hosted or Cloud solution has
clear advantages for urgent or temporary projects, balanced against contributing
considerations of available experienced staff in the solution area and/or the ability to
identify a trusted partner for delivery via the Cloud.
● The technical considerations need to start at the planning and strategy phase. CIF's
research identified that 85 per cent of organisations now formally include consideration
of Cloud-based solutions within their IT strategy, which is very positive as this number is
some 32 per cent above the actual number of organisations that have adopted at least one
Cloud service formally.
● The technical delivery team inside any organisation is going to have to look at initiatives
based on attributes such as their temporality/duration, urgency (time to market),
frequency of changes in the scale of operations, and skills and manpower requirements in order to
determine if a solution can be delivered in-house or should be delivered as a service.
● Other criteria will apply when considering issues such as Test or Back-Up/Disaster
Recovery solutions, where arguably an off-site presence is more appropriate to reflect and
mitigate the real-world risks.
● Over-riding all commercial and technical considerations are the Governance, Policy and
Regulatory constraints that the organisation must take into account. Top of this list will
be issues of external regulation, notably matters such as Data
Sovereignty.
● 42 per cent of organisations participating in the research stated they were subject to
external regulation, and 12 per cent were subsidiaries of organisations that were
headquartered internationally.
● In the survey 90 per cent of respondents wanted their data kept on-premise or hosted in
the UK and not held within the wider EEA or other geographies.
