CS Notes Unit-4

Disaster Recovery Planning

What is cloud disaster recovery (cloud DR)?


Cloud disaster recovery (cloud DR) is a combination of strategies and services intended to back up data, applications and other resources to a public cloud or dedicated service provider. When a disaster occurs, the affected data, applications and other resources can be restored to the local data center -- or run from the cloud provider -- so the enterprise can resume normal operations.

The goal of cloud DR is virtually identical to that of traditional DR: to protect valuable business resources and ensure that protected resources remain accessible and recoverable, preserving business continuity.

There are two main types of cloud DR:

1. Disaster recovery as a service (DRaaS): a managed service that provides a complete disaster recovery solution, including backup, replication, and restoration.
2. Cloud-based disaster recovery: using cloud infrastructure and services to build and manage your own disaster recovery solution.

Importance of cloud DR
DR is a central element of any business continuity (BC) strategy. It
entails replicating data and applications from a company's primary infrastructure to
a backup infrastructure, usually situated in a distant geographical location.

Before the advent of cloud connectivity and self-service technologies, traditional DR options were limited to local DR and second-site implementations. Local DR didn't always protect against disasters such as fires, floods and earthquakes. A second site -- off-site DR -- provided far better protection against physical disasters, but implementing and maintaining a second data center imposed significant business costs.

With the emergence of cloud technologies, public cloud and managed service providers could create a dedicated facility to offer a wide range of effective backup and DR services and capabilities.

The following reasons highlight the importance of cloud disaster recovery:

1. Cloud DR ensures business continuity in the event of natural disasters and cyber attacks, which can disrupt business operations and result in data loss.

2. With a cloud disaster recovery strategy, critical data and applications can be backed up to a cloud-based server. This enables quick data recovery for businesses in the wake of an event, reducing downtime and minimizing the effects of the outage.

Cloud-based DR offers better flexibility, reduced complexity, more cost-effectiveness and higher scalability compared with traditional DR methods. Businesses receive continuous access to highly automated, highly scalable, self-driven off-site DR services without the expense of a second data center and without the need to select, install and maintain DR tools.

Selecting a cloud DR provider

An organization should consider the following five factors when selecting a cloud DR provider:

1. Distance. A business must consider the cloud DR provider's physical distance and latency. Putting DR too close increases the risk of a shared physical disaster, but putting it too far away increases latency and network congestion, making DR content harder to access. Location can be particularly tricky when the DR content must be accessible from numerous global business locations.

2. Reliability. Consider the cloud DR provider's reliability. Even a cloud experiences downtime, and service downtime during recovery can be equally disastrous for the business.

3. Scalability. Consider the scalability of the cloud DR offering. It must be able to protect selected data, applications and other resources. It must also be able to accommodate additional resources as needed and provide adequate performance as other global customers use the services.

4. Security and compliance. It's important to understand the security requirements of the DR content and be sure the provider can offer authentication, virtual private networks, encryption and other tools needed to safeguard the business's valuable resources (see the encryption sketch after this list). Evaluate compliance requirements to ensure the provider is certified to meet standards that relate to the business, such as ISO 27001, SOC 2 and SOC 3, and the Payment Card Industry Data Security Standard (PCI DSS).

5. Architecture. Consider how the DR platform must be architected. There are three fundamental approaches to DR: cold, warm and hot disaster recovery. These terms loosely relate to the ease with which a system can be recovered.
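As a brief illustration of the encryption point in item 4, here is a minimal sketch of client-side encryption, so backup data leaves the business already encrypted and the DR provider never holds the key. It assumes the third-party Python "cryptography" package; the file names and key handling shown are hypothetical placeholders, not a prescription from this text.

# Minimal sketch: encrypt a backup file client-side before shipping it to a DR provider.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, keep this in a key management system
cipher = Fernet(key)

with open("payroll-backup.tar", "rb") as f:      # hypothetical backup artifact
    ciphertext = cipher.encrypt(f.read())

with open("payroll-backup.tar.enc", "wb") as f:
    f.write(ciphertext)

# The provider stores only payroll-backup.tar.enc; decryption requires the key,
# which never leaves the organization.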
Cloud-based DR approaches include managed primary and DR instances, cloud-based backup and
restore, and replication in the cloud.

Approaches to cloud DR
The following are the three main approaches to cloud disaster recovery:

1. Cold DR typically involves storage of data or virtual machine (VM) images. These resources generally aren't usable without additional work, such as downloading the stored data or loading the image into a VM. Cold DR is usually the simplest approach -- often just data storage -- and the least expensive, but it takes the longest to recover, leaving the business with the longest downtime in a disaster.

2. Warm DR is generally a standby approach in which duplicate data and applications are placed with a cloud DR provider and kept up to date with the data and applications in the primary data center. But the duplicate resources aren't doing any processing. When disaster strikes, the warm DR can be brought online to resume operations from the DR provider -- often a matter of starting a VM and redirecting IP addresses and traffic to the DR resources. Recovery can be quite short, but still imposes some downtime for the protected workloads.

3. Hot DR is typically a live parallel deployment of data and workloads running in tandem. That is, both the primary data center and the DR site use the same workload and data running in synchronization, with both sites sharing part of the overall application traffic. When disaster strikes one site, the remaining site continues to handle the work without disruption. Users are ideally unaware of the disruption. Hot DR has no downtime, but it can be the most expensive and complicated approach.

It's possible to mix approaches, enabling higher-priority workloads to employ a hot approach while lower-priority workloads or data sets use a warm or even cold approach (see the tier-selection sketch below). However, it's important for organizations to determine the best approach for each workload or resource and to identify a cloud DR provider that can adequately support the desired approaches.
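As a rough illustration of mixing approaches, the sketch below assigns each workload a cold, warm or hot tier from its RTO and RPO targets. The workload names and threshold values are illustrative assumptions only, not rules from this text.

# Sketch: choose a DR tier per workload from RTO/RPO targets (minutes).
# Thresholds are illustrative assumptions, not industry rules.
def choose_tier(rto_min: float, rpo_min: float) -> str:
    if rto_min <= 5 and rpo_min <= 5:
        return "hot"     # live parallel site, near-zero downtime
    if rto_min <= 240:
        return "warm"    # standby copies, minutes-to-hours recovery
    return "cold"        # stored data/images only, longest recovery

workloads = {                         # hypothetical examples
    "order-processing": (5, 1),       # (RTO, RPO) in minutes
    "reporting": (240, 60),
    "archive": (10080, 1440),
}

for name, (rto, rpo) in workloads.items():
    print(f"{name}: {choose_tier(rto, rpo)}")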

Benefits of cloud DR
Cloud DR and backups provide several benefits when compared with more
traditional DR strategies:

Pay-as-you-go options. Organizations that deploy do-it-yourself DR facilities face significant capital costs, while engaging managed colocation providers for off-site DR services often locks them into long-term service agreements. A major advantage of cloud services is the pay-as-you-go model, which enables organizations to pay a recurring monthly charge only for the resources and services they use. As resources are added or removed, the payments change accordingly.

In effect, the cloud model of service delivery turns upfront capital costs into
recurring operational expenses. However, cloud providers frequently offer
discounts for long-term resource commitments, which can be more attractive to
larger organizations with static DR needs.
Flexibility and scalability. Traditional DR approaches, usually implemented in
local or remote data centers, often impose limitations in flexibility and scalability.
The business must buy the servers, storage, network gear and software tools
needed for DR, and then design, test and maintain the infrastructure needed to
handle DR operations -- substantially more if the DR is directed to a second data
center. This typically represents a major capital and recurring expense for the
business.

Cloud DR options, such as public cloud services and disaster recovery as a service
(DRaaS), can deliver enormous amounts of resources on demand, enabling
businesses to engage as many resources as necessary -- usually through a self-
service portal -- and then adjust those resources when business demands change,
such as when new workloads are added or old workloads and data are retired.

High reliability and geo-redundancy. One essential hallmark of a cloud provider is a global footprint, ensuring multiple data centers support users across major global geopolitical regions. Cloud providers use this to improve service reliability and ensure redundancy. Businesses can readily take advantage of geo-redundancy to place DR resources in another region -- or even multiple regions -- to maximize availability. The quintessential off-site DR scenario is a natural trait of the cloud.

Easy testing and fast recovery. Cloud workloads routinely operate with VMs,
making it easy to copy VM image files to in-house test servers to validate
workload availability without affecting production workloads. In addition,
businesses can select options with high bandwidth and fast disk input/output to
optimize data transfer speeds in order to meet recovery time objective (RTO)
requirements. However, data transfers from cloud providers impose costs, so
testing should be performed with those data movement -- cloud data egress -- costs
in mind.
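Because egress charges scale with the amount of data restored, a quick estimate before each test run helps keep those costs in view. The sketch below is a back-of-the-envelope calculation; the per-gigabyte price is a placeholder assumption, since actual rates vary by provider, region and service tier.

# Back-of-the-envelope estimate of data egress cost for a DR test.
# PRICE_PER_GB_USD is an assumed placeholder; check your provider's current rates.
PRICE_PER_GB_USD = 0.09

def egress_cost(restore_gb: float, price_per_gb: float = PRICE_PER_GB_USD) -> float:
    return restore_gb * price_per_gb

print(f"Restoring 2 TB: ${egress_cost(2048):,.2f}")   # 2048 GB x $0.09 = $184.32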

Not bound to the physical location. With a cloud DR service, organizations can
choose to have their backup facility situated virtually anywhere in the world, far
away from the organization's physical location. This provides added protection
against the possibility that a disaster might jeopardize all servers and pieces of
equipment located inside the physical building.

Drawbacks of cloud DR
The following are some drawbacks of cloud DR:

1. Complexity. Setting up and maintaining cloud disaster recovery can be challenging and can require specialized expertise.

2. Internet connectivity. Cloud DR needs consistent internet access, which might be difficult in places with poor connectivity.

3. Migration cost. Transferring large volumes of data to the cloud can be expensive.

4. Security and privacy concerns. With cloud DR, there's always the danger of user data getting into the hands of unauthorized personnel, since cloud providers have access to customer data. This can sometimes be avoided by opting for zero-knowledge providers that maintain a high level of confidentiality.

5. Vendor lock-in. Once data is migrated to a cloud-based DR service, it can be difficult for organizations to avoid vendor lock-in or switch to another provider.

6. Dependence on third-party providers. As with any third-party vendor, there's a risk of dependence on the service and a loss of control over the disaster recovery process.

Cloud Disaster Recovery vs. Traditional Disaster Recovery
A traditional disaster recovery process stores redundant copies of data
in a secondary data center. Here are key elements of traditional on-
premises data recovery:
1. A dedicated facility—for all needed IT infrastructure, including
equipment and staff.
2. Server capacity—designed to provide a high level of performance
and scalability.
3. Internet and bandwidth—to provide remote access to the
secondary data center.
4. Network infrastructure—provides a reliable connection between
the two data centers, and ensures data availability.

Here are several disadvantages of traditional DR:

1. Highly complex -- a local data recovery site can be complex to manage and monitor.
2. High costs -- setting up and maintaining a local site can be time consuming and expensive.
3. Less scalability -- to expand the server capacity of your local site, you need to purchase additional equipment. This expansion can cost a lot of time and money.

A cloud DR can solve many of these issues. Here is how:

1. No local site -- cloud DR does not require a local site. You can make use of existing cloud infrastructure and use these resources as a secondary site.
2. Scalability -- cloud resources can be quickly scaled up or down based on demand. There is no need to purchase any equipment.
3. Flexible pricing -- cloud vendors offer flexible pricing models, including on-demand pay-as-you-go resources and discounts for long-term commitments.
4. Quick disaster recovery -- cloud DR enables you to roll back in a matter of minutes, typically from any location, provided you have a working internet connection.
5. No single point of failure -- the cloud lets you store backup data across multiple geographical locations.
6. Network infrastructure -- cloud vendors continuously work to improve and secure their infrastructure, provide support and maintenance, and release updates as needed.

Cloud DR vs. traditional DR

Cloud-based DR services and DRaaS offerings can provide cost benefits, flexibility and scalability, geo-redundancy, and fast recovery. But cloud DR might not be appropriate for all organizations or circumstances.

The following are a few situations where more traditional DR approaches might be beneficial, even essential, for the business:

1. Compliance requirements. Cloud services are increasingly acceptable for enterprise usage where well-established regulatory oversight is required, such as under the Health Insurance Portability and Accountability Act and PCI DSS. However, some organizations might still face prohibitions on storing certain sensitive data outside an immediate data center -- or in any resource or infrastructure that isn't under the organization's direct control, such as a public cloud, which is third-party infrastructure. In these cases, the business could be obligated to implement local or owned off-site DR to satisfy security and compliance demands.

2. Limited connectivity. Cloud resources and services depend on wide area network connectivity, such as the internet. DR use cases put a premium on connectivity because a reliable, high-bandwidth connection is critical for quick uploads, synchronization and fast recovery. Although reliable, high-bandwidth connectivity is common in global urban and most suburban areas, it's hardly universal. Remote installations such as edge computing sites often exist -- at least in part -- because of limited connectivity, so it might make perfect sense to implement data backups, workload snapshots and other DR techniques at local sites where connectivity is questionable. Otherwise, the business risks data loss and problematic RTOs.

3. Optimum recovery. Clouds offer powerful benefits, but users are limited to
the infrastructure, architecture and tools that the cloud provider offers.
Cloud DR is constrained by the provider and the service-level
agreement. In some cases, the recovery point objective (RPO) and RTO
offered by the cloud DR provider might not be adequate for the
organization's DR needs -- or the service level might not be guaranteed.
By owning the DR platform in house, a business can implement and
manage a custom DR infrastructure that can best guarantee DR
performance requirements.

4. Use existing investments. DR needs have been around much longer than
cloud services, and legacy DR installations -- especially in larger
businesses or where costs are still being amortized -- might not be so
easily displaced by newer cloud DR offerings. That is, a business that
already owns the building, servers, storage and other resources might not
be ready to abandon that investment. In these cases, the business can
adopt cloud DR more slowly and cautiously, systematically adding
workloads to the cloud DR provider as an avenue of routine technology
refresh, rather than spending another round of capital.

It's worth noting that choosing between traditional DR and cloud DR isn't mutually
exclusive. Organizations might find that traditional DR is best for some workloads,
while cloud DR can work quite well for other workloads. Both alternatives can be
mixed and matched to provide the best DR protection for each of the organization's
workloads.

Cloud disaster recovery and business continuity

The terms business continuity and disaster recovery -- together referred to as BCDR or BC/DR -- describe a collection of procedures and methods that can be used to aid an organization's recovery from a disaster and the continuation or restart of regular business activities.

Business continuity

BC basically refers to the plans and technologies put in place to ensure business
operations can resume with minimum delay and difficulty following the onset of
an incident that could disrupt the business.

By this definition, BC is a broad topic area that involves a multitude of subjects, including security, business governance and compliance, risk assessment and management, change management, and disaster preparedness and recovery. For example, BC efforts might consider and plan for a broad range of catastrophes such as epidemics, earthquakes, floods, fires, service outages, physical or cyber attacks, theft, sabotage, and other potential incidents.

BC planning typically starts with risk recognition and assessment: What risks is the
business planning for, and how likely are those risks? Once a risk is understood,
business leaders can design a plan to address and mitigate the risk. The plan is
budgeted, procured and implemented. Once implemented, the plan can be tested,
maintained and adjusted as required.

Disaster recovery

Disaster recovery, which also includes cloud-based DR, is part of the broader BC umbrella. It typically plays a central role in many avenues of BC planning, such as for floods, earthquakes and cyber attacks. For example, if the business operates on a known earthquake fault, the risk of earthquake damage would be analyzed to formulate a mitigation plan. Part of the mitigation plan might be to adopt cloud DR in the form of a second hot site located in a region free of earthquake danger.

Thus, the BC plan would rely on the redundancy of the cloud DR service to continue operations seamlessly if the primary data center became unavailable. In this example, DR would be only a small part of the BC plan, with additional planning detailing corresponding changes in workflows and job responsibilities to maintain normal operations -- such as taking orders, shipping products and handling billing -- and the work to restore the affected resources.

Creating a cloud-based disaster recovery plan

Building a cloud DR plan is virtually identical to building more traditional local or off-site disaster recovery plans. The principal difference between cloud DR and more traditional DR approaches is the use of cloud technologies and DRaaS to support an appropriate implementation. For example, rather than backing up an important data set to a different disk in another local server, cloud-based DR would back up the data set to a cloud resource such as an Amazon Simple Storage Service (S3) bucket; a brief sketch of such a backup follows. As another example, instead of running an important server as a warm VM in a colocation facility, the warm VM could be run in Microsoft Azure or through any number of DRaaS providers. Thus, cloud DR doesn't change the basic need or steps to implement DR, but rather provides a new set of convenient tools and platforms for DR targets.
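To make the S3 example concrete, here is a minimal sketch of backing a data set up to an Amazon S3 bucket with the boto3 SDK and later pulling it back down. The bucket and file names are hypothetical, and a production setup would add versioning, lifecycle rules and cross-region replication.

# Minimal sketch: back up a data set to an Amazon S3 bucket with boto3.
# Assumes AWS credentials are configured; bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.upload_file(
    Filename="nightly-dataset.tar.gz",              # local backup artifact
    Bucket="example-dr-backups",                    # hypothetical DR bucket
    Key="prod/2024-01-01/nightly-dataset.tar.gz",
)

# Restore later by downloading the object back to the local data center:
s3.download_file("example-dr-backups",
                 "prod/2024-01-01/nightly-dataset.tar.gz",
                 "restored-dataset.tar.gz")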

There are three fundamental components of a cloud-based disaster recovery plan: analysis, implementation and testing.

Analysis. Any DR plan starts with a detailed risk assessment and analysis, which
basically examines the current IT infrastructure and workflows, and then considers
the potential disasters that a business is likely to face. The goal is to identify
potential vulnerabilities and disasters -- everything from intrusion vulnerabilities
and theft to earthquakes and floods -- and then evaluate whether the IT
infrastructure is up to those challenges.

An analysis can help organizations identify the business functions and IT elements
that are most critical and predict the potential financial effects of a disaster event.
Analysis can also help determine RPOs and RTOs for infrastructure and
workloads. Based on these determinations, a business can make more informed
choices about which workloads to protect, how those workloads should be
protected and where more investment is needed to achieve those goals.

Implementation. The analysis is typically followed by a careful implementation that details steps for prevention, preparedness, response and recovery. Prevention is the effort made to reduce possible threats and eliminate vulnerabilities; this might include employee training in social engineering and regular operating system updates to maintain security and stability. Preparedness involves outlining the necessary response -- who does what in a disaster event -- and is fundamentally a matter of documentation. The response outlines the technologies and strategies to implement when a disaster occurs, matched with the implementation of corresponding technologies, such as recovering a data set or server VM backed up to the cloud. Recovery details the success conditions for the response and the steps that help mitigate any potential damage to the business.

The goal here is to determine how to address a given disaster, should it occur, and the plan is matched with the implementation of technologies and services built to handle the specific circumstances. In this case, the plan includes cloud-based technologies and services.

Testing. Any DR plan must be tested and updated regularly to ensure IT staff are
proficient at implementing the appropriate response and recovery successfully and
in a timely manner, and that recovery takes place within an acceptable time frame
for the business. Testing can reveal gaps or inconsistencies in the implementation,
enabling organizations to correct and update the DR plan before a real disaster
strikes.
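One way to keep such testing honest is to automate the basic checks: restore the latest backup, verify its integrity and time the run against the RTO. The sketch below shows the shape of such a test; restore_latest_backup is a hypothetical stand-in for whatever restore mechanism the plan actually uses.

# Sketch of an automated DR test: restore, verify integrity, time against the RTO.
# restore_latest_backup() is a hypothetical hook for the plan's real restore step.
import hashlib
import time

RTO_SECONDS = 4 * 3600   # example target: four hours

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def run_dr_test(expected_digest: str) -> None:
    start = time.monotonic()
    restored_path = restore_latest_backup()   # placeholder: the real restore goes here
    elapsed = time.monotonic() - start
    assert sha256(restored_path) == expected_digest, "restored data failed integrity check"
    assert elapsed <= RTO_SECONDS, f"recovery took {elapsed:.0f}s, exceeding the RTO"
    print(f"DR test passed in {elapsed:.0f}s")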

Cloud disaster recovery providers, vendors

At its heart, cloud DR is a form of off-site DR. An off-site strategy enables organizations to guard against incidents within the local infrastructure and then either restore the resources to the local infrastructure or continue running the resources directly from the DR provider. Consequently, countless vendors have emerged to provide off-site DR capabilities.

The most logical avenue for cloud DR is through major public cloud providers. For
example, AWS offers the CloudEndure Disaster Recovery service, Microsoft
Azure provides Azure Site Recovery, and Google Cloud Platform offers Cloud
Storage and Persistent Disk options for protecting valued data. Enterprise-class DR
infrastructures can be architected within all three major cloud providers.

Beyond public clouds, an array of dedicated DR vendors now offers DRaaS products, essentially providing access to dedicated clouds for DR tasks.

DRaaS providers and their products include the following:

1. Bluelock.

2. Expedient.

3. IBM Cloud Disaster Recovery.

4. Iland.
5. Recovery Point Systems.

6. Sungard Availability Services.

7. TierPoint.

8. VMware Site Recovery Manager.

In addition, more traditional backup vendors now have DRaaS offerings:

1. Acronis.

2. Arcserve Unified Data Protection.

3. Carbonite.

4. Databarracks.

5. Datto.

6. Unitrends.

7. Zerto.

Given the proliferation of DRaaS offerings, it's critical for organizations to evaluate each potential offering for factors such as reliability, recurring costs, ease of use and provider support. Any DR platform must be updated and tested regularly to ensure DR is available and will function as expected.

To ensure data center operations can be resumed as fast and effectively as possible
after an incident, organizations should create a complete checklist for disaster
recovery planning.

A cloud disaster recovery service offers organizations several benefits, including:

1. Saves time and capital
2. More data backup location options
3. Easy to implement, with high reliability
4. Scalability

For organizations considering cloud disaster recovery for the first time and wondering where to start, here's an easy cloud disaster recovery plan that will help you plan an effective disaster recovery service:

Step 1: Understand Your Infrastructure and Outline Any Risks

To create an effective disaster recovery process, it is essential to take stock of your IT infrastructure, including the assets, equipment and data you possess. Assess where all of this is stored and how much it is worth; only then can you tailor a good cloud disaster recovery plan. Next, evaluate the risks that might affect these assets. Risks can include natural disasters, data theft and power outages.

With an account of all your assets, their quantities and the possible disaster threats to them, you are in a better position to design a data disaster recovery strategy that eliminates or minimizes these risks.
Step 2: Conduct a Business Impact Analysis
A business impact analysis is next on the list. This will give you an
understanding of the limitations of your business operations once disaster
strikes, and you can consider them while forming the cloud disaster recovery
plan.
The following two parameters help you assess this factor:
a) Recovery Time Objective (RTO)
b) Recovery Point Objective (RPO)

a) Recovery Time Objective (RTO)

In cloud disaster recovery, RTO is the maximum time your application can stay offline before it begins to affect your business operations.

Scenario 1: If your company depends on fast-paced service delivery, an application failure can cost you heavy losses, so you'll have to invest heavily in an IT disaster recovery process that can resume business operations in minutes.

Scenario 2: If you have a medium-paced business and disaster affects your operations, you can still find alternative ways to carry out business operations. Therefore, you can set your RTO for as long as one week in your DR plan. In such a case, you will not have to invest as many resources in data disaster recovery planning, since the time allowed by the RTO can be used to acquire recovery resources after the disaster strikes.

Knowing your RTO is very important because it dictates how much you must invest in your disaster recovery as a service plan: the shorter the RTO, the more resources must be in place before a disaster.
b) Recovery Point Objective (RPO)

RPO is the maximum amount of data loss, measured as a window of time, that you can tolerate from your application as a result of a major crisis. Points to consider when determining RPO:

1. Possible data loss when disaster strikes
2. Possible time elapsed between the last backup and the data compromise

Applying Scenario 1 above, your RPO might be as little as five minutes, because the business is critical and cannot afford to lose more than that window of data. For Scenario 2, you still want to back up your data, but since the data isn't time-sensitive, you will not have to invest heavily in cloud disaster recovery solutions.
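A simple sanity check ties these targets together: if backups run every N minutes, the worst-case data loss is roughly N minutes of work, so the backup interval must not exceed the RPO. The short sketch below encodes the two scenarios above; the interval values are illustrative assumptions.

# Sanity check: the backup interval must not exceed the RPO.
# Values loosely follow the two scenarios above; intervals are assumptions.
scenarios = {
    "Scenario 1 (fast-paced)":   {"rpo_min": 5,           "backup_interval_min": 5},
    "Scenario 2 (medium-paced)": {"rpo_min": 7 * 24 * 60, "backup_interval_min": 24 * 60},
}

for name, s in scenarios.items():
    ok = s["backup_interval_min"] <= s["rpo_min"]
    print(f"{name}: backup every {s['backup_interval_min']} min, "
          f"RPO {s['rpo_min']} min -> {'meets RPO' if ok else 'violates RPO'}")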
Step 3: Creating a Disaster Recovery plan based on your RPO and RTO
Now that you have determined your RPO and RTO, you can focus on designing
a system to meet your IT Disaster Recovery planning goals.
You can choose from the below range of Disaster Recovery strategies to implement
your IT Disaster Recovery plan:
1. Backup and Restore
2. Pilot Light Approach
3. Warm Standby
4. Full replication in the Cloud
5. Multi-Cloud Option
You can use these approaches in combination or exclusively, as your business requirements dictate.
Step 4: Approach the Right Cloud Partner

After creating a cloud disaster recovery plan, the next step is to look for a trusted cloud service provider that will help with deployment.
If you plan to use full replication in the cloud, consider the following factors when assessing an ideal cloud provider:

1. Reliability
2. Speed of recovery
3. Usability
4. Simplicity in setup and recovery
5. Scalability
6. Security compliance

Big cloud service providers, including AWS, Microsoft Azure, Google Cloud, and IBM, have cloud disaster recovery solutions. Besides these big firms, medium and small firms offer quality Disaster Recovery as a Service (DRaaS).


Step 5: Build Your Cloud DR Infrastructure

After consulting a cloud disaster recovery service partner, you can work with the provider to implement your design and set up your DR plan.
Based on the disaster recovery strategies you select, there are several logistical aspects to consider:

1. How many infrastructure components will you require?
2. How will you copy data to the cloud?
3. How will you approach user authentication and access management?
4. What security and compliance best practices will you need to set up?
5. What security measures will you implement to minimize the likelihood of disasters?
Remember! Ensuring your DR plan is aligned with your RTO and RPO
specifications for smooth business operations is crucial.
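One hypothetical way to keep the answers to these questions aligned with your RTO and RPO targets is to record them in a small machine-readable plan document that can be reviewed and updated over time. Every value in the sketch below is a placeholder.

# Hypothetical machine-readable record of the DR-plan decisions above.
# Every value is a placeholder to be replaced with the organization's own answers.
dr_plan = {
    "infrastructure_components": {"vms": 12, "databases": 3, "load_balancers": 2},
    "data_copy_method": "continuous replication over VPN",
    "access_management": {"sso": True, "mfa_required": True},
    "compliance_baselines": ["ISO 27001", "PCI DSS"],
    "preventive_measures": ["weekly patching", "quarterly phishing training"],
    "targets": {"rto_minutes": 60, "rpo_minutes": 15},
}

print("RTO target:", dr_plan["targets"]["rto_minutes"], "min;",
      "RPO target:", dr_plan["targets"]["rpo_minutes"], "min")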
Step 6: Put Your Disaster Recovery Plan on Paper

It is essential to have a standard guideline or process flowchart with specific instructions for everyone involved in your IT disaster recovery plan. When a disaster occurs, each individual should be ready to take charge of the responsibilities of their role in the cloud disaster recovery services. Moreover, every instruction should be clearly stated on paper, down to the finest details. These steps ensure the effectiveness of the cloud disaster recovery plan.
Step 7: Test Your Disaster Recovery Plan Often

With your cloud disaster recovery plan on paper, the next step is to test your IT disaster recovery plan often. This helps ensure that there are no loopholes. On paper, the DR plan may look comprehensive, but you will know its credibility only after testing.

Your first test may not go as well as you expected -- it may even go worse. But you will learn from these experiences and upgrade your disaster recovery as a service to better brace your infrastructure against potential disasters.

The bigger your disaster recovery plan, the more important it becomes to test it. As for frequency, it is recommended that you run your DR tests every quarter. Meanwhile, you can monitor and analyze your backup infrastructure's performance daily or weekly.

Your organization will always see change in terms of people, processes and technologies. Testing your cloud disaster recovery solutions throughout these changes ensures that the business is always ready for an emergency.
Conclusion

A complete knowledge of the industry's best practices keeps your organization on the safer side: identify your cloud platform, choose a trusted DRaaS provider, and select the disaster recovery strategy that suits your business requirements.
How to Design a Cloud-Based Disaster Recovery Plan

After considering the benefits of cloud computing in disaster recovery, it is time to design a comprehensive DR plan. Below, we discuss how to create a DR plan that works in the cloud environment. As a rule, an effective cloud-based DR plan should include the following steps:

1. Perform a risk assessment and business impact analysis.
2. Choose prevention, preparedness, response, and recovery measures.
3. Test and update your cloud-based DR plan.

Let's discuss how disaster recovery planning works in cloud computing.

Perform a risk assessment and business impact analysis

The first step in disaster recovery planning in cloud computing is to assess your current IT infrastructure and identify the potential threats and risk factors to which your organization is most exposed.

A risk assessment helps you discover vulnerabilities in your IT infrastructure and identify which business functions and components are most critical. At the same time, a business impact analysis allows you to estimate how unexpected service disruption might affect your business.

Based on these estimations, you can also calculate the financial and non-financial
costs associated with a DR event, particularly Recovery Time Objective (RTO)
and Recovery Point Objective (RPO). The RTO is the maximum amount of time
that IT infrastructure can be down before any serious damage is done to your
business. The RPO is the maximum amount of data which can be lost as a result
of service disruption. Understanding the RTO and RPO can help you decide which
data and applications to protect, how many resources to invest in achieving DR
objectives, and which DR strategies to implement in your cloud-based DR plan.

Implement prevention, preparedness, response, and recovery measures

The next step is to decide which prevention, preparedness, response, and recovery (PPRR) measures should be implemented for disaster recovery in the cloud computing environment. In a nutshell, PPRR measures accomplish the following:

1. Prevention allows you to reduce possible threats and eliminate system vulnerabilities in order to prevent a disaster from occurring in the first place.
2. Preparedness entails creating the outline of a DR plan which states what
to do during an actual DR event. Remember to document every step of the
process to ensure that the DR plan is properly executed during a disaster.
3. Response describes which DR strategies should be implemented when a
disaster strikes in order to address an incident and mitigate its impact.
4. Recovery determines what should be done to successfully recover your
infrastructure in case of a disaster and how to minimize the damage.
After you have determined which approach to disaster recovery to implement,
you should choose a data protection solution capable of putting your DR plan
into action and achieving DR objectives. Choose the solution which meets your
business needs and complies with your infrastructure requirements. For this
purpose, consider the following criteria:

1. Available services
2. Hardware capacity
3. Bandwidth
4. Data security
5. Ease of use
6. Service scalability
7. Cost
8. Reputation

Test and update your cloud-based DR plan

After you have created and documented the DR plan, you should run regular tests to see if your plan actually works. You can test whether business-critical data and applications can be recovered within the expected time frame.

Testing a cloud-based DR plan can help you identify any issues and
inconsistencies in your current approach to disaster recovery in cloud
computing. After the test run, you can decide what your DR plan lacks and how it
should be updated in order to achieve the required results and eliminate existing
issues.

How does cloud disaster recovery work?

Cloud disaster recovery takes a very different approach than traditional DR. Instead of dedicated servers staged with the OS and application software and patched to the last configuration used in production, cloud disaster recovery captures the entire server image -- the operating system, applications, patches, and data -- into a single software bundle or virtual server image in storage, waiting to be deployed in the event of a disaster. The virtual server image in the cloud can be delta-synced with the origin server during steady state and, most importantly, restored, or spun up, on a virtual machine in minutes. Since the virtual server image is not dependent on pre-installed hardware, the operating system, applications, patches, and data can be migrated from one data center to the cloud much faster than with traditional DR approaches.
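To illustrate the server-image idea, the sketch below captures a machine image of a running server so it can later be spun up as a recovery VM. It assumes AWS EC2 via boto3 purely as one example; other clouds expose equivalent image and snapshot APIs, and the instance ID shown is hypothetical.

# Sketch: capture a full server image for DR, assuming AWS EC2 via boto3.
# The instance ID is hypothetical; other clouds offer equivalent image APIs.
import boto3

ec2 = boto3.client("ec2")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",        # hypothetical origin server
    Name="dr-image-webserver-2024-01-01",
    NoReboot=True,                           # capture without stopping the server
)
print("DR image:", response["ImageId"])

# In a disaster, the image can be spun up as a recovery VM in minutes:
# ec2.run_instances(ImageId=response["ImageId"], InstanceType="m5.large",
#                   MinCount=1, MaxCount=1)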

Types of Cloud-Based Disaster Recovery Solutions

There are several types of cloud-based disaster recovery solutions that businesses can leverage, including:

1. Public Cloud Disaster Recovery: Public cloud disaster recovery solutions are provided by third-party cloud providers. These solutions are cost-effective and scalable, making them an ideal choice for small to medium-sized businesses.
2. Private Cloud Disaster Recovery: Private cloud disaster recovery solutions are hosted in-house by businesses. These solutions offer businesses greater control and security over their disaster recovery infrastructure.
3. Hybrid Cloud Disaster Recovery: Hybrid cloud disaster recovery solutions combine the benefits of both public and private cloud disaster recovery. This approach provides businesses with greater flexibility and control over their disaster recovery infrastructure.

Conclusion
In conclusion, cloud computing can play a significant role in disaster recovery
planning, providing businesses with the flexibility, scalability, and cost-effective
solutions they need to recover their critical IT systems and data quickly and
efficiently. By leveraging cloud-based disaster recovery solutions, businesses
can minimize the impact of any unexpected events and ensure business
continuity.


4 Data Disasters to Look Out for!

1. Natural Disasters
These can be in the form of floods, fires, storms or earthquakes. Without cloud computing disaster management, you may find it difficult to resume operations, thus putting your company at risk.
2. Hardware Failure
You may also lose your data to hardware failure, which may occur due to a power surge. Even with safeguards such as cooling systems in place, it is important to back up your data regularly using either cloud-based or on-site storage.
3. Human Errors
Even with the best of intentions and procedures in place, mistakes do happen. It may
be that your employees forget to save changes, accidentally delete crucial data or hit
the wrong button without even realizing it. While continuous training helps to keep
staff updated on best practices, it is important that you take cloud backup of your
data regularly.
4. Cyber-attacks
Even with all security measures in place, your data may fall victim to cyber-attacks
with viruses and other malware holding your data hostage, causing immense
financial damage and loss of reputation. Your disaster recovery plan should include
steps to keep data safe and recover from any attempt at hacking.
Cloud computing disaster management can therefore help limit any loss that may
occur due to data leak or loss. It is important to note that while you can store your
data on-site, cloud backup can support your business and see you through any
unplanned downtime. Cloud backup solutions offer faster and assured recovery,
while also reducing costs and freeing up your IT staff to focus on core business
functions. The other advantage of cloud data recovery solutions is the fact that they
automatically store the data safely off-site for disaster recovery purposes.

Why You Need a Cloud Backup and Data Disaster Recovery Plan

Here are five reasons why you must have a robust cloud backup and disaster recovery plan:

1. Achieve Secure Off-site Backup

You may be checking off everything on your data recovery plan: taking regular backups, verifying that your backup equipment is functioning properly and that all configurations are up to date. However, a fire in the building or a burst pipe that drowns your equipment is all it takes to rob you of your critical data. Once that occurs, you will also lose precious time recreating the data, which could cost you in terms of revenue and customer goodwill.

The way forward is to store your data off-site and keep it safe from any kind of disaster. A disk-based cloud server automatically transfers data off-site for data protection and recovery. So, no matter what form the disaster takes, you can regain control over your data within moments.
2. Free Up Time From Manual Backup Tasks
Tape-based data backup is time-consuming and complex. As an organization, you
may have limited IT staff and rather than putting them to use in taking manual data
backups, they would be more gainfully employed on strategic projects crucial to your
business.
Cloud disaster recovery plans, on the other hand, take backups automatically. These
solutions standardize and automate the entire data backup process, and so you
don’t need IT staff at each location. Besides, these applications also equip your IT
teams with the tools to manage and monitor all aspects of server data protection.
3. Simpler Budgeting

Cloud backup involves a monthly service fee, unlike traditional data backup solutions that require acquiring software licenses for specific servers. With cloud computing disaster management, you are saved from associated costs such as software, media, backup hardware and maintenance. With cloud backup, the service provider bears the cost of the entire infrastructure, and you benefit from a monthly service fee that is simple to budget and predict.
4. Guaranteed Data Recovery

Even if lightning strikes your building, you can rest easy knowing your data in the cloud is safe. It is not always possible to test your internal backups. Cloud backup automatically transmits changes in files and documents to a secure, off-site facility to ensure continuous backup at all times. Modern cloud disaster recovery solutions not only protect recently changed files you have closed but also capture changes in open files, ensuring that there is no disruption in the process flow.
5. Minimize Risk and Cost of Downtime

Cloud backup is not a single activity or a one-time event. Rather, it consists of several interconnected processes, some of which are:

1. Replicate the backup to another device
2. Transfer the backup to an off-site location to protect against both natural and human-made disasters
3. Organize the data in a way that makes recovery quick and easy
4. Recover replicated data from storage as and when needed, wherever needed

Conclusion

If you have lost data to a natural disaster or human error, or fallen victim to a cyber attack, traditional tape recovery methods will not stand you in good stead. In these competitive times, no organization, large or small, can afford to lose access to its critical data even for a few hours. To protect yourself from such vulnerabilities, you need a cloud disaster recovery plan that includes cloud backup of data.

Cloud computing management solutions offer ease of use and a wide range of control: you can choose which functions to automate yet retain the ability to manage and monitor the entire data backup process from anywhere, at any time. You can provide access to authorized users, who can simply log in and, through a user interface, select the files to recover and restore them to any location.

These solutions automatically recreate data and capture system information, enabling you to easily restore a full system to alternative hardware in any location of your choice with minimal IT assistance. In addition, a cloud disaster recovery plan allows several levels of protection, meaning you can protect specific types of data on a per-server or per-folder basis.

Protecting Data Effortlessly with Cloudian

If you need to back up data to on-premises storage, Cloudian offers low-cost disk-based storage with capacity up to 1.5 petabytes. You can also set up a Cloudian appliance in a remote site and save data directly to the remote site using the integrated data management tools.

Alternatively, you can use a hybrid cloud setup: back up data to a local Cloudian appliance and configure it to replicate all data to the cloud. This allows you to access data locally for quick recovery while keeping a copy of the data in the cloud in case a disaster affects the on-premises data center.
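Because Cloudian's storage is S3-compatible, a hybrid setup like the one described can also be scripted with ordinary S3 tooling. The sketch below writes a backup to a local appliance endpoint and copies the object to a public cloud bucket; the endpoint URL and bucket names are hypothetical, and an appliance's built-in replication feature could do this server-side instead.

# Sketch of a hybrid backup: write to a local S3-compatible appliance, copy off-site.
# The endpoint URL and bucket names are hypothetical placeholders.
import boto3

local = boto3.client("s3", endpoint_url="https://cloudian.local")   # on-prem appliance
cloud = boto3.client("s3")                                          # public cloud

local.upload_file("backup.tar.gz", "local-backups", "backup.tar.gz")

# Replicate the object off-site so a local disaster can't destroy the only copy.
obj = local.get_object(Bucket="local-backups", Key="backup.tar.gz")
cloud.put_object(Bucket="offsite-dr-backups", Key="backup.tar.gz", Body=obj["Body"].read())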
Disaster Recovery as a Service (DRaaS): Why,
Where and How
What is Disaster Recovery as a Service?
Disaster Recovery as a Service (DRaaS) is disaster recovery hosted by a
third party. It involves replication and hosting of physical or virtual
servers by the provider, to provide failover in the event of a natural
disaster, power outage, or other disaster that affects business continuity.

The basic premise of DRaaS is that, in the event of a real disaster, the remote vendor -- which typically has a globally distributed architecture -- is less likely to be impacted than the customer. This allows the vendor to support the customer in a worst-case disaster recovery scenario, in which a disaster results in the complete shutdown of the organization's physical facilities or computing resources.

Third-party DRaaS vendors can provide failover for on-premises or cloud computing environments, billed either on demand, according to actual usage, or through ongoing retainer agreements. DRaaS requirements and expectations are typically recorded in service-level agreements (SLAs).
What is disaster recovery as a service (DRaaS)?

Disaster recovery as a service (DRaaS) is a cloud computing service model that allows an organization to back up its data and IT infrastructure in a third-party cloud computing environment and provides all the DR orchestration through a SaaS solution, so the organization can regain access to and functionality of its IT infrastructure after a disaster. The as-a-service model means that the organization itself doesn't have to own all the resources or handle all the management for disaster recovery, relying instead on the service provider.

Disaster recovery planning is critical to business continuity. Many disasters that have
the potential to wreak havoc on an IT organization have become more frequent in
recent years:

1. Natural disasters such as hurricanes, floods, wildfires and earthquakes
2. Equipment failures and power outages
3. Cyberattacks


DRaaS Operating Models

There are three primary models used by disaster recovery as a service providers: managed, assisted, and self-service.

Managed DRaaS
In the managed DRaaS model, third parties take full responsibility for
disaster recovery. Choosing this option requires organizations to work
closely with DRaaS providers to keep all infrastructure, application, and
service changes up to date. If you don’t have the expertise and time to
manage your own disaster recovery, this is the best option.

Assisted DRaaS

If you want to take responsibility for certain aspects of your disaster recovery plan, or if you have custom applications that may be difficult for a third party to take over, assisted DRaaS may be a better choice. In this model, the service provider offers services and expertise that can help optimize the disaster recovery process, but the customer is responsible for implementing some or all of the disaster recovery plan.


Self-Service DRaaS
The cheapest option is a self-service DRaaS, where customers are
responsible for planning, testing, and managing disaster recovery, and
the vendor provides backup management software, and hosts backups
and virtual machines in remote locations. This model is offered by all
major cloud providers—Amazon, Microsoft Azure and Google Cloud.

When using this model, careful planning and testing are required to ensure that operations can be immediately failed over to the vendor's remote data center and easily recovered when local resources are restored. This option is ideal for organizations with in-house disaster recovery and cloud computing expertise.

How Does DRaaS Work?

The DRaaS provider supplies infrastructure that serves as the customer's disaster recovery site when a disaster happens. The service offered by the provider typically includes a software application or hardware appliance that can replicate data and virtual machines to a private or public cloud operated by the provider.

In managed DRaaS, the provider is responsible for the failover process, ensuring users are redirected from the primary environment to the remote environment. DRaaS providers also monitor disaster recovery operations and help customers recover systems and resume normal operation. In other forms of DRaaS, your organization will need to assume responsibility for some of these tasks.

Hosted DRaaS is especially useful for small businesses that lack in-house
experts to design and execute disaster recovery plans. The ability to
outsource infrastructure is another benefit for smaller organizations,
because it avoids the high cost of equipment needed to run a disaster
recovery site.

BaaS vs DRaaS
Backup as a Service (BaaS) allows businesses to back up files, folders
and entire data stores to remote secure data centers. It is provided by
third-party managed service providers (MSP). It is the MSP’s
responsibility to maintain and manage backups, rather than having the IT
department manage them locally.

There are three primary differences between BaaS and DRaaS:

1. BaaS only backs up data, whereas DRaaS is responsible for backing up data and infrastructure. In a DRaaS service, the MSP is responsible for deploying entire servers and ensuring they are available to users.
2. BaaS can perform data recovery, but the RPO (recovery point objective) and RTO (recovery time objective) are typically measured in hours or days. This is because, for large datasets, it can take a long time to transfer data back from the MSP to your on-premises data center. With DRaaS solutions, you can measure RPO and RTO in minutes or even seconds, because a secondary version of your servers is ready to run on a remote site.
3. BaaS costs are significantly lower than DRaaS, because the main cost
is storage resources used by your backups. In DRaaS you need to pay
for additional resources including replication software, compute and
networking infrastructure.

What Should You Consider When Choosing a DRaaS?

The following are key considerations when selecting a DRaaS provider for your organization.

Reliability
In the early days of DRaaS, there were concerns about the resources
available to the DRaaS provider, and its ability to service a certain
number of customers in case of a widespread regional disaster.

Today, most DRaaS services are based on public cloud providers, which have virtually unlimited capacity. At the same time, even public clouds have outages, and it is important to understand what happens if, when disaster strikes, the DRaaS vendor is unable to provide services. Another, more likely scenario is that the DRaaS vendor will perform its duties but will not meet its SLAs. Understand what your rights are under the contract, and how your organization will react and recover, in each situation.

Access
Work with your DRaaS provider to understand how users will access
internal applications in a crisis, and how VPN will work—whether it will
be managed by the provider or rerouted. If you use virtual desktop
infrastructure (VDI), check the impact of a failover event on user access,
and determine who will manage the VDI during a disaster.

If you have applications accessed over the internet, coordinate with providers, customers, partners, and users on how DNS will work in a crisis -- whether it should be transitioned to DNS managed by the provider or kept with the same DNS (this also depends on whether your DNS is hosted or self-managed). DNS is a mission-critical service; if it doesn't work smoothly during a disaster, then even systems that are successfully transitioned will effectively be offline.
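For a concrete picture of the DNS piece, here is a sketch that repoints a record at the DR site's address during failover. It assumes the zone is hosted in Amazon Route 53 and uses boto3; the zone ID, domain and IP address are hypothetical, and the same idea applies with any DNS provider's API.

# Sketch: repoint a DNS A record at the DR site during failover.
# Assumes Amazon Route 53 via boto3; zone ID, domain and IP are hypothetical.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",
    ChangeBatch={
        "Comment": "failover to DR site",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "TTL": 60,   # a short TTL lets the change propagate quickly
                "ResourceRecords": [{"Value": "203.0.113.10"}],   # DR site address
            },
        }],
    },
)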

Assistance
Ask prospective DRaaS providers about the standard process and
support they provide, during normal operations and during a crisis.
Determine:

1. What the disaster recovery procedure is
2. What professional services the provider offers in time of disaster
3. What responsibility lies with the provider vs. your organization
4. What the testing process is -- determine whether you can run tests for backup and recovery internally, and whether testing or disaster "drills" are conducted by the provider
5. After declaring a disaster, how long the provider can run your workloads before recovery (to account for long-term disaster scenarios)

Using DRaaS to prepare for a disaster

True DRaaS mirrors a complete infrastructure in fail-safe mode on virtual servers, including compute, storage and networking functions. An organization can continue to run applications -- it just runs them from the service provider's cloud or hybrid cloud environment instead of from the disaster-affected physical servers. This means recovery time after a disaster can be much faster, or even instantaneous. Once the physical servers are recovered or replaced, the processing and data are migrated back onto them. Customers may experience higher latency when their applications run from the cloud instead of from an on-site server, but the total business cost of downtime can be very high, so it's imperative that the business can get back up and running.

What are Cloud Security Standards?

Cloud security standards are rules, best practices, and guidelines created by industry organizations, global entities, and governmental bodies. Their main goal is to create a foundational level of security for cloud services. They play a critical role in protecting cloud data, safeguarding privacy, ensuring regulatory adherence, and managing risks related to cloud computing. They are vast in scope, tackling everything from data protection to access control, identity verification, incident response, and even encryption protocols.

But the emphasis of these standards isn’t solely on the technology. They
also incorporate operational and organizational elements of security,
touching on aspects like risk management, security in human resources,
supply chain security, and the formulation of security policies. The aim is
to provide a holistic approach to creating a secure, reliable cloud
environment.

However, cloud security standards are not universally applicable. Different organizations or specific use cases may require different standards. Certain standards are designed specifically for handling specific types of data -- healthcare, financial, or government, for example. Therefore, understanding cloud security standards and their relevant use cases is vital for organizations to choose and implement the ones that cater to their specific needs and regulatory requirements.

Why are Cloud Security Standards Important?

Cloud security standards are more than just beneficial -- they're crucial amid today's escalating cyber threats. They serve several key purposes that make them indispensable for organizations.

These standards offer a structured path for companies to secure their cloud-based data and services effectively. They act as a blueprint for constructing sturdy security infrastructures capable of fending off numerous threats, from data breaches to DoS attacks. Importantly, as these standards evolve, they help organizations keep pace with the newest security best practices.

Compliance is another area where cloud security standards shine. Strict
data protection and privacy regulations bind industries like healthcare,
finance, and government. Organizations can meet these regulatory
demands and avoid the heavy fines linked with non-compliance by
sticking to the appropriate cloud security standards.

Moreover, these standards build credibility among stakeholders, such as
customers, partners, and regulators. They assure these parties of an
organization’s dedication to data protection and secure cloud
environments, thereby fostering trust and confidence. In a marketplace
where a data breach can spell disaster in terms of reputation and
customer trust, not to mention financial losses, this can serve as a
significant competitive edge.

These standards assist organizations in devising an effective strategy for
responding to incidents. Regardless of the strength of security measures
in place, incidents can still happen. A detailed, standard-based response
plan can help limit the damage, shorten downtime, and promote quick
recovery in such events.

Top 12 Cloud Security Standards


Navigating the complex landscape of cloud security can seem like a
daunting task. Understanding and implementing the right cloud security
standards is crucial in this journey. Let’s delve into the top 12 Cloud
Security Standards to help secure your cloud data, ensure compliance,
and foster stakeholder trust.

#1. ISO 27017


The ISO/IEC 27017 standard acts as a guide focusing on information
security relevant to cloud computing. It suggests security controls for
both parties – the cloud service providers and the customers. This
standard extends the reach of ISO/IEC 27002, adjusting it to cater to the
specific needs of cloud services. When organizations incorporate ISO/IEC
27017, they can bolster their cloud services’ security, dependability, and
compliance, aligning with international best practices.

ISO/IEC 27017 discusses a variety of controls, like the ownership of assets,
management of user access, and division of duties, among others.
Defining roles and responsibilities helps in avoiding security loopholes
and overlapping, making it an invaluable resource for managing and
lessening risks associated with the cloud.

#2. ISO 27018


Being the pioneer international standard that deals with personal data
protection in cloud computing, ISO/IEC 27018 establishes universally
recognized control objectives and protocols. These controls are aimed at
implementing measures to safeguard Personally Identifiable Information
(PII), keeping in sync with the privacy principles stated in ISO/IEC 29100.

ISO/IEC 27018 carries immense relevance for businesses that deal with
personal data via cloud-based platforms. When organizations implement
this standard, it acts as a testament to their commitment to data privacy
and protection, strengthening customer trust. Additionally, it aids in
ensuring adherence to privacy laws such as GDPR and CCPA.

#3. Cloud Security Alliance (CSA) STAR Program

STAR stands for Security, Trust & Assurance Registry,
a project by the Cloud Security Alliance. It leans on three pillars:
transparency, in-depth audits, and bringing diverse standards together.
This program offers a sturdy structure for cloud service providers to
scrutinize their security protocols.

As a customer, the CSA STAR can be your guiding star when you need to
evaluate how good a cloud service provider is when it comes to security.
It comes equipped with two useful tools: the Consensus Assessments
Initiative Questionnaire (CAIQ) and the Cloud Controls Matrix (CCM).
Together, these tools form a broad security controls framework custom-
built for cloud-based IT systems.

#4. SOC 2 Type II


Introduced by the American Institute of Certified Public Accountants
(AICPA), this standard assesses non-financial controls within a business,
concerning key areas such as security, availability, processing integrity,
confidentiality, and privacy – collectively known as the Trust Services
Criteria.

A Type II report holds a lot of weight. Why, you ask? Well, it’s proof that an
external auditor has meticulously reviewed an organization’s systems,
practices, and controls. More than that, it’s evidence that these controls
were properly designed and were consistently effective over a specified
period. For any organization that is serious about demonstrating a gold-
standard level of security assurance to customers and other
stakeholders, a Type II report is highly desirable.

#5. NIST 800-53


Crafted by the National Institute of Standards and Technology (NIST),
NIST 800-53 is a wide-ranging catalog of security controls
designed for federal information systems and organizations. An important
thing about it is that it offers a rich array of security and privacy controls
that can be tweaked to suit the unique requirements of different systems
and organizations.

Although it was originally designed with U.S. federal government agencies
in mind, the principles laid out in NIST 800-53 have proven universal.
They can be effectively adopted by a variety of sectors and by
businesses of all sizes. If you’re looking to put in place and evaluate
security procedures in order to enhance your company’s overall
cybersecurity stance, NIST 800-53 could be a great resource for you.

#6. PCI DSS


Ever made a purchase with a credit card? There's a good chance the company
you dealt with followed the Payment Card Industry Data Security
Standard (PCI DSS) rules. It's not just some abstract concept; it's a reality
for businesses around the world. The PCI DSS ensures that any outfit
accepting, processing, storing or transmitting credit card information keeps
it properly secured.

If a company is dealing with cardholder data, they've got to stick to the
PCI DSS. No two ways about it. Apart from making sure they stay in line
with the law and avoid hefty fines, it also helps them dodge payment card
fraud. Plus, in an age where data breaches are more common than we’d
like, it’s a pretty neat way for companies to show their customers they
mean business when it comes to security.

#7. HIPAA/HITECH
If you’re a healthcare provider or deal with health plans and you’re tossing
around Protected Health Information (PHI), you’ve got to pay attention to
the Health Insurance Portability and Accountability Act (HIPAA) and the
Health Information Technology for Economic and Clinical Health (HITECH)
Act. We’re talking U.S. laws here, folks. They’re not optional. They’re all
about making sure that PHI is handled properly.

Sticking to the HIPAA/HITECH guidelines is a big deal if you're dealing with
PHI in the cloud. It's not just about doing the right thing; it's also a great
way to show patients and partners that you’re serious about keeping
sensitive health information under wraps. Not to mention, you’re going to
avoid potential legal issues.

#8. FedRAMP (Federal Risk and Authorization Management Program)
FedRAMP sweeps across the U.S. government scene, laying down the law
for a uniform way to evaluate security, grant approvals, and keep a
watchful eye on cloud products and services.

For those cloud service providers with dreams of mingling with U.S.
federal agencies, FedRAMP authorization isn’t a luxury, it’s a must-have.
But don’t be mistaken – even if your ties with the U.S. government aren’t
direct, marching to the beat of FedRAMP standards is a bold statement of
your dedication to top-notch security.

#9. General Data Protection Regulation (GDPR)
GDPR is an ace up the European Union’s sleeve, setting down firm
demands for safeguarding data and preserving privacy for every
individual residing within the European Union and the European Economic
Area. It doesn’t stop there though; it also delves into the transfer of
personal data beyond these borders.

While it may not be cut from the same cloth as the usual cloud security
standards, any organization that uses cloud services to process, store, or
shuffle around the personal data of EU residents can’t afford to ignore
GDPR. Straying from its guidelines can lead to weighty financial blows,
making GDPR an unmissable stop on any cloud security strategy’s
itinerary.

#10. California Consumer Privacy Act (CCPA)
CCPA walks a similar path as the GDPR, but it’s designed to boost privacy
rights and consumer protection specifically for the people of California,
United States. It arms California’s residents with the right to know what
personal details are being harvested, whether these details are being sold
or disclosed, and to whom.

CCPA’s influence, however, isn’t confined to the Golden State. Given the
borderless nature of cloud services, it casts a wider net. Compliance with
CCPA isn’t just a legal necessity; it’s a message to customers and
partners that your organization is steadfast in its commitment to data
privacy.

#11. Cybersecurity Maturity Model Certification (CMMC)
This standard operates as a unifying beacon for cybersecurity in the
defense industrial base that forms the U.S. Department of Defense
supply chain. It gauges cybersecurity maturity across five tiers and maps
a series of processes and practices against the nature and sensitivity of
the data needing protection and the array of associated threats.

If your organization aims to work with the Department of Defense,
securing the right CMMC level becomes pivotal. It showcases that the
securing the right CMMC level becomes pivotal. It showcases that the
company has the required controls to safeguard sensitive data,
potentially encompassing Federal Contract Information and Controlled
Unclassified Information.

#12. Amazon Web Services (AWS) Well-Architected Framework
Although not a traditional standard, the AWS Well-Architected
Framework represents a comprehensive guide from Amazon, aimed at
facilitating the creation of secure, high-performing, and cost-efficient
systems on the AWS platform. It paves the way for customers to
consistently assess architectures and put into effect designs that will
dynamically scale over time.

For organizations utilizing AWS cloud services, embracing this framework
could provide substantial benefits. It lays down best practices across five
key aspects: operational excellence, security, reliability, performance
efficiency, and cost optimization. This aids organizations in constructing
the most secure, efficient, high-performing, and resilient infrastructure for
their applications.

Conclusion
Wrapping up, navigating the intricacies of cloud security is both complex
and paramount. Organizations that adhere to relevant Cloud Security
Standards can safeguard their data, meet regulatory compliance, and
build trust with stakeholders. That said, executing and maintaining cloud
security can pose significant challenges.

This is where PingSafe, a comprehensive cloud security solution, steps in
to simplify the process. Equipped with unique features such as Cloud
Misconfigurations, Vulnerability Management, Offensive Security Engine,
Cloud Credential Leakage detection, and Cloud Detection and Response
(CDR), PingSafe empowers you to spot vulnerabilities, stay on top of
threats, manage vulnerabilities effectively, and secure your overall cloud
environment.

To delve deeper into how PingSafe can bolster the security of your cloud
environment, reach out to us today.

A Quick Look at Cloud Security Standards Best Practices
There are a number of cloud security best practices that
organizations can adhere to amidst expanding workloads in their
respective cloud environments. Although these practices are not
codified in a single standard, following them has been shown to
safeguard data in cloud environments. CSPs (Cloud Service
Providers) use the shared responsibility model to maintain security
and accept responsibility for some security aspects. Other aspects
are shared between the organization and the CSP or remain solely
the organization's responsibility. Some of the key best
practices for cloud security are explained below.

Performing Due Diligence

It is imperative for cloud users to understand their applications and
networks completely, in order to determine how to provide
functionality, security and resilience for cloud-deployed systems.
Due diligence should be performed across the lifecycle of the systems
and applications being deployed in the cloud. This due
diligence spans planning, development, deployment, operations and
decommissioning.

Access Management

Organizations need to maintain complete control over their encryption
keys. Three capabilities are a must-have in access management
(a minimal sketch of all three follows the list):

1. The ability to identify & authenticate users
2. The ability to assign access rights to users
3. The ability to develop and enact access control policies for all
resources
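
Below is a minimal, illustrative Python sketch of these three capabilities. It is
not any particular CSP's IAM API; every name in it (users, roles, resources) is
invented for the example.

import hashlib
import hmac
import os

# Capability 1: identify & authenticate users (salted password hashes).
_users = {}  # username -> (salt, password_hash)

def register(username, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _users[username] = (salt, digest)

def authenticate(username, password):
    if username not in _users:
        return False
    salt, digest = _users[username]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(digest, candidate)

# Capability 2: assign access rights to users (here, via roles).
_policy = {
    "analyst": {"reports": {"read"}},
    "admin": {"reports": {"read", "write"}, "keys": {"rotate"}},
}
_roles = {}  # username -> role

def assign_role(username, role):
    _roles[username] = role

# Capability 3: develop and enact access control policies for all resources.
def is_allowed(username, resource, action):
    role = _roles.get(username)
    return action in _policy.get(role, {}).get(resource, set())

register("asha", "s3cret")
assign_role("asha", "analyst")
print(authenticate("asha", "s3cret"))         # True
print(is_allowed("asha", "reports", "read"))  # True
print(is_allowed("asha", "keys", "rotate"))   # False

In a real cloud deployment, each function maps onto the provider's identity
service: a user directory, role assignments, and resource policies.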

Data Protection

There are three separate challenges involved in data protection,
which go beyond access controls (a sketch of the first challenge
follows the list). These are:

1. Data protection against unauthorized access
2. Ensuring ceaseless access to crucial data in the case of
failures and errors
3. Prevention of accidental disclosure of data that was
presumed deleted
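
As a small illustration of the first challenge, the sketch below encrypts a
record before it is written to cloud storage, using the widely used Python
"cryptography" package; the key handling and data are placeholders for the
example. Challenge 2 is usually met with replication and backups, and
challenge 3 can be addressed by crypto-shredding: destroying the key so that
any lingering ciphertext becomes unreadable.

from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, hold this in a managed key store (KMS)
fernet = Fernet(key)

record = b"customer-id=1042; plan=enterprise"
ciphertext = fernet.encrypt(record)          # what actually lands in cloud storage
assert fernet.decrypt(ciphertext) == record  # only key holders can recover the data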

Monitoring and Safeguarding

The responsibilities for monitoring cloud-deployed systems and
applications are divided between CSPs and consumers. The CSPs are
responsible for monitoring the services and infrastructure offered to
consumers, but not for monitoring the application security and systems
created by consumers using those services. Consumers need to
design and implement additional monitoring carefully, ensuring that it is
completely integrated with cloud automation and is capable of being
scaled up or down without manual intervention.

Looking At The Prospects

The developments made by regulatory bodies as well as
industry organizations point CSPs and cloud users in the right direction.
They lay the groundwork for a stable and secure cloud computing
environment. The cloud security incidents observed in the
past couple of years show that mishaps could have been avoided if the
right security tools had been used by consumers -- for example,
properly configured access control, multi-factor authentication
provided by CSPs, and precise encryption of data. It is believed that,
for SMEs, approaching well-established CSPs will help reduce the
risks associated with moving data and applications to the cloud.

Cloud-Specific Security Frameworks and Benchmarks

Here are some frameworks to help organizations maintain a high level of
cloud security.

CIS Cloud Security Benchmarks

The CIS Foundations Benchmarks are a component of the cybersecurity
standards overseen by the Center for Internet Security (CIS). CIS
standards overseen by the Center for Internet Security (CIS). CIS
Benchmarks are vendor-agnostic, consensus-based safe configuration
guidelines for the most prevalent technologies and systems.

There are over 100 freely available CIS Benchmarks dealing with dozens of
vendor product groups, including servers, operating systems, mobile
devices, cloud providers, network devices, and desktop software. The CIS
Foundations Benchmarks offer guidance for public cloud environments at
the account level.

The CIS Foundations Benchmarks deal with:

1. Oracle Cloud Infrastructure
2. IBM Cloud
3. Amazon Web Services
4. Microsoft Azure
5. Google Cloud Platform
6. Alibaba Cloud

CIS Benchmarks provide security configuration outlines based on best
practices and are approved by business, government, academia, and
industry bodies. The CIS Foundations Benchmarks are meant for application
and system administrators, security experts, and auditors, as well as for
platform deployment, help desk, and individual DevOps personnel who
wish to create, deploy, secure, or evaluate solutions within the cloud. They
are available free of charge and can be downloaded as PDF documents.

CSA Controls Matrix

This group of security controls, published by the Cloud Security Alliance
(CSA), offers a fundamental outline for security vendors, increasing the
robustness of security control environments and streamlining audits. This
framework also helps prospective customers assess the risk posture of
potential cloud vendors.

The Cloud Security Alliance has created a certification initiative known as
STAR. The CSA STAR certification demonstrates an exceptional cloud
security stance, which is respected by customers. This set of standards
could be the top asset for customers assessing a vendor’s dedication to
security, and is a must for every organization seeking to ensure customer
trust.

The STAR registry outlines the privacy and security controls offered by
common cloud computing features, so cloud customers may evaluate their
security providers to form solid purchasing choices.


Cloud Architecture Frameworks

These frameworks may be viewed as best practice guidelines for cloud
architects, regularly dealing with operational security, efficiency, and cost-
value analysis. Here are three frameworks that cloud architects should be
aware of:
1. AWS Well-Architected framework — helps Amazon Web Services
architects create applications and workloads in the Amazon cloud.
This framework outlines questions for evaluating cloud environments
and offers customers a reliable resource for architecture analysis. Five
core principles guide Amazon architects — security, operational
excellence, performance efficiency, reliability, and cost optimization.
2. Google Cloud Architecture Framework — offers a foundation for
enhancing and constructing Google Cloud features. This framework
helps architects by dealing with four central principles — security and
compliance, operational excellence, performance and cost optimization,
and reliability.
3. Azure architecture framework — helps architects develop cloud-
based features in Microsoft Azure. This guide helps optimize
architecture workloads and is founded on similar principles to the
Google Cloud and AWS Frameworks, such as data security, cost
optimization, dependability, performance efficiency and operational
excellence, which can help organizations retain system functionality
and recover from incidents.

Cloud Security with Exabeam

Even if an all-cloud initiative is not in motion, it's likely your organization
will be moving operations into the cloud in the near future. Before taking
this step, it’s critical to assess how you will go about securing cloud
operations by understanding related security and compliance issues.
Fortunately, a modern security information and event management (SIEM),
or extended detection and response (XDR) solution will let your analysts
address enterprise cloud security with advanced monitoring, behavioral
analytics and automation.

A modern approach automatically collects alert data from across multiple
clouds, detects deviations in normal user and entity activity using
behavioral analytics, and helps analysts quickly respond to attacks on cloud
applications and infrastructure. A modern SIEM or XDR can help you
combat increasingly targeted and complex attacks and insider threats by
augmenting other cloud security solutions like Identity and Access
Management (IAM) and Cloud Access Security Broker (CASB), and Secure
Access Service Edge (SASE) to better detect, investigate and respond to
cloud-based attacks, all while minimizing the detection of false positives.

As cloud-delivered offerings, Exabeam Fusion SIEM and XDR address cloud
security in multiple ways to ensure the protection of sensitive data,
applications and infrastructure. As the leader in Next-Gen SIEM and XDR,
Exabeam dramatically improves SOC productivity, allowing teams to detect,
investigate and respond to cyberattacks in 51 percent less time. Here are a
few of the ways Exabeam supports Cloud Security:

1. Collects alert data by direct ingestion from dozens of cloud security
tools and popular cloud-based services across multiple enterprise
clouds, in addition to hundreds of other products
2. Detects new and emerging threats with behavioral analytics
3. Provides machine-built timelines to improve analyst productivity and
reduce response times by automating incident investigation
4. Includes response playbooks using pre-built connectors and
hundreds of actions to contain and mitigate threats
5. Offers pre-built compliance packages (Exabeam Fusion SIEM)
6. Supports detection and investigation with mappings to MITRE
ATT&CK and the availability of the Exabeam Threat Intelligence
Service, a daily updated stream of indicators of compromise (IoCs)
such as malicious IP addresses and domains
7. Augments other cloud security solutions like IAM and CASB to better
detect, investigate and respond to cloud-based attacks while
minimizing the detection of false positives
Top cloud security standards and frameworks to consider
Cloud security standards and frameworks are key to securing
systems and maintaining privacy. Read up on available options and
advice for selecting the best for your organization.


By Paul Kirvan
Published: 21 Jan 2022

Security standards are lists of best practices and processes defined by
industry organizations to help organizations ensure their security posture
and protect their data and systems.

While many security standards overlap with cloud security standards,
confusion abounds around the shared responsibility model. Customers are
often unsure where a cloud provider's security responsibility ends and
where theirs begins. This makes selecting standards difficult.

The following is a list of professional and technical organizations that work
to address cloud security issues. It includes organizations responsible for
issuing cybersecurity standards and, by extension, cloud security
standards. Also, read guidance on how to select a standard and how to
prepare for potential audits.

Professional and technical organizations


The following groups, task forces and associations offer resources and
standards on cloud security.

Distributed Management Task Force


DMTF develops standards for existing and new technologies, such as the
cloud. Its working groups address cloud issues in greater detail, including
the Open Cloud Standards Incubator, Cloud Management Working Group
and Cloud Auditing Data Federation.

European Telecommunications Standards Institute


ETSI primarily develops telecommunications standards. Among its cloud-
focused activities are the Cloud Standards Coordination working group and
Technical Committee Cloud. Both of these groups address different cloud
technology issues.

Open Grid Forum


OGF develops standards for grid computing, cloud, and advanced digital
networking and distributed computing technologies. Among its cloud-
focused activities is the Open Cloud Computing Interface set of
specifications, which include the OCCI Core specification and OCCI
Infrastructure specification.

Open Commons Consortium


OCC, formerly known as the Open Cloud Consortium, offers an open
knowledge repository of cloud computing and data commons resources via
a variety of academic and scientific research initiatives.

Organization for the Advancement of Structured Information


Standards
OASIS is a nonprofit that develops open standards for security, cloud
technology, IoT, content technologies and emergency management. Its
cloud technical committees include the OASIS Cloud Application
Management for Platforms, OASIS Identity in the Cloud, and OASIS
Topology and Orchestration Specification for Cloud Applications.

Storage Networking Industry Association


SNIA developed the Cloud Data Management Interface (CDMI), which
defines an interface to access cloud storage and to manage the data stored
within the cloud resource. It is typically used by cloud storage systems
developers. CDMI is now an ISO standard, ISO/IEC 17826:2016
Information technology -- CDMI.

The Open Group


This consortium of technology industry organizations develops standards
and accreditations for a variety of IT issues. Its Open Platform 3.0 Forum is
a working group whose activities focus on mobility, social networks, big
data analytics, cloud computing and IoT.

TM Forum
TM Forum is a global consortium of technology firms that offers a
collaborative platform for addressing technology issues. Its Cloud Services
Initiative provides resources on creating cloud standards for both
technology firms and users.

Standards organizations
The following standards organizations create standards, frameworks and
other documents that can be applied to cloud applications. Also included in
this list are regulations and frameworks related to cloud security.

National Institute of Standards and Technology


NIST develops and distributes standards primarily for government use, but
they are widely used by private industry, too. Its Special Publication (SP)
series of standards is used extensively in public and private sectors.
1. NIST SP 500-291 (2011), NIST Cloud Computing Standards Roadmap,
provides a compilation of available standards on cloud computing and
examines standards priorities and where gaps in the standards exist.

2. NIST SP 500-293 (2014), U.S. Government Cloud Computing Technology
Roadmap, provides a detailed framework and structure for cloud
computing infrastructures. While it's designed for government
applications, it can also be used in the private sector.

3. NIST SP 800-53 Rev. 5 (2020), Security and Privacy Controls for
Information Systems and Organizations, is a widely used standard for
information system security and is applicable to cloud security.

4. NIST SP 800-144 (2011), Guidelines on Security and Privacy in Public
Cloud Computing, provides guidance and recommendations on
implementing a secure environment in public cloud services.

5. NIST SP 800-145 (2011), The NIST Definition of Cloud Computing,
describes important aspects of cloud computing and serves as a
benchmark for comparing cloud services and deployment strategies. It
also provides a foundation for discussions on cloud computing and how
to use it.

6. NIST SP 800-210 (2020), General Access Control Guidance for Cloud
Systems, describes cloud access controls, security controls and guidance
for cloud-based delivery options, such as IaaS and PaaS.

7. NIST Standards Acceleration to Jumpstart Adoption of Cloud Computing
performs three activities that work together to encourage greater use of
cloud: NIST recommends existing standards; NIST coordinates
contributions from various organizations into cloud specifications; and
NIST identifies gaps in cloud standards and encourages outside firms to
fill them.

8. NIST Cloud Computing Program (NCCP) defines a model and framework
for building a cloud infrastructure. NCCP is composed of five advanced
technology characteristics: on-demand self-service, broad network
access, resource pooling, rapid elasticity and measured service. It covers
SaaS, PaaS and IaaS models, as well as private, public and hybrid cloud
deployment models.

9. NIST Cybersecurity Framework is a voluntary framework primarily
intended for critical infrastructure organizations to manage and mitigate
cybersecurity risks based on existing best practices. It can also be used
by non-U.S. and non-critical infrastructure organizations.
International Organization for Standardization
ISO develops standards for many kinds of systems and technologies,
including the following for cloud environments:

1. ISO/IEC 17789:2014, Information technology -- Cloud computing --
Reference architecture, defines cloud computing roles, cloud computing
activities, and cloud computing functional components and how they
interact.

2. ISO/IEC 17826:2016, Information technology -- CDMI, as mentioned
above, defines an interface to access cloud storage and to manage the
data stored within the cloud resource.

3. ISO/IEC 18384:2016, Information technology -- Reference Architecture
for Service Oriented Architecture, defines vocabulary, guidelines and
general technical principles underlying service-oriented architectures,
which are often deployed in cloud platforms.

4. ISO/IEC 19086:2016, Information technology -- Cloud computing --
Service level agreement framework, provides the framework for
preparing SLAs for cloud services.

5. ISO/IEC 19941:2017, Information technology -- Cloud computing --
Interoperability and portability, specifies the interoperability and
portability aspects of cloud computing.

6. ISO/IEC 19944:2020, Cloud computing and distributed platforms -- Data
flow, data categories and data use, describes how data moves among
cloud service vendors and users of cloud services.

7. ISO/IEC 22123:2021, Information technology -- Cloud computing -- Part
1: Vocabulary and Part 2: Concepts, provides the fundamental terms and
definitions in cloud computing.

8. ISO/IEC Technical Report 22678:2019, Information technology -- Cloud
computing -- Guidance for policy development, provides guidance for
developing cloud-focused policies.

9. ISO/IEC Technical Specification 23167:2020, Information technology --
Cloud computing -- Common technologies and techniques, describes
technologies and techniques used in cloud computing, such as VMs,
microservices and containers.

10. ISO/IEC 27001:2013, Information technology -- Security techniques --
Information security management systems -- Requirements, provides the
framework and guidance for creating an information security
management system that is applicable to cloud and noncloud
applications. It's also a framework for conducting cloud security audits.

11. ISO/IEC 27002:2013, Information technology -- Security techniques --
Code of practice for information security controls, is the companion
standard to ISO 27001. It supports and facilitates ISO 27001
implementation by providing best practice guidance on applying the
security controls listed in the standard.

12. ISO/IEC 27017:2015, Information technology -- Security techniques --
Code of practice for information security controls based on ISO/IEC
27002 for cloud services, provides guidance on the information security
aspects of cloud computing and cloud-specific information security
controls.

13. ISO/IEC 27018:2019, Information technology -- Security techniques --
Code of practice for protection of personally identifiable information in
public clouds acting as PII processors, provides guidance on ensuring
privacy within public cloud environments that process PII.
ISACA
ISACA, previously known as the Information Systems Audit and Control
Association, is a professional organization that addresses information
assurance, governance and security for audit professionals. It created the
Control Objectives for Information and Related Technologies (COBIT)
framework. COBIT is widely used in IT governance and security.

Payment Card Industry Data Security Standard


PCI DSS applies to organizations that process, store or transmit cardholder
data. It is applicable to cloud service providers (CSPs).

General Data Protection Regulation


GDPR is a global data protection regulation developed by the European
Union. It addresses the need for a broad range of data protection activities,
especially cybersecurity.

Health Insurance Portability and Accountability Act Security Rule


The HIPAA Security Rule is used as an audit and assessment standard for
healthcare and nonhealthcare institutions. Part 164, in particular, includes
requirements for protecting the security and integrity of electronic personal
health information.

Federal Risk and Authorization Management Program


FedRAMP is a framework that provides standardized guidelines to help
federal agencies and the private sector evaluate cyberthreats and cyber
risks to infrastructure platforms and cloud-based services and software
options.

Federal Information Security Management Act


FISMA is a framework and set of compliance rules that define security
actions government agencies can use to enhance their cybersecurity
posture and protect critical information systems from different types of
attacks.

How to select an appropriate standard


With so many standards, regulations, frameworks and other practice
documents, IT professionals often have difficulty selecting the most
relevant option for their organization.

If your organization is looking to deploy its own cloud services, review the
aforementioned standards, conduct research into the various cloud working
groups and technical committees, and examine the standards being used
by major CSPs, such as AWS and Microsoft Azure. Chances are IT
departments will have already performed considerable due diligence on
these issues, so achieving compliance with standards will be an important
outcome.

When using a third-party cloud provider, check how it achieves compliance
with cloud security standards. Ask qualified individuals about security
compliance as part of the evaluation process. Alternatively, examine a cloud
vendor's most recent System and Organization Controls Type 2 (SOC 2)
reports. These reports examine the controls used by vendors to protect
customer data and verify the operational effectiveness of those controls.
For CSPs, SOC 2 reports should document the standards and practices the
vendor uses to protect the security and privacy of user data.

How to prepare for a cloud security audit


Depending on who is performing the audit -- the IT department, the internal
audit department or an external IT auditor -- ensure existing security
controls, especially those applicable to cloud services, are documented and
periodically reviewed and updated. Make sure the audit entity has
experience with cloud services and cloud security controls.

To start, identify the controls that need to be addressed by security policies
and procedures. As with any audit, preparation is essential. Evidence
supporting the performance of security controls is essential for a smooth
and hassle-free audit experience.

Organizations should select the cloud security standards that are most relevant to
their industry and business needs. Compliance with cloud security standards can
help organizations to:

1. Reduce the risk of data breaches and other security incidents
2. Improve customer trust and confidence
3. Meet regulatory requirements

It is important to note that cloud security standards are not static. They are
constantly evolving to keep up with the latest threats and technologies.
Organizations should regularly review their cloud security posture and update their
security controls to ensure compliance with the latest standards.

OpenID (OpenID Connect)


By TechTarget Contributor

OpenID is an open specification for authentication and single sign-on
(SSO).

OpenID, which was first created in 2005, allows websites and
authentication services to exchange security information in a standardized
way. In February 2014, the OpenID Foundation launched a new version of
the protocol called OpenID Connect. OpenID Connect builds on
the OAuth 2.0 authorization framework to improve identity management,
interoperability and support for developing mobile applications.

The goal of OpenID Connect is to allow an end user to log in once and
access multiple, disparate resources on and off the Web. The specification,
which has the backing of numerous cloud providers, including Google and
Microsoft, is expected to pave the way for companies to replace their on-
premises identity and access management (IAM) systems with cloud
offerings.

What is OpenID in Cloud Computing?

OpenID is an open standard for authentication and single sign-on (SSO) in cloud
computing. It allows users to log in to multiple cloud applications using a single set of
credentials. This is done by using a trusted third-party identity provider (IDP), such
as Google or Microsoft.
When a user wants to log in to an application that supports OpenID, they are
redirected to the IDP's website. The user then authenticates with the IDP using their
existing credentials. Once the user is authenticated, the IDP provides the application
with a token that can be used to verify the user's identity.

The use of OpenID in cloud computing has several benefits:

1. Convenience: Users only need to remember one set of credentials to log in to
multiple applications.
2. Security: Users' passwords are stored centrally with the IDP, which reduces
the risk of compromise.
3. Scalability: OpenID is a scalable solution that can be used to support a large
number of users and applications.

Many popular cloud computing platforms, such as Amazon Web Services (AWS),
Microsoft Azure, and Google Cloud Platform (GCP), support OpenID. This makes it
easy for organizations to adopt OpenID as their authentication solution for their cloud
applications.

Here are some examples of how OpenID can be used in cloud computing:

1. A user can use their Google credentials to log in to a cloud-based CRM
application.
2. An employee can use their corporate IDP credentials to log in to a cloud-
based productivity suite.
3. A developer can use their GitHub credentials to log in to a cloud-based
container registry.

Overall, OpenID is a powerful tool that can be used to improve the security,
convenience, and scalability of authentication in cloud computing.

Here is a more detailed elaboration of OpenID in cloud computing:

How OpenID works in cloud computing

OpenID works in cloud computing by using a trusted third-party identity provider
(IDP) to authenticate users. The IDP is responsible for verifying the user's identity
and providing the cloud application with a token that can be used to verify the user's
identity.

The following steps describe the OpenID authentication process in cloud
computing (a minimal sketch of step 2 appears after the list):

1. The user visits a cloud application that supports OpenID.
2. The application redirects the user to the IDP's website.
3. The user authenticates with the IDP using their existing credentials.
4. Once the user is authenticated, the IDP redirects the user back to the cloud
application.
5. The IDP also provides the cloud application with a token that can be used to
verify the user's identity.
6. The cloud application verifies the token and grants the user access to the
application.
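
The following is a minimal Python sketch of step 2, in which the application
builds the URL it redirects the user to. The endpoint, client_id and
redirect_uri are made-up placeholders; real values come from registering the
application with the IDP.

from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://idp.example.com/authorize"  # placeholder

params = {
    "response_type": "code",                       # authorization code flow
    "client_id": "my-cloud-app",                   # issued at registration
    "redirect_uri": "https://app.example.com/cb",  # where the IDP sends the user back
    "scope": "openid email",                       # request identity plus email claim
    "state": "af0ifjsldkj",                        # anti-CSRF value, checked on return
}
login_url = AUTHORIZE_ENDPOINT + "?" + urlencode(params)
print(login_url)  # the app answers the browser with a 302 redirect to this URL

The state value is generated per login attempt and verified when the IDP
redirects back, protecting the flow against cross-site request forgery.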

Benefits of using OpenID in cloud computing

There are several benefits to using OpenID in cloud computing, including:

1. Convenience: Users only need to remember one set of credentials to log in to
multiple cloud applications.
2. Security: Users' passwords are stored centrally with the IDP, which reduces
the risk of compromise.
3. Scalability: OpenID is a scalable solution that can be used to support a large
number of users and applications.
4. Flexibility: OpenID can be used with a variety of cloud computing
platforms, including AWS, Azure, and GCP.
5. Reduced costs: Organizations can save money on IT costs by using OpenID
to manage user authentication for their cloud applications.

Use cases for OpenID in cloud computing

Here are some examples of how OpenID can be used in cloud computing:

1. Single sign-on (SSO): OpenID can be used to implement SSO for cloud
applications. This allows users to log in to multiple cloud applications with a
single set of credentials.
2. Identity federation: OpenID can be used to federate identity between cloud
applications and on-premises applications. This allows users to use the same
set of credentials to log in to both cloud and on-premises applications.
3. API access control: OpenID can be used to control access to cloud APIs. This
allows organizations to restrict access to their APIs to authorized users.

Overall, OpenID is a powerful and versatile tool that can be used to improve the
security, convenience, and scalability of authentication in cloud computing.
OpenID Connect versus OpenID

OpenID Connect (OIDC) is an extension of OpenID that provides additional features,
such as:

1. Support for user profile information
2. Support for native and mobile applications
3. A simpler and more REST-like protocol

OIDC is also built on top of the OAuth 2.0 framework, which provides additional
features for authorization.

Here is a table that summarizes the key differences between OpenID and OIDC:

Feature                                    | OpenID    | OpenID Connect
Protocol                                   | XML-based | JSON/REST-based
Authentication                             | Yes       | Yes
Authorization                              | No        | Yes
User profile information                   | No        | Yes
Support for native and mobile applications | Limited   | Good
Extensibility                              | Good      | Good

Overall, OIDC is a more modern and feature-rich version of OpenID. It is the
preferred choice for most new applications that need to support user authentication
and authorization.

Here are some examples of when to use OIDC instead of OpenID:

1. If you need to support user profile information
2. If you need to support native and mobile applications
3. If you need a simpler and more REST-like protocol
4. If you need to support authorization

If you are unsure which protocol to use, it is generally recommended to use OIDC,
unless you have a specific reason to use OpenID.

What is OpenID Connect

OpenID Connect is an interoperable authentication protocol based on the OAuth 2.0
framework of specifications (IETF RFC 6749 and 6750). It simplifies the way to verify
the identity of users based on the authentication performed by an Authorization
Server and to obtain user profile information in an interoperable and REST-like
manner.

OpenID Connect enables application and website developers to launch sign-in flows
and receive verifiable assertions about users across Web-based, mobile, and
JavaScript clients. The specification suite is extensible to support a range of
optional features such as encryption of identity data, discovery of OpenID Providers,
and session logout.

For developers, it provides a secure and verifiable answer to the question "What is
the identity of the person currently using the browser or mobile app that is
connected?" Best of all, it removes the responsibility of setting, storing, and
managing passwords, which is frequently associated with credential-based data
breaches.
How OpenID Connect Works

OpenID Connect enables an Internet identity ecosystem through easy integration and
support, security and privacy-preserving configuration, interoperability, wide
support of clients and devices, and enabling any entity to be an OpenID Provider
(OP).

The OpenID Connect protocol, in abstract, follows these steps (a minimal sketch of
steps 3-7 appears after the list):

1. The end user navigates to a website or web application via a browser.
2. The end user clicks sign-in and types their username and password.
3. The RP (Client) sends a request to the OpenID Provider (OP).
4. The OP authenticates the user and obtains authorization.
5. The OP responds with an Identity Token and usually an Access Token.
6. The RP can send a request with the Access Token to the UserInfo Endpoint.
7. The UserInfo Endpoint returns Claims about the End-User.
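
Here is a hedged Python sketch of steps 3 through 7, using the common
"requests" package. The URLs, authorization code and client credentials are
placeholders; a real RP would discover the token and UserInfo endpoints from
the OP's /.well-known/openid-configuration document.

import requests

TOKEN_ENDPOINT = "https://op.example.com/token"        # placeholder
USERINFO_ENDPOINT = "https://op.example.com/userinfo"  # placeholder

# Steps 3-5: the RP exchanges the authorization code for tokens.
tokens = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",              # code the OP sent back
    "redirect_uri": "https://app.example.com/cb",
    "client_id": "my-cloud-app",
    "client_secret": "REPLACE_ME",                 # issued at registration
}).json()  # contains id_token and usually access_token

# Steps 6-7: the RP presents the Access Token to the UserInfo Endpoint,
# which returns Claims about the end user.
claims = requests.get(
    USERINFO_ENDPOINT,
    headers={"Authorization": "Bearer " + tokens["access_token"]},
).json()
print(claims.get("sub"), claims.get("email"))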

Authentication
The secure process of establishing and communicating that the person operating an
application or browser is who they claim to be.
Client
A client is a piece of software that requests tokens either for authenticating a user or
for accessing a resource (also often called a relying party or RP). A client must be
registered with the OP. Clients can be web applications, native mobile and desktop
applications, etc.
Relying Party (RP)
RP stands for Relying Party, an application or website that outsources its user
authentication function to an IDP.
OpenID Provider (OP) or Identity Provider (IDP)
An OpenID Provider (OP) is an entity that has implemented the OpenID Connect and
OAuth 2.0 protocols. OPs can sometimes be referred to by the role they play, such as:
a security token service, an identity provider (IDP), or an authorization server.
Identity Token
An identity token represents the outcome of an authentication process. It contains at
a bare minimum an identifier for the user (called the sub, or subject, claim) and
information about how and when the user authenticated. It can contain additional
identity data. (A small sketch of inspecting these claims follows.)
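
As an illustration, the sketch below inspects an identity token's claims with
the PyJWT package. For a self-contained demo it mints the token locally with
made-up values; in practice the OP issues it, and the RP must verify the
signature against the OP's published keys before trusting any claim
(verification is deliberately skipped here for brevity).

import jwt  # pip install pyjwt

# Demo only: mint a token locally; in a real flow the OP returns this value.
id_token = jwt.encode(
    {"sub": "24400320", "iss": "https://op.example.com", "iat": 1700000000},
    "demo-secret", algorithm="HS256",
)

claims = jwt.decode(id_token, options={"verify_signature": False})
print(claims["sub"])  # stable identifier for the user at this OP
print(claims["iss"])  # which OP issued the token
print(claims["iat"])  # when the token was issued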
User
A user is a person that is using a registered client to access resources.
Frequently Asked Questions
Why should developers use OpenID Connect?

It is easy, reliable, secure, and eliminates storing and managing people’s passwords.
It improves the user experience of sign-up and registration and reduces website
abandonment. Furthermore, Public-key-encryption-based authentication frameworks
like OpenID Connect increase the security of the whole Internet by putting the
responsibility for user identity verification in the hands of the most expert service
providers.

What Does OpenID Mean?


OpenID is a unified user identification method released as an open standard that
essentially acts as a single user identification system that can be used across
multiple websites. OpenID is a way to eliminate multiple user accounts across
different websites, which often leads to confusion on the part of the user, especially
when trying to remember all the different usernames and password combinations
that have come and gone. OpenID allows users to log on to virtually any website that
supports the standard with a single ID, eliminating the agony of the sign-up process
and simplifying signing in to any affiliate website. As of 2012, OpenID is supported
by at least 27,000 sites, including Google, Yahoo, PayPal and VeriSign.


If you are considering using OpenID for your cloud applications, consult
your cloud computing provider to learn more about their OpenID
support and to get assistance with implementing OpenID.
OIDC Use Cases

Brute force attack prevention


OIDC protects against brute force attacks by using a challenge-response
mechanism. In this type of authentication, the user must prove their
identity by solving a challenge such as a security question. This
additional step makes it more difficult for an attacker to guess a valid
user credential successfully.

Phishing attack prevention


Because OIDC allows users to sign on to an application through a trusted
third party such as Google, users can skip the step of signing up for an
account on that application. If no account exists for that user, there are
no login credentials for a hacker to exploit.

Privacy protection
OIDC provides privacy protection by allowing users to control which
claims—that is, individual pieces of information—are released to the
relying party (RP) that provides access to an application. By specifying
which claims get shared with the RP, the user can ensure that only the
necessary information is shared. For example, a user may choose to
share only their name and email address, but not their birthdate or home
address.
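
As an illustrative snippet, the scope values below (standard OIDC scopes)
show how a relying party can keep its request minimal; which claims each
scope actually unlocks is enforced by the OP and the user's consent screen.

# Request only identity and email -- nothing else is released to the RP.
minimal_request = {"scope": "openid email"}

# The broader bundle below would expose far more PII, including name,
# birthdate, postal address and phone number.
broad_request = {"scope": "openid profile email address phone"}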

Is OpenID Connect Past Its Prime?


No, OpenID Connect is not on the decline. While other authentication
protocols have gained popularity in recent years, OpenID Connect is still
a widely used and supported standard. In fact, many newer
authentication protocols have been built on top of OpenID Connect or
leverage its capabilities.

OAuth vs. OpenID: Which is better?
Authorization and authentication processes need to be more solid and safe than ever.
OAuth and OpenID are two well-known names in this field. Before we get into the
OAuth vs. OpenID debate, it’s essential to know what OAuth and OpenID are, how they
work, and why they’re essential in current applications.

OAuth, which stands for “Open Authorization,” is a framework for authorization. Its
primary goal is to let third-party apps access resources on behalf of a user with the
user’s permission. OpenID, on the other hand, is primarily concerned with
authentication. It enables users to authenticate their identity across several websites
or applications in a standardized manner. The primary point in the OAuth vs. OpenID
debate is when and how to employ these protocols in various applications. While
OAuth is concerned with issuing rights and authorizations, OpenID is concerned with
user authentication. Choosing amongst them entails considering elements including
the application’s use case, security needs, and user experience.

In this article, we will delve deeper into the workings of OAuth and OpenID, examine
their strengths and weaknesses, and ultimately provide insights to help you decide
which protocol best meets your specific authentication and authorization needs.


Understanding OAuth
OAuth is an open standard procedure that lets applications safely access resources
on behalf of a user without revealing the user’s credentials. Its main goal is to allow
third-party apps, with the user’s permission, restricted access to a user’s protected
resources on a resource server. This method improves security by making it harder
for people to share private login credentials. It also makes it easier for different web
services and apps to work together smoothly.

Key Components
Client: The term “client” refers to the software requesting authorization to use a
restricted resource, such as a web service or mobile app.

Resource Owner: The owner of the information or system that the client is attempting
to access.

Authorization Server: The authorization server is in charge of user authentication and
client authorization. It is responsible for issuing tokens.

Resource Server: The server where the protected data resides. It verifies
client access tokens and makes authorization decisions. (A sketch of these
components interacting follows.)
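
To make the interaction between these components concrete, here is a hedged
Python sketch of OAuth's client-credentials grant (the simplest grant, in
which the client acts for itself and there is no interactive resource owner),
using the "requests" package. The URLs, client ID, secret and scope are all
placeholders.

import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # authorization server (placeholder)
RESOURCE_URL = "https://api.example.com/reports"    # resource server (placeholder)

# The client authenticates to the authorization server and receives an access token.
token = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "report-batch-job",
    "client_secret": "REPLACE_ME",
    "scope": "reports:read",  # fine-grained: read-only access to reports
}).json()["access_token"]

# The client presents the token; the resource server verifies it and decides.
response = requests.get(RESOURCE_URL,
                        headers={"Authorization": "Bearer " + token})
print(response.status_code)

The scope parameter is what makes the fine-grained access control described
below possible: this token can only read reports, nothing more.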

Pros and Cons of OAuth


Pros
Widely adopted and supported: OAuth is an industry standard that is known and used
all over the world. It is used by some of the biggest tech companies, which makes it a
good choice for secure access control.
Fine-grained access control: With OAuth, you can be very specific about what a client
can do. This level of detail improves security by limiting access to only the necessary
tools and acts.

Cons
Complexity: Implementing OAuth can be challenging, especially for coders who have
not worked with the system. It can take a lot of work to keep track of all the different
flows and security concerns.

Limited User Information: OAuth is mostly about giving permission, so it doesn’t tell
much about the person. It verifies that the user has given permission to access but
doesn’t give specific information about the user. This can be a problem in some
situations, like personalization, where you want to know more about the user.

OAuth is a powerful system that allows modern applications to access resources
safely. However, because it is complicated and focuses on permissions, you must
consider whether it will work for your unique use case and needs.

OpenID Explained
OpenID is an authentication protocol that has been specifically developed to
streamline the user authentication procedure across a multitude of websites and
apps. Its main objective is to let users log in to many websites or online services using
a single set of credentials issued by an identity provider. OpenID mitigates users’ need
to generate and recall several usernames and passwords, augmenting the overall user
experience and bolstering security measures.

Think about how you use social media sites like Facebook or Google. You can often
use your Facebook or Google account to access third-party websites or apps through
these services. Here is where OpenID is essential.

OpenID is used when you sign in with your Facebook account on a trip planning
website. The booking site is the Relying Party (RP), while Facebook is the OpenID
Provider (OP).

In this situation:

OpenID Provider (Facebook):

The online travel agency can rest easy knowing you are who you say you are since
Facebook verifies your identity and sends a unique token to prove it.

Relying Party (Travel Booking Website):

The online booking service for vacations recognizes your Facebook login and allows
you to sign in without entering any further information.

The authentication procedure is made easier for users in this manner. Use your current
Facebook login information rather than establishing a new one only for the trip
booking website. By consolidating your identity management with a reliable provider,
OpenID reduces the number of login credentials you need to remember and increases
your online safety.

Key Components
OpenID Provider: This entity is the authentication service and confirms the user’s
identity. The system generates identification tokens.

Relying Party: The website or application that depends on the OpenID Provider for
user authentication. Access is granted based on the identity tokens supplied by the
OpenID Provider.
User: The one who is requesting access to a Relying Party. The user employs their
OpenID credentials as a means of verifying their identity.

Pros and Cons of OpenID


Pros
Simplifies User Authentication: OpenID makes it much easier for users to log in,
improving the user experience and lowering the risk of password-related
security problems.

Allows users to control their identity: With OpenID, users have more power over their
digital identity because they can choose their identity provider and handle their
accounts there.

Cons
Limited support: Although OpenID is useful for identification, it may not be as widely
supported and used as OAuth. In environments where OAuth is already familiar, this
could be a problem.

Less fine-grained authorization: OpenID is mostly about identification, and its
permission features may not be as advanced as OAuth's, especially in situations
needing fine-grained access control.

OpenID focuses on the user when it comes to authentication. This makes logging in
more manageable and gives users power over their digital IDs. However, because it
only focuses on authentication, it may need to be combined with OAuth or other
methods for fine-grained authorization, based on your needs.

OAuth vs. OpenID: A Detailed Comparison


OAuth is primarily concerned with issuing rights to apps and services. It does not
handle user authentication but assures authorized entities may act on the user’s
behalf.

OpenID focuses on user authentication. It enables users to authenticate their identity
across numerous websites or applications in a standardized manner.

When to use OAuth?


When your application requires third-party access to user data or you need to
safeguard APIs, OAuth is the way to go. OAuth shines in instances involving social
media connections by allowing external apps to access user data without disclosing
sensitive user credentials. This allows for easy connection with significant platforms
such as Facebook, Twitter, and Google, delivering a pleasant user experience while
protecting user data.

Furthermore, OAuth excels at safeguarding APIs by offering effective access control
techniques. OAuth allows you to govern who may access your resources, whether
you’re constructing a web service, a mobile app, or any application that provides data
over the internet. It allows you to set particular permission scopes, guaranteeing that
only authorized entities have access, improving security and privacy.

When to use OpenID?


OpenID comes into play when you have Single Sign-On (SSO) needs or need to build
federated identities across several domains or organizations. OpenID is an excellent
tool for SSO systems since it simplifies user authentication across numerous sites or
apps. Users may access diverse services using a single set of credentials from their
preferred identity provider, minimizing the difficulty of managing multiple usernames
and passwords.

Furthermore, OpenID thrives where trust must be established and user identities must
be shared across domains or organizations. OpenID allows federated identity in
circumstances requiring seamless user authentication and identity verification across
organizational boundaries, improving user experiences and establishing confidence
between domains. In such cases, it becomes an excellent tool for improving
cooperation and user comfort.

Security Considerations
With access tokens, token expiration, and scope-based access control, OAuth
ensures that only authorized entities can access protected resources. OpenID
Connect inherits these security measures from OAuth and adds ID tokens and the
UserInfo endpoint to strengthen user authentication and identity verification.

Scalability and Performance


Using OAuth to protect APIs adds some overhead, because access tokens must be
obtained and validated on each request, but the impact on performance is usually
manageable with the right optimization (for example, caching validated tokens).
OpenID's primary focus is authentication, so when used correctly it has little effect
on a system's overall performance. Understanding these trade-offs lets you choose
the right protocol, or mix of protocols, for your application's security and
authorization needs.

Final Verdict: OAuth or OpenID?

Factors to Consider
Several important things come into play when deciding whether OAuth or OpenID is
better for your application. First of all, think about your unique use case and needs.
OpenID is great for identity and Single Sign-On (SSO), while OAuth is great for
situations that need permission and API security. Check your current system to see
how each protocol fits your technology stack and integration ability. Also, put user
experience and privacy at the top of your list. Figure out which protocol best fits your
user authentication and data safety needs.

Making an Informed Decision

Start by picking the right tool for the job. Choose the protocol that best fits your
primary goals, whether that is securing APIs, simplifying user sign-in, or both.
Sometimes you may need to use OAuth and OpenID together to strike the right
balance between authentication and authorization. When it makes sense, the two
protocols can be combined to build a strong and complete security and identity
system.

Choosing Your Path to Secure Authentication: OAuth, OpenID, and Beyond

The debate between OAuth and OpenID shows the importance of choosing the right
tool for your application's needs. We looked at the main differences between OAuth
and OpenID, paying particular attention to their strengths and how they can be used.
OAuth is great at authorizing and securing APIs, while OpenID simplifies user sign-in
and supports federated identity solutions. Your choice should be based on the needs
of your project, the resources you already have, and the user experience you want to
deliver. As you move through the constantly changing world of digital security and
user identity, remember that these protocols don't contradict each other; they can
work together when necessary.

SecureW2 is a reliable partner for those who want expert advice and custom
identification and identity management solutions. JoinNow NetAuth is an example of
one of our cutting-edge solutions that can help you set up safe, easy-to-use login and
authorization options for your business. JoinNow NetAuth makes it easier to handle
guest access by giving you a robust and flexible way to offer scalable guest Wi-Fi,
whether secured or open. It makes it easier for users to invite guests, works smoothly
with your directory system, puts security first, and improves the user experience at
the same time.

Contact us to take the next step towards protecting your digital identities.
KEY TAKEAWAYS:
1. OpenID is used for authentication while OAuth is used for authorization

2. If authentication is the main goal, there is no better method than X.509 digital
certificates

What is OpenID Connect Authentication and Benefits of Using OpenID Connect?
Introduction
OpenID Connect has revolutionized the authentication process and rose to
prominence in a very short span of time. It is mainly used for single sign-on and
identity provision on the web. The main reason for its success is its simple
JSON-based ID tokens (JWTs), delivered via the OAuth 2.0 flow, which is already
supported by virtually all internet-connected devices such as browsers and mobile
phones.
The Era before OpenID Connect

Before OpenID, local user authentication was the common way to identify users:
each application kept its own database of user accounts and credentials. This
approach is simple and convenient for personal use; however, local authentication
can be bad for business:
1. People find sign-up and account creation tedious, and rightly so. Consumer
web sites and apps may suffer abandoned shopping carts because of it, which
means lost business and sales.
2. For enterprises with many apps, maintaining separate user databases can
easily become an administrative and security nightmare. You may want to put
your IT resources to better use.
The established solution to these problems is to delegate user authentication
and provisioning to a dedicated, purpose-built service, called an Identity
Provider (IdP).
Google, Facebook, and Twitter, where many people on the internet are
registered, offer such IdP services for their users. A consumer web site can
greatly streamline user onboarding by integrating login with these IdPs.

Entry of OpenID Connect

OpenID Connect, published in 2014, is not the first IdP standard, but it is definitely
the best in terms of usability and simplicity, having learned the lessons from past
efforts such as SAML and OpenID 1.0 and 2.0.
What is the formula for the success of OpenID Connect?
1. Easy-to-consume identity tokens: Clients receive the user's identity encoded in a
secure JSON Web Token (JWT), called an ID token. JWTs are appreciated for their
elegance and portability, and for their ready support for a wide range of signature
and encryption algorithms. All of that makes JWT outstanding for the ID token job
(see the verification sketch after this list).
2. Based on the OAuth 2.0 protocol: The ID token is obtained via a standard OAuth
2.0 flow, with support for web applications as well as native/mobile apps. OAuth
2.0 also means having one protocol for authentication and authorization (obtaining
access tokens).
3. Simplicity: OpenID Connect is simple enough to integrate with basic apps, but it
also has the features and security options to match demanding enterprise
requirements.
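As a rough illustration of point 1, verifying an ID token with the PyJWT library
(pip install "pyjwt[crypto]") might look like the sketch below. The issuer, audience,
and JWKS URL are placeholders; a real IdP publishes its JWKS URL in its discovery
document:

# Sketch: verifying an OpenID Connect ID token with PyJWT.
# The issuer, audience, and JWKS URL are hypothetical placeholders.
import jwt  # PyJWT

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"  # placeholder
ISSUER = "https://idp.example.com"                           # placeholder
CLIENT_ID = "my-client-id"                                   # placeholder

def verify_id_token(id_token: str) -> dict:
    # Fetch the signing key matching the "kid" in the token's header.
    signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
    # decode() verifies the signature plus the exp, iss, and aud claims.
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        issuer=ISSUER,
        audience=CLIENT_ID,
    )
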
The Workflow

When your user signs in to your application through an OIDC IdP, the
authentication flow looks like this:
1. Your user starts at the Amazon Cognito built-in sign-in page and is given the
option to sign in through an OIDC IdP such as Salesforce. Here, the OIDC IdP
mechanism is provided by Salesforce.
2. Your user is redirected to the OIDC IdP's authorization endpoint, provided by
Salesforce.
3. When your user is authenticated by Salesforce, the OIDC IdP redirects to
Amazon Cognito with an authorization code.
4. Amazon Cognito exchanges the authorization code with the OIDC IdP for an
access token.
5. Amazon Cognito creates or updates the user account in your user pool for that
user.
6. Amazon Cognito returns bearer tokens to your application, which can include
identity, access, and refresh tokens.
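For illustration, the identity token returned in step 6, once decoded and verified,
carries claims shaped roughly like this (all values here are invented for the example):

# Illustrative shape of a decoded ID token payload; values are invented.
id_token_claims = {
    "iss": "https://login.salesforce.com",  # who issued the token
    "sub": "248289761001",                  # stable identifier for the user
    "aud": "my-cognito-client-id",          # intended recipient (placeholder)
    "exp": 1735689600,                      # expiry time (Unix seconds)
    "iat": 1735686000,                      # issued-at time (Unix seconds)
    "email": "jane.doe@example.com",        # profile claims, per granted scopes
    "name": "Jane Doe",
}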

Benefits of Using OpenID Connect

The major advantage of using OpenID Connect is that it provides a completely
standardized setup, with no additional worries. Since it is built on top of OAuth 2.0
it is API-ready, but it adds extra information to OAuth so that the client can know
who logged in, how strongly they were authenticated, and so on.
OpenID Connect never defines how authentication is done, but it provides a
standardized way to ask for it and to return the result of authentication to the
client.
The request-response format of OpenID Connect is JSON, which is human-readable
and works really well for data-interchange operations.
OpenID Connect is designed to support mobile applications just as well as web
applications; it works well in both. It also integrates neatly with Single Sign-On
(SSO).
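Because everything is JSON over the OAuth 2.0 flow, fetching the signed-in user's
profile is a single authenticated HTTP call to the UserInfo endpoint. A sketch with
the requests library; the endpoint URL is a placeholder (real providers advertise it
in their /.well-known/openid-configuration document):

import requests

# Sketch: fetching profile claims from the OIDC UserInfo endpoint.
def fetch_userinfo(access_token: str) -> dict:
    resp = requests.get(
        "https://idp.example.com/userinfo",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {access_token}"},
    )
    resp.raise_for_status()
    # JSON claims, e.g. {"sub": "...", "name": "...", "email": "..."}
    return resp.json()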

Conclusion
Public-key-cryptography-based authentication frameworks like OpenID Connect
(and its predecessors) improve the security of the internet as a whole by placing
responsibility for user identity verification in the hands of the most trusted and
reliable service providers. Compared with what was available before, OpenID
Connect is a much easier approach to implement and integrate, and it can be
expected to receive much wider adoption.

Is OpenID a Standard of Cloud Security?


OpenID is not a standard of cloud security in itself. However, it can be used to implement a number
of cloud security best practices, such as:

1. Single sign-on (SSO): SSO allows users to log in to multiple cloud applications with a single set
of credentials. This reduces the number of passwords that users need to manage and makes
it more difficult for attackers to gain access to user accounts.

2. Identity federation: Identity federation allows organizations to use their existing identity
management systems to authenticate users to cloud applications. This reduces the need to
manage separate user accounts for each cloud application and makes it easier to manage
user access to cloud resources.

3. API access control: OpenID can be used to control access to cloud APIs. This allows
organizations to restrict access to their APIs to authorized users and applications.

By implementing these cloud security best practices, OpenID can help to improve the overall security
of your cloud environment.

In addition to the above, OpenID also provides a number of security features that can help to protect
your cloud applications, such as:

1. Strong authentication: OpenID supports a variety of strong authentication methods, such as
multi-factor authentication (MFA). This helps to protect user accounts from unauthorized
access.

2. Secure token exchange: OpenID uses a secure token exchange protocol to exchange
authentication tokens between the client and the server. This helps to protect tokens from
being intercepted or tampered with.

3. Encrypted tokens: OpenID Connect supports encrypting ID tokens (as JWE) in addition to TLS
transport encryption. This helps to protect user data from unauthorized access, even if traffic
is intercepted in transit.

Overall, OpenID is a powerful tool that can be used to improve the security of your cloud
applications. However, it is important to note that OpenID is not a silver bullet. You should also
implement other cloud security best practices, such as strong password management, vulnerability
scanning, and security monitoring.
Here are some additional tips for using OpenID securely in cloud computing:

1. Choose a reputable IdP: Make sure to choose an identity provider that has a good reputation
and that implements strong security measures.

2. Keep your IdP credentials secure: Your IdP credentials are the keys to your kingdom, so make
sure to keep them safe. Use strong passwords and enable MFA if possible.

3. Keep your cloud applications up to date: Make sure to install all security updates for your
cloud applications promptly.

4. Monitor your cloud environment for suspicious activity: Monitor your cloud environment for
any suspicious activity, such as unusual login attempts or spikes in traffic.

By following these tips, you can help to ensure that your cloud applications are secure when using
OpenID.
