cloud computing unit-I,II

cloud computing material

UNIT-I

What is cloud computing?

Cloud computing is a general term for the delivery of hosted computing services and IT
resources over the internet with pay-as-you-go pricing. Users can obtain technology services
such as processing power, storage and databases from a cloud provider, eliminating the need
for purchasing, operating and maintaining on-premises physical data centers and servers.
A cloud can be private, public or a hybrid. A public cloud sells services to anyone on the
internet. A private cloud is a proprietary network or a data center that supplies hosted services
to a limited number of people, with certain access and permissions settings. A hybrid cloud
offers a mixed computing environment where data and resources can be shared between both
public and private clouds. Regardless of the type, the goal of cloud computing is to provide
easy, scalable access to computing resources and IT services.

What are the different types of cloud computing services?

Cloud services can be classified into three general service delivery categories:

Infrastructure as a service (IaaS)


IaaS providers, such as Amazon Web Services (AWS), supply a virtual
server instance and storage, as well as application programming interfaces (APIs) that let
users migrate workloads to a virtual machine (VM). Users have an allocated storage capacity
and can start, stop, access and configure the VM and storage as desired. IaaS providers offer
small, medium, large, extra-large and memory- or compute-optimized instances, in addition
to enabling customization of instances for various workload needs. The IaaS cloud model is
closest to a remote data center for business users.

Infrastructure as a service (IaaS) is a type of cloud computing in which a service provider is responsible for providing servers, storage, and networking over a virtual interface. In this service, the user doesn’t need to manage the cloud infrastructure but has control over the storage, operating systems, and deployed applications.
Instead of the user, a third-party vendor hosts the hardware, software, servers, storage, and
other infrastructure components. The vendor also hosts the user’s applications and maintains
a backup.
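The start/stop/configure lifecycle described above can be sketched as a toy model; the class, its fields, and the instance sizes are illustrative only, not any provider's actual API:

```python
class VirtualMachine:
    """Toy model of an IaaS virtual server instance (illustrative only)."""

    def __init__(self, size="small", storage_gb=50):
        self.size = size            # e.g. small, medium, large, memory-optimized
        self.storage_gb = storage_gb
        self.state = "stopped"

    def start(self):
        self.state = "running"

    def stop(self):
        self.state = "stopped"

    def configure(self, size=None, storage_gb=None):
        # Users can resize the instance or its allocated storage as desired.
        if size:
            self.size = size
        if storage_gb:
            self.storage_gb = storage_gb

vm = VirtualMachine()
vm.start()
vm.configure(size="memory-optimized", storage_gb=200)
```

A real provider exposes the same operations through APIs rather than a local class, which is what lets users migrate workloads to a VM programmatically.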
Platform as a service (PaaS)
In the PaaS model, cloud providers host development tools on their infrastructures.
Users access these tools over the internet using APIs, web portals or gateway software. PaaS
is used for general software development and many PaaS providers host the software after it's
developed. Examples of PaaS products include Salesforce Lightning, AWS Elastic Beanstalk
and Google App Engine.

Platform as a service (PaaS) is a type of cloud computing that provides a development and deployment environment in the cloud, allowing users to develop and run applications without the complexity of building or maintaining the infrastructure. It provides users with resources to develop cloud-based applications. In this type of service, a user purchases the resources from a vendor on a pay-as-you-go basis and can access them over a secure connection.

PaaS doesn’t require users to manage the underlying infrastructure, i.e., the network,
servers, operating systems, or storage, but gives them control over the deployed applications.
This allows organizations to focus on the deployment and management of their applications
by freeing them of the responsibility of software maintenance, planning, and resource
procurement.

Software as a service (SaaS)


SaaS is a distribution model that delivers software applications over the internet;
these applications are often called web services. Users can access SaaS applications and
services from any location using a computer or mobile device that has internet access. In the
SaaS model, users gain access to application software and databases. An example of a SaaS
application is Microsoft 365 for productivity and email services.

SaaS, or software as a service, allows users to access a vendor’s software on the cloud on a subscription basis. In this type of cloud computing, users don’t need to install or download applications on their local devices. Instead, the applications are located on a remote cloud network that can be accessed directly through the web or an API.
In the SaaS model, the service provider manages all the hardware, middleware, application software, and security. Also referred to as ‘hosted software’ or ‘on-demand software’, SaaS makes it easy for enterprises to streamline their maintenance and support.
Function as a service (FaaS)
FaaS, also known as serverless computing, lets users run code in the cloud without having to
worry about the underlying infrastructure. Users can create and deploy functions that respond
to events or triggers. FaaS abstracts server and infrastructure management, letting developers
concentrate solely on code creation.
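The event/trigger model can be sketched as a minimal dispatcher; the event name and handler here are hypothetical, and a real FaaS platform performs this registration and invocation for you:

```python
handlers = {}

def on(event_name):
    """Register a function to run when a named event fires."""
    def register(fn):
        handlers[event_name] = fn
        return fn
    return register

@on("file.uploaded")
def make_thumbnail(event):
    # The platform, not the developer, decides where and when this runs.
    return f"thumbnail for {event['file']}"

def fire(event_name, payload):
    """Stand-in for the platform invoking the function on a trigger."""
    return handlers[event_name](payload)

result = fire("file.uploaded", {"file": "cat.png"})
```

The developer writes only `make_thumbnail`; the servers behind `fire` are the part FaaS abstracts away.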

Benefits of cloud computing


Compared to traditional on-premises IT, in which a company owns and maintains physical data centers and servers to access computing power, data storage and other resources, cloud computing offers many benefits (depending on the cloud services you select), including the following:

Cost-effectiveness
Cloud computing lets you offload some or all of the expense and effort of purchasing,
installing, configuring and managing mainframe computers and other on-premises
infrastructure. You pay only for cloud-based infrastructure and other computing resources as
you use them.
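The pay-as-you-go principle can be illustrated with a quick calculation; all prices here are made up for the example:

```python
def pay_as_you_go_cost(hours_used, rate_per_hour):
    # You pay only for the hours you actually consume.
    return hours_used * rate_per_hour

# Hypothetical figures: a VM used 200 hours/month at $0.10/hour...
cloud_bill = pay_as_you_go_cost(200, 0.10)
# ...versus an always-on on-premises server amortized at $150/month.
on_prem_monthly = 150.0
savings = on_prem_monthly - cloud_bill
```

The comparison only favors the cloud while utilization stays low; a machine busy around the clock can change the arithmetic.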

Increased speed and agility


With cloud computing, your organization can use enterprise applications in minutes instead of waiting weeks or months for IT to respond to a request, purchase and configure supporting hardware and install software. This empowers users, specifically DevOps and other development teams, to leverage cloud-based software and supporting infrastructure.

Unlimited scalability
Cloud computing provides elasticity and self-service provisioning, so instead of
purchasing excess capacity that sits unused during slow periods, you can scale capacity up
and down in response to spikes and dips in traffic. You can also use your cloud provider’s
global network to spread your applications closer to users worldwide.
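The scale-up/scale-down decision can be sketched as a simple rule; the capacity figures are hypothetical:

```python
import math

def desired_instances(current_load, capacity_per_instance, minimum=1):
    """Scale capacity to traffic instead of buying for the peak."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(minimum, needed)

# Hypothetical traffic in requests/second, 100 req/s per instance:
quiet = desired_instances(120, capacity_per_instance=100)   # quiet hour
spike = desired_instances(950, capacity_per_instance=100)   # traffic spike
```

With fixed on-premises capacity you would have to buy for the spike and let most of it sit idle during the quiet hours; elasticity lets the count follow the load.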

Enhanced strategic value


Cloud computing enables organizations to use various technologies and the most up-
to-date innovations to gain a competitive edge. For instance, in retail, banking and other
customer-facing industries, generative AI-powered virtual assistants deployed over the cloud
can deliver better customer response time and free up teams to focus on higher-level work. In
manufacturing, teams can collaborate and use cloud-based software to monitor real-time data
across logistics and supply chain processes.

Types of Cloud Computing

Cloud computing can either be classified based on the deployment model or the type of
service. Based on the specific deployment model, we can classify cloud as public, private,
and hybrid cloud. At the same time, it can be classified as infrastructure-as-a-service (IaaS),
platform-as-a-service (PaaS), and software-as-a-service (SaaS) based on the service the cloud
model offers.

Private cloud:

In a private cloud, the computing services are offered over a private IT network for
the dedicated use of a single organization. Also termed internal, enterprise, or corporate
cloud, a private cloud is usually managed via internal resources and is not accessible to
anyone outside the organization. Private cloud computing provides all the benefits of a public
cloud, such as self-service, scalability, and elasticity, along with additional control, security,
and customization.
Private clouds provide a higher level of security through company firewalls and internal
hosting to ensure that an organization’s sensitive data is not accessible to third-party
providers. The drawback of private cloud, however, is that the organization becomes
responsible for all the management and maintenance of the data centers, which can prove to
be quite resource-intensive.

Public cloud

Public cloud refers to computing services offered by third-party providers over the
internet. Unlike private cloud, the services on public cloud are available to anyone who wants
to use or purchase them. These services could be free or sold on-demand, where users only
have to pay per usage for the CPU cycles, storage, or bandwidth they consume.
Public clouds can help businesses save on purchasing, managing, and maintaining on-
premises infrastructure since the cloud service provider is responsible for managing the
system. They also offer scalable RAM and flexible bandwidth, making it easier for
businesses to scale their storage needs.
Hybrid cloud

Hybrid cloud uses a combination of public and private cloud features. This “best of both worlds” cloud model allows workloads to shift between private and public clouds as computing and cost requirements change. When the demand for computing and processing fluctuates, a hybrid cloud allows businesses to scale their on-premises infrastructure up to the public cloud to handle the overflow while ensuring that no third-party data centers have access to their data.

In a hybrid cloud model, companies only pay for the resources they use temporarily instead
of purchasing and maintaining resources that may not be used for an extended period. In
short, a hybrid cloud offers the benefits of a public cloud without its security risks.

WHAT IS VIRTUALIZATION?
Virtualization primarily refers to sharing all hardware resources while running several
operating systems on a single machine. Additionally, it aids in providing a pool of IT
resources that we may share for mutually beneficial business outcomes. Cloud computing is
built on the virtualization technique, making it possible to use actual computer hardware
more effectively.
Through software, virtualization can divide the hardware components of a single
computer, such as its processors, memory, storage, and other components, into several virtual
computers, also known as virtual machines (VMs). Despite only using a small percentage of
the underlying computer hardware, each virtual machine (VM) runs its operating system (OS)
and functions as a separate computer. As a result, virtualization allows for a more significant
return on an organization’s hardware investment and more effective use of physical computer
systems.
Nowadays, enterprise IT architecture uses virtualization as a best practice. The economics of cloud computing are likewise based on this technology. Virtualization enables cloud providers to deliver services using their existing physical computer hardware, so cloud users can buy only the computing resources they require when they need them and scale those resources affordably as their workloads grow.

What is Virtualization in Cloud Computing?


Virtualization in cloud computing is the process of creating a virtual version of a
physical computing resource such as a server, network, storage device, or operating system. It
allows one physical computing resource to be split into multiple virtual resources, each of
which can be used for different purposes. This allows multiple applications and services to be
run on the same physical server, which helps to reduce hardware costs and improve
scalability. Virtualization also helps to improve security by isolating applications and services
on different virtual machines, so that a security breach in one application or service doesn’t
affect the others.
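The bookkeeping behind dividing one physical machine's processors and memory among virtual machines can be sketched as follows; the host sizes are hypothetical, and real hypervisors often overcommit resources rather than enforcing a strict limit like this:

```python
def can_place(vm_cpus, vm_ram_gb, host):
    """Check whether a new VM fits in the host's remaining capacity."""
    used_cpu = sum(v["cpus"] for v in host["vms"])
    used_ram = sum(v["ram_gb"] for v in host["vms"])
    return (used_cpu + vm_cpus <= host["cpus"]
            and used_ram + vm_ram_gb <= host["ram_gb"])

# One physical machine carved into isolated VMs (hypothetical sizes).
host = {"cpus": 16, "ram_gb": 64,
        "vms": [{"cpus": 4, "ram_gb": 16},
                {"cpus": 8, "ram_gb": 32}]}

fits = can_place(4, 16, host)     # remaining 4 CPUs / 16 GB: fits
too_big = can_place(8, 16, host)  # would need 20 of 16 CPUs: rejected
```

Each placed VM then runs its own operating system while sharing the same underlying hardware, which is what raises the utilization of the physical machine.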
According to NIST (the National Institute of Standards and Technology), cloud computing allows ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort.
A cloud should exhibit all five of the following essential characteristics:
On-demand self-service: A customer can unilaterally provision computing resources such as server time and network storage.
Broad network access: Capabilities are available over the network and accessible from various clients and devices.
Resource pooling: A multi-tenant model is used to pool the provider’s computing resources to serve a large number of customers.
Rapid elasticity: Through software, users can rapidly increase or decrease capacity.
Measured service: Resource usage is monitored, managed, and optimized automatically, with information on who is using what and how much.

One important use of virtualization technology is to provide cloud users with standard versions of applications. When the next version of an application is released, the cloud provider must make that latest version available to its users, which is impractical to do directly because it is expensive.

To overcome this problem, virtualization technology is used. With virtualization, the servers and software applications required by cloud providers are maintained by third parties, and the cloud providers pay them on a monthly or annual basis.
Role of Virtualization in Cloud Computing
In cloud computing technology, virtualization plays a very crucial role. Typically, users share data in the cloud, such as applications, but with virtualization, users share the infrastructure itself.
A single user of a personal computer can access all of the data and computing power
of the device. In contrast, cloud computing involves many users engaging with resources that
may be found on a single physical server.
The primary function of virtualization technology is to give standard versions of applications to cloud users; when the next version of an application is released, the cloud provider must give those users the most recent version, which would be expensive to do directly.

To solve this issue, virtualization technology is used. With virtualization, all servers and software programs needed by cloud providers are maintained by outside parties, who are paid monthly or annually by the cloud providers.
Cloud providers employ virtualization to create environments that can fulfill each
user’s unique needs. Cloud providers can spin up more virtual instances to meet demand as
more users come in. Virtualization is an efficient way of managing computing resources,
maximizing utilization, and minimizing downtime.

BENEFITS OF VIRTUALIZATION IN CLOUD COMPUTING


Virtualization provides numerous benefits to cloud computing, making it an essential
technology for modern IT infrastructure. Here are some of the key benefits of virtualization
in cloud computing:

 More flexible and efficient allocation of resources.


 High availability and disaster recovery.
 Enhance development productivity.
 It lowers the cost of IT infrastructure.
 Enables running multiple operating systems.
 Remote access and rapid scalability.
 Pay-per-use of the IT infrastructure on demand.

TYPES OF VIRTUALIZATION IN CLOUD COMPUTING

1. Application Virtualization: Existing operating system and its hardware resources are
used in traditionally running applications. Essentially, you are running the application
on top of your computer. Application virtualization compacts the application and
separates it from the underlying operating system. Application virtualization helps a
user to have remote access to an application from a server that stores all personal
information and other application characteristics but can still run on a local
workstation through the internet.
2. Network Virtualization: Network virtualization provides the facility to create and
provision virtual networks (logical switches, routers, firewalls, load balancers, VPNs,
and workload security) within days or weeks. It is the ability to run multiple virtual
networks, each with a separate control and data plane, co-existing on top of one
physical network and potentially managed by different parties.

3. Desktop Virtualization: Desktop virtualization allows a user’s operating system to be
stored remotely on a server in the data centre, so users can access their desktops
virtually from any location. Users who want specific operating systems other than
Windows Server will need a virtual desktop. The main benefits of desktop
virtualization are user mobility, portability, and easy software installation, updates,
and patch management.

4. Storage Virtualization: Storage virtualization is an array of servers managed by a
virtual storage system. The servers aren’t aware of exactly where their data is stored
and instead function more like worker bees in a hive. It allows storage from multiple
sources to be managed and utilized as a single repository. Storage virtualization
software maintains smooth operations, consistent performance, and a continuous
suite of advanced functions despite changes and differences in the underlying
equipment.

5. Server Virtualization: Each physical server is typically dedicated to one application


for streamlining purposes. However, this can become inefficient since each server will
only use a fraction of its available processing resources. Server virtualization allows
an administrator to convert a server into multiple virtual machines. The central server
is divided into multiple virtual servers by changing the identity number and
processors. Physical servers are potent machines with multiple processors hosting
files and applications on a computer network.
UNIT-II
Cloud Computing Architecture
Architecture of cloud computing is the combination of both SOA (Service Oriented
Architecture) and EDA (Event Driven Architecture). Client infrastructure, application,
service, runtime cloud, storage, infrastructure, management and security all these are the
components of cloud computing architecture.
The cloud architecture is divided into 2 parts, i.e.
1. Frontend
2. Backend
The below figure represents an internal architectural view of cloud computing.

Architecture of Cloud Computing

1. Frontend
The frontend of the cloud architecture refers to the client side of the cloud computing
system. It contains all the user interfaces and applications used by the client to access
the cloud computing services/resources, for example, a web browser used to access
the cloud platform.
2. Backend
The backend refers to the cloud itself, which is used by the service provider. It contains the
resources, manages those resources, and provides security mechanisms. Along with
this, it includes large-scale storage, virtual applications, virtual machines, traffic control
mechanisms, deployment models, etc.
Components of Cloud Computing Architecture
Following are the components of Cloud Computing Architecture

1. Client Infrastructure – Client Infrastructure is a part of the frontend component.


It contains the applications and user interfaces which are required to access the cloud
platform. In other words, it provides a GUI( Graphical User Interface ) to interact with
the cloud.

2. Application: Application is a part of the backend component; it refers to the software
or platform that the client accesses. It provides the service in the backend as per the
client's requirements.

3. Service: Service in the backend refers to the three major types of cloud-based services:
SaaS, PaaS and IaaS. It also manages which type of service the user accesses.

4. Runtime Cloud: Runtime cloud in backend provides the execution and Runtime
platform/environment to the Virtual machine.

5. Storage: Storage in backend provides flexible and scalable storage service and
management of stored data.

6. Infrastructure: Cloud Infrastructure in backend refers to the hardware and


software components of cloud like it includes servers, storage, network devices,
virtualization software etc.

7. Management: Management in backend refers to management of backend


components like application, service, runtime cloud, storage, infrastructure, and other
security mechanisms etc.

8. Security: Security in the backend refers to the implementation of different security
mechanisms to secure cloud resources, systems, files, and infrastructure for
end-users.

9. Internet: Internet connection acts as the medium or a bridge between frontend and
backend and establishes the interaction and communication between frontend and
backend.

10. Database: Database in the backend provides databases for storing structured
data, such as SQL and NoSQL databases. Examples of database services include
Amazon RDS, Microsoft Azure SQL Database and Google Cloud SQL.

11. Networking: Networking in the backend refers to services that provide networking
infrastructure for applications in the cloud, such as load balancing, DNS and virtual
private networks.

12. Analytics: Analytics in the backend refers to services that provide analytics capabilities
for data in the cloud, such as warehousing, business intelligence and machine learning.
Benefits of Cloud Computing Architecture
 Makes overall cloud computing system simpler.
 Improves data processing requirements.
 Helps in providing high security.
 Makes it more modularized.
 Results in better disaster recovery.
 Gives good user accessibility.
 Reduces IT operating costs.
 Provides high level reliability.
 Scalability.

Application availability
Application availability can be affected as much by system infrastructure as by
software architecture, so to reach the desired level of availability it is important to take a
holistic view of uptime.

As we all know, the results of downtime can be catastrophic. Every hour of
unavailability can lead to lost revenue for online retailers, while for ISVs the result can be
customer defection and reputational damage.

It is therefore important to work with your hosting provider to ensure you get the
application availability and uptime assurances you need.

The uptime equation

Uptime is the measure of the availability of a component or system and, in the context of
cloud hosting, is usually used to indicate the availability of the data centre facilities being
used. However, there are two major components to the uptime equation: the application as
well as the infrastructure. Both should be optimised to prevent downtime. There are also
points of overlap, where the benefits of high availability measures applied at the
infrastructure level can only be realised if complementary steps are taken within the software
architecture. It is important to understand that other factors can affect availability in addition
to your uptime guarantee.
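The two components of the uptime equation combine multiplicatively: the system is up only when both the infrastructure and the application are up. A quick sketch with hypothetical availability figures:

```python
def end_to_end_availability(*component_availabilities):
    """Components in series: the system is up only when all are up."""
    total = 1.0
    for a in component_availabilities:
        total *= a
    return total

# Hypothetical: 99.95%-available infrastructure under a 99.9%-available app.
system = end_to_end_availability(0.9995, 0.999)
downtime_hours_per_year = (1 - system) * 365 * 24
```

Note that the combined figure is lower than either component alone, which is why optimising only one side of the equation cannot deliver the headline uptime.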

The cost

Although the ‘holy grail’ is zero downtime, this is almost impossible to achieve within
acceptable financial and environmental constraints. When planning for high application
availability, there are choices and, ultimately, some compromises to be made. The fact is that
you need to decide what amount of downtime is acceptable to your organisation. How
quickly do you need your systems and applications to recover in the event of an IT failure
and what amount of data loss can you cope with? Understanding your recovery time objective
(RTO) and recovery point objective (RPO) will help you make the right choices.
Highly available infrastructure

When tackling the infrastructure aspects of your application’s availability it all begins with
the data centre tier upon which your cloud infrastructure is built. There are four tiers of data
centres offering different levels of uptime. Consider what tier best suits your availability
requirements.
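The commonly quoted Uptime Institute availability figures for the four tiers (roughly 99.671% for Tier I up to 99.995% for Tier IV) can be translated into allowed downtime per year; treat the percentages below as those commonly cited approximations rather than contractual values:

```python
tier_availability = {        # commonly cited Uptime Institute figures
    "Tier I": 0.99671,
    "Tier II": 0.99741,
    "Tier III": 0.99982,
    "Tier IV": 0.99995,
}

def downtime_hours_per_year(availability):
    """Convert an availability fraction into hours of downtime per year."""
    return (1 - availability) * 365 * 24

for tier, a in tier_availability.items():
    print(f"{tier}: up to {downtime_hours_per_year(a):.1f} h downtime/year")
```

The spread is large: Tier I permits over a day of downtime per year, while Tier IV allows well under an hour, which is why matching the tier to your availability requirement matters.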

Single points of failure

Single points of failure can take many forms and might not necessarily be under your direct
control. Device level redundancy is the ideal to offset the risk of any application outage
arising from a single component’s failure, although again this depends very much on your
attitude to how much downtime you can tolerate and how quickly your hosting provider can
replace broken components.

Scalability

An application needs to be able to cope with increases in demand. The ability to scale out (add
new servers/instances) or scale up (add more resources to a server) will determine your
uptime. Good capacity planning within your business will allow you to determine what
scalability you need within the infrastructure and therefore what you pay for.

Application architecture

However, if you have not optimised your application to use redundant or scalable resources,
none of the above will improve your uptime and availability. Making the application stateless
is the ideal, though this can be difficult, and you may need to look at achieving uptime
through other measures.

Application performance

Even when demand is within capacity and infrastructure is functioning normally,
application performance can still derail you. Monitoring resource utilisation (CPU, RAM,
disk and bandwidth) while planning for spikes in demand and organic growth is critical to
maintaining availability for all users.

Assessing the different factors that affect application availability is no mean feat. By
working with your hosting provider you can achieve a better understanding of what can be
done to achieve maximum uptime.

Cloud Performance Metrics – How do you track them?

To keep tabs on your cloud resources and usage, you need cloud performance metrics.
Similar to the check engine light on your car, cloud performance metrics let you know when
your cloud performance needs a tune up. There are several cloud performance metrics that
you can use to get a holistic view of your entire cloud deployment model.

IOPS – I/O Operations per Second


IOPS measures how many operations your cloud platform can execute each second.
Essentially, IOPS is the rate at which your cloud platform can read and write to and from
your application and/or database. IOPS is a complex measurement. It is impacted by the size
of the data being read or written as well as the number of pending read/write operations
waiting to be processed. Although cloud service providers may boast a fixed IOPS, it will
ultimately depend on your actual workload.
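The interaction between IOPS and I/O size can be made concrete: the same IOPS figure implies very different throughput depending on how large each operation is. A quick sketch with hypothetical numbers:

```python
def throughput_mb_per_s(iops, block_size_kb):
    """Throughput implied by an IOPS figure at a given I/O size."""
    return iops * block_size_kb / 1024

# The same 10,000 IOPS at two different operation sizes:
small_io = throughput_mb_per_s(10_000, 4)     # 4 KB ops, e.g. database reads
large_io = throughput_mb_per_s(10_000, 256)   # 256 KB ops, e.g. bulk copies
```

This is one reason a provider's fixed IOPS figure says little on its own: your actual workload's read/write sizes and queue depth determine what you experience.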

Latency
Latency describes the speed at which operations can be executed on your cloud platform. In a
perfect world, your cloud platform should support processing speeds at levels seen with on-
premises servers. Rarely is that the case though due to throttling performed by cloud service
providers. Since the public cloud is a shared environment, cloud service providers may limit
speeds for the benefit of all users.

Resource Availability
Resource availability lets you know if your cloud instances are running as expected, or if
cloud platform requests are hanging in the balance. High availability is the goal of every
business operating on the cloud. High availability means that your applications are always
available whenever and wherever your customers and internal users need them most.

Capacity
Capacity is the ability of your cloud platform to provide adequate storage that best suits your
business needs. Capacity directly impacts the ability of the cloud platform to process
requests. Higher available capacity is often correlated with higher cloud performance.

Testing your Cloud Performance

Now that you have a better understanding of the various metrics used to determine cloud
performance, how exactly do you test it? There are a number of tests you can perform:
o Load test – measures the performance of the application under both normal and peak
conditions
o Stress test – measures the performance of the application under extreme conditions
outside of the bounds of normal operation
o Browser test – confirms that the application works as intended when accessed
through different web browsers
o Latency test – measures the amount of time it takes to move data from one point to
another within the network
o Targeted infrastructure test – isolates and measures component of the application to
test its performance
o Failover test – confirms the application’s ability to automatically provide extra
resources and move to a back-up system in the event of server or system failure
o Capacity test – measures how many users the application can handle before
performance suffers
o Soak test – measures the performance of the application under a high load for an
extended period.
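Several of the tests above (load, stress, latency, soak) ultimately reduce to collecting response-time samples and summarizing them. A minimal sketch using simulated timings; a real test harness would measure actual requests rather than drawing random numbers:

```python
import random

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

random.seed(42)
# Simulated response times in milliseconds (mean 120 ms, spread 30 ms).
latencies = [random.gauss(120, 30) for _ in range(1000)]

p50 = percentile(latencies, 50)   # typical user experience
p95 = percentile(latencies, 95)   # tail latency: what the slowest 5% see
```

Reporting percentiles rather than averages matters because it is the tail that users notice under load, and the tail is what degrades first as capacity runs out.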

Cloud Performance Strategies

There are many ways to achieve the performance that you want on the cloud. Note that you
may have to resort to a combination of techniques to achieve desired performance.
Select the Right Cloud Data Platform

There are a variety of cloud service providers to choose from, including Microsoft
Azure, Google Cloud Platform and Amazon Web Services (AWS). Each provider offers a
slightly different set of features on the cloud: certain cloud platforms offer better storage
capacities, while others are optimized to handle the processing demands of mission-critical
workloads. It is important to perform in-depth research and develop an effective cloud
strategy before selecting a cloud service provider.

Pick the Right Compute Instance


Selecting the right compute instance once on your desired cloud service platform is
another way to potentially boost performance. Compute instances within the cloud platform
are optimized for certain features such as storage capacity or processing data-intensive
workloads. However, knowing which instance does what can add an unexpected level of
complexity to your otherwise seamless cloud experience.

Right Sizing
Right sizing refers to scaling your cloud computing resources automatically to match
spikes in customer demand. Customer traffic typically ebbs and flows depending on the time
of year, like the holiday shopping season for retail. As much as you need your cloud
resources to scale up during peaks, you also need them to scale back down just as quickly.
Otherwise, you will be left paying an unusually high cloud bill for idle resources that you no
longer use.

Hire a third-party cloud performance management vendor


Third-party vendors can provide the cloud performance management services that
you’re looking for. They either manage your cloud performance for you or give you the tools
so you can manage it yourself. With a third-party vendor, however, you may be locked into a
vendor-specific tool. If your cloud computing needs change, these tools may no longer work
for your new cloud platform or multi cloud strategy.

What is cloud security?

Cloud security is the set of control-based security measures and technology protections
designed to protect resources stored online from leakage, theft, and data loss. Protection
covers data, cloud infrastructure, and applications against threats. Security applications are
themselves delivered as software, on the same model as SaaS (Software as a Service).

How to manage security in the cloud?

Cloud service providers have many methods to protect the data.

Firewall is the central part of cloud architecture. The firewall protects the network and the
perimeter of end-users. It also protects traffic between various apps stored in the cloud.

Access control protects data by allowing us to set access lists for various assets. For example,
you can allow specific employees to access an application while restricting others, following
the rule that employees can access only the equipment they require. Maintaining strict access
control keeps essential documents from being stolen by malicious insiders or hackers.
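The access-list idea can be sketched as a small lookup; the asset and role names below are hypothetical:

```python
access_list = {   # hypothetical asset -> allowed-roles mapping
    "payroll-db": {"finance"},
    "build-server": {"engineering", "devops"},
}

def can_access(role, asset):
    """Employees can access only the equipment they require."""
    return role in access_list.get(asset, set())

allowed = can_access("finance", "payroll-db")      # finance needs payroll
denied = can_access("engineering", "payroll-db")   # engineering does not
```

Real IAM systems layer policies, groups, and conditions on top of this basic allow/deny check, but the principle of least privilege is the same.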

Data protection methods include virtual private networks (VPNs), encryption, and masking. A
VPN allows remote employees to connect to the network and accommodates tablets and
smartphones for remote access. Data masking maintains the data's integrity by keeping
identifiable information private; for example, a medical company can share masked data
without violating HIPAA rules.
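Data masking can be sketched in a few lines: identifying fields are hidden while the analytically useful fields survive. The field names here are hypothetical, and this is an illustration of the idea, not a real HIPAA control.

```python
# Illustrative data masking: keep a record useful for analysis while hiding
# the fields that identify a person. Field names are hypothetical.
def mask_record(record: dict) -> dict:
    masked = dict(record)                       # never mutate the original record
    masked["name"] = "*" * len(record["name"])  # blank out the name entirely
    masked["ssn"] = "***-**-" + record["ssn"][-4:]  # keep only the last 4 digits
    return masked

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "diagnosis": "flu"}
# mask_record(patient) hides identity but leaves the diagnosis intact.
```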

Threat intelligence identifies security threats and ranks them in order of importance,
helping to protect mission-critical assets. Disaster recovery is also vital for
security because it helps recover data that has been lost or stolen.

Benefits of Cloud Security System

Understanding how cloud security operates helps you find ways it can benefit your
business.

Cloud-based security systems benefit the business by:

o Protecting the business from external dangers
o Protecting against internal threats
o Preventing data loss
o Blocking top threats to the system, such as malware and ransomware

Malware poses a severe threat to businesses. More than 90% of malware arrives via email,
and employees often download it without analysing it first. Once downloaded, the malicious
software installs itself on the network to steal files or damage content.

Ransomware is malware that hijacks a system's data and demands a financial ransom.
Companies are often pressured into paying because they want their data back.

Types of cloud security solutions

 Identity and access management (IAM): IAM services and tools allow
administrators to centrally manage and control who has access to specific cloud-based and
on-premises resources. IAM can enable you to actively monitor and restrict how users
interact with services, allowing you to enforce your policies across your entire organization.
 Data loss prevention (DLP): DLP can help you gain visibility into the data you store
and process by providing capabilities to automatically discover, classify, and de-identify
regulated cloud data.
 Security information and event management (SIEM): SIEM solutions combine
security information and security event management to offer automated monitoring,
detection, and incident response to threats in your cloud environments. Using AI and ML
technologies, SIEM tools allow you to examine and analyze log data generated across your
applications and network devices—and act quickly if a potential threat is detected.
 Public key infrastructure (PKI): PKI is the framework used to manage secure,
encrypted information exchange using digital certificates. PKI solutions typically provide
authentication services for applications and verify that data remains uncompromised and
confidential through transport. Cloud-based PKI services allow organizations to manage and
deploy digital certificates used for user, device, and service authentication.
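The "discover and classify" step that a DLP tool automates can be sketched as pattern matching over stored text. The patterns and labels below are illustrative and greatly simplified; a real product's rule set is far more sophisticated.

```python
# Hedged sketch of DLP-style data discovery: scan text for patterns that look
# like regulated data and return classification labels. Patterns are
# illustrative, not a real product's rules.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> set:
    """Return the set of regulated-data labels detected in the text."""
    return {label for label, rx in PATTERNS.items() if rx.search(text)}
```

After classification, a DLP pipeline would de-identify the matched spans, for example with the kind of masking shown earlier.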

Cloud security risks and challenges


The cloud is subject to many of the same security risks you might encounter in traditional
environments, such as insider threats, data breaches and data loss, phishing, malware, DDoS
attacks, and vulnerable APIs.

However, most organizations will likely face specific cloud security challenges, including:

Lack of visibility

Cloud-based resources run on infrastructure that is located outside your corporate network
and owned by a third party. As a result, traditional network visibility tools are not suitable for
cloud environments, making it difficult for you to gain oversight into all your cloud assets,
how they are being accessed, and who has access to them.

Misconfigurations

Misconfigured cloud security settings are one of the leading causes of data breaches
in cloud environments. Cloud-based services are made to enable easy access and data
sharing, but many organizations may not have a full understanding of how to secure cloud
infrastructure. This can lead to misconfigurations, such as leaving default passwords in place,
failing to activate data encryption, or mismanaging permission controls.
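The specific misconfigurations named above can be checked mechanically. The sketch below is a toy audit over a hypothetical resource record; real cloud security posture tools inspect provider APIs rather than dictionaries.

```python
# Simple sketch of auditing a cloud resource's settings for the
# misconfigurations named above: default passwords, disabled encryption, and
# over-broad access. The resource fields are hypothetical.
DEFAULT_PASSWORDS = {"admin", "password", "changeme"}

def audit(resource: dict) -> list:
    """Return a list of human-readable findings for one resource."""
    findings = []
    if resource.get("password") in DEFAULT_PASSWORDS:
        findings.append("default password in use")
    if not resource.get("encryption_enabled", False):
        findings.append("encryption at rest disabled")
    if resource.get("public_access", False):
        findings.append("resource exposed to the public internet")
    return findings
```

Running a check like this on every provisioned resource catches misconfigurations before an attacker does.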

Access management

Cloud deployments can be accessed directly over the public internet, which enables
convenient access from any location or device. At the same time, it also means that attackers
can more easily gain unauthorized access to resources using compromised credentials or by
exploiting improper access control.

Dynamic workloads

Cloud resources can be provisioned and dynamically scaled up or down based on your
workload needs. However, many legacy security tools are unable to enforce policies in
flexible environments with constantly changing and ephemeral workloads that can be added
or removed in a matter of seconds.

Compliance

The cloud adds another layer of regulatory and internal compliance requirements that
you can violate even if you don’t experience a security breach. Managing compliance in the
cloud is an overwhelming and continuous process. Unlike an on-premises data center where
you have complete control over your data and how it is accessed, it is much harder for
companies to consistently identify all cloud assets and controls, map them to relevant
requirements, and properly document everything.

What is cloud disaster recovery (cloud DR)?


Cloud disaster recovery (cloud DR) is a combination of strategies and services
intended to back up data, applications and other resources to public cloud or dedicated
service providers. When a disaster occurs, the affected data, applications and other resources
can be restored to the local data center -- or a cloud provider -- to resume normal operation
for the enterprise.

The goal of cloud DR is virtually identical to traditional DR: to protect valuable business
resources and ensure protected resources can be accessed and recovered to continue normal
business operations.

Importance of cloud DR
DR is a central element of any business continuity (BC) strategy. It entails replicating
data and applications from a company's primary infrastructure to a backup infrastructure,
usually situated in a distant geographical location.

Before the advent of cloud connectivity and self-service technologies, traditional DR options
were limited to local DR and second-site implementations. Local DR didn't always protect
against disasters such as fires, floods and earthquakes. A second site -- off-site DR --
provided far better protection against physical disasters, but implementing and maintaining a
second data center imposed significant business costs.

The following reasons highlight the importance of cloud storage and disaster recovery:

 Cloud DR ensures business continuity in the event of natural disasters and cyber
attacks, which can disrupt business operations and result in data loss.
 With a cloud disaster recovery strategy, critical data and applications can be backed
up to a cloud-based server. This enables quick data recovery for businesses in the wake
of an event, thus reducing downtime and minimizing the effects of the outage.

Cloud-based DR offers greater flexibility, reduced complexity, better cost-effectiveness and
higher scalability compared with traditional DR methods. Businesses receive continuous
access to highly automated, highly scalable, self-driven off-site DR services without the
expense of a second data center and without the need to select, install and maintain DR tools.

Selecting a cloud DR provider


An organization should consider the following five factors when selecting a cloud DR
provider:

1. Distance. A business must consider the cloud DR provider's physical distance
and latency. Putting DR too close increases the risk of shared physical disaster, but
putting the DR too far away increases latency and network congestion, making it harder
to access DR content. Location can be particularly tricky when the DR content must be
accessible from numerous global business locations.
2. Reliability. Consider the cloud DR provider's reliability. Even a cloud experiences
downtime, and service downtime during recovery can be equally disastrous for the
business.
3. Scalability. Consider the scalability of the cloud DR offering. It must be able to
protect selected data, applications and other resources. It must also be able to
accommodate additional resources as needed and provide adequate performance as other
global customers use the services.
4. Security and compliance. It's important to understand the security requirements of
the DR content and be sure the provider can offer authentication, virtual private
networks, encryption and other tools needed to safeguard the business's valuable
resources. Evaluate compliance requirements to ensure the provider is certified to meet
compliance standards that relate to the business, such as ISO 27001, SOC 2 and SOC 3,
and Payment Card Industry Data Security Standard (PCI DSS).
5. Architecture. Consider how the DR platform must be architected. There are three
fundamental approaches to DR: cold, warm and hot disaster recovery. These
terms loosely relate to the ease with which a system can be recovered.
Creating a cloud-based disaster recovery plan
Building a cloud DR plan is virtually identical to building more traditional local or
off-site disaster recovery plans. The principal difference between cloud DR and more traditional
DR approaches is the use of cloud technologies and DRaaS to support an appropriate
implementation. For example, rather than backing up an important data set to a different disk
in another local server, cloud-based DR would back up the data set to a cloud resource such
as an Amazon Simple Storage Service bucket. As another example, instead of running an
important server as a warm VM in a colocation facility, the warm VM could be run in
Microsoft Azure or through any number of different DRaaS providers. Thus, cloud DR
doesn't change the basic need or steps to implement DR, but rather provides a new set of
convenient tools and platforms for DR targets.
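The backup step in the example above can be sketched in a provider-neutral way. In production you would call the provider's SDK directly (for example, boto3's S3 client for an Amazon Simple Storage Service bucket); the sketch below accepts any client object exposing a `put_object(bucket, key, body)` method, and the `InMemoryStore` stand-in is purely illustrative so the flow can be shown without cloud credentials.

```python
# Provider-neutral sketch of backing up a data set to a cloud object store.
# Any object with a put_object(bucket, key, body) method works as the client;
# names here are illustrative, not a real SDK's signatures.
import hashlib

def backup(client, bucket: str, key: str, data: bytes) -> str:
    """Upload the data and return its SHA-256 digest for later integrity checks."""
    digest = hashlib.sha256(data).hexdigest()
    client.put_object(bucket=bucket, key=key, body=data)
    return digest

class InMemoryStore:
    """Stand-in for a cloud object store, useful when testing DR tooling."""
    def __init__(self):
        self.objects = {}
    def put_object(self, bucket, key, body):
        self.objects[(bucket, key)] = body
```

Recording a digest at backup time lets a later restore verify that the recovered data matches what was protected.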

There are three fundamental components of a cloud-based disaster recovery plan: analysis,
implementation and testing.

Analysis. Any DR plan starts with a detailed risk assessment and analysis, which basically
examines the current IT infrastructure and workflows, and then considers the potential
disasters that a business is likely to face. The goal is to identify potential vulnerabilities and
disasters -- everything from intrusion vulnerabilities and theft to earthquakes and floods --
and then evaluate whether the IT infrastructure is up to those challenges.

An analysis can help organizations identify the business functions and IT elements
that are most critical and predict the potential financial effects of a disaster event. Analysis
can also help determine RPOs and RTOs for infrastructure and workloads. Based on these
determinations, a business can make more informed choices about which workloads to
protect, how those workloads should be protected and where more investment is needed to
achieve those goals.
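The RPO/RTO determinations from the analysis phase translate directly into checks: RPO bounds how much data loss is acceptable (how stale the latest backup may be), and RTO bounds how long recovery may take. The function below is a minimal illustration with hypothetical units, not a standard formula.

```python
# Illustrative check of whether a workload's observed protection meets the
# RPO/RTO targets chosen during analysis. All values are in minutes and the
# numbers used are hypothetical.
def meets_objectives(last_backup_age_min: float, recovery_time_min: float,
                     rpo_min: float, rto_min: float) -> dict:
    """RPO bounds acceptable data loss; RTO bounds acceptable downtime."""
    return {
        "rpo_ok": last_backup_age_min <= rpo_min,
        "rto_ok": recovery_time_min <= rto_min,
    }
```

A workload backed up 30 minutes ago meets a 60-minute RPO, but a 2-hour restore fails a 1-hour RTO, signaling where more investment is needed.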

Implementation. The analysis is typically followed by a careful implementation that details
steps for prevention, preparedness, response and recovery. Prevention is the effort made to
reduce possible threats and eliminate vulnerabilities. This might include employee training
in social engineering and regular operating system updates to maintain security and stability.
Preparedness involves outlining the necessary response -- who does what in a disaster event.
This is fundamentally a matter of documentation. The response outlines the technologies and
strategies to implement when a disaster occurs. This preparedness is matched with the
implementation of corresponding technologies, such as recovering a data set or server VM
backed up to the cloud. Recovery details the success conditions for the response and steps to
help mitigate any potential damage to the business.

The goal here is to determine how to address a given disaster, should it occur, and the
plan is matched with the implementation of technologies and services built to handle the
specific circumstances. In this case, the plan includes cloud-based technologies and services.

Testing. Any DR plan must be tested and updated regularly to ensure IT staff are proficient
at implementing the appropriate response and recovery successfully and in a timely manner,
and that recovery takes place within an acceptable time frame for the business. Testing can
reveal gaps or inconsistencies in the implementation, enabling organizations to correct and
update the DR plan before a real disaster strikes.

Approaches to cloud DR
The following are the three main approaches to cloud disaster recovery:

 Cold DR typically involves storage of data or virtual machine (VM) images. These
resources generally aren't usable without additional work such as downloading the stored
data or loading the image into a VM. Cold DR is usually the simplest approach -- often
just data storage -- and the least expensive approach, but it takes the longest to recover,
leaving the business with the longest downtime in a disaster.
 Warm DR is generally a standby approach where duplicate data and applications are
placed with a cloud DR provider and kept up to date with data and applications in the
primary data center. But the duplicate resources aren't doing any processing. When
disaster strikes, the warm DR can be brought online to resume operations from the DR
provider -- often a matter of starting a VM and redirecting IP addresses and traffic to the
DR resources. Recovery can be quite short, but still imposes some downtime for the
protected workloads.
 Hot DR is typically a live parallel deployment of data and workloads running
together in tandem. That is, both the primary data center and the DR site use the same
workload and data running in synchronization -- both sites sharing part of the overall
application traffic. When disaster strikes one site, the remaining site continues without
disruption to handle the work. Users are ideally unaware of the disruption. Hot DR has
no downtime, but it can be the most expensive and complicated approach.
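The warm-DR failover step described above (start the standby VM, then redirect traffic, and fail back when the primary recovers) can be sketched as a tiny controller. The class and site names are hypothetical; a real setup would start actual VMs and update DNS or load-balancer rules.

```python
# Sketch of a warm-DR failover controller: on primary failure, bring the
# standby online once and redirect traffic to it; fail back when the primary
# recovers. Names are hypothetical.
class WarmFailover:
    def __init__(self, primary: str, standby: str):
        self.primary, self.standby = primary, standby
        self.active = primary
        self.standby_started = False

    def health_check(self, primary_healthy: bool) -> str:
        """Run once per monitoring interval; returns the active endpoint."""
        if not primary_healthy and self.active == self.primary:
            self.standby_started = True   # e.g. start the warm VM at the DR site
            self.active = self.standby    # e.g. redirect IP addresses and traffic
        elif primary_healthy and self.active == self.standby:
            self.active = self.primary    # fail back once the primary recovers
        return self.active
```

The brief window between the failed health check and the redirect is exactly the downtime that distinguishes warm DR from hot DR.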
Next-Gen Cloud Computing Technologies
1. Unikernels – Unikernels are specialized operating systems that provide enhanced
security, fine-grained optimization, and the small footprint required for microservices. They
are built from library OS technology and can be customized for different programs and
hardware. A unikernel is an executable image that runs natively on a specific hypervisor,
with no supporting OS required. It comprises a library OS, which is simply a collection of
libraries representing the core capabilities of an operating system. For example, MirageOS is
a library operating system that builds unikernels for networking across a variety of online
computing and mobile environments. Another example is the Rumprun unikernel, which
comprises thousands of lines of code and runs POSIX applications directly on raw hardware;
it also supports cloud hypervisors such as Xen and KVM.

2. Blockchain – Blockchain technology is a new face of the internet in which digital
data is distributed without being copied. Information held in a blockchain exists as a shared
record, which brings numerous advantages. No single person can control the blockchain,
because it has no single point of failure. The network operates in a 'consensus' mode,
forming a self-auditing ecosystem that reconciles every transaction at roughly 10-minute
intervals.
The first application of blockchain technology was Bitcoin, launched in 2009. Bitcoin is a
cryptocurrency underpinned by a blockchain. This next-gen cloud computing technology
eliminates human involvement in processing cross-border trades. Such systems can be set up
as smart payments or contracts that execute automatically once a certain set of conditions
is met.
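The core data structure can be shown in a few lines: each block commits to the previous block's hash, so altering history invalidates every later block. This is a minimal hash-chain sketch only; real blockchains add the consensus mechanism described above.

```python
# Minimal hash chain illustrating how a blockchain links records: each block
# stores the previous block's hash, so tampering with any block breaks the
# chain. Consensus and networking are deliberately omitted.
import hashlib, json

def make_block(data, prev_hash: str) -> dict:
    """Build a block whose hash covers both its data and the previous hash."""
    payload = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return {"data": data, "prev_hash": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def chain_valid(chain) -> bool:
    """Recompute every hash and verify the links between consecutive blocks."""
    for i, block in enumerate(chain):
        if block["hash"] != make_block(block["data"], block["prev_hash"])["hash"]:
            return False                  # the block's contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                  # the link to the previous block is broken
    return True
```

Changing one transaction in an early block changes its hash, so validation fails everywhere downstream, which is what makes the shared record self-auditing.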

3. Container-as-a-Service – CaaS (Container as a Service) is a service in which
cloud providers offer container orchestration and computing resources. Developers can use
the framework through a web interface or API for easy container management. This new
generation of cloud computing technology can be considered a new layer of the cloud
platform for deploying applications, and it is intended to relieve the tension between a
business's operational and development teams. CaaS is useful for pushing application data
and monitoring programs.
Tools in the Container-as-a-Service category simplify management and provide a framework
not only for defining a container's initial deployment but also for managing several
containers as a single unit. The whole aim of these tools is to handle scaling, networking,
and availability. Azure Container Service, Google Container Engine, Cloud Foundry's Diego,
etc., are live examples of this next-gen cloud computing technology.

4. Software Defined Networking – The meaning of this term varies depending on the
providers and users. In general, SDN is a key component of data center automation. It
provides efficient methods for managing virtualization, saving the extra cost of hardware
implementation. Data center managers can control every aspect of the data center and
upgrade their hardware as requirements change. Because the digital world already faces
many challenges in maintaining stability, automated software becomes important: these
automated tools eliminate the complications of managing day-to-day activities and help
organizations enhance their cloud data security by reducing human error.
