Unit 4 For Students

Cloud Management in Cloud Computing

• The cloud has changed business as we know it and has spread into almost all organizational structures across every industry. With this popularity have come the options: from infrastructure, platform, and software as a service platforms, all the way to storage, security, and database services.
• While organizations have more choices than ever for cloud computing, this only makes things more challenging: there is no standard way to oversee it all. In fact, some businesses that once had 3,000 services under one management umbrella now have more than 10,000!
• This is where having solid, scalable cloud management becomes so important.

• Cloud management is the organized management of cloud computing products and services that
operate in the cloud. It refers to the processes, strategies, policies, and technology used to help
control and maintain public and private cloud, hybrid cloud, or multi-cloud environments.
Cloud Management Tasks :
• Auditing System Backups –
Backups should be audited from time to time to verify that randomly selected files of different users can be restored. This might be done by the organization or by the cloud provider.

• Flow of data in the system –


The managers are responsible for designing a data flow diagram that shows how the data is supposed to
flow throughout the organization.

• Vendor Lock-In –
The managers should know how to move their data from one provider's servers to another's in case the organization decides to switch providers.

• Knowing provider’s security procedures –


The managers should know the security plans of the provider, especially Multitenant use, E-commerce
processing, Employee screening and Encryption policy.

• Monitoring Capacity Planning and Scaling abilities –


The manager should know whether the current cloud provider is going to meet the organization’s demand in the future, as well as its scaling capabilities.

• Monitoring audit log –


In order to identify errors in the system, logs are audited by the managers on a regular basis.

• Solution Testing and Validation –


It is necessary to test the cloud services and verify the results to ensure error-free solutions.
What are cloud management tools?
• Cloud management tools are those that enable organizations to manage
their multi-cloud (both public clouds and those on-premise) resources and
services. These tools can be purchased and operated by:
• One central organization
• Numerous lines of business
• Tools can also be deployed as a SaaS product or on-premises. These tools serve
nearly limitless functions and cover a variety of categories such as:
• Resource management
• Performance monitoring
• Scalability
• Automation and provisioning
• Cross-platform interoperability
• Compliance and governance
• Reporting
• Resource management
• When it comes to cloud management tools, resource management is one of the most important core functions. This ability enables IT to control each resource. It also reduces cloud administrative overhead and accelerates deployments.
• Performance monitoring
• A performance monitoring tool needs to monitor resource usage and its impact on system performance. The monitoring tool should also provide a historical context and predictions about future capacity usage so that a plan for growth can be put in place.
• Scalability
• Most hybrid clouds continue to change and grow, so a management tool must be flexible enough to incorporate new technologies and scale across different geographies.
• Automation & provisioning
• Developers and testers need fast, self-service access to cloud resources that provide consistent
automated provisioning of development, test, and production platforms, so that they spend less
time troubleshooting configuration changes caused by manually deployed systems and
applications.
• Cross-platform Interoperability
• Cloud management tools must help IT operations deliver and manage heterogeneous services in a simple way. Widespread support for operating systems and cloud platforms is essential to the user, and must be automated by the chosen solution.
• Compliance & governance
• A cloud management tool should be able to determine the ownership of individual resources and automate approvals based on a predefined set of criteria and roles. Additionally, the tools should automate common regulatory and operational compliance policies that govern and optimize IT agility.
• Reporting
• It is essential for organizations, especially large enterprises, that their cloud management tool provides full visibility into both the systems and the users. This type of reporting should offer a bird’s-eye view of the entire system, collecting data from multiple technologies and integrating it all into one big picture. These reports should also include information revealing overall costs, as well as chargeback and showback.
• Some of the companies and their tools are:
• AWS
• Google
• Microsoft Azure
• Amazon Web Services
• Amazon Web Services provides a facility for the user to access and modify cloud instances with the help of a command-line interface (see the sketch after this list).
• Google
• It provides Google Cloud Platform (GCP), which includes monitoring and logging tools. In addition, Google Stackdriver provides performance data for virtual machines and applications.
• Microsoft Azure
• Microsoft has an Azure site recovery tool which helps administrators to automatically
replicate VMs.
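• As an illustration of programmatic cloud management, the minimal sketch below lists running EC2 instances and stops one of them using the AWS SDK for Python (boto3), which exposes the same operations as the command-line interface mentioned above. It assumes boto3 is installed and AWS credentials are configured; the region and instance ID are placeholders.

    # Minimal sketch: inspect and stop an EC2 instance with the AWS SDK (boto3).
    # Assumes boto3 is installed and AWS credentials are configured locally;
    # the region and instance ID below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List running instances with their IDs and types.
    response = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for reservation in response["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])

    # Stop a specific (placeholder) instance, e.g. one flagged as idle.
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"])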
Virtual machine migration
• Virtual machine migration plays an important role in data centers by making it easy to adjust resource priorities to match resource demand conditions.
• This role directly supports meeting SLAs.
• Once it has been detected that a particular VM is consuming more than its fair share of resources at the expense of other VMs on the same host, that VM becomes eligible either to be moved to another, underutilized host or to be assigned more resources on its current host if spare capacity remains; this largely avoids SLA violations and also fulfills the requirement for on-demand computing resources (see the sketch below).
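• A minimal sketch of this decision logic is shown below. The Host and VM classes, the 80% overload threshold, and the numbers are hypothetical simplifications, not part of any real hypervisor API.

    # Illustrative sketch of the rebalancing decision described above.
    from dataclasses import dataclass

    OVERLOAD = 0.8   # a VM is "overloaded" above 80% of its current allocation

    @dataclass
    class Host:
        name: str
        capacity: float      # e.g. CPU units
        used: float

        def free(self) -> float:
            return self.capacity - self.used

    @dataclass
    class VM:
        name: str
        allocated: float
        demand: float        # currently observed demand

    def place_overloaded_vm(vm: VM, current: Host, others: list) -> str:
        if vm.demand <= OVERLOAD * vm.allocated:
            return "no action needed"
        extra = vm.demand - vm.allocated
        if current.free() >= extra:           # host still has spare resources
            current.used += extra
            vm.allocated += extra
            return "resized on current host"
        # Otherwise move the VM to the least-utilized host with enough room.
        candidates = [h for h in others if h.free() >= vm.demand]
        if not candidates:
            return "no capacity available"
        target = min(candidates, key=lambda h: h.used / h.capacity)
        target.used += vm.demand
        current.used -= vm.allocated
        vm.allocated = vm.demand
        return f"migrated to {target.name}"

    host_a = Host("host-A", capacity=16, used=15)
    host_b = Host("host-B", capacity=16, used=4)
    print(place_overloaded_vm(VM("vm-1", allocated=4, demand=6), host_a, [host_b]))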
Memory migration
• Memory migration is one of the most important aspects of Virtual machine
migration. Moving the memory instance of the VM from one physical state
to another can be approached in any number of ways.
• The Internet Suspend/Resume (ISR) technique looks to exploit temporal locality.
• Temporal locality refers to the fact that the memory states differ only by the
amount of work done since the VM was last suspended before being
initiated for migration.
• To exploit temporal locality each file in the file system is represented as a
tree of small sub-files. A copy of this tree exists in both the suspended and
resumed VM instances.
• The advantage of using a tree-based representation of files is that caching ensures the transmission of only those files which have been changed.
• The memory migration in general can be classified into three phases:
• Push phase:
• The source VM continues running while certain pages are pushed
across the network to the new destination. To ensure consistency, the
pages modified during the transmission process must be re-sent.
• Stop-and-copy phase: The source VM is stopped, pages are copied
across to the destination VM, and then the new VM is started.
• Pull phase: The new VM starts its execution, and if it accesses a page
that has not yet been copied, this page is faulted in, across the network
from the source VM.
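• A toy simulation of the push and stop-and-copy phases (pre-copy style migration) is sketched below; the page model, dirty rate, and round limits are invented for illustration only.

    import random

    def migrate_memory(src_mem: dict, dirty_per_round: int = 200, max_rounds: int = 5):
        """Toy pre-copy migration: repeated push rounds, then a final stop-and-copy."""
        dest_mem = {}
        dirty = set(src_mem)                      # first round: push every page
        for _ in range(max_rounds):               # --- push phase ---
            for page in dirty:
                dest_mem[page] = src_mem[page]    # copy while the source VM keeps running
            # Pages written by the running VM during this round must be re-sent.
            dirty = {random.randrange(len(src_mem)) for _ in range(dirty_per_round)}
            dirty_per_round //= 4                 # pretend the dirty rate converges
            if len(dirty) <= 16:                  # remaining set small enough to stop
                break
        # --- stop-and-copy phase: source VM paused, final dirty pages copied ---
        for page in dirty:
            dest_mem[page] = src_mem[page]
        return dest_mem                           # destination VM can now start

    memory = {page: f"contents-{page}" for page in range(1024)}
    print(len(migrate_memory(memory)), "pages present at the destination")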
FILE SYSTEM MIGRATION
• To support VM migration, the system must provide each VM with a consistent, location-independent view of the file system that is
available on all hosts. A simple way to achieve this is to provide each VM with its own virtual disk, to which the file system is
mapped, and transport the contents of this virtual disk along with the other states of the VM. However, due to the current trend of
high capacity disks, migration of the contents of an entire disk over the network is not a viable solution. To reduce the amount of
data transferred from the suspended site to the resumed site, two approaches can be used: Smart Copying and Proactive State Transfer.
• In Smart Copying, the VMM exploits spatial locality. In these conditions, it is possible to transmit only the difference between the
two file systems at suspend and resume locations. This significantly reduces the amount of actual physical data that has to be
moved. In situations where there is no locality to exploit, a different approach is to synthesize much of the state at the resume site.
On many systems, user files only form a small fraction of the actual data on disk. Operating System and Application Software
account for the majority of storage space. If disk images of standard software are available at resume site, suspended state can be
reconstructed by first applying those disk images and then applying customization transmitted from the suspended state. The
Proactive State Transfer solution works in those cases where the resume site can be predicted with reasonable confidence. In these
cases, the system ensures that suspended state is available at the resume site in advance of demand.

• Another way this could be done would be to have a global file system across all machines where a VM could be located. This
removes the need to copy files from one machine to another since all files would be network accessible. Most modern data centers
consolidate their storage requirements using a network-attached storage (NAS) device, in preference to using local disks in
individual servers. NAS has many advantages in this environment, including simple centralized administration, widespread vendor
support, and reliance on fewer spindles leading to a reduced failure rate. A further advantage for migration is that it obviates the
need to migrate disk storage, as the NAS is uniformly accessible from all host machines in the cluster.
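• A minimal sketch of the Smart Copying idea follows: hash each file at the suspend and resume sites and transfer only the files whose hashes differ. The directory paths are placeholders and the actual transfer step is omitted.

    # Sketch of Smart Copying: only files that changed since suspension are transferred.
    import hashlib
    from pathlib import Path

    def file_hashes(root: Path) -> dict:
        """Map each file's relative path to a SHA-256 digest of its contents."""
        return {
            p.relative_to(root): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()
        }

    def changed_files(suspend_root: Path, resume_root: Path) -> list:
        """Files that differ between the suspend-site and resume-site copies."""
        src, dst = file_hashes(suspend_root), file_hashes(resume_root)
        return [path for path, digest in src.items() if dst.get(path) != digest]

    # Usage (placeholder paths): only the returned files need to be moved.
    # to_transfer = changed_files(Path("/vm/suspended_fs"), Path("/vm/resumed_fs"))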
NETWORK MIGRATION
• A migrating VM should maintain all open network connections without relying on forwarding
mechanisms on the original host or on support from mobility or redirection mechanisms.
• To enable remote systems to locate and communicate with a VM, each VM must be assigned a
virtual IP address known to other entities.
• This address can be distinct from the IP address of the host machine, where the VM is currently
located.
• Each VM can also have its own distinct virtual MAC address.
• The VMM maintains a mapping of the virtual IP and MAC addresses to their corresponding VMs.
• In general a migrating VM includes all the protocol state (e.g., TCP ), and will carry its IP address
with it.
• If the source and destination machines of the VM migration are typically connected to a single
switched LAN, an unsolicited ARP reply from the migrating host is provided, advertising that the
IP has moved to a new location.
• This solves the open network connection problem by reconfiguring all the peers to send future
packets to the new location.
• Although a few packets that have already been transmitted might be lost, there are no other
problems with this mechanism.
• Alternatively, on a switched network, the migrating OS can keep its original Ethernet MAC
address, relying on the network switch to detect its move to a new port.
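• A small sketch of the VMM-side bookkeeping described above: a table mapping virtual IP/MAC addresses to VMs that is updated on migration. The class, addresses, and the ARP announcement (printed here) are illustrative placeholders; a real VMM would emit a gratuitous ARP on the destination network.

    class VmmNetworkTable:
        def __init__(self):
            self.table = {}   # (virtual_ip, virtual_mac) -> (vm_id, current_host)

        def register(self, vm_id, ip, mac, host):
            self.table[(ip, mac)] = (vm_id, host)

        def migrate(self, ip, mac, new_host):
            vm_id, _ = self.table[(ip, mac)]
            self.table[(ip, mac)] = (vm_id, new_host)
            self.announce(ip, mac, new_host)

        def announce(self, ip, mac, host):
            # Placeholder for the unsolicited ARP reply advertising that `ip`
            # is now reachable at `mac` on `host`, so peers update their caches.
            print(f"gratuitous ARP: {ip} is at {mac} (now on {host})")

    table = VmmNetworkTable()
    table.register("vm-a1", "10.0.0.5", "02:00:00:aa:bb:cc", "host-A")
    table.migrate("10.0.0.5", "02:00:00:aa:bb:cc", "host-B")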
WORKFLOW ENGINE FOR CLOUDS

• A workflow models a process as consisting of a series of steps that


simplifies the complexity of execution and management of
applications.
• The recent progress in virtualization technologies and the rapid growth
of cloud computing services have opened a new paradigm in
distributed computing for utilizing existing (and often cheaper)
resource pools for on demand and scalable scientific computing.
Scientific Workflow Management Systems (WfMS) need to adapt to
this new paradigm in order to leverage the benefits of cloud services.
ARCHITECTURE OF WORKFLOW MANAGEMENT
SYSTEMS
• Scientific applications are typically modeled as workflows, consisting
of tasks, data elements, control sequences and data dependencies.
Workflow management systems are responsible for managing and
executing these workflows.
• The Cloud Workflow Management System consists of components that are responsible for handling tasks,
data and resources taking into account users’ QoS requirements. Its architecture is depicted in Figure below:
• The architecture consists of three major parts: (a) the user interface, (b) the core, and (c) plug-ins.
• The user interface allows end users to work with workflow composition, workflow execution planning,
submission, and monitoring. These features are delivered through a Web portal or through a stand-alone
application that is installed at the user’s end. Users define task properties and link them based on their data
dependencies.
• The components within the core are responsible for managing the execution of workflows. They facilitate the translation of high-level workflow descriptions (defined at the user interface using XML) into task and data
objects. These objects are then used by the execution subsystem. The scheduling component applies user-
selected scheduling policies and plans to the workflows at various stages in their execution. The tasks and
data dispatchers interact with the resource interface plug-ins to continuously submit and monitor tasks in the
workflow. These components form the core part of the workflow engine.
• The plug-ins support workflow executions on different environments and platforms. Our system has plug-ins
for querying task and data characteristics (e.g., querying metadata services, reading from trace files),
transferring data to and from resources (e.g., transfer protocol implementations, and storage and replication
services), monitoring the execution status of tasks and applications (e.g., real-time monitoring GUIs, logs of
execution, and the scheduled retrieval of task status), and measuring energy consumption
• The resources are at the bottom layer of the architecture and include clusters, global grids, and clouds. The
WfMS has plug-in components for interacting with various resource management systems present at the front
end of distributed resources.
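• As a small illustration of how such workflows are modeled internally, the sketch below represents tasks and their data dependencies as a directed acyclic graph and derives a valid dispatch order using Python's standard library; the task names and dependencies are invented.

    # Toy workflow model: tasks plus data dependencies form a DAG, and a
    # topological sort yields an order in which an engine could dispatch them.
    from graphlib import TopologicalSorter   # Python 3.9+

    workflow = {
        "extract":   [],                     # task: list of tasks it depends on
        "clean":     ["extract"],
        "analyze":   ["clean"],
        "visualize": ["analyze"],
        "archive":   ["extract"],
    }

    order = list(TopologicalSorter(workflow).static_order())
    print("dispatch order:", order)          # dependencies always come first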
Resource provisioning
• The allocation of resources and services from a cloud provider to a
customer is known as resource provisioning in cloud computing,
sometimes called cloud provisioning.
• Resource provisioning is the process of choosing, deploying, and
managing software (like load balancers and database server
management systems) and hardware resources (including CPU,
storage, and networks) to assure application performance.
Parameters for Resource Provisioning
• Response time : The resource provisioning algorithm designed must take
minimal time to respond when executing the task
• Minimize Cost: From the Cloud user point of view cost should be
minimized.
• Revenue Maximization: This is to be achieved from the Cloud Service
Provider’s view.
• Fault tolerant : The algorithm should continue to provide service in spite of
failure of nodes
• Reduced SLA Violation: The algorithm designed must be able to reduce
SLA violation
• Reduced Power Consumption: VM placement & migration techniques must lower power consumption
Inter cloud
• To handle rapid increase in the content, media cloud plays a very vital role. But it is not possible
for standalone clouds to handle everything with the increasing user demands. For scalability and
better service provisioning, at times, clouds have to communicate with other clouds and share their
resources. This scenario is called Intercloud computing or cloud federation.

• The term "intercloud" refers to a network of interconnected clouds. This encompasses private,
public, and hybrid clouds, all of which work together to create a seamless data flow.

• The “inter-cloud,” or “cloud of clouds,” is a theoretical model for cloud computing services that combines numerous separate clouds into a single fluid mass for on-demand operations. Simply put, the inter-cloud would ensure that a cloud could utilize resources outside of its range using current agreements with other cloud service providers. There are limits to the physical resources and the geographic reach of any one cloud.
Need of Inter-Cloud

• Due to their Physical Resource limits, Clouds have certain Drawbacks:


• When a cloud’s computational and storage capacity is completely
depleted, it is unable to serve its customers.
• The Inter-Cloud addresses these circumstances when one cloud would
access the computing, storage, or any other resource of the
infrastructures of other clouds.
Benefits of the Inter-Cloud Environment
include:
• Avoiding vendor lock-in to the cloud client
• Having access to a variety of geographical locations, as well as
enhanced application resiliency.
• Better service level agreements (SLAs) to the cloud client
• Expand-on-demand is an advantage for the cloud provider.
• Portability and migration: moving data from one supplier to the next might become as simple as "dragging and dropping." Money, time, and human resources would all be saved as a result.
Inter-Cloud Resource Management

• Each single cloud does not have infinite physical resources or a ubiquitous geographic footprint. A cloud may saturate the computational and storage resources of its infrastructure.
• The inter-cloud would ensure that a cloud could utilize resources outside of its range by combining numerous separate clouds into a single fluid mass for on-demand operations.
• The concept of inter-cloud resource management refers to the efficient
management of resources across different cloud environments. In other
words, inter-cloud resource management enables organizations to access
and manage resources across multiple cloud platforms, regardless of their
geographical location or underlying technology.
• The need for inter-cloud resource management arises from the fact that
organizations often use multiple cloud environments for various purposes.
• For example, an organization might use one cloud environment for running
its production workloads, while another cloud environment might be used
for development and testing.
• This leads to a situation where organizations have resources spread across
multiple clouds, making it difficult to manage them effectively.
• The main challenge of inter-cloud resource management is to provide a
unified view of all the resources across different clouds and to ensure that
these resources are used efficiently.
key issues addressed by inter-cloud resource management
• Resource Discovery: Inter-cloud resource management solutions must provide a
way to discover resources across multiple clouds, regardless of their underlying
technology. This includes identifying the location, type, and availability of
resources, as well as their capabilities and constraints.
• Resource Allocation: Inter-cloud resource management solutions must be able to
allocate resources efficiently across multiple clouds, taking into account the needs
and priorities of different workloads. This includes balancing resource usage
across clouds, optimizing resource utilization, and ensuring that resources are used
in a cost-effective manner.
• Resource Migration: Inter-cloud resource management solutions must provide
the ability to migrate resources between clouds, either automatically or manually.
This includes the ability to move resources between clouds based on changing
requirements or to respond to unexpected changes in resource availability.
• Resource Monitoring and Management: Inter-cloud resource management
solutions must provide a way to monitor and manage resources across multiple
clouds, including the ability to view resource usage and performance, and to make
changes to the configuration of resources as needed.
Different approaches to inter-cloud resource management
• Cloud Orchestration: This approach uses a cloud orchestration platform to
manage resources across multiple clouds. The platform provides a unified
view of resources, allowing administrators to allocate and manage resources
across multiple clouds from a single location.
• Cloud Federation: This approach uses a federation model to manage
resources across multiple clouds. The federation model provides a way for
organizations to pool resources from multiple clouds, enabling them to use
resources from different clouds as if they were part of a single cloud
environment.
• Hybrid Cloud Management: This approach uses a hybrid cloud
management platform to manage resources across both private and public
clouds. The platform provides a unified view of resources, allowing
administrators to allocate and manage resources across multiple clouds from
a single location.
Topologies used In Inter-Cloud Architecture

Peer-to-Peer Inter-Cloud Federation: Clouds work together directly,


but they may also utilize distributed entities as directories or brokers.
Clouds communicate and engage in direct negotiation without the use of
intermediaries. Peer-to-peer federation is used by inter-cloud projects such as RESERVOIR (Resources and Services Virtualization without Barriers Project).
Centralized Inter-Cloud Federation: In the cloud, resource sharing is
carried out or facilitated by a central body. The central entity serves as a
registry for the available cloud resources. The inter-cloud initiatives
Dynamic Cloud Collaboration (DCC), and Federated Cloud
Management leverage centralized inter-cloud federation.

Multi-Cloud Service: Clients use a service to access various clouds.
The cloud client hosts a service either inside or externally. The services
include elements for brokers. The inter-cloud initiatives OPTIMIS, Contrail, mOSAIC, and STRATOS, as well as commercial cloud management solutions, leverage multi-cloud services.

Resource Provisioning Approaches
• Efficient resource provisioning is a key requirement in cloud computing.
• Cloud consumers do not get direct access to physical computing resources.
• The provisioning of resources to consumers is enabled through VM (virtual
machine) provisioning.
• Physical resources from resource pool are made available to those VMs
which in turn are made available to consumers as well as for the
applications.
• Physical resources can be assigned to the VMs using two types of
provisioning approaches like static and dynamic.
Static provisioning
• Static provisioning is suitable for applications which have predictable and generally unchanging
workload demands.
• In this approach, once a VM is created it is expected to run for a long time without incurring any further resource-allocation decision overhead on the system.
• Here, resource-allocation decision is taken only once and that too at the beginning when user’s
application starts running.
• Thus, this approach provides room for a little more time to take decision regarding resource
allocation since that does not impact negatively on the performance of the system.
• Although the static provisioning approach does not bring about any runtime overhead, it also has major limitations.
• This provisioning approach fails to deal with un-anticipated changes in resource demands.
• When resource demand crosses the limit specified in SLA document it causes trouble for the
consumers.
• Again, from the provider’s point of view, some resources remain unutilized forever, since the provider arranges a sufficient volume of resources to avoid SLA violation.
• So this method has drawbacks from the viewpoint of both the provider and the consumer.
Dynamic provisioning
• With dynamic provisioning, the resources are allocated and de-allocated as per requirement during
run-time.
• This on-demand resource provisioning provides elasticity to the system.
• Providers no longer need to keep a certain volume of resources unutilized for each and every system separately; rather, they maintain a common resource pool and allocate resources from it when required.
• Resources are removed from VMs when they are no longer required and returned to the pool.
• With this dynamic approach, billing also becomes pay-per-usage.
• Dynamic provisioning technique is more appropriate for cloud computing where application’s
demand for resources is most likely to change or vary during the execution.
• But this provisioning approach needs the ability of integrating newly-acquired resources into the
existing infrastructure.
• This gives provisioning elasticity to the system.
• Dynamic provisioning allows the system to adapt to changing conditions at the cost of bearing run-time resource-allocation decision overhead.
• This overhead introduces some delay into the system, but it can be minimized by putting an upper limit on the complexity of the provisioning algorithms (a sketch of pool-based allocation follows below).
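• A minimal sketch of dynamic provisioning against a shared pool is shown below; the pool size, units, and class interface are illustrative assumptions.

    # Sketch: on-demand allocation and release against a common resource pool.
    class ResourcePool:
        def __init__(self, capacity: int):
            self.free = capacity
            self.allocations = {}             # vm_id -> units currently held

        def allocate(self, vm_id: str, units: int) -> bool:
            if units > self.free:
                return False                  # pool itself would need to grow
            self.free -= units
            self.allocations[vm_id] = self.allocations.get(vm_id, 0) + units
            return True

        def release(self, vm_id: str, units: int) -> None:
            released = min(units, self.allocations.get(vm_id, 0))
            self.allocations[vm_id] -= released
            self.free += released             # returned resources rejoin the pool

    pool = ResourcePool(capacity=64)
    pool.allocate("vm-1", 8)                  # demand rises: grow at run time
    pool.release("vm-1", 4)                   # demand falls: shrink, return to pool
    print(pool.free, pool.allocations)        # 60 {'vm-1': 4}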
Hybrid Approach
• Dynamic provisioning addresses the problems of static approach, but
introduces run-time overhead.
• To tackle this problem, a hybrid provisioning approach is
suggested that combines both static and dynamic provisioning.
• It starts with static provisioning technique at the initial stage of VM
creation and then turns it into dynamic re-provisioning of resources.
• This approach can often effectively address real-time scenario with
changing load in cloud computing.
Under-provision and Over-provision Problems

• Traditional computing systems mostly follow static resource


provisioning approach. But, it is very difficult to correctly predict the
future demand of any application (and hence the resource requirement
for the application) despite rigorous planning and efforts. This
naturally results in under-provision or over-provision of resources in
traditional environment.
• Providers supply cloud services by signing SLAs with end users.
• The SLAs must commit sufficient resources such as CPU, memory,
and bandwidth that the user can use for a preset period.
• When demand for computing resources crosses the limit of available resources, then a shortage of
resource is created. This scenario is known as under-provision of resource.
• A simple solution to this problem is to reserve sufficient volume of resources for an application so
that resource shortage can never happen. But this introduces a new problem. In such case, most of
the resources will remain unutilized for majority of time. This scenario is known as over-provision
of the resources
• Figure below shows the under-provision scenario. Here, the allotted and defined volume of
resource is represented by the dashed line.
• Under-provisioning problem occurs when resource demand of application is higher than this
allotted volume.
• Under-provisioning causes application performance degradation.
• The over-provisioning problem appears when the reserved volume of resource for an application
never falls below the estimated highest-required amount of resource for the application considering
the varying demand.
• In such a case, since the actual resource demand remains well below the reserved amount most of the time, it ultimately results in under-utilization of valuable resources (see the worked example below).
• This not only causes wastage of resources but also increases the cost of computation.
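• A small worked example of both problems, comparing an hourly demand series against a fixed reserved capacity (all numbers are invented):

    capacity = 100                                    # reserved units
    demand = [40, 55, 70, 120, 150, 90, 60, 30]       # hourly demand

    shortfall = sum(max(d - capacity, 0) for d in demand)   # unmet demand (under-provision)
    idle      = sum(max(capacity - d, 0) for d in demand)   # paid-for but unused (over-provision)

    print(f"under-provisioned unit-hours: {shortfall}")     # 20 + 50 = 70
    print(f"over-provisioned (idle) unit-hours: {idle}")    # 60+45+30+10+40+70 = 255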
Resource Provisioning Plans in Cloud
• Consumers can purchase cloud resources from the provider through
web form by creating the account.
• Cloud providers generally offer two different resource provisioning
plans or pricing models to fulfill consumers’ requirements.
• These two plans are made to serve different kinds of business purposes.
• The plans are known as short-term on-demand plan and long-term
reservation plan.
• Most commercial cloud providers offer both of the plans.
Short term plan
• In this pricing model, resources are allotted on short-term basis as per demand.
• When demand rises, the resources are provisioned accordingly to meet the need.
• When demand decreases, the allotted resources are released from the application
and returned to the free resource pool.
• Consumers are charged on a pay-per-usage basis. So, with this on-demand plan, the resource allocation follows the dynamic provisioning approach to fit fluctuating and unpredictable demands.
• In the short-term on-demand plan, it is the responsibility of the provider to ensure
application performance by provisioning resources during high demand (the
demand from multiple consumers at the same time).
• This requires appropriate planning at provider’s end.
• Estimation of resource requirements during high demand plays the vital role under
this pricing model.
Long term Plan
• In long-term reservation plan (also known as ‘advance provisioning’), a service contract is made
between the consumer and the provider regarding requirement of resources.
• The provider then arranges in advance and keeps aside a certain volume of resources from the
resource pool to support the consumer’s needs in the time of urgency.
• This arrangement of appropriating resources is done before the start of the service.
• In this model of resource provisioning, pricing is not on-demand basis. Rather it is charged as a
one-time fee for a fixed period of time generally counted in months or years.
• At provider’s end, the computational complexity as well as the cost is less under this plan in
comparison to the on-demand plan. This is because the provider becomes aware about the
maximum resource requirement of the consumer and keeps the resource pool ready to supply
resources to meet demands.
• This reduces the provider’s cost (of planning and implementation) and, in effect, the consumers may get the service at a much cheaper rate (generally when a long-term contract is made) than under the on-demand plan, if hourly usage rates are considered.
• The problem of the reservation plan is, since a fixed volume of resources is arranged as per the
SLA there are possibilities of under-provisioning or over-provisioning of resources.
• It is important for the cloud consumer to estimate the requirements carefully so that those problems can be avoided and, at the same time, the resource provisioning cost can be minimized.
• This goal can be achieved through an optimal resource management plan.
VM Sizing
• VM sizing refers to the estimation of the amount of resources that
should be allocated to a virtual machine.
• The estimation is made depending on various parameters extracted out
of requests from consumers.
• In static approach of resource provisioning, the VM size is determined
at the beginning of the VM creation.
• But in dynamic approach of resource provisioning, the VM size
changes with time depending on application load.
• The primary objective of VM sizing is to ensure that VM capacity
always remains proportionate with the workload.
• VM sizing can be maintained in two different ways.
• Traditionally, VM sizing is done on a VM-by-VM basis which is known as individual-
VM based provisioning.
• Here, resources are allotted to each virtual machine depending on the study of its previous
workload patterns. When additional resources are required for any VM to support load
beyond its initial expectation, the resources are allotted from the common resource pool.
• The other way is joint-VM provisioning approach where resources to VMs are
provisioned in combined way so that the unused resources of one VM can be allocated to
other VM(s) when they are hosted over the same physical infrastructure.
• This approach takes advantage of the dynamic VM demand characteristics.
• Unutilized resources of less loaded VMs can be utilized in other VMs during their peak
demand.
• This leads to overall resource capacity saving at provider’s end.
• This technique is also known as VM multiplexing.
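• A small numeric sketch of why joint-VM provisioning saves capacity: the peak of the combined demand is usually lower than the sum of the individual peaks, because the peaks of co-hosted VMs rarely coincide. The demand traces are invented.

    vm_demands = {
        "vm-a": [10, 30, 80, 20, 10],
        "vm-b": [70, 20, 10, 15, 60],
        "vm-c": [20, 25, 20, 75, 30],
    }

    individual = sum(max(trace) for trace in vm_demands.values())        # 80 + 70 + 75
    combined   = max(sum(step) for step in zip(*vm_demands.values()))    # peak of the summed load

    print(f"individual-VM provisioning needs: {individual} units")       # 225
    print(f"joint-VM provisioning needs:      {combined} units")         # 110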
Dynamic Provisioning and Fault Tolerance
• Dynamic resource provisioning brings many advantages to cloud computing
over traditional computing approach.
• It allows runtime replacement of computing resources and helps to build
reliable system.
• This is done by constantly monitoring all the nodes of a system that execute particular tasks.
• Whenever some of those nodes show low reliability over a predetermined
period of time, a new node is introduced into the system to replace that
defective or low performing node.
• This little effort makes the whole cloud system more tolerant to faults.
• The reliability of nodes can be measured by monitoring their behavior over such predetermined periods of time.
Zero Downtime Architecture: Advantage of Dynamic Provisioning

• The dynamic resource provisioning capability of cloud systems leads to an important goal of
system design which is zero-downtime architecture.
• One physical server generally facilitates or hosts multiple virtual servers. Hence, the physical
server acts as the single point of failure for all of the virtual systems it creates.
• But the dynamic provisioning mechanism immediately replaces any crashing physical system with
a new system instantly and thus the running virtual system gets a new physical host without
halting.
• Figure below demonstrates how live VM migration maintains zero downtime during failure of
physical host.
• Here, the virtual server A1 (VM-A1) was hosted by physical server A1 and two applications were
running on VM-A1.
• When physical server ‘A1’ crashes, VM-A1 is shifted, with its current system state, to VM-A2, which is hosted by physical server ‘A2’, and all of the applications running on VM-A1 are migrated along with it.
• Thus, the applications remain unaffected and the system achieves zero downtime.
Demand-Driven method
• This method adds or removes computing instances based on the current utilization level of
the allocated resources.
• The demand-driven method automatically allocates two Xeon processors for the user
application, when the user was using one Xeon processor more than 60 percent of the time
for an extended period.
• In general, when a resource has surpassed a threshold for a certain amount of time, the
scheme increases that resource based on demand.
When a resource is below a threshold for a certain amount of time, that resource could be decreased accordingly. (For example, define a range for CPU utilization, say 30% to 70%: if CPU utilization stays below 30%, decrease the CPU capacity; if it stays above 70%, increase the CPU capacity. A sketch of this rule appears after this list.)
• Amazon implements such an auto-scale feature in its EC2 platform.
• This method is easy to implement.
• Disadvantage: The scheme does not work out right if the workload changes abruptly.
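• A minimal sketch of this threshold rule follows; the 30%/70% band comes from the example above, while the sustain window and utilization samples are illustrative assumptions.

    LOW, HIGH = 0.30, 0.70
    SUSTAIN = 3          # consecutive samples required before acting

    def scaling_decision(utilization_history: list) -> str:
        recent = utilization_history[-SUSTAIN:]
        if len(recent) < SUSTAIN:
            return "wait"                    # not enough history yet
        if all(u > HIGH for u in recent):
            return "scale up"                # add a processor / instance
        if all(u < LOW for u in recent):
            return "scale down"              # remove a processor / instance
        return "hold"

    print(scaling_decision([0.65, 0.72, 0.75, 0.81]))   # -> scale up
    print(scaling_decision([0.28, 0.22, 0.25]))         # -> scale down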
Event Driven method
• This scheme adds or removes machine instances based on a specific time
event.
• The scheme works better for seasonal or predicted events such as
Christmastime in the West and the Lunar New Year in the East.
• During these events, the number of users grows before the event period and
then decreases during the event period.
• This scheme anticipates peak traffic before it happens.
• The method results in a minimal loss of QoS, if the event is predicted
correctly.
• Otherwise, wasted resources are even greater due to events that do not
follow a fixed pattern.
Popularity-Driven method
• In this method, the system tracks the popularity of certain applications on the Internet and creates instances based on popularity demand.
• (Currently popular applications→ Facebook, Instagram, Twitter)
• The scheme anticipates increased traffic with popularity.
• Again, the scheme has a minimal loss of QoS, if the predicted
popularity is correct.
• Resources may be wasted if traffic does not occur as expected.
Global Exchange of resources
• In order to support a large number of consumers from around the world, cloud
infrastructure providers have established data centers in multiple geographical locations to
provide redundancy and ensure reliability in case of site failures.
• For example, Amazon has data centers in the United States (e.g., one on the East Coast
and another on the West Coast) and Europe.
• However, it is difficult for cloud customers to determine in advance the best location for
hosting their services as they may not know the origin of consumers of their services.
• Also, SaaS providers may not be able to meet the QoS expectations of their service
consumers originating from multiple geographical locations.
• This necessitates building mechanisms for seamless federation of data centers of a cloud provider or providers, supporting dynamic scaling of applications across multiple domains in order to meet QoS targets of cloud customers (creating VMs at multiple data centers all over the world so as to satisfy customer QoS).
• Shortcomings.
• 1. It is difficult for cloud customers to determine in advance the best location for
hosting their services as they may not know the origin of consumers of their services.
• 2. SaaS providers may not be able to meet the QoS expectations of their service
consumers originating from multiple geographical locations
• No single cloud infrastructure provider will be able to establish its data centers at all
possible locations throughout the world.
• As a result, cloud application service(SaaS)providers will have difficulty in meeting QoS
expectations for all their consumers.
• This necessitates federation of cloud infrastructure service providers for seamless
provisioning of services across different cloud providers.
• To realize this, the Cloudbus Project at the University of Melbourne has proposed
Inter-Cloud architecture
• By realizing Inter-Cloud architectural principles in mechanisms in their offerings:
• Cloud providers will be able to dynamically expand or resize their
provisioning capability based on sudden spikes in workload demands
by leasing available computational and storage capabilities from other
cloud service providers.
• Operate as part of a market driven resource leasing federation, where
application service providers such as Salesforce.com host their
services based on negotiated SLA contracts driven by competitive
market prices
• Deliver on-demand, reliable, cost-effective, and QoS-aware services
based on virtualization technologies while ensuring high QoS
standards and minimizing service costs.
• Cloud providers will be able to dynamically expand or resize their provisioning
capability based on sudden spikes in workload demands by leasing available
computational and storage capabilities from other cloud service providers; operate
as part of a market-driven resource leasing federation.
• They consist of client brokering and coordinator services that support utility-
driven federation of clouds: application scheduling, resource allocation, and
migration of workloads.
• The Cloud Exchange (CEx) acts as a market maker for bringing together service
producers and consumers. It aggregates the infrastructure demands from
application brokers and evaluates them against the available supply currently
published by the cloud coordinators.
• It supports trading of cloud services based on competitive economic models such
as commodity markets and auctions.
• An SLA specifies the details of the service to be provided in terms of metrics
agreed upon by all parties, and incentives and penalties for meeting and violating
the expectations, respectively.
• The availability of a banking system within the market ensures that financial
transactions pertaining to SLAs between participants are carried out in a secure
and dependable environment.
Inter Grid Gateway
• Peering arrangements established between gateways enable the allocation of resources from multiple grids to establish the execution environment.
• Figure below illustrates a scenario in which an IGG allocates resources from a local
cluster to deploy applications in 3 steps:
• (1) requesting the VMs,
• (2) enactment of the leases, and
• (3) deployment of the VMs as requested. Under peak demand, this IGG interacts with another IGG that can allocate resources from a cloud computing provider.
• A grid has predefined peering arrangements with other grids, which the inter-grid gateway
(IGG) manages.
• Through multiple IGGs, the system coordinates the use of InterGrid resources.
• An IGG is aware of the peering terms with other grids, selects suitable grids that can
provide the required resources, and replies to requests from other IGGs.
• Request redirection policies determine which peering grid InterGrid selects to process a
request and a price for which that grid will perform it.
• An IGG can also allocate resources from a cloud provider.
• The cloud system creates a virtual environment to help users deploy their applications.
These applications use the distributed grid resources.
• The InterGrid allocates and provides a distributed virtual
environment (DVE). This is a virtual cluster of VMs that runs isolated
from other virtual clusters.
• A component, called the DVE manager, performs resource allocation and management on behalf of specific user applications.
• The core component of the IGG is a scheduler to implement the
provisioning policies and peering with other gateways.

Cloud Security
• Cloud security, also known as cloud computing security, consists of a set of policies,
controls, procedures and technologies that work together to protect cloud-based systems,
data, and infrastructure.
• These security measures are configured to protect cloud data, support regulatory
compliance and protect customers' privacy as well as setting authentication rules for
individual users and devices.
• From authenticating access to filtering traffic, cloud security can be configured to the
exact needs of the business.
• And because these rules can be configured and managed in one place, administration
overheads are reduced and IT teams empowered to focus on other areas of the business.
• The way cloud security is delivered will depend on the individual cloud provider or the
cloud security solutions in place.
• However, implementation of cloud security processes should be a joint responsibility
between the business owner and solution provider.
Basic Cloud Security
• Three basic cloud security enforcements are expected.
• First, facility security in data centers demands on-site security year round.
Biometric readers, CCTV (close circuit TV), motion detection, and man traps
are often deployed. (on-site security at data center→ Biometric, CCTV, motion
detection, man traps)
• Second, Network security demands fault-tolerant external firewalls, intrusion
detection systems (IDSes), and third-party vulnerability assessment.
• Finally, platform security demands SSL and data encryption, strict password
policies, and system trust certification (providing trust certificate in using
platform). (SSL (Secure Sockets Layer) is a security technology used to secure
transactions b/w server and browser)
Cloud security benefits
• Cloud security offers many benefits, including:
• Centralized security: Just as cloud computing centralizes applications and data, cloud security centralizes protection. Cloud-based business networks consist of numerous devices and endpoints that can be difficult to manage individually. Managing these entities centrally enhances traffic analysis and web filtering, streamlines the monitoring of network events, and results in fewer software and policy updates. Disaster recovery plans can also be implemented and actioned easily when they are managed in one place.
• Reduced costs: One of the benefits of utilizing cloud storage and security is that it eliminates the need to invest in dedicated
hardware. Not only does this reduce capital expenditure, but it also reduces administrative overheads. Cloud security delivers
proactive security features that offer protection 24/7 with little or no human intervention.
• Reduced Administration: When we choose a reputable cloud services provider or cloud security platform, we can say goodbye to
manual security configurations and almost constant security updates. These tasks can have a massive drain on resources, but when
we move them to the cloud, all security administration happens in one place and is fully managed on our behalf.
• Reliability: Cloud computing services offer the ultimate in dependability. With the right cloud security measures in place, users can
safely access data and applications within the cloud no matter where they are or what device they are using.
Security Issues in Cloud Computing :
• There is no doubt that Cloud Computing provides various Advantages but there are also some security issues in cloud computing.
Below are some following Security Issues in Cloud Computing as follows.
• Data Loss
Data loss is one of the issues faced in cloud computing. It is also known as data leakage. Our sensitive data is in the hands of somebody else, and we do not have full control over our database. So, if the security of the cloud service is breached by hackers, they may get access to our sensitive data or personal files.

• Interference of Hackers and Insecure APIs


As we know, if we are talking about the cloud and its services, we are talking about the Internet. Also, the easiest way to communicate with the cloud is through APIs, so it is important to protect the interfaces and APIs that are used by external users. In cloud computing, a few services are available in the public domain; these are a vulnerable part of cloud computing because they may be accessed by third parties. So it may be possible that, with the help of these services, hackers can easily hack or harm our data.

• User Account Hijacking


Account hijacking is the most serious security issue in cloud computing. If the account of a user or an organization is hijacked by a hacker, then the hacker has full authority to perform unauthorized activities.
• Changing Service Provider
Vendor lock-in is also an important security issue in cloud computing. Many organizations face various problems while shifting from one vendor to another. For example, if an organization wants to shift from AWS to Google Cloud services, it faces problems such as moving all of its data; because both cloud services have different techniques and functions, it also faces problems arising from those differences. In addition, the charges of AWS may differ from those of Google Cloud, etc.

• Lack of Skill
Working with the cloud, shifting to another service provider, needing an extra feature, and knowing how to use a feature are the main problems in IT companies that do not have skilled employees. So working with cloud computing requires skilled people.

• Denial of Service (DoS) attack


This type of attack occurs when the system receives too much traffic. DoS attacks mostly target large organizations such as the banking sector, government sector, etc. When a DoS attack occurs, data may be lost, and recovering it requires a great amount of money as well as time.
SaaS Security
• SaaS Security refers to securing user privacy and corporate data in
subscription-based cloud applications. SaaS applications carry a
large amount of sensitive data and can be accessed from almost any
device by a mass of users, thus posing a risk to privacy and sensitive
information.
Why is SaaS Security important?

• SaaS (Software as a Service) has become increasingly popular in recent years due
to its flexibility, cost-effectiveness, and scalability. However, this popularity also
means that SaaS providers and their customers face significant security challenges.
• SaaS Security is important because:
• It ensures sensitive data is well-protected and not compromised by hackers, malicious insiders, or other cyber threats.
• SaaS security helps avoid severe consequences such as legal liabilities, damage to
reputation and loss of customers.
• Aids in increasing customers’ trust in the SaaS provider.
• Aids in compliance with security standards and regulations.
• Ensures the security and protection of applications and data hosted from cyber
threats, minimizing the chances of data breaches and other security incidents.
Challenges in SaaS security
• Some of the most significant challenges in SaaS security include:
• 1. Lack of Control
• SaaS providers typically host applications and data in the cloud, meaning that customers have less direct control over their security.
This can make it challenging for customers to monitor and manage security effectively.

• 2. Access Management
• SaaS applications typically require users to log in and authenticate their identity. However, managing user access can be
challenging, particularly if the provider is hosting applications for multiple customers with different access requirements.

• 3. Data Privacy
• SaaS providers may be subject to data privacy regulations, which can vary by jurisdiction. This can make it challenging to ensure
compliance with all relevant laws and regulations, particularly if the provider hosts data for customers in multiple countries.

• 4. Third-party integration
• SaaS providers may integrate with third-party applications, such as payment processors or marketing platforms. However, this can
increase the risk of security incidents, as vulnerabilities in third-party software can potentially affect the entire system.

• 5. Continuous monitoring
• SaaS providers must continuously monitor their systems for security threats and vulnerabilities. This requires a high level of
expertise and resources to detect and respond to security incidents effectively.
What makes SaaS applications risky?
• 1. Virtualization
• Cloud computing systems run on virtual servers to store and manage multiple accounts and machines, unlike traditional networking
systems. In such a case, if even a single server is compromised it could put multiple stakeholders at risk. Though virtualization
technology has improved significantly over time, it still poses vulnerabilities that are often easy targets for cybercriminals. When
properly configured and implemented with strict security protocols, it can provide significant protection from numerous threats.
• 2. Managing identity
• Many SaaS providers allow for Single Sign-on (SSO) abilities to ease access to applications greatly. This is most helpful when there
are multiple SaaS applications and access is role-based. Some of the providers do have secure data access systems, however, with an
increase in the number of applications, it becomes quite complicated and difficult to manage securely.
• 3. Standards for cloud services
• SaaS security can vary greatly based on the provider and the standards they maintain. Not all SaaS providers conform to globally accepted SaaS security standards, and even for those that do, the standards can be complicated and might not include SaaS-specific certification. Standards such as ISO 27001 can offer a certain level of confidence; however, if not carefully evaluated, they might not have all security avenues covered under the certification.
• 4. Obscurity
• To be completely confident regarding SaaS security, customers must know in detail how everything works. If a SaaS provider tries to be too obscure about the backend details, consider it a red flag. Most popular SaaS providers are transparent about their backend processes; however, several providers may not disclose details such as their security protocols and multi-tenant infrastructure. In such cases, Service Level Agreements (SLAs) are useful since they compel the provider to disclose all responsibilities. After all, customers have a right to know how their data is protected against cyber-attacks and information exposure
among other SaaS risks.
• 5. Data location
• SaaS tools might store clients’ data in a different geographical region, and not all providers can promise a particular storage location, due to factors such as data laws and cost. Sometimes clients are comfortable only when their data is stored within their own country. Data location should also take into account factors such as data latency and load balancing.
• 6. Access from anywhere
• SaaS apps can be accessed from anywhere and that is one of the reasons which makes them more appealing.
However, this feature has its own set of risks. Incidents such as accessing the application using an infected
mobile device or public WiFi without any VPN could compromise the server. If the endpoints are not secure, attackers can use them to enter the server.
• 7. Data control
• Since all data will be hosted on the cloud, clients do not have complete control over it. If something goes
wrong, clients are at the mercy of the SaaS provider. Once a pricing model is agreed to, the provider becomes
responsible for storing and managing data. In such cases, clients often worry about who has access to it,
scenarios of data corruption, and access by third parties and competitors, to name a few. When sensitive data
is stored, answers to these queries become much more crucial.
SaaS Security Best Practices
• No system is completely safe; SaaS offerings also have security concerns that need to be resolved. By following the security practices below, you can leverage the powerful features and advantages of SaaS without worrying about security.
• 1. End-to-end data encryption
• This means that all interaction between server and user happens over SSL/TLS connections and is encrypted. However, end-to-end encryption should also exist for data storage. Many providers have the option
to encrypt the data by default, while some clients need to explicitly specify this. Clients can also have the
option to encrypt specific fields such as financial details by using Multi-domain SSL certificates.
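• A minimal sketch of encrypting a sensitive field at rest with a symmetric key is shown below, using the third-party cryptography package's Fernet API; in a real SaaS system the key would come from a key-management service, and the field value here is only an example.

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # in practice, fetch from a KMS, never hard-code
    fernet = Fernet(key)

    ciphertext = fernet.encrypt(b"card_number=4111111111111111")   # example field
    plaintext  = fernet.decrypt(ciphertext)

    print(ciphertext[:16], b"...")       # the opaque token is what gets stored at rest
    print(plaintext)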
• 2. Vulnerability testing
• You can expect SaaS providers to make high claims regarding SaaS security. But the onus to verify these
claims can end up with the clients. If the SaaS provider has tools or checks, they should be reliable and meet
all standards. Apart from these, you should also ensure that intensive checks are done on the SaaS systems.
• There are multiple ways to assess SaaS security, such as automated tools or manually by security experts. A
comprehensive SaaS security check should meet both automated and manual checks since it would also
consider real-world scenarios and the latest threats. A number of quality SaaS security solutions are available
to help you with the security testing process.
• 3. Policies for data deletion
• Data deletion policies play an important role in keeping customers’ data safe. SaaS providers should be clear in
declaring their data deletion policies to their clients. These policies are mentioned in the service agreement
and should include what would happen after the customer’s data retention timeline ends. When applicable,
client data should be programmatically deleted from the server and respective logs should be generated.
• 4. Data security at the user level
• Multiple levels of SaaS security can limit the damage from cyber-attacks. At the user level, security protocols
such as role-based permissions and access, and enforced distribution of tasks, will protect your system from
attacks that leverage internal security gaps.
• 5. Virtual Private Network/Virtual Private Cloud
• VPN and VPC provide a safe environment for clients for their operation and data storage. These are better
options and more secure than multi-tenant systems. These also enable users to log in and use SaaS
applications from anywhere by securing endpoints and protecting the infrastructure.
• 6. Virtual Machine Management
• Your virtual machine needs to be updated regularly to maintain a secure infrastructure. Keep up with the latest
threats and patches on the market and deploy them timely to protect your VM.
• 7. Scalability & Reliability
• SaaS offers great scalability (both vertical as well as horizontal) & reliability features. You have the benefit of
adding a new enhanced feature or additional resources as per your wish. Scaling cannot be realized instantly,
thus the vendor must put together a plan for horizontal redundancy. A CDN (Content delivery network) adds
more robustness to scaling.
• 8. Transport Layer Security and configuration certificates
• SaaS security is greatly enhanced when a provider protects externally transmitted data using Transport Layer
Security. Moreover, TLS also improves privacy between communicating applications and users. Make sure
that the certificates are appropriately configured and follow security protocols. The same applies to internal
data too. Internal data should also be stored in an encrypted format and any intra-application transfer should
be protected. Further, cookie security should be looked into as well.
• 9. User privileges and multi-factor authentication
• Different categories of users should have different levels of privileges. Cybercriminals often misuse excess privileges to access the core files of an application, so admins should have exclusive access to crucial files and folders. Authentication is also a major point of entry for attackers; multi-factor authentication is now the standard for logging into applications, so make sure the SaaS application supports it.
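• To illustrate role-based privileges, below is a minimal Python sketch with a hypothetical permission table; real SaaS applications would back this with their IAM or authorization service.

# Minimal sketch of role-based access to crucial files: only admins may touch configuration.
ROLE_PERMISSIONS = {
    "admin":  {"read_config", "edit_config", "read_reports"},
    "editor": {"read_reports", "edit_reports"},
    "viewer": {"read_reports"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the given role holds the requested permission."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("admin", "edit_config")
assert not is_allowed("viewer", "edit_config")    # least privilege in action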
• 10. Logs
• Logs help in monitoring SaaS security incidents and in detecting cyber attacks. SaaS systems should generate logs automatically, and these logs should be made available to clients to assist in audits or regular monitoring.
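• A minimal sketch of automatic, structured logging with Python's standard logging module is shown below; the event fields are hypothetical, and a production system would ship these records to a central log store or SIEM.

# Minimal sketch of structured audit logging for a SaaS application.
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

def record_event(user: str, action: str, resource: str, success: bool) -> None:
    """Write one audit record as a JSON line."""
    audit_log.info(json.dumps({
        "user": user,
        "action": action,
        "resource": resource,
        "success": success,
    }))

record_event("alice@example.com", "login", "dashboard", True)
record_event("bob@example.com", "delete", "invoice-42", False)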
• 11. Data Loss Prevention
• Data Loss Prevention (DLP) consists of two parts: detection and action. DLP systems can scan outgoing or transferred data for sensitive information through keyword and phrase searches. Once detected, data transfer
is blocked preventing any leakage. For a robust system, the DLP system can send alerts to the administrator
who verifies if the detection is correct. There are also SaaS APIs that enforce DLP protocols in your
application.
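• The detection half can be as simple as pattern matching. Below is a minimal Python sketch with illustrative patterns only; production DLP products use far richer rule sets and an action step that blocks the transfer or alerts an administrator.

# Minimal sketch of DLP-style detection: scan outgoing text for sensitive patterns.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "keyword":     re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def scan_outgoing(text: str) -> list[str]:
    """Return the names of all patterns matched in the outgoing payload."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

hits = scan_outgoing("Please treat this as Confidential: card 4111 1111 1111 1111")
if hits:
    print("Blocking transfer, matched:", hits)   # in practice: block and alert an administrator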
• 12. Deployment security
• Deployment can be done either on a public cloud service or through a SaaS vendor. If you decide to self-deploy your SaaS application, you need to test its security thoroughly and adopt sufficient safeguards to protect it against cyber attacks.
• Most of the big cloud providers take care of a large share of your SaaS security needs; however, when opting for a public cloud vendor, make sure that they follow globally accepted standards. Asking for a pentest report while making a vendor assessment is fair play on your part.
VM security
• A virtual machine (VM) is a digital substitute for a physical computer.
• Virtual machine software can run programs and operating systems, store data, connect to networks, and perform other computing operations digitally.
• Still, a VM requires regular maintenance, such as updates and system monitoring. Because a VM stores and accesses large amounts of data, it can be misused and manipulated if left unprotected.
• Therefore, we must apply appropriate security controls to protect that data.
Securing a VM: Best Practices
• Securing a virtual machine in a cloud environment requires careful planning and implementation of various
security measures. Here are the essential steps:
• Implement strong access controls for VMs using multi-factor authentication, strong passwords, and role-based
access control (RBAC) to ensure only authorized users can access your VM.
• Use encryption to protect data stored on the VM. This includes using encryption for data at rest and data in
transit. We can use encryption protocols such as HTTPS, SSL/TLS, and SSH to protect data in transit. Some
common methods to enable encryption at rest include disk encryption, file-level encryption, and database-level
encryption.
• Use vulnerability management and patching to regularly update software and operating systems with the
latest security patches and updates. This will help to close any known vulnerabilities in the system.
• Use endpoint protection such as antivirus solutions to protect the VM from malware and other security
threats. Ensure that the endpoint security software is up-to-date and configured correctly.
• Use security monitoring regularly for any unusual activities or vulnerabilities. Use tools such as intrusion
detection systems (IDS) and security information and event management (SIEM) to monitor the VM and detect
any security incidents.
• Use backups, as regular data backups can help protect against data loss due to security incidents or other disasters. By backing up data to a separate location or device, the data can be restored if the original is lost or becomes corrupted (a minimal checksum sketch follows this list).
• Follow security hardening and best practices and industry-specific security requirements and regulations.
The National Institute of Standards and Technology (NIST) and Center for Internet Security (CIS) maintain
standards for system hardening best practices.
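• The sketch below illustrates the backup point above: copy a file to backup storage and record a SHA-256 checksum so a later restore or backup audit can verify integrity. The paths are hypothetical, and real deployments would normally use the cloud provider's snapshot or backup service.

# Minimal sketch: back up a file and verify its integrity with a checksum.
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def back_up(source: Path, destination: Path) -> str:
    """Copy the file to backup storage and return its checksum."""
    shutil.copy2(source, destination)
    return sha256_of(destination)

def verify_backup(destination: Path, expected_checksum: str) -> bool:
    """Re-hash the backup and compare against the recorded checksum."""
    return sha256_of(destination) == expected_checksum

# Usage with hypothetical paths:
# checksum = back_up(Path("/data/app.db"), Path("/backups/app.db"))
# assert verify_backup(Path("/backups/app.db"), checksum)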
Security Governance
• Security governance in cloud computing involves establishing and
maintaining a framework of policies, procedures, and controls to ensure the
confidentiality, integrity, and availability of data and resources. Here are key
aspects of security governance in the context of cloud computing:
• Policy Development: Create cloud-specific security policies.
• Risk Management: Identify and mitigate security risks in cloud adoption.
• Access Controls: Implement robust IAM controls for cloud resource
access.
• Data Classification: Classify and secure data based on sensitivity.
• Incident Response Planning: Develop and test cloud-specific incident
response plans.
• Security Awareness: Provide ongoing training for employees on cloud
security best practices.
• Third-Party Security Assessments: Regularly assess cloud service
providers for security compliance.
• Continuous Monitoring: Implement real-time monitoring of cloud
resources for security threats.
• Security Audits: Conduct regular audits to evaluate the effectiveness of
security controls.
• Encryption: Enforce encryption for data at rest and in transit.
• Change Management: Implement controlled processes for changes to
cloud configurations.
IAM
• Identity and Access Management (IAM) is the security discipline that
enables the right individuals to access the right resources at the right
times for the right reasons. IAM addresses the mission-critical need to
ensure appropriate access to resources across increasingly
heterogeneous technology environments.
• Enterprises traditionally used on-premises IAM software to manage
identity and access policies, but nowadays, as companies add more
cloud services to their environments, the process of managing
identities is getting more complex. Therefore, adopting cloud-based
Identity-as-a-Service (IDaaS) and cloud IAM solutions becomes a
logical step.
• In more technical terms, IAM is a means of managing a given set of
users' digital identities, and the privileges associated with each identity.
• It is an umbrella term that covers a number of different products that all do
this same basic function.
• Within an organization, IAM may be a single product, or it may be a
combination of processes, software products, cloud services, and hardware
that give administrators visibility and control over the organizational data
that individual users can access.
• To verify identity, a computer system will assess a user for characteristics
that are specific to them. If they match, the user's identity is confirmed.
These characteristics are also known as "authentication factors," because
they help authenticate that a user is who they say they are.
• The three most widely used authentication factors are:
• Something the user knows
• Something the user has
• Something the user is
• Something the user knows: This factor is a piece of knowledge that
only one user should have, like a username and password combination.
• Something the user has: This factor refers to possession of a physical token that is issued to authorized users; for example, the system sends a verification code to the user's mobile device (which is unique to that person).
• Something the user is: This refers to a physical property of one's
body. A common example of this authentication factor in action is
Face ID, the feature offered by many modern smartphones. Fingerprint
scanning is another example. Less common methods used by some
high-security organizations include retina scans and blood tests.
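• For the "something the user has" factor, a common implementation is a time-based one-time password (TOTP). The minimal sketch below assumes the third-party pyotp package; SMS codes or hardware keys are alternative possession factors.

# Minimal sketch of a TOTP possession factor using the "pyotp" package.
import pyotp

secret = pyotp.random_base32()        # generated once at enrollment, stored with the user profile
totp = pyotp.TOTP(secret)

# The authenticator app on the user's phone computes the same six-digit code.
code_from_user = totp.now()           # stand-in for the code the user types in

if totp.verify(code_from_user):
    print("Second factor accepted")
else:
    print("Second factor rejected")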
Cloud IAM typically includes the following features:
• Authentication and Authorization: Ensures secure user authentication and defines
access permissions.
• Roles and Permissions: Enables creation of roles with specific permissions for the
principle of least privilege.
• Resource Policies: Attaches policies to cloud resources, specifying who can access them and what actions are allowed (a minimal sketch follows this list).
• Federation and SSO: Supports identity federation and Single Sign-On for seamless
access across environments.
• Audit Trails: Generates logs for user activities, aiding in compliance and security
monitoring.
• Temporary Credentials: Provides temporary security credentials, reducing the risk of
long-term exposure.
• Single Access Control Interface. Cloud IAM solutions provide a clean and consistent
access control interface for all cloud platform services. The same interface can be used for
all cloud services.
• Enhanced Security. You can define increased security for critical applications.
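• To make the resource-policy idea concrete, here is a minimal Python sketch of policy evaluation with default deny. The policy structure is hypothetical and far simpler than any real cloud provider's IAM.

# Minimal sketch: evaluate a resource policy that lists which principals may perform which actions.
RESOURCE_POLICIES = {
    "reports-bucket": [
        {"principal": "analyst-role", "actions": {"read"}},
        {"principal": "admin-role",   "actions": {"read", "write", "delete"}},
    ],
}

def is_authorized(principal: str, action: str, resource: str) -> bool:
    """Allow only if some statement on the resource grants the action to the principal."""
    for statement in RESOURCE_POLICIES.get(resource, []):
        if statement["principal"] == principal and action in statement["actions"]:
            return True
    return False   # default deny

assert is_authorized("analyst-role", "read", "reports-bucket")
assert not is_authorized("analyst-role", "delete", "reports-bucket")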
Why do you need Identity and Access
Management?
• Identity and Access Management technology can be used to initiate,
capture, record, and manage user identities and their access
permissions. All users are authenticated, authorized, and evaluated
according to policies and roles.
• Poorly controlled IAM processes may lead to regulatory non-
compliance; if the organization is audited, management may not be
able to prove that company data is not at risk of being misused.
How can Cloud IAM help?
• The ability to spend less on enterprise security by relying on the centralized trust model to
deal with Identity Management across third-party and own applications.
• It enables your users to work from any location and any device.
• You can give them access to all your applications using just one set of credentials
through Single Sign-On.
• You can protect your sensitive data and apps: Add extra layers of security to your
mission-critical apps using Multifactor Authentication.
• It helps maintain compliance of processes and procedures. A typical problem is that
permissions are granted based on employees’ needs and tasks, and not revoked when they
are no longer necessary, thus creating users with lots of unnecessary privileges.
• Auth0 is an identity access management (IAM) provider
• Auth0 can authenticate your users with any identity provider running on any stack, any
device or cloud. It provides Single Sign-On, Multifactor Authentication, Social Login,
and several more features.
Cloud Security Standards
• Cloud-based services are now a crucial component of many
businesses, with technology providers adhering to strict privacy and
data security guidelines to protect the privacy of user information.
Cloud security standards assist and guide organizations in ensuring
secure cloud operations.
Need for Cloud Security Standards
• Ensure cloud computing is an appropriate environment: Organizations need to make sure that
cloud computing is the appropriate environment for the applications as security and mitigating risk
are the major concerns.
• To ensure that sensitive data is safe in the cloud: Organizations need a way to make sure that the
sensitive data is safe in the cloud while remaining compliant with standards and regulations.
• No existing clear standard: Cloud security standards are essential as earlier there were no existing
clear standards that can define what constitutes a secure cloud environment. Thus, making it
difficult for cloud providers and cloud users to define what needs to be done to ensure a secure
environment.
• Need for a framework that addresses all aspects of cloud security: There is a need for businesses to adopt a framework that addresses these issues.
What are Cloud Security Standards
• It was essential to establish guidelines for how work is done in the cloud due to the different
security dangers facing the cloud. They offer a thorough framework for how cloud security is
upheld with regard to both the user and the service provider.
• Cloud security standards provide a roadmap for businesses transitioning from a traditional
approach to a cloud-based approach by providing the right tools, configurations, and policies
required for security in cloud usage.
• It helps to devise an effective security strategy for the organization.
• It also supports organizational goals like privacy, portability, security, and interoperability.
• Certification with cloud security standards increases trust and gives businesses a competitive
edge.
Best Practices For Cloud Security
• 1. Secure Access to the Cloud
• Although the majority of cloud service providers have their own ways of safeguarding the
infrastructure of their clients, you are still in charge of protecting the cloud user accounts and
access to sensitive data for your company. Consider improving password management in your
organization to lower the risk of account compromise and credential theft.
• Adding password policies to your cybersecurity program is a good place to start. Describe the
cybersecurity practices you demand from your staff, such as using unique, complex passwords for
each account and routine password rotation.
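• A password policy like the one described can be enforced in code. Below is a minimal Python sketch; the length threshold and character-class rules are illustrative assumptions, and previous passwords would in practice be stored as salted hashes, never in plaintext.

# Minimal sketch of password-policy validation.
import re

PREVIOUS_PASSWORDS = set()   # in practice, store salted hashes, never plaintext

def violates_policy(password: str) -> list[str]:
    """Return a list of policy violations; an empty list means the password is acceptable."""
    problems = []
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if not re.search(r"[A-Z]", password):
        problems.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        problems.append("no lowercase letter")
    if not re.search(r"\d", password):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        problems.append("no special character")
    if password in PREVIOUS_PASSWORDS:
        problems.append("password was used before")
    return problems

print(violates_policy("Sunny-Day-2024!"))   # [] -> acceptable
print(violates_policy("password"))          # several violations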
• 2. Control User Access Rights
• Some businesses give employees immediate access to a wide range of systems and data in order to
make sure they can carry out their tasks effectively. For cybercriminals, these individuals’ accounts
are a veritable gold mine because compromising them can make it simpler to gain access to crucial
cloud infrastructure and elevate privileges. Your company can periodically review and revoke user
rights to prevent this.
• 3. Transparency and Employee Monitoring
• You can use specialized solutions to keep an eye on the behavior of your staff in order to promote
transparency in your cloud infrastructure. You can spot the earliest indications of a cloud account
compromise or an insider threat by keeping an eye on what your employees are doing while they
are at work. Imagine your cybersecurity experts discover a user accessing your cloud infrastructure
from a strange IP address or outside of normal business hours. In that situation, they’ll be able to
respond to such odd activity promptly because it suggests that a breach may be imminent.
• 4. Data Protection
• This involves data protection against unauthorized access, prevention of accidental data disclosure,
and ensuring ceaseless access to crucial data in the case of failures and errors.
• 5. Access Management
• Three capabilities that are a must in access management are the ability to identify and authenticate
users, the ability to assign access rights to users, and the ability to develop and enact access control
policies for all the resources.
Common Cloud Security Standards
• 1. NIST (National Institute of Standards and Technology)
• NIST is a US federal organization that creates metrics and standards to boost competitiveness in the scientific and technology industries. NIST developed the Cybersecurity Framework to help organizations comply with US regulations such as the Federal Information Security Management Act (FISMA) and the Health Insurance Portability and Accountability Act (HIPAA). NIST places a strong emphasis on classifying assets according to their commercial value and protecting them adequately.
• 2. ISO-27017
• An extension of ISO-27001 that adds provisions specific to cloud-based information security. ISO-27017 compliance should be considered alongside ISO-27001 compliance. The standard offers further guidance in the cloud computing information security field; its purpose is to supplement the advice provided in ISO/IEC 27002 and various other ISO27k standards, such as ISO/IEC 27018 on the privacy implications of cloud computing and ISO/IEC 27031 on business continuity.
• 3. ISO-27018
• The protection of personally identifiable information (PII) in public clouds that serve as PII processors is
covered by this standard. Despite the fact that this standard is especially aimed at public-cloud service
providers like AWS or Azure, PII controllers (such as a SaaS provider processing client PII in AWS)
nevertheless bear some accountability. If you are a SaaS provider handling PII, you should think about
complying with this standard.
• 4. CIS controls
• Organizations can secure their systems with the help of the Center for Internet Security (CIS) Controls, which are open-source, consensus-based policies. Each check is rigorously reviewed by a number of professionals before a conclusion is reached.
To easily access a list of evaluations for cloud security, consult the CIS Benchmarks customized for particular cloud service providers. For instance, you can use the CIS AWS controls, a set of controls created especially for workloads using Amazon Web Services (AWS).
• 5. FISMA
• In accordance with the Federal Information Security Management Act (FISMA), all federal agencies and their contractors are required to safeguard information systems and assets. Under FISMA, NIST was given authority to define the framework's security standards, which it does through NIST SP 800-53.
• 6. Cloud Architecture Framework
• These frameworks, which frequently cover operational effectiveness, security, and cost-value factors, can be viewed as best-practice standards for cloud architects. The framework developed by Amazon Web Services (the AWS Well-Architected Framework), for example, aids architects in designing workloads and applications on the Amazon cloud. It gives customers a reliable resource for architecture evaluation, based on a collection of questions for analyzing cloud environments.
• 7. General Data Protection Regulation (GDPR)
• For the European Union, there are laws governing data protection and privacy. Even though this law only
applies to the European Union, it is something you should keep in mind if you store or otherwise handle any
personal information of residents of the EU.
• 8. SOC Reporting
• A System and Organization Controls 2 (SOC 2) report is a form of audit of the operational processes used by IT businesses offering any service. SOC 2 reporting is a worldwide standard for cybersecurity risk management systems. A SOC 2 audit report shows that your company's policies, practices, and controls are in place to meet the five trust service principles: security, availability, processing integrity, confidentiality, and privacy. If you offer software as a service, potential clients might request proof that you adhere to SOC 2 standards.
• 9. PCI DSS
• For all merchants who use credit or debit cards, the PCI DSS (Payment Card Industry Data Security Standard)
provides a set of security criteria. For businesses that handle cardholder data, there is PCI DSS. The PCI DSS
specifies fundamental technological and operational criteria for safeguarding cardholder data. Cardholders are
intended to be protected from identity theft and credit card fraud by the PCI DSS standard.
• 10. HIPAA
• The Health Insurance Portability and Accountability Act (HIPAA), passed by the US Congress to safeguard
individual health information, also has parts specifically dealing with information security. Businesses that
handle medical data must abide by HIPAA law. The HIPAA Security Rule (HSR) is the most relevant part in terms of information security: it specifies rules for protecting the electronic personal health information that a covered entity generates, acquires, uses, or maintains.
• Organizations subject to HIPAA regulations need risk evaluations and risk management plans to reduce
threats to the availability, confidentiality, and integrity of the crucial health data they manage. Assume your
company sends and receives health data via cloud-based services (SaaS, IaaS, PaaS). If so, it is your
responsibility to make sure the service provider complies with HIPAA regulations and that you have
implemented best practices for managing your cloud setups.
• 11. CIS AWS Foundations v1.2
• Any business that uses Amazon Web Services cloud resources can help safeguard sensitive IT systems and data by adhering to the CIS AWS Foundations Benchmark. The CIS (Center for Internet Security) Benchmarks are a set of objective, consensus-driven configuration standards developed by security professionals to help businesses improve their information security. The CIS AWS Foundations Benchmark provides procedures for hardening AWS accounts, building a solid foundation for running workloads on AWS.
• 12. ACSC Essential Eight
• ACSC Essential 8 (also known as the ASD Top 4) is a list of eight cybersecurity mitigation strategies for small
and large firms. In order to improve security controls, protect businesses’ computer resources and systems, and
protect data from cybersecurity attacks, the Australian Signals Directorate (ASD) and the Australian Cyber
Security Centre (ACSC) developed the “Essential Eight Tactics.”