
Unit - 4

1. Resource Provisioning
The emergence of computing clouds suggests fundamental changes in software and hardware
architecture. Cloud architecture puts more emphasis on the number of processor cores or VM
instances.
1.1 Provisioning of Compute Resources (VMs)
Providers supply cloud services by signing SLAs with end users. The SLAs must commit
sufficient resources such as CPU, memory, and bandwidth that the user can use for a preset
period. Under-provisioning of resources will lead to broken SLAs and penalties. Over-provisioning
of resources will lead to resource underutilization and, consequently, a decrease in
revenue for the provider. Deploying an autonomous system to efficiently provision resources to
users is a challenging problem. The difficulty comes from the unpredictability of consumer
demand, software and hardware failures, heterogeneity of services, power management, and
conflicts in signed SLAs between consumers and service providers. Resource provisioning
schemes also demand fast discovery of services and data in cloud computing infrastructures.

1.2 Resource Provisioning Methods


Figure 1 shows three cases of static cloud resource provisioning policies. In case (a), over
provisioning with the peak load causes heavy resource waste (shaded area). In case (b), under
provisioning (along the capacity line) of resources results in losses by both user and provider in
that paid demand by the users (the shaded area above the capacity) is not served and wasted
resources still exist for those demanded areas below the provisioned capacity. In case (c), the
constant provisioning of resources with fixed capacity to a declining user demand could result in
even worse resource waste. The user may give up the service by canceling the demand, resulting
in reduced revenue for the provider. Both the user and provider may be losers in resource
provisioning without elasticity. Three resource-provisioning methods are presented in the
following sections. The demand-driven method provides static resources and has been used in
grid computing for many years. The event-driven method is based on workload predicted over
time. The popularity-driven method is based on monitored Internet traffic.

Figure 1: Three cases of static cloud resource provisioning policies
1.2.1 Demand-Driven Resource Provisioning
This method adds or removes computing instances based on the current utilization level of the
allocated resources. For example, the demand-driven method automatically allocates two Xeon
processors to a user application when the user has been using one Xeon processor more than 60
percent of the time for an extended period. In general, when a resource has surpassed a threshold for a certain
amount of time, the scheme increases that resource based on demand. When a resource is below
a threshold for a certain amount of time, that resource could be decreased accordingly. Amazon
implements such an auto-scale feature in its EC2 platform. This method is easy to implement, but
the scheme does not work well if the workload changes abruptly. The x-axis in Figure 2 is
the time scale in milliseconds. In the beginning, heavy fluctuations of CPU load are encountered.
All three methods demand only a few VM instances initially. Gradually, the utilization rate
stabilizes, with a maximum of 20 VMs (100 percent utilization) provided for demand-driven
provisioning in Figure 2(a). However, the event-driven method reaches a stable peak of 17 VMs
toward the end of the event and drops quickly in Figure 2(b). The popularity-driven provisioning
shown in Figure 2(c) leads to a similar fluctuation, with peak VM utilization in the middle of the
plot.
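
As a minimal sketch of this threshold rule (the utilization thresholds, polling interval, and instance
limits below are illustrative assumptions rather than values taken from the text), a demand-driven
controller can be written as a simple monitoring loop:

```python
import time

# Illustrative thresholds; a real deployment would tune these against its SLAs.
SCALE_UP_UTIL = 0.60      # grow when sustained utilization exceeds 60 percent
SCALE_DOWN_UTIL = 0.20    # shrink when sustained utilization falls below 20 percent
SUSTAIN_PERIODS = 5       # consecutive samples required before acting
MIN_VMS, MAX_VMS = 1, 20

def demand_driven_controller(get_avg_utilization, scale_to, poll_seconds=60):
    """Add or remove VM instances based on the current utilization level."""
    vms = MIN_VMS
    high_streak = low_streak = 0
    while True:
        util = get_avg_utilization()   # caller-supplied: average CPU load across VMs
        high_streak = high_streak + 1 if util > SCALE_UP_UTIL else 0
        low_streak = low_streak + 1 if util < SCALE_DOWN_UTIL else 0
        if high_streak >= SUSTAIN_PERIODS and vms < MAX_VMS:
            vms += 1                   # resource surpassed the threshold: increase it
            scale_to(vms)              # caller-supplied: provision the new VM count
            high_streak = 0
        elif low_streak >= SUSTAIN_PERIODS and vms > MIN_VMS:
            vms -= 1                   # resource below the threshold: decrease it
            scale_to(vms)
            low_streak = 0
        time.sleep(poll_seconds)
```

Amazon's EC2 auto-scaling feature expresses the same idea declaratively, as scaling policies tied to
monitored utilization alarms, rather than as a hand-written loop.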
1.2.2 Event-Driven Resource Provisioning
This scheme adds or removes machine instances based on a specific time event. The scheme
works better for seasonal or predicted events such as Christmas time in the West and the Lunar
New Year in the East. During these events, the number of users grows before the event period
and then decreases during the event period. This scheme anticipates peak traffic before it
happens. The method results in a minimal loss of QoS, if the event is predicted correctly.
Otherwise, wasted resources are even greater due to events that do not follow a fixed pattern.
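
As a hypothetical sketch of this scheme, capacity can be planned from a calendar of predicted events
(the dates and VM counts below are made-up examples, not figures from the text):

```python
from datetime import date

# Hypothetical calendar of predicted events: (ramp-up start, end, VMs to pre-provision).
EVENT_CALENDAR = [
    (date(2024, 12, 18), date(2024, 12, 27), 18),   # Christmas peak in the West
    (date(2025, 1, 25), date(2025, 2, 3), 15),      # Lunar New Year peak in the East
]
BASELINE_VMS = 4

def planned_capacity(today: date) -> int:
    """Return the VM count scheduled for a given day (time-event-based provisioning)."""
    for start, end, vms in EVENT_CALENDAR:
        if start <= today <= end:
            return vms                # anticipate the peak before and during the event
    return BASELINE_VMS               # otherwise fall back to normal capacity
```

If the prediction is wrong, the pre-provisioned capacity simply sits idle, which is exactly the
wasted-resource risk noted above.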

1.2.3 Popularity-Driven Resource Provisioning


In this method, Internet searches are monitored for the popularity of certain applications, and
instances are created according to popularity demand. The scheme anticipates increased traffic
with growing popularity. Again,
the scheme has a minimal loss of QoS, if the predicted popularity is correct. Resources may be
wasted if traffic does not occur as expected. In Figure 2(c), EC2 performance by CPU utilization
rate (the dark curve with the percentage scale shown on the left) is plotted against the number of
VMs provisioned (the light curves with scale shown on the right, with a maximum of 20 VMs
provisioned).
Figure 2: CPU utilization and number of VMs provisioned over time under (a) demand-driven, (b) event-driven, and (c) popularity-driven provisioning
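
A rough sketch of the popularity-driven method, sizing the VM pool from a monitored popularity
signal such as search-query or page-view counts (the signal source and scaling ratio are assumptions
made for illustration):

```python
def popularity_driven_vms(recent_counts, vms_per_1000_hits=2, min_vms=1, max_vms=20):
    """Size the VM pool from a monitored Internet-traffic popularity signal.

    recent_counts: hourly search-query or page-view counts for the application, oldest first.
    """
    if not recent_counts:
        return min_vms
    # Weight the latest samples more heavily to anticipate a rising popularity trend.
    weights = range(1, len(recent_counts) + 1)
    trend = sum(w * c for w, c in zip(weights, recent_counts)) / sum(weights)
    vms = round(trend / 1000 * vms_per_1000_hits)
    return max(min_vms, min(max_vms, vms))
```

For example, popularity_driven_vms([800, 1200, 2500, 4000]) returns 5, provisioning ahead of the
rising trend.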

2. Global Exchange of Cloud Resources / Inter Cloud Resource Management


In order to support a large number of application service consumers from around the world,
cloud infrastructure providers (i.e., IaaS providers) have established data centers in multiple
geographical locations to provide redundancy and ensure reliability in case of site failures. For
example, Amazon has data centers in the United States (e.g., one on the East Coast and another
on the West Coast) and Europe. However, currently Amazon expects its cloud customers (i.e.
SaaS providers) to express a preference regarding where they want their application services to
be hosted. Amazon does not provide seamless/automatic mechanisms for scaling its hosted
services across multiple geographically distributed data centers. This approach has many
shortcomings. First, it is difficult for cloud customers to determine in advance the best location
for hosting their services as they may not know the origin of consumers of their services. Second,
SaaS providers may not be able to meet the QoS expectations of their service consumers
originating from multiple geographical locations. This necessitates building mechanisms for
seamless federation of data centers of a cloud provider or providers supporting dynamic scaling
of applications across multiple domains in order to meet QoS targets of cloud customers. Figure
3 shows the high-level components of the Melbourne group’s proposed InterCloud architecture.

Figure 3: Inter-cloud exchange of cloud resources through brokering

In addition, no single cloud infrastructure provider will be able to establish its data centers at all
possible locations throughout the world. As a result, cloud application service (SaaS) providers
will have difficulty in meeting QoS expectations for all their consumers. Hence, they would like
to make use of services of multiple cloud infrastructure service providers who can provide better
support for their specific consumer needs. This kind of requirement often arises in enterprises
with global operations and applications such as Internet services, media hosting, and Web 2.0
applications. This necessitates federation of cloud infrastructure service providers for seamless
provisioning of services across different cloud providers. To realize this, the Cloudbus Project at
the University of Melbourne has proposed InterCloud architecture supporting brokering and
exchange of cloud resources for scaling applications across multiple clouds. By realizing
InterCloud architectural principles in mechanisms in their offering, cloud providers will be able
to dynamically expand or resize their provisioning capability based on sudden spikes in
workload demands by leasing available computational and storage capabilities from other cloud
service providers.
The InterCloud consists of client brokering and coordinator services that support utility-driven
federation of clouds: application scheduling, resource allocation, and migration of workloads.
The architecture cohesively couples the administratively and topologically distributed storage
and compute capabilities of clouds as part of a single resource leasing abstraction. The system
will ease cross-domain capability integration for on-demand, flexible, energy-efficient, and
reliable access to the infrastructure based on virtualization technology.
The Cloud Exchange (CEx) acts as a market maker for bringing together service producers and
consumers. It aggregates the infrastructure demands from application brokers and evaluates them
against the available supply currently published by the cloud coordinators. It supports trading of
cloud services based on competitive economic models such as commodity markets and auctions.
CEx allows participants to locate providers and consumers with fitting offers. Such markets
enable services to be commoditized, and thus will pave the way for creation of dynamic market
infrastructure for trading based on SLAs. An SLA specifies the details of the service to be
provided in terms of metrics agreed upon by all parties, and incentives and penalties for meeting
and violating the expectations, respectively. The availability of a banking system within the
market ensures that financial transactions pertaining to SLAs between participants are carried out
in a secure and dependable environment.
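
As an illustrative sketch only (not the Cloudbus implementation), the market-making role of the
Cloud Exchange can be pictured as matching brokers' aggregated demands against the supply
published by cloud coordinators; the field names and pricing model below are invented for the
example:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Offer:                 # supply published by a Cloud Coordinator
    provider: str
    vms_available: int
    price_per_vm_hour: float

@dataclass
class Demand:                # infrastructure demand submitted by an application broker
    broker: str
    vms_needed: int
    max_price_per_vm_hour: float

def match(offers: List[Offer], demands: List[Demand]) -> List[Tuple[str, str, int, float]]:
    """Commodity-market-style matching: serve each demand from the cheapest acceptable supply."""
    allocations = []
    offers = sorted(offers, key=lambda o: o.price_per_vm_hour)
    for d in sorted(demands, key=lambda d: -d.max_price_per_vm_hour):
        for o in offers:
            if o.vms_available == 0 or o.price_per_vm_hour > d.max_price_per_vm_hour:
                continue
            granted = min(o.vms_available, d.vms_needed)
            allocations.append((d.broker, o.provider, granted, o.price_per_vm_hour))
            o.vms_available -= granted
            d.vms_needed -= granted
            if d.vms_needed == 0:
                break
    return allocations        # in the real exchange, each allocation would be bound by an SLA
```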

3.Cloud Security & Security Challenges


Although virtualization and cloud computing can help companies accomplish more by breaking
the physical bonds between an IT infrastructure and its users, heightened security threats must be
overcome in order to benefit fully from this new computing paradigm. This is particularly true
for the SaaS provider. Some security concerns are worth more discussion. For example, in the
cloud, we lose control over assets in some respects, so our security model must be reassessed.
Enterprise security is only as good as the least reliable partner, department, or vendor. Can we
trust our data to our service provider? With the cloud model, we lose control over physical
security. In a public cloud, we are sharing computing resources with other companies. In a
shared pool outside the enterprise, we don’t have any knowledge or control of where the
resources run. Exposing our data in an environment shared with other companies could give the
government “reasonable cause” to seize our assets because another company has violated the
law. Simply sharing the environment in the cloud may put our data at risk of seizure.

Storage services provided by one cloud vendor may be incompatible with another vendor’s
services should you decide to move from one to the other. Vendors are known for creating what
the hosting world calls “sticky services”—services that an end user may have difficulty
transporting from one cloud vendor to another (e.g., Amazon’s “Simple Storage Service” [S3]
is incompatible with IBM’s Blue Cloud, or Google, or Dell).

If information is encrypted while passing through the cloud, who controls the
encryption/decryption keys? Is it the customer or the cloud vendor? Most customers probably
want their data encrypted both ways across the Internet using SSL (the Secure Sockets Layer
protocol). They also most likely want their data encrypted while it is at rest in the cloud vendor’s
storage pool. Be sure that we, the customer, control the encryption/decryption keys, just as if the
data were still resident on our own servers.

Data integrity means ensuring that data is identically maintained during any operation (such as
transfer, storage, or retrieval). Put simply, data integrity is assurance that the data is consistent
and correct. Ensuring the integrity of the data really means that it changes only in response to
authorized transactions. This sounds good, but we must remember that a common standard to
ensure data integrity does not yet exist.

As more and more mission-critical processes are moved to the cloud, SaaS suppliers will have to
provide log data in a real-time, straightforward manner, probably for their administrators as well
as their customers’ personnel. Someone has to be responsible for monitoring for security and
compliance, and unless the application and data are under the control of end users, they will not
be able to do so. Will customers trust the cloud provider enough to push their mission-critical
applications out to the cloud? Since the SaaS provider’s logs are internal and not necessarily
accessible externally or by clients or investigators, monitoring is difficult. Since access to logs is
required for Payment Card Industry Data Security Standard (PCI DSS) compliance and may be
requested by auditors and regulators, security managers need to negotiate access to the
provider’s logs as part of any service agreement.

Cloud applications undergo constant feature additions, and users must keep up to date with
application improvements to be sure they are protected. The speed at which applications change
in the cloud will affect both the SDLC and security. For example, Microsoft’s SDLC assumes
that mission-critical software will have a three- to five-year period in which it will not change
substantially, but the cloud may require a change in the application every few weeks. Even
worse, a secure SDLC will not be able to provide a security cycle that keeps up with changes that
occur so quickly. This means that users must constantly upgrade, because an older version may
not function or protect the data.

Having proper fail-over technology is a component of securing the cloud that is often
overlooked. The company can survive if a non-mission-critical application goes offline, but this
may not be true for mission-critical applications.

Core business practices provide competitive differentiation. Security needs to move to the data
level, so that enterprises can be sure their data is protected wherever it goes. Sensitive data is the
domain of the enterprise, not the cloud computing provider. One of the key challenges in cloud
computing is data-level security.

4. Software-as-a-Service Security
Cloud computing models of the future will likely combine the use of SaaS (and other XaaS’s as
appropriate), utility computing, and Web 2.0 collaboration technologies to leverage the Internet
to satisfy their customers’ needs. New business models being developed as a result of the move
to cloud computing are creating not only new technologies and business operational processes
but also new security requirements and challenges as described previously. As the most recent
evolutionary step in the cloud service model, SaaS will likely remain the dominant cloud service
model for the foreseeable future and the area where the most critical need for security practices
and oversight will reside. Just as with a managed service provider, corporations or end users will
need to research vendors’ policies on data security before using vendor services to avoid losing
or not being able to access their data. The technology analyst and consulting firm Gartner lists
seven security issues which one should discuss with a cloud-computing vendor:
1. Privileged user access—Inquire about who has specialized access to data, and about the hiring
and management of such administrators.
2. Regulatory compliance—Make sure that the vendor is willing to undergo external audits
and/or security certifications.
3. Data location—Does the provider allow for any control over the location of data?
4. Data segregation—Make sure that encryption is available at all stages, and that these
encryption schemes were designed and tested by experienced professionals.
5. Recovery—Find out what will happen to data in the case of a disaster. Do they offer complete
restoration? If so, how long would that take?
6. Investigative support—Does the vendor have the ability to investigate any inappropriate or
illegal activity?
7. Long-term viability—What will happen to data if the company goes out of business? How
will data be returned, and in what format?

Determining data security is harder today, so data security functions have become more critical
than they have been in the past. A tactic not covered by Gartner is to encrypt the data yourself. If
you encrypt the data using a trusted algorithm, then regardless of the service provider’s security
and encryption policies, the data will only be accessible with the decryption keys. Of course, this
leads to a follow-on problem: How do you manage private keys in a pay-on-demand computing
infrastructure? To address the security issues listed above along with others mentioned earlier in
the chapter, SaaS providers will need to incorporate and enhance security practices used by the
managed service providers and develop new ones as the cloud computing environment evolves.
The baseline security practices for the SaaS environment as currently formulated are discussed in
the following sections.
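
A minimal sketch of the encrypt-it-yourself tactic, applying symmetric encryption on the client
before data ever reaches the provider (the key handling shown is deliberately simplistic; a real
deployment would keep the key in a key-management service or hardware security module):

```python
# pip install cryptography
from cryptography.fernet import Fernet

# The key is generated and retained on the customer side and never given to the provider.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer record: account=12345, balance=9.99"
ciphertext = cipher.encrypt(plaintext)   # only the ciphertext is uploaded to the cloud store

# Later, after downloading the object back from the provider:
restored = cipher.decrypt(ciphertext)
assert restored == plaintext
```

The follow-on problem raised above remains, of course: the decryption key itself must be stored,
rotated, and backed up somewhere the provider cannot read.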

■Security Management (People)


One of the most important actions for a security team is to develop a formal charter for the
security organization and program. This will foster a shared vision among the team of what
security leadership is driving toward and expects, and will also foster “ownership” in the success
of the collective team. The charter should be aligned with the strategic plan of the organization
or company the security team works for. A lack of clearly defined roles and responsibilities, and of
agreement on expectations, can result in a general feeling of loss and confusion among the
security team about what is expected of them, how their skills and experience can be leveraged,
and how they can meet their performance goals. Morale and pride within the team are lowered,
and security suffers as a result.

■Security Governance
A security steering committee should be developed whose objective is to focus on providing
guidance about security initiatives and alignment with business and IT strategies. A charter for
the security team is typically one of the first deliverables from the steering committee. This
charter must clearly define the roles and responsibilities of the security team and other groups
involved in performing information security functions. Lack of a formalized strategy can lead to
an unsustainable operating model and security level as it evolves. In addition, lack of attention to
security governance can result in key needs of the business not being met, including but not
limited to, risk management, security monitoring, application security, and sales support. Lack of
proper governance and management of duties can also result in potential security risks being left
unaddressed and opportunities to improve the business being missed because the security team is
not focused on the key security functions and activities that are critical to the business.

■Risk Management
Effective risk management entails identification of technology assets; identification of data and
its links to business processes, applications, and data stores; and assignment of ownership and
custodial responsibilities. Actions should also include maintaining a repository of information
assets. Owners have authority and accountability for information assets including protection
requirements, and custodians implement confidentiality, integrity, availability, and privacy
controls. A formal risk assessment process should be created that allocates security resources
linked to business continuity.

■Risk Assessment
Security risk assessment is critical to helping the information security organization make
informed decisions when balancing the dueling priorities of business utility and protection of
assets. Lack of attention to completing formalized risk assessments can contribute to an increase
in information security audit findings, can jeopardize certification goals, and can lead to
inefficient and ineffective selection of security controls that may not adequately mitigate
information security risks to an acceptable level. A formal information security risk management
process should proactively assess information security risks as well as plan and manage them on
a periodic or as needed basis. More detailed and technical security risk assessments in the form
of threat modeling should also be applied to applications and infrastructure. Doing so can help
the product management and engineering groups to be more proactive in designing and testing
the security of applications and systems and to collaborate more closely with the internal security
team. Threat modeling requires both IT and business process knowledge, as well as technical
knowledge of how the applications or systems under review work.

■Security Portfolio Management


Given the fast pace and collaborative nature of cloud computing, security portfolio management
is a fundamental component of ensuring efficient and effective operation of any information
security program and organization. Lack of portfolio and project management discipline can lead
to projects never being completed or never realizing their expected return; unsustainable and
unrealistic workloads and expectations because projects are not prioritized according to strategy,
goals, and resource capacity; and degradation of the system or processes due to the lack of
supporting maintenance and sustaining organization planning. For every new project that a
security team undertakes, the team should ensure that a project plan and a project manager with
appropriate training and experience are in place so that the project can be seen through to
completion. Portfolio and project management capabilities can be enhanced by developing
methodology, tools, and processes to support the expected complexity of projects that include
both traditional business practices and cloud computing practices.

■Security Awareness
People will remain the weakest link for security. Knowledge and culture are among the few
effective tools to manage risks related to people. Not providing proper awareness and training to
the people who may need them can expose the company to a variety of security risks for which
people, rather than system or application vulnerabilities, are the threats and points of entry.
Social engineering attacks, lower reporting of and slower responses to potential security
incidents, and inadvertent customer data leaks are all possible and probable risks that may be
triggered by lack of an effective security awareness program. The one-size-fits-all approach to
security awareness is not necessarily the right approach for SaaS organizations; it is more
important to have an information security awareness and training program that tailors the
information and training according to the individual’s role in the organization. For example,
security awareness can be provided to development engineers in the form of secure code and
testing training, while customer service representatives can be provided data privacy and security
certification awareness training. Ideally, both a generic approach and an individual-role approach
should be used.

■Education and Training


Programs should be developed that provide a baseline for providing fundamental security and
risk management skills and knowledge to the security team and their internal partners. This
entails a formal process to assess and align skill sets to the needs of the security team and to
provide adequate training and mentorship—providing a broad base of fundamental security,
inclusive of data privacy, and risk management knowledge. As the cloud computing business
model and its associated services change, the security challenges facing an organization will also
change. Without adequate, current training and mentorship programs in place, the security team
may not be prepared to address the needs of the business.

■Policies, Standards, and Guidelines


Many resources and templates are available to aid in the development of information security
policies, standards, and guidelines. A cloud computing security team should first identify the
information security and business requirements unique to cloud computing, SaaS, and
collaborative software application security. Policies should be developed, documented, and
implemented, along with documentation for supporting standards and guidelines. To maintain
relevancy, these policies, standards, and guidelines should be reviewed at regular intervals (at
least annually) or when significant changes occur in the business or IT environment. Outdated
policies, standards, and guidelines can result in inadvertent disclosure of information as a cloud
computing organizational business model changes. It is important to maintain the accuracy and
relevance of information security policies, standards, and guidelines as business initiatives, the
business environment, and the risk landscape change. Such policies, standards, and guidelines
also provide the building blocks with which an organization can ensure consistency of
performance and maintain continuity of knowledge during times of resource turnover.

■Secure Software Development Life Cycle (SecSDLC)


The SecSDLC involves identifying specific threats and the risks they represent, followed by
design and implementation of specific controls to counter those threats and assist in managing
the risks they pose to the organization and/or its customers. The SecSDLC must provide
consistency, repeatability, and conformance. The SDLC consists of six phases, and there are
steps unique to the SecSDLC in each of the phases:
Phase 1. Investigation: Define project processes and goals, and document them in the program
security policy.
Phase 2. Analysis: Analyze existing security policies and programs, analyze current threats and
controls, examine legal issues, and perform risk analysis.
Phase 3. Logical design: Develop a security blueprint, plan incident response actions, plan
business responses to disaster, and determine the feasibility of continuing and/or
outsourcing the project.
Phase 4. Physical design: Select technologies to support the security blueprint, develop a
definition of a successful solution, design physical security measures to support
technological solutions, and review and approve plans.
Phase 5. Implementation: Buy or develop security solutions. At the end of this phase, present a
tested package to management for approval.
Phase 6. Maintenance: Constantly monitor, test, modify, update, and repair to respond to
changing threats.
In the SecSDLC, application code is written in a consistent manner that can easily be audited and
enhanced; core application services are provided in a common, structured, and repeatable
manner; and framework modules are thoroughly tested for security issues before implementation
and continuously retested for conformance through the software regression test cycle. Additional
security processes are developed to support application development projects such as external
and internal penetration testing and standard security requirements based on data classification.
Formal training and communications should also be developed to raise awareness of process
enhancements.
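
For instance, the continuous retesting for conformance mentioned above is often automated as
security regression tests; the sketch below is a hypothetical example (the endpoints, framework, and
expected status codes are illustrative assumptions, not part of the SecSDLC definition):

```python
# pip install pytest requests   (run with: pytest)
import requests

BASE_URL = "https://app.example.com"   # hypothetical service under test

def test_api_rejects_unauthenticated_requests():
    """Regression check: a protected endpoint must never be readable without credentials."""
    response = requests.get(f"{BASE_URL}/api/customer-records", timeout=10)
    assert response.status_code in (401, 403)

def test_http_is_not_served_in_plaintext():
    """Regression check: plain HTTP must redirect to HTTPS rather than serve content."""
    response = requests.get("http://app.example.com/", timeout=10, allow_redirects=False)
    assert response.status_code in (301, 302, 308)
```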

5. Cloud Security Defense Strategies


A healthy cloud ecosystem is desired to free users from abuses, violence, cheating, hacking,
viruses, rumors, pornography, spam, and privacy and copyright violations. Following are
prominent security models for IaaS, PaaS, and SaaS. These security models are based on various
SLAs between providers and users.

■Basic Cloud Security


Three basic cloud security enforcements are expected. First, facility security in data centers
demands on-site security year round. Biometric readers, CCTV (closed-circuit TV), motion
detection, and mantraps are often deployed. Also, network security demands fault-tolerant
external firewalls, intrusion detection systems (IDSes), and third-party vulnerability assessment.
Finally, platform security demands SSL and data encryption, strict password policies, and system
trust certification. Servers in the cloud can be physical machines or VMs. User interfaces are
applied to request services. The provisioning tool carves out the systems from the cloud to satisfy
the requested service. A security-aware cloud architecture demands security enforcement.
Malware-based attacks such as network worms, viruses, and DDoS attacks exploit system
vulnerabilities. These attacks compromise system functionality or provide intruders unauthorized
access to critical information. Thus, security defenses are needed to protect all cluster servers and
data centers. Here are some cloud components that demand special security protection:

• Protection of servers from malicious software attacks such as worms, viruses, and malware
• Protection of hypervisors or VM monitors from software-based attacks and vulnerabilities
• Protection of VMs and monitors from service disruption and DoS attacks
• Protection of data and information from theft, corruption, and natural disasters
• Providing authenticated and authorized access to critical data and services
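
As one concrete illustration of the network-security layer, the sketch below tightens a perimeter
firewall rule; it assumes an AWS deployment managed with boto3, and the security-group ID and
address ranges are placeholders:

```python
# pip install boto3   (credentials come from the standard AWS configuration)
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allow HTTPS from anywhere, but SSH only from an administrative network.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security-group ID
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "admin SSH only"}]},
    ],
)
```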

■Security Challenges in VMs


Traditional network attacks include buffer overflows, DoS attacks, spyware, malware, rootkits,
Trojan horses, and worms. In a cloud environment, newer attacks may result from hypervisor
malware, guest hopping and hijacking, or VM rootkits. Another type of attack is the man-in-the-
middle attack for VM migrations. In general, passive attacks steal sensitive data or passwords.
Active attacks may manipulate kernel data structures which will cause major damage to cloud
servers. An IDS can be a NIDS or a HIDS. Program shepherding can be applied to control and
verify code execution. Other defense technologies include using the RIO dynamic optimization
infrastructure, VMware's VMsafe and vShield tools, security compliance for hypervisors, and
Intel vPro technology. Others apply a hardened OS environment or use isolated execution and
sandboxing.
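
To make the HIDS idea concrete, here is a minimal host-based file-integrity-checking sketch (the
watched paths are examples; production tools such as Tripwire or OSSEC do this far more robustly):

```python
import hashlib
import os

WATCHED_PATHS = ["/etc/passwd", "/etc/ssh/sshd_config"]   # example critical files

def snapshot(paths):
    """Record a SHA-256 baseline for each watched file."""
    baseline = {}
    for path in paths:
        if os.path.isfile(path):
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def detect_tampering(baseline):
    """Report files whose contents no longer match the recorded baseline."""
    current = snapshot(baseline.keys())
    return [path for path, digest in baseline.items() if current.get(path) != digest]

# Typical use: take the snapshot at deployment time, then re-check periodically.
if __name__ == "__main__":
    base = snapshot(WATCHED_PATHS)
    print("Tampered files:", detect_tampering(base))
```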
■Cloud Defense Methods
Virtualization enhances cloud security. But VMs add an additional layer of software that could
become a single point of failure. With virtualization, a single physical machine can be divided or
partitioned into multiple VMs (e.g., server consolidation). This provides each VM with better
security isolation and each partition is protected from DoS attacks by other partitions. Security
attacks in one VM are isolated and contained from affecting the other VMs. VM failures do not
propagate to other VMs. The hypervisor provides visibility of the guest OS, with complete guest
isolation. Fault containment and failure isolation of VMs provide a more secure and robust
environment. Malicious intrusions may destroy valuable hosts, networks, and storage resources.
Internet anomalies found in routers, gateways, and distributed hosts may stop cloud services.
Trust negotiation is often done at the SLA level. Public Key Infrastructure (PKI) services could
be augmented with data-center reputation systems. Worm and DDoS attacks must be contained.
It is harder to establish security in the cloud because all data and software are shared by default.

■Defense with Virtualization


The VM is decoupled from the physical hardware. The entire VM can be represented as a
software component and can be regarded as binary or digital data. The VM can be saved, cloned,
encrypted, moved, or restored with ease. VMs enable faster disaster recovery. Live migration of
VMs was suggested by many researchers for building distributed intrusion detection systems
(DIDSes). Multiple IDS VMs can be deployed at various resource sites including data centers.
Security policy conflicts must be resolved at design time and updated periodically.
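
A rough sketch of these save/clone/migrate operations on a KVM/libvirt host, driving the virsh
command-line tool from Python (the domain name and destination URI are placeholders, and the
hosts are assumed to share storage for live migration):

```python
import subprocess

DOMAIN = "web-vm-01"                            # placeholder libvirt domain (VM) name
DEST = "qemu+ssh://backup-host.example/system"  # placeholder destination hypervisor URI

def snapshot_vm(domain: str, name: str) -> None:
    """Save the VM state so it can be restored quickly after an incident."""
    subprocess.run(["virsh", "snapshot-create-as", domain, name], check=True)

def live_migrate(domain: str, dest_uri: str) -> None:
    """Move the running VM to another host without shutting it down."""
    subprocess.run(["virsh", "migrate", "--live", domain, dest_uri], check=True)

if __name__ == "__main__":
    snapshot_vm(DOMAIN, "pre-maintenance")
    live_migrate(DOMAIN, DEST)
```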

■Privacy and Copyright Protection


With shared files and data sets, privacy, security, and copyright data could be compromised in a
cloud computing environment. Users desire to work in a software environment that provides
many useful tools to build cloud applications over large data sets. Google’s platform essentially
applies in-house software to protect resources. Amazon EC2 applies HMAC and X.509
certificates in securing resources. It is necessary to protect browser-initiated application software
in the cloud environment. Here are several security features desired in a secure cloud:
• Dynamic web services with full support from secure web technologies
• Established trust between users and providers through SLAs and reputation systems
• Effective user identity management and data-access management
• Single sign-on and single sign-off to reduce security enforcement overhead
• Auditing and copyright compliance through proactive enforcement
• Shifting of control of data operations from the client environment to cloud providers
• Protection of sensitive and regulated information in a shared environment
Figure 4: The typical security structure coordinated by a secured gateway plus external firewalls to safeguard the
access of public or private clouds
