Unit 4 Cloud Computing
1. Resource Provisioning
The emergence of computing clouds suggests fundamental changes in software and hardware
architecture. Cloud architecture puts more emphasis on the number of processor cores or VM
instances.
1.1 Provisioning of Compute Resources (VMs)
Providers supply cloud services by signing SLAs with end users. The SLAs must commit
sufficient resources such as CPU, memory, and bandwidth that the user can use for a preset
period. Under-provisioning of resources leads to broken SLAs and penalties, while
over-provisioning leads to resource underutilization and, consequently, a decrease in
revenue for the provider. Deploying an autonomous system to efficiently provision resources to
users is a challenging problem. The difficulty comes from the unpredictability of consumer
demand, software and hardware failures, heterogeneity of services, power management, and
conflicts in signed SLAs between consumers and service providers. Resource provisioning
schemes also demand fast discovery of services and data in cloud computing infrastructures.
Figure 1
1.2.1 Demand-Driven Resource Provisioning
This method adds or removes computing instances based on the current utilization level of the
allocated resources. For example, the demand-driven method automatically allocates two Xeon
processors to a user application when the user has been using one Xeon processor more than 60
percent of the time for an extended period. In general, when a resource has surpassed a threshold for a certain
amount of time, the scheme increases that resource based on demand. When a resource is below
a threshold for a certain amount of time, that resource could be decreased accordingly. Amazon
implements such an auto-scale feature in its EC2 platform. This method is easy to implement.
The scheme does not work well if the workload changes abruptly. The x-axis in figure 2 is
the time scale in milliseconds. In the beginning, heavy fluctuations of CPU load are encountered.
All three methods demand a few VM instances initially. Gradually, the utilization rate
becomes more stabilized, with a maximum of 20 VMs (100 percent utilization) provided for
demand-driven provisioning in figure 2(a). However, the event-driven method reaches a stable
peak of 17 VMs toward the end of the event and drops quickly in figure 2(b). The popularity-driven
provisioning shown in figure 2(c) leads to a similar fluctuation, with peak VM utilization in the
middle of the plot.
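The threshold rule described above can be sketched as a simple control loop. The class name, thresholds, and sampling window below are illustrative choices, not any provider's actual auto-scale API:

```python
from collections import deque

UPPER, LOWER = 0.60, 0.20   # utilization thresholds (fraction of capacity)
WINDOW = 5                  # consecutive samples required before acting

class DemandDrivenScaler:
    """Add or remove instances when utilization stays past a threshold."""

    def __init__(self, instances=1, max_instances=20):
        self.instances = instances
        self.max_instances = max_instances
        self.samples = deque(maxlen=WINDOW)

    def observe(self, utilization):
        """Record one utilization sample and rescale if a threshold held."""
        self.samples.append(utilization)
        if len(self.samples) < WINDOW:
            return self.instances
        if all(u > UPPER for u in self.samples) and self.instances < self.max_instances:
            self.instances += 1          # sustained overload: scale up
            self.samples.clear()
        elif all(u < LOWER for u in self.samples) and self.instances > 1:
            self.instances -= 1          # sustained idleness: scale down
            self.samples.clear()
        return self.instances

scaler = DemandDrivenScaler()
for u in [0.7, 0.8, 0.9, 0.75, 0.85]:   # five samples above the 60% threshold
    n = scaler.observe(u)
print(n)  # 2
```

The window of consecutive samples is what makes the scheme react only to sustained load; it is also why, as noted above, the scheme lags behind abrupt workload changes.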
1.2.2 Event-Driven Resource Provisioning
This scheme adds or removes machine instances based on a specific time event. The scheme
works better for seasonal or predicted events such as Christmas time in the West and the Lunar
New Year in the East. During these events, the number of users grows before the event period
and then decreases during the event period. This scheme anticipates peak traffic before it
happens. The method results in a minimal loss of QoS if the event is predicted correctly;
otherwise, resources are wasted because the event does not follow the anticipated pattern.
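A minimal sketch of event-driven provisioning, assuming a hand-maintained event calendar; `EVENTS`, `LEAD_DAYS`, and the dates and capacities are all hypothetical. Capacity is raised a few days before each predicted event, matching the pre-event user growth described above:

```python
from datetime import date, timedelta

# Hypothetical event calendar: (start, end, extra_instances).
EVENTS = [
    (date(2024, 12, 20), date(2024, 12, 27), 15),  # Christmas peak
    (date(2024, 2, 8),  date(2024, 2, 15), 10),    # Lunar New Year peak
]
BASELINE = 5     # instances kept outside event periods
LEAD_DAYS = 3    # provision this many days ahead of the event

def planned_capacity(day):
    """Return the number of instances scheduled for a given day."""
    for start, end, extra in EVENTS:
        # Scale up LEAD_DAYS before the event starts, hold until it ends.
        if start - timedelta(days=LEAD_DAYS) <= day <= end:
            return BASELINE + extra
    return BASELINE

print(planned_capacity(date(2024, 12, 18)))  # 20 (ramped up ahead of the event)
print(planned_capacity(date(2024, 6, 1)))    # 5  (off-peak baseline)
```

If an event fails to materialize, the extra instances between the lead time and the event end are exactly the wasted resources the text warns about.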
1.3 Global Exchange of Cloud Resources
No single cloud infrastructure provider will be able to establish its data centers at all
possible locations throughout the world. As a result, cloud application service (SaaS) providers
will have difficulty in meeting QoS expectations for all their consumers. Hence, they would like
to make use of services of multiple cloud infrastructure service providers who can provide better
support for their specific consumer needs. This kind of requirement often arises in enterprises
with global operations and applications such as Internet services, media hosting, and Web 2.0
applications. This necessitates federation of cloud infrastructure service providers for seamless
provisioning of services across different cloud providers. To realize this, the Cloudbus Project at
the University of Melbourne has proposed InterCloud architecture supporting brokering and
exchange of cloud resources for scaling applications across multiple clouds. By realizing
InterCloud architectural principles in mechanisms in their offering, cloud providers will be able
to dynamically expand or resize their provisioning capability based on sudden spikes in
workload demands by leasing available computational and storage capabilities from other cloud
service providers.
The InterCloud consists of client brokering and coordinator services that support a utility-driven
federation of clouds: application scheduling, resource allocation, and migration of workloads.
The architecture cohesively couples the administratively and topologically distributed storage
and compute capabilities of clouds as part of a single resource leasing abstraction. The system
will ease cross-domain capability integration for on-demand, flexible, energy-efficient, and
reliable access to the infrastructure based on virtualization technology.
The Cloud Exchange (CEx) acts as a market maker for bringing together service producers and
consumers. It aggregates the infrastructure demands from application brokers and evaluates them
against the available supply currently published by the cloud coordinators. It supports trading of
cloud services based on competitive economic models such as commodity markets and auctions.
CEx allows participants to locate providers and consumers with fitting offers. Such markets
enable services to be commoditized and thus pave the way for the creation of a dynamic market
infrastructure for trading based on SLAs. An SLA specifies the details of the service to be
provided in terms of metrics agreed upon by all parties, and incentives and penalties for meeting
and violating the expectations, respectively. The availability of a banking system within the
market ensures that financial transactions pertaining to SLAs between participants are carried out
in a secure and dependable environment.
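The market-making role of CEx can be illustrated with a toy matching routine. The greedy highest-bid/lowest-ask rule and all names here are illustrative, not the actual InterCloud mechanism, and leftover capacity from a partially filled offer is simply dropped rather than re-listed:

```python
def match(bids, offers):
    """Greedily match the highest bid with the cheapest offer while bid >= ask."""
    bids = sorted(bids, key=lambda b: -b["price"])   # demand, best price first
    offers = sorted(offers, key=lambda o: o["price"])  # supply, cheapest first
    trades = []
    while bids and offers and bids[0]["price"] >= offers[0]["price"]:
        bid, offer = bids.pop(0), offers.pop(0)
        qty = min(bid["vms"], offer["vms"])
        trades.append({"broker": bid["broker"], "provider": offer["provider"],
                       "vms": qty, "price": offer["price"]})
    return trades

bids = [{"broker": "app-A", "vms": 10, "price": 0.12},   # max price per VM-hour
        {"broker": "app-B", "vms": 4, "price": 0.08}]
offers = [{"provider": "cloud-1", "vms": 8, "price": 0.10},
          {"provider": "cloud-2", "vms": 20, "price": 0.09}]
print(match(bids, offers))  # one trade: app-A gets 10 VMs from cloud-2 at 0.09
```

A real exchange would also clear partial fills, settle payments through the banking system mentioned above, and record the agreed metrics in an SLA.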
4. Software-as-a-Service Security
Cloud computing models of the future will likely combine the use of SaaS (and other XaaS’s as
appropriate), utility computing, and Web 2.0 collaboration technologies to leverage the Internet
to satisfy their customers’ needs. New business models being developed as a result of the move
to cloud computing are creating not only new technologies and business operational processes
but also new security requirements and challenges as described previously. As the most recent
evolutionary step in the cloud service model, SaaS will likely remain the dominant cloud service
model for the foreseeable future and the area where the most critical need for security practices
and oversight will reside. Just as with a managed service provider, corporations or end users will
need to research vendors’ policies on data security before using vendor services to avoid losing
or not being able to access their data. The technology analyst and consulting firm Gartner lists
seven security issues which one should discuss with a cloud-computing vendor:
1. Privileged user access—Inquire about who has specialized access to data, and about the hiring
and management of such administrators.
2. Regulatory compliance—Make sure that the vendor is willing to undergo external audits
and/or security certifications.
3. Data location—Does the provider allow for any control over the location of data?
4. Data segregation—Make sure that encryption is available at all stages, and that these
encryption schemes were designed and tested by experienced professionals.
5. Recovery—Find out what will happen to data in the case of a disaster. Do they offer complete
restoration? If so, how long would that take?
6. Investigative support—Does the vendor have the ability to investigate any inappropriate or
illegal activity?
7. Long-term viability—What will happen to data if the company goes out of business? How
will data be returned, and in what format?
Determining data security is harder today, so data security functions have become more critical
than they have been in the past. A tactic not covered by Gartner is to encrypt the data yourself. If
you encrypt the data using a trusted algorithm, then regardless of the service provider’s security
and encryption policies, the data will only be accessible with the decryption keys. Of course, this
leads to a follow-on problem: How do you manage private keys in a pay-on-demand computing
infrastructure? To address the security issues listed above along with others mentioned earlier in
the chapter, SaaS providers will need to incorporate and enhance security practices used by the
managed service providers and develop new ones as the cloud computing environment evolves.
The baseline security practices for the SaaS environment as currently formulated are discussed in
the following sections.
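The encrypt-it-yourself tactic mentioned above can be sketched as follows. To keep the example standard-library-only, the keystream is derived from HMAC-SHA256 in counter mode as a stand-in; real deployments should use a vetted, authenticated cipher (for example AES-GCM from an established cryptography library):

```python
import hashlib, hmac, secrets

def keystream(key, nonce, length):
    """CTR-style keystream from HMAC-SHA256 (a stand-in, NOT a vetted cipher)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key, plaintext):
    """Encrypt before upload; the provider only ever stores ciphertext."""
    nonce = secrets.token_bytes(16)                   # fresh per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key, blob):
    """Only the holder of `key` can recover the plaintext."""
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

key = secrets.token_bytes(32)      # kept by the customer, never uploaded
data = b"customer record #1042"
blob = encrypt(key, data)
assert decrypt(key, blob) == data  # round-trips only with the right key
```

This makes the data independent of the provider's encryption policies, but it also makes concrete the follow-on problem raised above: the customer now owns the key-management burden.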
■Security Governance
A security steering committee should be developed whose objective is to focus on providing
guidance about security initiatives and alignment with business and IT strategies. A charter for
the security team is typically one of the first deliverables from the steering committee. This
charter must clearly define the roles and responsibilities of the security team and other groups
involved in performing information security functions. Lack of a formalized strategy can lead to
an unsustainable operating model and security level as the organization evolves. In addition, lack of attention to
security governance can result in key needs of the business not being met, including but not
limited to, risk management, security monitoring, application security, and sales support. Lack of
proper governance and management of duties can also result in potential security risks being left
unaddressed and opportunities to improve the business being missed because the security team is
not focused on the key security functions and activities that are critical to the business.
■Risk Management
Effective risk management entails identification of technology assets; identification of data and
its links to business processes, applications, and data stores; and assignment of ownership and
custodial responsibilities. Actions should also include maintaining a repository of information
assets. Owners have authority and accountability for information assets including protection
requirements, and custodians implement confidentiality, integrity, availability, and privacy
controls. A formal risk assessment process should be created that allocates security resources
linked to business continuity.
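The asset repository described above might be modeled minimally as follows; all field names and the example asset are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class InformationAsset:
    name: str
    owner: str                 # accountable for protection requirements
    custodian: str             # implements confidentiality/integrity/
                               # availability/privacy controls
    processes: list = field(default_factory=list)  # linked business processes
    controls: dict = field(default_factory=dict)   # control -> implementation

registry = {}   # the repository of information assets

def register(asset):
    registry[asset.name] = asset

register(InformationAsset(
    name="customer-db",
    owner="VP Sales",
    custodian="DBA team",
    processes=["order fulfilment", "billing"],
    controls={"confidentiality": "encrypted at rest",
              "availability": "daily backups"},
))
print(registry["customer-db"].owner)  # VP Sales
```

Even this toy registry captures the split the text describes: the owner is accountable for what protection is required, while the custodian is responsible for the controls that deliver it.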
■Risk Assessment
Security risk assessment is critical to helping the information security organization make
informed decisions when balancing the dueling priorities of business utility and protection of
assets. Lack of attention to completing formalized risk assessments can contribute to an increase
in information security audit findings, can jeopardize certification goals, and can lead to
inefficient and ineffective selection of security controls that may not adequately mitigate
information security risks to an acceptable level. A formal information security risk management
process should proactively assess information security risks as well as plan and manage them on
a periodic or as needed basis. More detailed and technical security risk assessments in the form
of threat modeling should also be applied to applications and infrastructure. Doing so can help
the product management and engineering groups to be more proactive in designing and testing
the security of applications and systems and to collaborate more closely with the internal security
team. Threat modeling requires both IT and business process knowledge, as well as technical
knowledge of how the applications or systems under review work.
■Security Awareness
People will remain the weakest link for security. Knowledge and culture are among the few
effective tools to manage risks related to people. Not providing proper awareness and training to
the people who may need them can expose the company to a variety of security risks for which
people, rather than system or application vulnerabilities, are the threats and points of entry.
Social engineering attacks, lower reporting of and slower responses to potential security
incidents, and inadvertent customer data leaks are all possible and probable risks that may be
triggered by lack of an effective security awareness program. The one-size-fits-all approach to
security awareness is not necessarily the right approach for SaaS organizations; it is more
important to have an information security awareness and training program that tailors the
information and training according to the individual’s role in the organization. For example,
security awareness can be provided to development engineers in the form of secure code and
testing training, while customer service representatives can be provided data privacy and security
certification awareness training. Ideally, both a generic approach and an individual-role approach
should be used.
Beyond governance, risk management, and awareness, the baseline security practices for the SaaS environment must also address the following protection requirements:
• Protection of servers from malicious software attacks such as worms, viruses, and malware
• Protection of hypervisors or VM monitors from software-based attacks and vulnerabilities
• Protection of VMs and monitors from service disruption and DoS attacks
• Protection of data and information from theft, corruption, and natural disasters
• Providing authenticated and authorized access to critical data and services
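The last requirement, authenticated and authorized access to critical data and services, can be sketched with a simple role check; the user store, tokens, and roles here are all hypothetical:

```python
import functools

# Illustrative user store: token -> identity, identity -> roles.
USERS = {"alice": {"token": "t-alice", "roles": {"admin"}},
         "bob":   {"token": "t-bob",   "roles": {"viewer"}}}

def authenticate(token):
    """Authentication: establish WHO is calling."""
    for name, rec in USERS.items():
        if rec["token"] == token:
            return name
    raise PermissionError("authentication failed")

def requires_role(role):
    """Authorization: establish WHAT the caller may do."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(token, *args, **kwargs):
            user = authenticate(token)
            if role not in USERS[user]["roles"]:
                raise PermissionError(f"{user} lacks role {role!r}")
            return fn(*args, **kwargs)
        return wrapper
    return deco

@requires_role("admin")
def delete_record(record_id):
    return f"record {record_id} deleted"

print(delete_record("t-alice", 7))   # record 7 deleted
```

Separating the authentication step from the role check mirrors the requirement's two halves: access must be both authenticated and authorized before a critical service runs.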