
CLOUD COMPUTING

UNIT – 3
Syllabus: Cloud Platform Architecture: Cloud Computing and Service Models,
Public Cloud Platforms, Service Oriented Architecture, Programming on Amazon
AWS and Microsoft Azure

3.1. CLOUD COMPUTING AND SERVICE MODELS:

In recent years, the IT industry has moved from manufacturing to offering services
(it has become service-oriented). As of now, 80% of the industry is a 'service industry'. It should
be realized that services are not manufactured or invented from time to time; they are only rented
and improved as per requirements. Clouds aim to utilize the resources of data centers
virtually over automated hardware, databases, user interfaces and apps.

I) Public, Private and Hybrid Clouds: Cloud computing has evolved from the concepts of
clusters, grids and distributed computing. Different resources (hardware, finance, time) are
leveraged (used to maximum advantage) to bring out the maximum HTC (high-throughput
computing). A cloud computing model enables users to share resources from anywhere at
any time through their connected devices.

Advantages of Cloud Computing: Recall that in Cloud Computing, the


programming is sent to data rather than the reverse, to avoid large data movement, and
maximize the bandwidth utilization. Cloud Computing also reduces the costs incurred by the
data centers, and increases the app flexibility. Cloud Computing consists of a virtual
platform with elastic resources and puts together the hardware, data and software as per
demand. Furthermore, the apps utilized and offered are heterogeneous.

The Basic Architecture of the types of clouds can be seen in Figure 4.1 below.

 Public Clouds: A public cloud is owned by a service provider, built over the Internet
and offered to users on payment. Ex: Google App Engine (GAE), AWS, MS-Azure,
IBM Blue Cloud and Salesforce's Force.com. All of these offer services for creating
and managing VM instances for users within their own infrastructure.
 Private Clouds: A private cloud is built within the domain of an intranet owned by
a single organization. It is client-owned and managed; its access is granted to a
limited number of clients only. Private clouds offer a flexible and agile private
infrastructure to run workloads within their own domains. Though private cloud
offers more control, it has limited resources only.
 Hybrid Clouds: A hybrid cloud is built with both public and private clouds. Private
clouds can also support a hybrid cloud model by enhancing the local infrastructure
with computing capacity of a public external cloud.

 Data Center Networking Architecture: The core of a cloud is the server cluster
and the cluster nodes are used as compute nodes. The scheduling of user jobs
requires that virtual clusters are to be created for the users and should be granted
control over the required resources. Gateway nodes are used to provide the access
points of the concerned service from the outside world. They can also be used for
security control of the entire cloud platform. It is to be noted that in physical
clusters/grids, the workload is static; in clouds, the workload is dynamic and the
cloud should be able to handle any level of workload on demand.

Data centers and supercomputers also differ in networking requirements, as


illustrated in Figure 4.2. Supercomputers use custom-designed high-bandwidth
networks such as fat trees or 3D torus networks. Data-center networks are mostly
IP-based commodity networks, such as the 10 Gbps Ethernet network, which is
optimized for Internet access. Figure 4.2 shows a multilayer structure for accessing
the Internet. The server racks are at the bottom Layer 2, and they are connected
through fast switches (S) as the hardware core. The data center is connected to the
Internet at Layer 3 with many access routers (ARs) and border routers (BRs).
 Cloud Development Trends: There is a good chance that private clouds will grow in
the future since private clouds are more secure, and adjustable within an organization.
Once they are matured and more scalable, they might be converted into public clouds.
In another angle, hybrid clouds might also grow in the future.

ii) Cloud Ecosystem and Enabling Technologies: The differences between classical
computing and cloud computing can be seen in the table below. In traditional computing,
a user has to buy the hardware, acquire the software, install the system, test the
configuration and execute the app code. The management of the available resources is
also part of this. Finally, this whole process has to be repeated every 1.5 to 2 years,
since the methodologies used become obsolete.

On the other hand, Cloud Computing follows a pay-as-you-go model [1]. Hence the cost is
reduced significantly – a user doesn’t buy any resources but rents them as per his
requirements. All S/W and H/W resources are leased by the user from the cloud resource
providers. This is advantageous for small and middle business firms which require limited
amount of resources only. Finally, Cloud Computing also saves power.

a) Cloud Design Objectives:


 Shifting computing from desktops to data centers
 Service provisioning and cloud economics
 Scalability in performance (as the no. of users increases)
 Data Privacy Protection
 High quality of cloud services (QoS must be standardized to achieve this)
 New standards and interfaces
b) Cost Model:

The above Figure 4.3a shows the additional costs on top of fixed capital investments in
traditional computing. In Cloud Computing, only pay-as-per-use is applied, and user-jobs
are outsourced to data centers. To use a cloud, one has no need to buy hardware
resources; he can utilize them as per the demands of the work and release the same after
the job is completed.
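The pay-as-you-go cost model above can be sketched numerically. All prices and usage figures in this sketch are hypothetical, chosen only to illustrate why renting can undercut buying for a small firm:

```python
# Illustrative comparison of fixed-capital vs. pay-as-you-go costs.
# The capital cost, operations cost, and hourly rate are hypothetical.

def traditional_cost(capital, monthly_ops, months):
    """Up-front hardware purchase plus recurring operations cost."""
    return capital + monthly_ops * months

def cloud_cost(hourly_rate, hours_per_month, months):
    """Pure pay-as-you-go: charges accrue only for hours actually used."""
    return hourly_rate * hours_per_month * months

# A small firm running a server 8 hours/day (~176 hours/month) for 2 years:
trad = traditional_cost(capital=10_000, monthly_ops=200, months=24)
cloud = cloud_cost(hourly_rate=0.50, hours_per_month=176, months=24)

print(f"traditional: ${trad:,.2f}")   # $14,800.00
print(f"cloud:       ${cloud:,.2f}")  # $2,112.00
```

The gap narrows as utilization approaches 24/7, which is why pay-as-you-go chiefly benefits firms with bursty or limited workloads.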

c) Cloud Ecosystems: With the emergence of Internet clouds, an ‘ecosystem’ (a


complex inter-connected systems network) has evolved. This consists of users, providers
and technologies. All this is based mainly on the open source Cloud Computing tools that
let organizations build their own IaaS. Private and hybrid clouds are also used. Ex:
Amazon EC2.

An ecosystem for private clouds was suggested by scientists as depicted in Figure 4.4.
In the above suggested 4 levels, at the user end, a flexible platform is required by the
customers. At the cloud management level, the virtualization resources are provided by
the concerned cloud manager to offer the IaaS. At the VI management level, the
manager allocates the VMs to the available multiple clusters. Finally, at the VM
management level, the VM managers handle VMs installed on the individual host
machines.

d) Increase of Private Clouds: Private clouds influence the infrastructure and services
that are utilized by an organization. Both private and public clouds handle workloads
dynamically, but public clouds handle them without communication dependency. On the
other hand, private clouds can balance workloads to exploit the infrastructure effectively
and obtain high performance. The major advantage of private clouds is fewer security
problems, while public clouds need less investment.

iii) Infrastructure-as-a-Service (IaaS): A model for different services is shown in
Figure 4.5 below. The required service is performed by the rented cloud
infrastructure. In this environment, the user can deploy and run his apps. Note that the
user doesn't have any control over the cloud infrastructure but can choose his OS,
storage, apps and network components.
Ex: Amazon EC2.
iv) Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS):
 Platform-as-a-Service (PaaS): To develop, deploy and manage apps with
provisioned resources, an able platform is needed by the users. Such a platform
includes OS and runtime library support. Different PaaS offered in the current
market and other details are highlighted in the Table 4.2 below:

It should be noted that platform cloud is an integrated system consisting of both S/W
and H/W. The user doesn’t manage the cloud infrastructure but chooses the platform that
is best suited to his choice of apps. The model also encourages third parties to
provide software management, integration and service monitoring solutions.
 Software as a Service (SaaS): This is about a browser-initiated app s/w over
thousands of cloud customers. Services & tools offered by PaaS are utilized in
construction and deployment of apps and management of their resources. The
customer needs no investment and the provider can keep the costs low. Customer
data is also stored in a cloud and is accessible through different other services. Ex:
Gmail, Google docs, Salesforce.com etc.
 Mashup of Cloud Services: Public clouds are more used these days but private
clouds are not far behind. To utilize the resources up to the maximum level and
deploy/remove the apps as per requirement, we may need to mix-up the different
parts of each service to bring out a chain of connected activities. Ex: Google Maps,
Twitter, Amazon ecommerce, YouTube etc.

3.2. PUBLIC CLOUD PLATFORMS:


Cloud services are provided on demand by different
companies. It can be seen in Figure 4.19 that there are five levels of cloud players.

The app providers at the SaaS level are used mainly by individual users. Most
business organisations are serviced by IaaS and PaaS providers. IaaS provides compute,
storage, and communication resources to both app providers and organisational users. The
cloud environment is defined by PaaS providers. Note that PaaS supports both
IaaS services and organisational users directly.
Cloud services depend upon machine virtualization, SOA, grid infrastructure management
and power efficiency. The provider service charges are much lower than the cost incurred
by the users when replacing damaged servers. The Table 4.5 shows a summary of the
profiles of the major service providers.

PKI=> Public Key Infrastructure; VPN=> Virtual Private Network

a) Google App Engine (GAE): The Google platform is based on its search engine
expertise and is applicable to many other areas (Ex: MapReduce). The Google
Cloud Infrastructure consists of several apps like Gmail, Google Docs, and
Google Earth, and can support a large number of users simultaneously, raising the bar
for HA (high availability). Other technology achievements of Google include Google
File System (GFS) [like HDFS], MapReduce, BigTable, and Chubby (A Distributed
Lock Service). GAE enables users to run their apps on a large number of data
centers associated with Google’s search engine operations. The GAE architecture
can be seen in Figure 4.20 below:
The building blocks of Google’s Cloud Computing app include GFS for storing large
amounts of data, the MapReduce programming framework for developers, Chubby for
distributed lock services and BigTable as a storage service for accessing structural data.

GAE runs the user program on Google’s infrastructure where the user need not worry
about storage or maintenance of data in the servers. It is a combination of several
software components but the frontend is same as ASP (Active Server Pages), J2EE and
JSP.

Functional Modules of GAE:


 Datastore offers OO, distributed and structured data storage services based on
BigTable techniques. This secures data management operations.
 Application Runtime Environment: It is a platform for scalable web
programming and execution. (Supports the languages of Java and Python)
 Software Development Kit: It is used for local app development and test runs of
the new apps.
 Administration Console: Used for easy management of user app development
cycles instead of physical resource management.
 Web Service Infrastructure provides special interfaces to guarantee flexible use
and management of storage and network resources.

The well-known GAE apps are the search engine, docs, earth and Gmail. Users linked with
one app can interact and interface with other apps through the resources of GAE
(synchronise and one login for all services).

b) Amazon Web Services (AWS): Amazon applies the IaaS model in providing its
services. The Figure 4.21 [1] below shows the architecture of AWS:
EC2 provides the virtualized platforms to host the VMs where the cloud app can run.
S3 (Simple Storage Service) provides the OO storage service for the users.
EBS (Elastic Block Service) provides the block storage interface which can be used to
support traditional apps.
SQS (Simple Queue Service) ensures a reliable message service between two processes.
Amazon offers a RDS (relational database service) with a messaging interface. The AWS
offerings are given below in Table 4.6

c) MS-Azure: The overall architecture of MS cloud platform, built on its own data
centers, is shown in Figure 4.22. It is divided into 3 major component platforms as
it can be seen. Apps are installed on VMs and Azure platform itself is built on
Windows OS.
 Live Service: Through this, the users can apply MS live apps and data across multiple
machines concurrently.
 .NET Service: This package supports app development on local hosts and execution on
cloud machines.
 SQL Azure: Users can visit and utilize the relational database associated with a SQL
server in the cloud.
 SharePoint Service: A scalable platform to develop special business apps.
 Dynamic CRM Service: This provides a business platform for the developers to manage
the CRM apps in financing, marketing, sales and promotions.

3.3 SERVICE-ORIENTED ARCHITECTURE:


SOA is concerned with how to design a
software system that makes use of services or apps through their interfaces. These apps
are distributed over networks. The World Wide Web Consortium (W3C) defines SOA as
a form of distributed architecture characterized by:
 Logical View: The SOA is an abstracted, logical view of actual programs, DBs etc.,
defined in terms of the operations it carries out. The service is formally defined in
terms of messages exchanged between providers and requesters.
 Message Orientation
 Description Orientation
i. Services and Web Services: In the SOA concept, s/w capabilities are
delivered and consumed through loosely coupled, reusable services using
messages. A 'Web Service' is a self-contained modular application designed to be
used by other apps across the web. This can be seen in Figure 5.2.
WSDL => Web Services Description Language
UDDI => Universal Description, Discovery and Integration
SOAP => Simple Object Access Protocol

SOAP: This provides a standard packaging structure for the transmission of XML
documents over various Internet protocols (HTTP, SMTP, FTP). A SOAP message consists
of an envelope (the root element), which contains a header. It also has a body that
carries the payload of the message.
WSDL: It describes the interface and a set of operations supported by a web service in a
standard format.
UDDI: This provides a global registry for advertising and discovery of web services by
searching for names, identifiers, categories.
Since SOAP can combine the strengths of XML and HTTP, it is useful for heterogeneous
distributed computing environments like grids and clouds.
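The envelope/header/body structure described above can be assembled with Python's standard library. This is a minimal sketch: the operation name, parameter, and target namespace are hypothetical, not taken from a real WSDL:

```python
# A minimal SOAP 1.1 envelope builder using only the standard library.
# The "GetQuote" operation and its namespace are illustrative assumptions.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_envelope(operation, params,
                        target_ns="http://example.com/stock"):
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")   # root element
    ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")   # optional metadata
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")  # carries the payload
    op = ET.SubElement(body, f"{{{target_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{target_ns}}}{name}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

xml = build_soap_envelope("GetQuote", {"symbol": "AMZN"})
print(xml)
```

The resulting document could be POSTed over HTTP to a SOAP endpoint; in practice the interface and operations would come from the service's WSDL description.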
ii. Enterprise Multitier Architecture: This is a kind of client/server architecture in which
application processing and data management are logically separate processes. As
seen below in Figure 5.4, it is a three-tier information system where each layer has
its own important responsibilities.

Presentation Layer: Presents information to external entities and allows them to interact
with the system by submitting operations and getting responses.
Application Logic (Middleware): These consist of programs that implement actual
operations requested by the client. The middle tier can also be used for user
authentication and granting of resources, thus removing some load from the servers.
Resource Management Layer (Data Layer): It deals with the data sources of an
information system.

iii. OGSA Grid: Open Grid Services Architecture is intended to


 Facilitate the usage of resources across heterogeneous environments
 Deliver best QoS
 Define open interfaces between diverse resources
 Develop inter-operable standards
The OGSA architecture falls into seven broad areas, as shown in Figure 5.5:
Infrastructure Services, Execution Management Services, Data Management Services,
Resource Management Services, Security Services, Information Services
and Self-Management Services (automation).

These services are summarized as follows:


• Infrastructure Services Refer to a set of common functionalities, such as naming,
typically required by higher level services.
• Execution Management Services Concerned with issues such as starting and
managing tasks, including placement, provisioning, and life-cycle management. Tasks
may range from simple jobs to complex workflows or composite services.
• Data Management Services Provide functionality to move data to where it is
needed, maintain replicated copies, run queries and updates, and transform data
into new formats. These services must handle issues such as data consistency,
persistency, and integrity. An OGSA data service is a web service that implements
one or more of the base data interfaces to enable access to, and management of,
data resources in a distributed environment. The three base interfaces, Data Access,
Data Factory, and Data Management, define basic operations for representing,
accessing, creating, and managing data.
• Resource Management Services Provide management capabilities for grid
resources: management of the resources themselves, management of the resources
as grid components, and management of the OGSA infrastructure.
• Security Services Facilitate the enforcement of security-related policies within a
(virtual) organization, and supports safe resource sharing. Authentication,
authorization, and integrity assurance are essential functionalities provided by these
services.
• Information Services Provide efficient production of, and access to, information
about the grid and its constituent resources. The term “information” refers to dynamic
data or events used for status monitoring; relatively static data used for discovery;
and any data that is logged.
• Self-Management Services Support service-level attainment for a set of services
(or resources), with as much automation as possible, to reduce the costs and
complexity of managing the system. These services are essential in addressing the
increasing complexity of owning and operating an IT infrastructure.

3.4 PROGRAMMING ON AMAZON AWS AND MICROSOFT AZURE

3.4.1 Programming on Amazon EC2

 Amazon was the first company to introduce VMs in application hosting


 Customers can rent VMs instead of physical machines to run their own
applications
 By using VMs, customers can load any software of their choice.
 The elastic feature of such a service is that a customer can create,
launch, and terminate server instances as needed, paying by the hour for
active servers
 Amazon provides several types of preinstalled VM images
 These images are called Amazon Machine Images (AMIs)
 AMIs are preconfigured with operating systems based on Linux or
Windows, and additional software.
 AMIs are the templates for instances, which are the running VMs.
Three types of AMI:

• Private AMI: Images created by you, which are private by default. You
can grant access to other users to launch your private images
• Public AMI: Images created by users and released to the AWS
community, so anyone can launch instances based on them and use them
any way they like
• Paid AMI: You can create images providing specific functions that can
be launched by anyone willing to pay you per hour of usage on top
of Amazon's charges.
The workflow to create a VM is

Create an AMI → Create Key Pair → Configure Firewall → Launch

FIGURE 4.23: Amazon EC2 execution environment.


Amazon EC2 offers IaaS instances in five broad classes:

1. Standard instances are well suited for most applications.


2. Micro instances provide a small number of consistent CPU resources and
allow you to burst CPU capacity when additional cycles are available.
They are well suited for lower throughput applications and web sites that
consume significant compute cycles periodically.
3. High-memory instances offer large memory sizes for high-throughput
applications, including database and memory caching applications.
4. High-CPU instances have proportionally more CPU resources than
memory (RAM) and are well suited for compute-intensive applications.
5. Cluster compute instances provide proportionally high CPU resources with
increased network performance and are well suited for high-performance
computing (HPC) applications and other demanding network-bound
applications. They use 10 Gigabit Ethernet interconnections.
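The four-step workflow above (Create AMI → Create Key Pair → Configure Firewall → Launch) can be sketched as a toy in-memory model. The class and method names below are illustrative assumptions and deliberately do not mirror the real AWS API:

```python
# A toy model of the EC2 launch workflow: register an AMI, create a key
# pair, open firewall ports, then launch an instance. Not the AWS SDK.
import secrets

class ToyEC2:
    def __init__(self):
        self.images, self.key_pairs = {}, {}
        self.firewall, self.instances = [], []

    def create_ami(self, name, os, public=False):
        ami_id = f"ami-{len(self.images):08x}"
        self.images[ami_id] = {"name": name, "os": os, "public": public}
        return ami_id

    def create_key_pair(self, name):
        self.key_pairs[name] = secrets.token_hex(16)  # stand-in for a private key
        return name

    def authorize_ingress(self, port, cidr="0.0.0.0/0"):
        self.firewall.append((port, cidr))            # open a port to a CIDR range

    def run_instance(self, ami_id, key_name, instance_type="standard"):
        # Launching requires a registered image and an existing key pair.
        assert ami_id in self.images and key_name in self.key_pairs
        inst = {"id": f"i-{len(self.instances):08x}", "ami": ami_id,
                "type": instance_type, "state": "running"}
        self.instances.append(inst)
        return inst

ec2 = ToyEC2()
ami = ec2.create_ami("web-server", os="linux")
key = ec2.create_key_pair("my-key")
ec2.authorize_ingress(22)   # SSH
ec2.authorize_ingress(80)   # HTTP
inst = ec2.run_instance(ami, key)
print(inst["state"])  # running
```

In real AWS the same sequence is performed through the EC2 API or console, and the instance type chosen at launch would come from the five classes listed above.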
3.4.2 Amazon Simple Storage Service (S3)

Amazon S3 (Simple Storage Service) is a scalable, high-speed, low-cost
web-based service designed for online backup and archiving of data and
application programs. It allows uploading, storing, and downloading any type of
file up to 5 GB in size. This service allows subscribers to access the same
systems that Amazon uses to run its own web sites. The subscriber has control
over the accessibility of data, i.e. whether it is privately or publicly accessible.

Amazon S3 provides a simple web services interface that can be used to


store and retrieve any amount of data, at any time, from anywhere on the web.
S3 provides the object-oriented storage service for users. Users can access their
objects through Simple Object Access Protocol (SOAP) with either browsers or
other client programs which support SOAP. SQS is responsible for ensuring a
reliable message service between two processes, even if the receiver processes
are not running. Following Figure shows the S3 execution environment.

Fig: Amazon S3 Execution Environment

The fundamental operation unit of S3 is called an object. Each object is


stored in a bucket and retrieved via a unique, developer-assigned key. In other
words, the bucket is the container of the object. Besides unique key attributes,
the object has other attributes such as values, metadata, and access control
information. From the programmer’s perspective, the storage provided by S3
can be viewed as a very coarse-grained key-value pair. Through the key-value
programming interface, users can write, read, and delete objects containing
from 1 byte to 5 gigabytes of data each. There are two types of web service
interface for the user to access the data stored in Amazon clouds. One is a REST
(web 2.0) interface, and the other is a SOAP interface.
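The coarse-grained key-value view of S3 described above can be modeled as an in-memory toy. This sketch is not the real S3 API; only the write/read/delete semantics and the 1-byte-to-5-GB per-object limit quoted in the text are carried over:

```python
# A toy bucket modeling S3's key-value programming interface:
# objects live in a bucket, addressed by a developer-assigned key,
# and carry values, metadata, and (here, simplified) access control.

MAX_OBJECT_SIZE = 5 * 1024**3  # 5 GB per-object limit from the text

class ToyBucket:
    def __init__(self, name):
        self.name, self._objects = name, {}

    def put(self, key, data: bytes, metadata=None):
        if not 1 <= len(data) <= MAX_OBJECT_SIZE:
            raise ValueError("object must be 1 byte to 5 GB")
        self._objects[key] = {"value": data, "metadata": metadata or {}}

    def get(self, key) -> bytes:
        return self._objects[key]["value"]

    def delete(self, key):
        del self._objects[key]

bucket = ToyBucket("photos")
bucket.put("2024/cat.jpg", b"...jpeg bytes...",
           metadata={"content-type": "image/jpeg"})
print(bucket.get("2024/cat.jpg"))
bucket.delete("2024/cat.jpg")
```

In the real service the same three operations are issued over the REST or SOAP interface, with the bucket name and key forming the object's URL.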

Key features of S3:

o Redundant through geographic dispersion.


o Designed to provide 99.999999999 percent durability and 99.99 percent
availability of objects over a given year; a cheaper reduced redundancy
storage (RRS) option is also offered.
o Authentication mechanisms to ensure that data is kept secure from
unauthorized access. Objects can be made private or public, and rights can
be granted to specific users.
o Per-object URLs and ACLs (access control lists).
o Default download protocol of HTTP. A BitTorrent protocol interface is provided
to lower costs for high-scale distribution.
o There is no data transfer charge for data transferred between Amazon EC2
and Amazon S3 within the same region
o Low cost and Easy to Use − Using Amazon S3, the user can store a large
amount of data at very low charges.
o Secure − Amazon S3 supports data transfer over SSL and the data gets
encrypted automatically once it is uploaded. The user has complete control
over their data by configuring bucket policies using AWS IAM.
o Scalable − With Amazon S3, there is no need to worry about storage
capacity. We can store as much data as we have and access it anytime.
o Higher performance − Amazon S3 is integrated with Amazon CloudFront,
that distributes content to the end users with low latency and provides high
data transfer speeds without any minimum usage commitments.
o Integrated with AWS services − Amazon S3 is integrated with AWS services
including Amazon CloudFront, Amazon CloudWatch, Amazon Kinesis, Amazon
RDS, Amazon Route 53, Amazon VPC, AWS Lambda, Amazon EBS, Amazon
DynamoDB, etc.

3.4.3 Amazon Elastic Block Store (EBS)

• Elastic Block Store (EBS) provides the volume block interface for saving
and restoring the virtual images of EC2 instances
• Users can use EBS to save persistent data and mount to the running
instances of EC2
• EBS allows users to create storage volumes from 1 GB to 1 TB that can be
mounted by EC2 instances
• These storage volumes behave like raw, unformatted block devices
• Volume storage charges are based on the amount of storage users
allocate until it is released, and is priced at $0.10 per GB/month
• EBS also charges $0.10 per 1 million I/O requests made to the storage.
• The equivalent of EBS has been offered in open source clouds such as
Nimbus
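Using the prices quoted above ($0.10 per GB-month of allocated storage plus $0.10 per million I/O requests), a monthly EBS bill can be estimated with a one-line formula; the volume size and request count below are illustrative:

```python
# EBS billing sketch using the rates quoted in the text:
# $0.10 per GB-month allocated + $0.10 per 1 million I/O requests.

def ebs_monthly_cost(allocated_gb, io_requests):
    storage = allocated_gb * 0.10              # charged on allocation, not use
    io = (io_requests / 1_000_000) * 0.10      # per-request charge
    return storage + io

# A 100 GB volume serving 20 million I/O requests in a month:
print(f"${ebs_monthly_cost(100, 20_000_000):.2f}")  # $12.00
```

Note that storage is billed on the amount allocated until the volume is released, so oversized volumes cost money even when empty.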
3.4.3.1 Amazon SimpleDB Service

SimpleDB provides a simplified data model based on the relational database


data model. Structured data from users must be organized into domains. Each
domain can be considered a table. The items are the rows in the table. A cell in
the table is recognized as the value for a specific attribute (column name) of the
corresponding row. This is similar to a table in a relational database. However, it
is possible to assign multiple values to a single cell in the table. This is not
permitted in a traditional relational database, which must maintain data
consistency.

Many developers simply want to quickly store, access, and query the stored
data. SimpleDB removes the requirement to maintain database schemas with
strong consistency. SimpleDB is priced at $0.140 per Amazon SimpleDB Machine
Hour consumed with the first 25 Amazon SimpleDB Machine Hours consumed per
month free. SimpleDB, like Azure Table, could be called "LittleTable," as both are
aimed at managing small amounts of information stored in a distributed
table; one could say BigTable is aimed at basic big data, whereas LittleTable is
aimed at metadata. Amazon Dynamo is an early research system along the lines
of the production SimpleDB system.
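The domain/item/attribute model, including the multi-valued cells that relational tables forbid, can be sketched as a toy in-memory structure. Names and methods here are illustrative, not the SimpleDB API:

```python
# A toy SimpleDB-style domain: the domain is a table, items are rows,
# attributes are columns, and a single cell may accumulate several values.
from collections import defaultdict

class ToyDomain:
    def __init__(self, name):
        self.name = name
        # item name -> attribute name -> set of values
        self.items = defaultdict(lambda: defaultdict(set))

    def put_attributes(self, item, **attrs):
        for attr, value in attrs.items():
            self.items[item][attr].add(value)   # repeated puts accumulate

    def get_attributes(self, item, attr):
        return sorted(self.items[item][attr])

books = ToyDomain("books")
books.put_attributes("item1", title="Cloud Computing", author="Hwang")
books.put_attributes("item1", author="Fox")       # second value, same cell
print(books.get_attributes("item1", "author"))    # ['Fox', 'Hwang']
```

The schema-free design means each item may carry a different set of attributes, which is exactly the flexibility SimpleDB trades against strong relational consistency.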

3.4.4 Microsoft Azure Programming Support

• In 2008, Microsoft launched the Windows Azure platform to meet the
challenges of cloud computing.
• This platform is built over Microsoft data centers
• The platform is divided into three major component platforms
• Applications are installed on VMs deployed on the data-center servers.
• Azure manages all servers, storage, and network resources of the data
center.
• The SQL Azure service offers SQL Server as a service
• A file system interface is provided as an NTFS (New Technology File System)
volume backed by blob storage (Binary Large Objects, used for storing massive
amounts of unstructured data)
• Blobs are arranged as a three-level hierarchy:
Account → Containers → Page or Block Blobs
• Containers are like directories, with the account acting as the root
• Block blob is used for streaming data
• Each blob is made up as a sequence of blocks of up to 4 MB each
• Block blobs can be up to 200 GB in size
• Page blobs are for random read/write access and consist of an array of pages
with a maximum blob size of 1 TB.
Figure: Features of the Azure Cloud Platform

3.4.4.1 SQLAzure

Azure offers a very rich set of storage capabilities. All the storage modalities
are accessed with REST interfaces except for the recently introduced Drives that
are analogous to Amazon EBS, and offer a file system interface as a durable
NTFS volume backed by blob storage. The REST interfaces are automatically
associated with URLs and all storage is replicated three times for fault tolerance
and is guaranteed to be consistent in access.

The basic storage system is built from blobs which are analogous to S3 for
Amazon. Blobs are arranged as a three-level hierarchy: Account → Containers →
Page or Block Blobs. Containers are analogous to directories in traditional file
systems with the account acting as the root. The block blob is used for streaming
data and each such blob is made up as a sequence of blocks of up to 4 MB each,
while each block has a 64 byte ID. Block blobs can be up to 200 GB in size. Page
blobs are for random read/write access and consist of an array of pages with a
maximum blob size of 1 TB. One can associate metadata with blobs as <name,
value> pairs with up to 8 KB per blob.
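The Account → Containers → Blobs hierarchy and the block limits quoted above (blocks of up to 4 MB, block blobs up to 200 GB) can be modeled as a toy in-memory sketch. This is illustrative only, not the Azure SDK:

```python
# A toy model of Azure block blobs: an account holds containers,
# containers hold blobs, and a block blob is an ordered list of
# (block_id, data) pairs read back as one stream.

BLOCK_LIMIT = 4 * 1024**2    # 4 MB per block, per the text
BLOB_LIMIT = 200 * 1024**3   # 200 GB per block blob, per the text

class ToyBlockBlob:
    def __init__(self, name):
        self.name, self.blocks = name, []

    def put_block(self, block_id: str, data: bytes):
        if len(data) > BLOCK_LIMIT:
            raise ValueError("block exceeds 4 MB")
        if sum(len(d) for _, d in self.blocks) + len(data) > BLOB_LIMIT:
            raise ValueError("blob would exceed 200 GB")
        self.blocks.append((block_id, data))

    def read(self) -> bytes:
        return b"".join(data for _, data in self.blocks)

class ToyAccount:
    def __init__(self, name):
        self.name, self.containers = name, {}  # container -> {blob name: blob}

    def blob(self, container, name):
        blobs = self.containers.setdefault(container, {})
        return blobs.setdefault(name, ToyBlockBlob(name))

acct = ToyAccount("mystore")
blob = acct.blob("videos", "clip.bin")
blob.put_block("b0", b"stream")
blob.put_block("b1", b"ing")
print(blob.read())  # b'streaming'
```

Appending blocks and committing them as a list is how the real service supports streaming uploads; page blobs, by contrast, expose a fixed-size array of pages for random access.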

3.4.4.2 Azure Tables

• The Azure Table and Queue storage modes are aimed at much smaller
data volumes
• Queues provide reliable message delivery and are used to support work
spooling between web and worker roles
• Queues consist of an unlimited number of messages with an 8KB limit on
message size
• Azure supports PUT, GET, and DELETE message operations as well as
CREATE and DELETE for queues
• Each account can have any number of Azure tables which consist of rows
called entities and columns called properties
• There is no limit to the number of entities in a table
• All entities can have up to 255 general properties which are <name,
type, value> triples
• An entity can have, at most, 1 MB storage
• If a larger value is required, store a link to a blob in the
table's property value instead
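The queue semantics listed above (an unlimited number of messages, an 8 KB per-message limit, and PUT/GET/DELETE operations) can be modeled with a toy in-memory queue. This sketch is illustrative, not the Azure queue API:

```python
# A toy Azure-style queue: PUT appends, GET peeks without removing
# (delivery and deletion are separate steps), DELETE removes the head.
from collections import deque

MESSAGE_LIMIT = 8 * 1024  # 8 KB per message, per the text

class ToyQueue:
    def __init__(self, name):
        self.name, self._messages = name, deque()

    def put(self, body: bytes):
        if len(body) > MESSAGE_LIMIT:
            raise ValueError("message exceeds 8 KB")
        self._messages.append(body)

    def get(self) -> bytes:
        return self._messages[0]        # peek; message stays until deleted

    def delete(self):
        self._messages.popleft()        # explicit delete after processing

work = ToyQueue("render-jobs")
work.put(b"frame-001")
work.put(b"frame-002")
print(work.get())   # b'frame-001'
work.delete()
print(work.get())   # b'frame-002'
```

Separating GET from DELETE is what makes the delivery reliable: if a worker role crashes before deleting, the message remains in the queue for another worker to pick up.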
