5-Module 4 - Cloud Environment Google App Engine, AWS, Azure - Open Source-14-03-2024

Google App Engine (GAE) is a platform-as-a-service that provides scalable hosting on Google's infrastructure. GAE requires that applications be written in Java or Python and uses Google services for storage and queries. GAE provides more infrastructure than other hosting services and eliminates many system administration tasks. Google provides free usage of GAE resources up to set limits and charges for additional usage. GAE is used to build, deploy, and test scalable web applications.


Cloud Environments

Module 4
Google App Engine
Google App Engine (GAE) is a platform-as-a-service product that provides
web app developers and enterprises with access to Google's scalable hosting and tier 1 internet
service.

GAE requires that applications be written in Java or Python, store data in Google Bigtable and
use the Google query language.

Noncompliant applications require modification to use GAE.

GAE provides more infrastructure than other scalable hosting services, such as Amazon
Elastic Compute Cloud (EC2).

GAE also eliminates some system administration and development tasks to make writing
scalable applications easier.
Google App Engine

Google provides GAE free up to a certain amount of use for the following resources:

•processor (CPU)

•storage

•application programming interface (API) calls

•concurrent requests

Users exceeding the per-day or per-minute rates can pay for more of these resources.
How is GAE used?

GAE is a fully managed, serverless platform that is used to host, build and deploy web
applications.

Users can create a GAE account, set up a software development kit and write application
source code. They can then use GAE to test and deploy the code in the cloud.

One way to use GAE is building scalable mobile application back ends that adapt to
workloads as needed.

Application testing is another way to use GAE. Users can route traffic to different application
versions to A/B test them and see which version performs better under various workloads.

A/B testing—also called split testing or bucket testing—compares the performance of two versions of content to see which one appeals
more to visitors/viewers. It tests a control (A) version against a variant (B) version to measure which one is most successful based on
your key metrics.
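The split logic behind such testing can be sketched in a few lines. The hash-based bucketing below is an illustrative assumption, not GAE's actual traffic-splitting implementation, but it shows why assignments stay stable per user:

```python
import hashlib

def choose_version(user_id: str, split_percent: int = 50) -> str:
    """Deterministically assign a user to version A or B.

    Hashing the user ID keeps each user on the same version across
    requests, so A/B metrics are not polluted by version flapping.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash to a bucket in 0-99
    return "A" if bucket < split_percent else "B"

# The assignment is stable: the same user always sees the same version.
assert choose_version("user-42") == choose_version("user-42")
```

With `split_percent=50`, roughly half of a large user population lands in each bucket, which is what makes the comparison of key metrics between A and B meaningful.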
What are GAE's key features?

Key features of GAE include the following:

API selection. GAE has several built-in APIs, including the following five:

•Blobstore for serving large data objects;

•GAE Cloud Storage for storing data objects;

•Page Speed Service for automatically speeding up webpage load times;

•URL Fetch Service to issue HTTP requests and receive responses for efficiency and scaling;
and

•Memcache for a fully managed in-memory data store.


Amazon AWS

AWS is designed to allow application providers, ISVs, and vendors to quickly and securely
host applications, whether an existing application or a new SaaS-based application.

You can use the AWS Management Console or well-documented web services APIs to access
AWS's application hosting platform.

ISV: Independent Software Vendor


What is IAM?
•AWS Identity and Access Management (IAM) is a web service that helps you securely control access
to AWS resources.

•IAM allows you to manage users and their level of access to the AWS console.

•It is used to set up users, permissions, and roles. It allows you to grant access to the different
parts of the AWS platform.

•AWS Identity and Access Management is a web service that enables Amazon Web Services (AWS)
customers to manage users and user permissions in AWS.

•With IAM, Organizations can centrally manage users, security credentials such as access keys, and
permissions that control which AWS resources users can access.

•IAM enables the organization to create multiple users, each with their own security credentials,
controlled and billed to a single AWS account. IAM allows each user to do only what they need
to do as part of their job.
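The deny-by-default model described above can be illustrated with a toy sketch. The `Policy` and `User` classes and the action names below are hypothetical illustrations, not the real AWS policy language:

```python
# Toy model of IAM-style access control: users hold named policies, and
# each policy allows a set of (action, resource) pairs. This is an
# illustrative sketch, not the actual AWS IAM policy language.

class Policy:
    def __init__(self, name, allowed):
        self.name = name
        self.allowed = set(allowed)  # {(action, resource), ...}

class User:
    def __init__(self, name):
        self.name = name
        self.policies = []

    def attach(self, policy):
        self.policies.append(policy)

    def is_allowed(self, action, resource):
        # Deny by default; allow only if an attached policy permits it.
        return any((action, resource) in p.allowed
                   for p in self.policies)

s3_read = Policy("S3ReadOnly", [("s3:GetObject", "reports-bucket")])
alice = User("alice")
alice.attach(s3_read)
```

Here `alice` can read from `reports-bucket` but nothing else, mirroring the principle that a user may do only what their job requires.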
Elastic Compute Cloud (EC2)

•Amazon EC2 is a web service that provides resizable compute capacity in the cloud.

•Amazon EC2 reduces the time required to obtain and boot new server instances to minutes.
In the old days, if you needed a server you had to raise a purchase order and wait for delivery
and cabling, a very time-consuming process.

•Amazon EC2 provides virtual machines in the cloud, which completely changed the industry.

•You can scale the compute capacity up and down as per the computing requirement changes.

•Amazon EC2 provides developers with the tools to build resilient applications that insulate
themselves from common failure scenarios.
Simple Storage Service (S3)

•S3 provides developers and IT teams with secure, durable, highly scalable object storage.

•It is easy to use, with a simple web services interface to store and retrieve any amount of data
from anywhere on the web.

•Objects stored in S3 can range from 0 bytes to 5 TB.

•Total storage is unlimited, meaning you can store as much data as you want.

•Files are stored in buckets. A bucket is like a folder in S3 that holds files.

•S3 uses a universal namespace, i.e., bucket names must be globally unique. Each bucket maps
to a DNS address, so a bucket must have a unique name to generate a unique DNS address.
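The link between a globally unique bucket name and a unique DNS address can be sketched as follows. The validation pattern here is a simplified assumption; the full S3 naming rules have additional restrictions:

```python
import re

def bucket_dns_address(bucket: str) -> str:
    """Return the virtual-hosted-style DNS address for a bucket name.

    Simplified validation: 3-63 characters, lowercase letters, digits,
    dots and hyphens, starting and ending with a letter or digit.
    (The real S3 rules impose a few more restrictions.)
    """
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", bucket):
        raise ValueError(f"invalid bucket name: {bucket!r}")
    # A globally unique name yields a globally unique DNS address.
    return f"{bucket}.s3.amazonaws.com"
```

Because the bucket name is embedded in the hostname, two customers can never hold the same name, which is exactly why the namespace must be universal.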
AWS Global Infrastructure

The following are the components that make up the AWS infrastructure:

•Availability Zones

•Region

•Edge locations

•Regional Edge Caches


AWS Global Infrastructure

Availability zone as a Data Center


An availability zone is a facility that can be located in a country or city. Inside this facility,
i.e., a data center, there are multiple servers, switches, load balancers, and firewalls.
The things that interact with the cloud sit inside the data centers.

An availability zone can comprise several data centers, but if they are close together, they are
counted as one availability zone.

Region
A region is a geographical area. Each region consists of two or more availability zones.

A region is a collection of data centers that is completely isolated from other regions.
AWS Global Infrastructure

Edge Locations

Edge locations are the endpoints for AWS used for caching content.

Edge locations consist of CloudFront, Amazon's Content Delivery Network (CDN).

An edge location is not a region but a small site that AWS operates. It is used for caching
content.

Edge locations are mainly located in most of the major cities to distribute the content to end
users with reduced latency.

For example, if a user accesses your website from Singapore, the request is redirected to the
edge location closest to Singapore, where cached data can be read.
AWS Global Infrastructure

Regional Edge Cache

Regional Edge cache lies between CloudFront Origin servers and the edge locations.

A regional edge cache has a larger cache than an individual edge location.

Data is removed from the cache at an edge location sooner than it is from the regional edge
cache, where it is retained longer.

When a user requests data that is no longer available at the edge location, the edge location
retrieves the cached data from the regional edge cache instead of the origin servers, which
have higher latency.
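The edge-to-regional lookup path described above can be simulated with a toy two-tier cache. The class names and the tiny eviction policy are illustrative assumptions, not CloudFront internals:

```python
# Toy two-tier lookup mirroring the edge -> regional cache -> origin path.

class RegionalCache:
    def __init__(self, origin):
        self.origin = origin      # highest-latency fallback
        self.store = {}

    def get(self, key):
        if key not in self.store:          # miss: fetch from origin once
            self.store[key] = self.origin[key]
        return self.store[key]

class EdgeLocation:
    def __init__(self, regional, capacity=2):
        self.regional = regional
        self.store = {}
        self.capacity = capacity           # small cache, evicts eagerly

    def get(self, key):
        if key not in self.store:
            if len(self.store) >= self.capacity:
                # evict the oldest entry; the regional tier keeps its copy
                self.store.pop(next(iter(self.store)))
            self.store[key] = self.regional.get(key)
        return self.store[key]

origin = {"/index.html": "<html>home</html>"}
edge = EdgeLocation(RegionalCache(origin))
```

After an item is evicted at the edge, a later request for it is served from the regional cache rather than the origin, which is the latency saving the slide describes.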
Microsoft Azure
The Azure Management Portal, launched in 2012, is an interface to manage services and
infrastructure. All services and applications are displayed in it, and it lets the user manage them.
Microsoft Azure - Compute Module
Step 1 − First, login in to your Azure account.
Step 2 − Click ‘New’ at the left bottom corner and drag your cursor to ‘Compute‘.

Create a Web App


Step 1 − Click Web App.
Step 2 − Click Quick Create, enter the URL, and choose a service plan from the dropdown
list.

Windows Azure supports .Net, Java, PHP, Python, Node.js and Ruby.

There are several ways of publishing code to the Azure server. It can be published using FTP,
FTPS, or Microsoft Web Deploy.

Various source control tools such as GitHub, Dropbox, and CodePlex can also be used to
publish the code. Azure provides a very interactive interface to keep track of changes that have
already been published as well as unpublished changes.
Create a Virtual Machine

Step 1 − Click on ‘Virtual Machine’ from the list.


Step 2 − Then click ‘From Gallery’.
Step 3 − Choose the Operating System or Program you want to run.
Step 4 − Choose the configuration and fill in the details.

The Username and Password you set up here will be needed to access the virtual machine
every time.

Step 5 − Once the machine is created, you can connect to it by clicking on the connect icon
displayed at the bottom of the screen. It will save an .rdp file on your machine. Choose 'Save
File' on the screen and it will be saved in 'Downloads' or in the set location on your machine.

Step 6 − Open that .rdp file and you can connect to the VM by filling in the credentials.
Open Source Clouds

• Cloud Foundry

• WSO2

• Cloudify

• OpenShift

• Stackato

• Alibaba

• https://www.openstack.org/

• https://opennebula.io/
Cloud Computing Infrastructure

Cloud infrastructure consists of servers, storage devices, network, cloud management
software, deployment software, and platform virtualization.
Hypervisor
A hypervisor is firmware or a low-level program that acts as a Virtual Machine Manager. It allows a
single physical instance of cloud resources to be shared between several tenants.

Management Software
It helps to maintain and configure the infrastructure.

Deployment Software
It helps to deploy and integrate the application on the cloud.

Network
The network connects cloud services over the Internet. It is also possible to deliver the network as a
utility over the Internet, meaning the customer can customize the network route and protocol.

Server
Servers provide the computing for resource sharing and offer other services such as resource
allocation and de-allocation, resource monitoring, and security.

Storage
The cloud keeps multiple replicas of storage. If one of the storage resources fails, the data can be
retrieved from another replica, which makes cloud computing more reliable.
Infrastructural Constraints
Transparency
Virtualization is the key to sharing resources in a cloud environment. But it is not possible to
satisfy demand with a single resource or server. Therefore, there must be transparency in
resources, load balancing, and applications, so that we can scale them on demand.

Scalability
Scaling up an application delivery solution is not as easy as scaling up an application,
because it involves configuration overhead or even re-architecting the network. So the application
delivery solution needs to be scalable, which requires a virtual infrastructure in which
resources can be provisioned and de-provisioned easily.

Intelligent Monitoring
To achieve transparency and scalability, application solution delivery will need to be capable
of intelligent monitoring.

Security
The mega data center in the cloud should be securely architected. The control node, an
entry point into the mega data center, also needs to be secure.
Cloud Computing Architecture

Cloud computing architecture is a combination of service-oriented architecture and
event-driven architecture.
Front End

The front end is used by the client. It contains the client-side interfaces and applications that
are required to access cloud computing platforms. The front end includes web browsers
(such as Chrome, Firefox, Internet Explorer, etc.), thin and fat clients, tablets, and mobile
devices.

Back End

The back end is used by the service provider. It manages all the resources that are required
to provide cloud computing services. It includes a huge amount of data storage, security
mechanism, virtual machines, deploying models, servers, traffic control mechanisms, etc.
Components of Cloud Computing Architecture
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User Interface) to
interact with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
3. Service
The service component manages which type of service you access according to the client's
requirement (SaaS, PaaS, or IaaS).
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge
amount of storage capacity in the cloud to store and manage data.
Components of Cloud Computing Architecture
6. Infrastructure
It provides services on the host level, application level, and network level. Cloud infrastructure
includes hardware and software components such as servers, storage, network devices,
virtualization software, and other storage resources that are needed to support the cloud
computing model.
7. Management
Management is used to manage components such as application, service, runtime cloud,
storage, infrastructure, and other security issues in the backend and establish coordination
between them.
8. Security
Security is an in-built back end component of cloud computing. It implements a security
mechanism in the back end.
9. Internet
The Internet is the medium through which the front end and back end interact and communicate
with each other.
Cloud Computing Challenges

Security and Privacy


Security and privacy of information is the biggest challenge to cloud computing. Security and
privacy issues can be mitigated by employing encryption, security hardware, and security
applications.

Portability
Another challenge is that applications should be easily migrated from one cloud provider to
another, without vendor lock-in. However, this is not yet possible because each cloud provider
uses a different standard language for its platform.

Interoperability
Interoperability means an application on one platform should be able to incorporate services
from other platforms. This is made possible via web services, but developing such web services
is very complex.
Cloud Computing Challenges

Computing Performance
Data-intensive applications on the cloud require high network bandwidth, which results in high
cost. Low bandwidth does not meet the desired computing performance of a cloud application.

Reliability and Availability


It is necessary for cloud systems to be reliable and robust because most businesses are
now becoming dependent on services provided by third parties.

Can you list out the other challenges?


Inter Cloud

A theoretical model for cloud computing services is referred to as the "inter-cloud" or "cloud
of clouds": it combines numerous separate clouds into a single fluid mass for on-demand
operations.

Simply put, the inter-cloud would ensure that a cloud could utilize resources outside of its
range using current agreements with other cloud service providers. There are limits to the
physical resources and the geographic reach of any one cloud.
Need of Inter-Cloud

Due to their physical resource limits, clouds have certain drawbacks:

•When a cloud’s computational and storage capacity is completely depleted, it is unable to
serve its customers.
•The Inter-Cloud addresses these circumstances when one cloud would access the computing,
storage, or any other resource of the infrastructures of other clouds.

Benefits of the Inter-Cloud Environment include:

•Avoiding vendor lock-in for the cloud client
•Access to a variety of geographical locations, as well as enhanced application resiliency
•Better service level agreements (SLAs) for the cloud client
•Expand-on-demand as an advantage for the cloud provider
Inter Cloud Resource Management
Types of Inter-Cloud Resource Management

1.Federation Clouds: A federation cloud is a kind of inter-cloud where several cloud service
providers willingly link their cloud infrastructures together to exchange resources. Cloud
service providers in the federation trade resources in an open manner. With the aid of this
inter-cloud technology, private cloud portfolios, as well as government clouds (those utilized
and owned by non-profits or the government), can cooperate.

2.Multi-Cloud: A client or service makes use of numerous independent clouds in a multi-cloud.
A multi-cloud ecosystem lacks voluntarily shared infrastructure across cloud service
providers. It is the client’s or their agents’ obligation to manage resource supply and
scheduling. This strategy is utilized to use assets from both public and private cloud
portfolios. These multi-cloud kinds include services and libraries.
Topologies used in InterCloud Architecture

1. Peer-to-Peer Inter-Cloud Federation: Clouds work together directly, but they may also
utilize distributed entities as directories or brokers.

Clouds communicate and engage in direct negotiation without the use of intermediaries.
Peer-to-peer federation inter-cloud projects include RESERVOIR (Resources and Services
Virtualization without Barriers).
Topologies used in InterCloud Architecture

2. Centralized Inter-Cloud Federation: Resource sharing in the cloud is carried out or
facilitated by a central body. The central entity serves as a registry for the available cloud
resources.

The inter-cloud initiatives Dynamic Cloud Collaboration (DCC) and Federated Cloud
Management leverage centralized inter-cloud federation.
Topologies used in InterCloud Architecture

3. Multi-Cloud Service: Clients use a service to access various clouds. The cloud client hosts a
service either inside or externally. The services include elements for brokers.

The inter-cloud initiatives OPTIMIS, Contrail, mOSAIC, STRATOS, and commercial cloud
management solutions leverage multi-cloud services.
Topologies used in InterCloud Architecture

4. Multi-Cloud Libraries: Clients use a uniform cloud API as a library to create their own
brokers. Inter-clouds that employ libraries make it easier to use clouds consistently.

The Java library jclouds, the Python library Apache Libcloud, and the Ruby library Apache
Deltacloud are a few examples of multi-cloud libraries.
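The idea behind such libraries, one uniform API with provider-specific drivers underneath, can be sketched with hypothetical drivers; the class names below are illustrative, not the actual Libcloud or jclouds API:

```python
# Toy sketch of a uniform multi-cloud API: the client codes against one
# driver interface and provider-specific drivers adapt it. The provider
# classes here are fakes for illustration only.

class NodeDriver:
    def list_nodes(self):
        raise NotImplementedError

class FakeProviderA(NodeDriver):
    def list_nodes(self):
        return ["a-node-1", "a-node-2"]

class FakeProviderB(NodeDriver):
    def list_nodes(self):
        return ["b-node-1"]

def all_nodes(drivers):
    # The caller never needs provider-specific code paths.
    return [n for d in drivers for n in d.list_nodes()]
```

A client-built broker would call `all_nodes` over whatever mix of providers it holds, which is what "using clouds consistently" means in practice.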
Difficulties with Inter-Cloud Research
The needs of cloud users frequently call for various resources, and the needs are often
variable and unpredictable. This element creates challenging issues with resource
provisioning and application service delivery. The difficulties in federating cloud
infrastructures include the following:

Prediction of Application Service Behaviour: It is essential that the system be able to
predict customer demands and service behaviour. It cannot make rational decisions to
dynamically scale up and down until it can predict. Prediction and forecasting models must
be constructed. Building models that accurately learn and fit statistical functions suited to
various behaviours is a difficult task.
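As a minimal illustration of the prediction step, a moving-average forecast is sketched below; real inter-cloud systems would use far richer statistical models:

```python
def forecast_next(demand_history, window=3):
    """Predict the next period's demand with a simple moving average.

    This only illustrates the forecasting step the text describes;
    the window size is an arbitrary illustrative choice.
    """
    recent = demand_history[-window:]
    return sum(recent) / len(recent)

history = [100, 120, 140]   # e.g. requests per minute
```

A scaler that trusted this forecast would provision for the predicted demand ahead of time instead of reacting only after load arrives.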

Flexible Service-Resource Mapping: The system must compute the appropriate software and
hardware combinations, which makes matching services to cloud resources a difficult process.
The QoS targets must be met while achieving the highest possible system utilization and
efficiency throughout the mapping of services.
Difficulties with Inter-Cloud Research

Integration and Interoperability: SMEs may not be able to migrate to the cloud since they
have a substantial number of on-site IT assets, such as business applications. Due to security
and privacy concerns, sensitive data in an organization may not be moved to the cloud. In
order for on-site assets and cloud services to work together, integration and interoperability
are required. It is necessary to find solutions for the problems of identity management, data
management, and business process orchestration.

Monitoring System Components at Scale: In spite of the distributed nature of the system’s
components, centralized procedures are used for system management and monitoring. The
management of multiple service queues and a high volume of service requests raises issues
with scalability, performance, and reliability, making centralized approaches ineffective.
Instead, architectures based on decentralized messaging and indexing models are required,
which can be used for service monitoring and management.
Resource Allocation Methods in Cloud Computing
The allocation of resources and services from a cloud provider to a customer is known as
resource provisioning in cloud computing, sometimes called cloud provisioning. Resource
provisioning is the process of choosing, deploying, and managing software (like load
balancers and database server management systems) and hardware resources (including CPU,
storage, and networks) to assure application performance.

To effectively utilize the resources without violating the SLA while achieving the QoS
requirements, static/dynamic provisioning and static/dynamic allocation of resources must be
established based on the application needs.

Resource over- and under-provisioning must be prevented. Power usage is another significant
constraint. Care should be taken to reduce power consumption and dissipation through careful
VM placement. There should be techniques to avoid excess power consumption and reduce the
carbon footprint.
Types of Cloud Provisioning
•Static Provisioning or Advance Provisioning: Static provisioning can be used successfully
for applications with known and typically constant demands or workloads. In this instance, the
cloud provider provides the customer with a set number of resources. The client can thereafter
utilize these resources as required. The client is in charge of making sure the resources aren’t
overutilized. This is an excellent choice for applications with stable and predictable needs or
workloads. For instance, a customer might want to use a database server with a set quantity of
CPU, RAM, and storage.

When a consumer contracts with a service provider for services, the supplier makes the
necessary preparations before the service can begin. Either a one-time cost or a monthly fee is
charged to the client.
Resources are pre-allocated to customers by cloud service providers. This means that before
consuming resources, a cloud user must select how much capacity they need in a static sense.
Static provisioning may result in issues with over or under-provisioning.
Types of Cloud Provisioning
•Dynamic provisioning or On-demand provisioning: With dynamic provisioning, the
provider adds resources as needed and subtracts them as they are no longer required. It follows
a pay-per-use model, i.e. the clients are billed only for the exact resources they use. “Dynamic
provisioning” techniques allow VMs to be moved on-the-fly to new computing nodes within
the cloud, in situations where demand by applications may change or vary. This is a suitable
choice for programs with erratic and shifting demands or workloads. For instance, a customer
might want to use a web server with a configurable quantity of CPU, memory, and storage. In
this scenario, the client can utilize the resources as required and only pay for what is really
used. The client is in charge of ensuring that the resources are not oversubscribed; otherwise,
fees can skyrocket.

•Self-service provisioning or user self-provisioning: In user self-provisioning, sometimes
referred to as cloud self-service, the customer uses a web form to acquire resources from the
cloud provider, sets up a customer account, and pays with a credit card. Shortly after, resources
are made accessible for consumer use.
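The scale-out/scale-in decision at the heart of dynamic provisioning can be sketched as a simple threshold rule. The utilization thresholds below are illustrative assumptions, not provider defaults:

```python
def plan_capacity(current_vms, utilization, low=0.3, high=0.8):
    """Toy dynamic-provisioning rule: add a VM when utilization is high,
    release one when it is low, otherwise keep the fleet unchanged.
    """
    if utilization > high:
        return current_vms + 1          # scale out to meet demand
    if utilization < low and current_vms > 1:
        return current_vms - 1          # scale in to stop paying for idle VMs
    return current_vms
```

Running this rule each monitoring interval yields the pay-per-use behaviour the text describes: capacity follows demand up and down instead of being fixed in advance.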
Tools for Cloud Provisioning:

•Google Cloud Deployment Manager

•IBM Cloud Orchestrator

•AWS CloudFormation

•Microsoft Azure Resource Manager


Global exchange of cloud resources.
Entities of the Global exchange of cloud resources
Market directory
A market directory is an extensive database of resources, providers, and participants using the resources.
Participants can use the market directory to find providers or customers with suitable offers.
Auctioneers
Auctioneers periodically clear bids and asks from market participants. Auctioneers sit between
providers and customers and grant the resources available in the global exchange of cloud
resources to the highest-bidding customer.
Brokers
Brokers mediate between consumers and providers by buying capacity from the provider and sub-leasing these
to the consumers. They must select consumers whose apps will provide the most utility. Brokers may also
communicate with resource providers and other brokers to acquire or trade resource shares. To make decisions,
these brokers are equipped with a negotiating module informed by the present conditions of the resources and
the current demand.
Service-level agreements (SLAs)
A service level agreement (SLA) details the service to be provided in terms of metrics agreed
upon by all parties, as well as penalties for failing to meet the expectations.
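The auctioneer's clearing step can be sketched as a toy double auction. The midpoint settlement rule is an illustrative assumption; real exchanges use richer rules:

```python
def clear_market(asks, bids):
    """Match the highest bid with the lowest ask while the bid covers it.

    Providers post asks (minimum acceptable price) and customers post
    bids (maximum they will pay), mirroring the auctioneer entity above.
    Returns the settlement price of each matched trade.
    """
    asks = sorted(asks)                 # cheapest offers first
    bids = sorted(bids, reverse=True)   # highest bids first
    trades = []
    for ask, bid in zip(asks, bids):
        if bid >= ask:
            # settle at the midpoint of ask and bid
            trades.append(round((ask + bid) / 2, 2))
        else:
            break                       # remaining pairs cannot trade
    return trades
```

With asks `[5, 9]` and bids `[10, 6]`, only the bid of 10 covers the ask of 5, so one trade clears; the 6 bid cannot cover the 9 ask, so resources go to the highest-bidding customer only.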
