
UNIT IV CLOUD DEPLOYMENT ENVIRONMENT

Google App Engine – Amazon AWS – Microsoft Azure; Cloud Software


Environments –Eucalyptus – OpenStack.

Google App Engine


• Google App Engine (GAE) is a platform-as-a-service product that
provides web app developers and enterprises with access to
Google's scalable hosting and tier 1 internet service.
• GAE originally required that applications be written in Java or Python, store data in
Google Bigtable and use the Google query language. Noncompliant
applications required modification to run on GAE.
• GAE provides more infrastructure than other scalable hosting services,
such as Amazon Elastic Compute Cloud (EC2).

Google provides GAE free up to a certain amount of use for the following
resources:

• processor (CPU)
• storage
• application programming interface (API) calls
• concurrent requests
How is GAE used?
GAE is a fully managed, serverless platform that is used to host, build and
deploy web applications. Users can create a GAE account, set up a software
development kit and write application source code. They can then use GAE to
test and deploy the code in the cloud.

Procedure:

• If you don't already have Python 2.7.17 installed on your computer,
download and install it.

• Download and install the Google App Engine SDK.

• Making your first application:

1. Create a folder called "apps" (for example, on the Desktop) and note
the path to this folder.

2. Create a file called app.yaml in the "apps" folder.

3. Open the SDK and add your application, pointing it at the path of the
"apps" folder.

4. The SDK gives you a local host address. Press Browse to open a
browser pointing at your application, which is running at
http://localhost:8080/

5. Paste http://localhost:8080 into your browser and you should see
your application.

6. Just for fun, edit add.py to change the name "a value" to your
own name and press Refresh in the browser to verify your update.
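The minimal application from the steps above can be sketched as a plain WSGI app. This is a hedged sketch using only Python's standard library rather than the App Engine webapp framework; the file name and the "a value" greeting mirror the tutorial's placeholders and are illustrative.

```python
# hello.py - a minimal WSGI "hello" application, similar in spirit to the
# add.py used in the tutorial above (names here are illustrative).

def application(environ, start_response):
    # The development server routes every request to this callable;
    # here we return the same greeting for any path.
    name = "a value"  # step 6: change this to your own name
    body = f"Hello, {name}!".encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

def render(environ=None):
    # Small helper to invoke the app without a real server, for testing.
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    chunks = application(environ or {}, start_response)
    return captured["status"], b"".join(chunks).decode("utf-8")
```

Running `wsgiref.simple_server.make_server("", 8080, application).serve_forever()` would serve this at http://localhost:8080/, much as the SDK's development server does.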

Application testing is another way to use GAE. Users can route traffic to
different application versions to A/B test them and see which version performs
better under various workloads.

What are GAE's key features?


Key features of GAE include the following:

API selection. GAE has several built-in APIs, including the following five:

• Blobstore for serving large data objects;


• GAE Cloud Storage for storing data objects;
• Page Speed Service for automatically speeding up webpage load times;
• URL Fetch Service to issue HTTP requests and receive responses for
efficiency and scaling; and
• Memcache for a fully managed in-memory data store.

Managed infrastructure. Google manages the back-end infrastructure for


users. This approach makes GAE a serverless platform and simplifies API
management.

Several programming languages. GAE supports a number of languages,
including Go, PHP, Java, Python, Node.js, .NET and Ruby. It also supports
custom runtimes.

Support for legacy runtimes. GAE supports legacy runtimes, which are
versions of programming languages no longer maintained. Examples include
Python 2.7, Java 8 and Go 1.11.

Application diagnostics. GAE lets users record data and run diagnostics on
applications to gauge performance.

Security features. GAE enables users to define access policies with the GAE
firewall and managed Secure Sockets Layer/Transport Layer
Security certificates for free.

Traffic splitting. GAE lets users route requests to different application


versions.

Versioning. Applications in Google App Engine function as a set


of microservices that refer back to the main source code. Every time code is
deployed to a service with the corresponding GAE configuration files, a version
of that service is created.

Google App Engine benefits and challenges


GAE extends the benefits of cloud computing to application development, but it
also has drawbacks.

Benefits of GAE
• Ease of setup and use.
• Pay-per-use pricing.
• Scalability.
• Security.
GAE challenges
• Lack of control. Although a managed infrastructure has advantages, if a
problem occurs in the back-end infrastructure, the user is dependent on
Google to fix it.
• Performance limits. CPU-intensive operations are slow and expensive to
perform using GAE. This is because one physical server may be serving
several separate, unrelated app engine users at once who need to share the
CPU.
• Limited access. Developers have limited, read-only access to the GAE
filesystem.
• Java limits. Java apps cannot create new threads and can only use a subset
of the Java runtime environment standard edition classes.
Examples of Google App Engine
One example of an application created in GAE is an Android messaging app
that stores user log data. The app can store user messages and write event logs
to the Firebase Realtime Database and use it to automatically synchronize data
across devices.

Java servers in the GAE flexible environment connect to Firebase and receive
notifications from it. Together, these components create a back-end streaming
service to collect messaging log data.

Amazon AWS:
o AWS stands for Amazon Web Services.
o AWS is a cloud service from Amazon that uses distributed IT
infrastructure to provide different IT resources on demand. It
provides services such as infrastructure as a service (IaaS),
platform as a service (PaaS) and packaged software as a service (SaaS).
o Amazon launched AWS, a cloud computing platform, to allow
different organizations to take advantage of reliable IT infrastructure.

AWS Services:
• Server – EC2 (Elastic Compute Cloud) instances
• Storage – Simple Storage Service (S3)
• Network – Virtual Private Cloud (VPC)
• Database – Relational Database Service (RDS)
• Security – Identity and Access Management (IAM)
• Application services

Amazon AMI:

o AMI stands for Amazon Machine Image.

o An AMI is a virtual image used to create a virtual machine within an EC2
instance.
o You can create multiple instances from a single AMI when you need
instances with the same configuration.
o You can create instances from different AMIs when you need
instances with different configurations.
o An AMI also provides a template for the root volume of an instance.

AMI Lifecycle
o First, you need to create and register an AMI.
o You can use an AMI to launch EC2 instances.
o You can also copy an AMI to some different region.
o When AMI is no longer required, then you can also deregister it

Amazon AWS:
Server – EC2 (Elastic Compute Cloud) Instance:

• OS selection – Windows, Linux, Ubuntu, Red Hat, etc.
• Amazon Elastic Compute Cloud (Amazon EC2) instances represent
virtual machines. EC2 instances are launched from an Amazon
Machine Image (AMI), an AWS template that describes and defines the
OS and operating environment for one or more EC2 instances of one or
more EC2 instance types.
• Each instance type delivers a mix of CPU, memory, storage and
networking capacity, across one or more size options, and should be
carefully matched to your workload's unique demands.

EC2 functions:

1. Load a variety of operating systems.

2. Install custom applications.

3. Manage network access permissions.

4. Run the image on as many or as few systems as the customer desires.

The overall workflow:

Sign up for AWS → Create an IAM user → Create a key pair → Create a
virtual private cloud → Create a security group → Launch an instance →
Connect to the instance → Clean up the instance.

Step 1: Sign up for AWS

• When you sign up for Amazon Web Services (AWS), your AWS account is
automatically signed up for all services in AWS, including Amazon EC2.
You are charged only for the services that you use.

• With Amazon EC2, you pay only for what you use. If you are a new AWS
customer, you can get started with Amazon EC2 for free.

Step 2: Create an IAM user


The console requires your password. You can create access keys for your
AWS account to access the command line interface or API. However, we
don't recommend that you access AWS using the credentials for your AWS
account; we recommend that you use AWS Identity and Access Management
(IAM) instead.

You can then access AWS using a special URL and the credentials for the
IAM user. If you signed up for AWS but have not created an IAM user for
yourself, you can create one using the IAM console.

Step 3: Create a key pair

AWS uses public-key cryptography to secure the login information for your
instance. A Linux instance has no password; you use a key pair to log in to
your instance securely. You specify the name of the key pair when you
launch your instance, then provide the private key when you log in using
SSH.

Step 4: Create a Virtual Private Cloud (VPC)

Amazon VPC enables you to launch AWS resources into a virtual network
that you've defined, known as a Virtual Private Cloud (VPC). The newer
EC2 instance types require that you launch your instances in a VPC. If you
have a default VPC, you can skip this section and move to the next task,
create a security group. To determine whether you have a default VPC, open
the Amazon EC2 console and look for default VPC under account attributes
on the dashboard.

Step 5: Create a security group

• Security groups act as a firewall for associated instances, controlling both


inbound and outbound traffic at the instance level. You must add rules to a
security group that enable you to connect to your instance from your IP
address using SSH. You can also add rules that allow inbound and outbound
HTTP and HTTPS access from anywhere. Note that if you plan to launch
instances in multiple regions, you'll need to create a security group in each
region.

Step 6: Launch an Instance

EC2 creation:
i) OS selection (AMI) – Windows, Amazon Linux, Ubuntu, etc.

We have different Amazon Machine Images; these are snapshots of
different virtual machines. We will be using Amazon Linux AMI 2018.03.0
(HVM), as it has built-in tools such as Java, Python, Ruby, Perl, and especially
the AWS command line tools.

ii) Hardware (instance type) – memory optimized, storage
optimized (combinations of vCPU and RAM)

o Choose an instance type, and then click Next. Suppose I choose
t2.micro as the instance type.

iii) Configuration – network, subnet, IP

Network: Choose your network; set it as the default, i.e., vpc-dacbc4b2
(default), where a VPC is a virtual private cloud in which we can launch AWS
resources such as EC2 instances.

Subnet: A range of IP addresses in the virtual cloud. You can add new
AWS resources within a specified subnet.

Shutdown behavior: Defines what happens to the instance when you shut
down the Linux machine; you can either stop or terminate it.
For now, I leave it as Stop.

Enable termination protection: Protects the instance against
accidental termination.

Monitoring: We can monitor metrics such as CPU utilization. For now, I
leave Monitoring unchecked.

User data: In Advanced details, you can pass the bootstrap scripts to EC2
instance. You can tell them to download PHP, Apache, install the Apache, etc.
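As a hedged illustration of user data, the fragment below shows what such a bootstrap configuration might look like for an Amazon Linux instance. The package and service names are assumptions; this is launch-time configuration supplied to the instance, not code run here.

```bash
#!/bin/bash
# Illustrative EC2 user-data script (package names are assumptions):
# installs Apache and PHP, then starts the web server on first boot.
yum update -y
yum install -y httpd php
systemctl start httpd
systemctl enable httpd
echo "<?php phpinfo(); ?>" > /var/www/html/index.php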

iv) Storage – Elastic Block Store

Volume type: We select Magnetic (standard), as it is the only disk type
here that is bootable.
Delete on termination: When checked, terminating the EC2
instance also deletes the EBS volume.

v) Tag – name the server

o Now add the tags and then click Next.

We observe that we add two tags, i.e., the name of the server and the
department. Create tags for your resources, as they make it easier to track
and allocate the overall cost.

vi)Security group – protocol HTTP

o Configure Security Group. The security group allows some specific


traffic to access your instance.

o Review an EC2 instance that you have just configured, and then click on
the Launch button.
vii) Key pair generation – the private key is downloaded as an encrypted
privacy-enhanced mail (.pem) file

o Create a new key pair, enter a name for it, and download the
key pair.
viii)Launch Instance :

o Click on the Launch Instances button.


Simple Storage Service (S3)

o S3 is a safe place to store files.

o It is object-based storage, i.e., you can store images, Word files, PDF
files, etc.
o A file stored in S3 can be from 0 bytes to 5 TB in size.
o It has unlimited storage, meaning that you can store as much data as you
want.
o Files are stored in buckets. A bucket is like a folder in S3 that
stores the files.
o Buckets
o Every object is incorporated in a bucket.
o For example, if the object named photos/tree.jpg is stored in the
treeimage bucket, then it can be addressed by using the URL
http://treeimage.s3.amazonaws.com/photos/tree.jpg.
o A bucket has no limit on the number of objects that it can store. No bucket
can exist inside another bucket.
o Amazon S3 defines a bucket name as a series of one or more labels,
separated by periods, that adhere to the following rule: the bucket name
can be between 3 and 63 characters long, and can contain only lower-case
characters, numbers, periods and dashes.

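The naming rules above can be checked mechanically. A minimal sketch in Python; the function name is mine for illustration, not an AWS API:

```python
import re

# One "label": lowercase letters, digits, and dashes, starting and ending
# with a letter or digit. A bucket name is one or more labels joined by periods.
_LABEL = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"
_BUCKET_RE = re.compile(rf"{_LABEL}(?:\.{_LABEL})*")

def is_valid_bucket_name(name: str) -> bool:
    """Check an S3 bucket name against the rules quoted above:
    3-63 characters; only lowercase letters, digits, periods, dashes."""
    return 3 <= len(name) <= 63 and bool(_BUCKET_RE.fullmatch(name))
```

For example, `treeimage` passes, while `MyBucket` (uppercase) and a two-character name fail.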

o If you upload a file to an S3 bucket, you will receive an HTTP 200 code,
which means the upload was successful.
o Objects
o Objects are the entities which are stored in an S3 bucket.
o An object consists of object data and metadata, where metadata is a
set of name-value pairs that describes the data.
o An object carries some default metadata, such as the date last
modified, and standard HTTP metadata, such as Content-Type.
Custom metadata can also be specified at the time of storing an
object.
o An object is uniquely identified within a bucket by a key and a version ID.

Amazon Elastic Block Store

Amazon Elastic Block Store (Amazon EBS) provides persistent block
storage volumes for use with Amazon EC2 instances in the AWS Cloud.

EBS volumes are highly available and reliable storage volumes that can be
attached to any running instance that is in the same Availability Zone.

EBS volumes are particularly well-suited for use as the primary storage for
file systems, databases, or for any applications that require fine granular
updates and access to raw, unformatted, block-level storage.

The size of an EBS volume can be configured by the user and can range
from 1 GB to 1 TB.

The network-based EBS storage service is delivered in volumes, which can
be attached to an EC2 instance and used just like a disk.

Microsoft Azure:
Microsoft Azure is a growing set of cloud computing services created by
Microsoft that hosts your existing applications, streamlines the development of
new applications, and also enhances on-premises applications. It helps
organizations build, test, deploy, and manage applications and
services through Microsoft-managed data centers.

Azure Services
o Compute services: It includes the Microsoft Azure Cloud Services,
Azure Virtual Machines, Azure Website, and Azure Mobile Services,
which processes the data on the cloud with the help of powerful
processors.
o Data services: This service is used to store data over the cloud that can
be scaled according to the requirements. It includes Microsoft Azure
Storage (Blob, Queue Table, and Azure File services), Azure SQL
Database, and the Redis Cache.
o Application services: It includes services, which help us to build and
operate our application, like the Azure Active Directory, Service Bus for
connecting distributed systems, HDInsight for processing big data, the
Azure Scheduler, and the Azure Media Services.
o Network services: It helps you to connect with the cloud and on-
premises infrastructure, which includes Virtual Networks, Azure Content
Delivery Network, and the Azure Traffic Manager.

How Azure works

It is essential to understand the internal workings of Azure so that we can design


our applications on Azure effectively with high availability, data residency,
resilience, etc.
Microsoft Azure is completely based on the concept of virtualization. So,
similar to other virtualized data centers, it also contains racks. Each rack has a
separate power unit and network switch, and each rack is integrated with a
piece of software called the Fabric Controller. The Fabric Controller is a
distributed application responsible for managing and monitoring servers within
the rack. In case of any server failure, the Fabric Controller recognizes it and
recovers it. Each Fabric Controller is, in turn, connected to a piece
of software called the Orchestrator. The Orchestrator includes web services and
REST APIs to create, update, and delete resources.

When a user makes a request, either using PowerShell or the Azure portal,
it first goes to the Orchestrator, which fundamentally does three things:

1. Authenticate the user.

2. Authorize the user, i.e., check whether the user is allowed to
perform the requested task.
3. Look into the database for the availability of space based on the
resources and pass the request to an appropriate Azure Fabric Controller
to execute the request.
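The three steps can be sketched as a toy request pipeline. Everything here (the user table, the permission table, the capacity table, and the function names) is invented for illustration; the real Orchestrator is internal to Azure.

```python
# Toy model of the Orchestrator's three steps: authenticate, authorize,
# then find a fabric controller with free capacity and dispatch the request.

USERS = {"alice": "s3cret"}                      # illustrative credential store
PERMISSIONS = {"alice": {"create_vm"}}           # who may perform which task
CLUSTERS = {"cluster-1": 0, "cluster-2": 2}      # free slots per fabric controller

def handle_request(user, password, action):
    # 1. Authenticate the user.
    if USERS.get(user) != password:
        return "authentication failed"
    # 2. Authorize: is the user allowed to perform the requested task?
    if action not in PERMISSIONS.get(user, set()):
        return "authorization failed"
    # 3. Check capacity and pass the request to a fabric controller.
    for cluster, free in CLUSTERS.items():
        if free > 0:
            CLUSTERS[cluster] -= 1
            return f"dispatched {action} to {cluster}"
    return "no capacity"
```

Note that the request is rejected as early as possible: a bad password never reaches the authorization or capacity checks.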

Combinations of racks form a cluster. We have multiple clusters within a data


center, and we can have multiple Data Centers within an Availability zone,
multiple Availability zones within a Region, and multiple Regions within a
Geography.

o Geographies: It is a discrete market, typically contains two or more


regions, that preserves data residency and compliance boundaries.
o Azure regions: A region is a collection of data centers deployed within a
defined perimeter and interconnected through a dedicated regional low-
latency network.

Azure covers more global regions than any other cloud provider, which offers
the scalability needed to bring applications and users closer around the world. It
is globally available in 50 regions around the world. Due to its availability over
many regions, it helps in preserving data residency and offers comprehensive
compliance and flexible options to the customers.

o Availability Zones: These are physically separate locations within an
Azure region. Each one is made up of one or more data centers with
independent configuration.
Azure Pricing
Microsoft offers the pay-as-you-go approach that helps organizations to serve
their needs. Typically the cloud services will be charged based on the usage.
The flexible pricing option helps in up-scaling and down-scaling the
architecture as per our requirements.

Azure Certification

Microsoft Azure helps to fill the gap between the industry requirement and the
resource available. Microsoft provides Azure Certification into three major
categories, which are:

o Azure Administrator: Those who implement, monitor, and maintain


Microsoft Azure solutions, including major services.
o Azure Developer: Those who design, build, test, and maintain cloud
solutions, such as applications and services, partnering with cloud
solution architects, cloud DBAs, cloud administrators, and clients to
implement these solutions.
o Azure Solution Architect: Those who have expertise in compute,
network, storage, and security so that they can design the solutions that
run on Azure.

Eucalyptus

Eucalyptus is a Linux-based open-source software architecture for cloud
computing and also a storage platform that implements Infrastructure as a
Service (IaaS). It provides quick and efficient computing services. Eucalyptus
was designed to provide services compatible with Amazon's EC2 cloud and
Simple Storage Service (S3).
Eucalyptus Architecture

The Eucalyptus CLI can manage both Amazon Web Services and its own private
instances. Clients are free to migrate instances from Eucalyptus to
Amazon EC2. The virtualization layer oversees the network, storage,
and computing, and instances are isolated from one another by hardware
virtualization.
Important features are:
1. Images: For example, a Eucalyptus Machine Image is a software
module bundled and uploaded to the cloud.
2. Instances: When we run an image, the running copy becomes an instance.
3. Networking: It can be further subdivided into three modes: Static
mode (allocates IP addresses to instances), System mode (assigns a MAC
address and attaches the instance's network interface to the physical
network via the Node Controller), and Managed mode (provides a local
network of instances).
4. Access control: Used to restrict what clients are allowed to do.
5. Elastic Block Storage: Provides block-level storage volumes to attach to
an instance.
6. Auto-scaling and load balancing: Used to create or destroy
instances or services based on requirements.
Components of Architecture

• Node Controller manages the lifecycle of instances running on each node.
It interacts with the operating system, hypervisor, and Cluster Controller,
and controls the working of VM instances on the host machine.
• Cluster Controller manages one or more Node Controllers and the Cloud
Controller simultaneously. It gathers information and schedules VM
execution.
• Storage Controller (Walrus) allows the creation of snapshots of volumes
and provides persistent block storage over VM instances. The Walrus
Storage Controller is a simple file storage system: it stores images and
snapshots, and stores and serves files using S3 (Simple Storage Service) APIs.
• Cloud Controller is the front end for the entire architecture. It presents a
compliant web services interface to client tools on one side and interacts
with the rest of the components on the other side.

Operation Modes Of Eucalyptus

• Managed mode: Offers numerous security groups to users, as the network
is large. Each security group is assigned a set or subset of IP addresses.
Ingress rules are applied through the security groups specified by the user.
The network is isolated by VLAN between the Cluster Controller and Node
Controller. Two IP addresses are assigned to each virtual machine.
• Managed (No VLAN) mode: The root user on a virtual machine can
snoop into other virtual machines running on the same network layer. It
does not provide VM network isolation.
• System mode: The simplest of all modes, with the fewest features. A MAC
address is assigned to a virtual machine instance and attached to the Node
Controller's bridge Ethernet device.
• Static mode: Similar to System mode but with more control over the
assignment of IP addresses. Each MAC address/IP address pair is mapped
to a static entry within the DHCP server, and the next set of MAC/IP
addresses is mapped in turn.
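Static mode's MAC/IP pairing can be illustrated as static DHCP entries. The sketch below emits dnsmasq-style `dhcp-host` lines from a mapping; the addresses and the dnsmasq syntax choice are my own illustration, not Eucalyptus's actual configuration format.

```python
# Static mode maps each MAC address to a fixed IP via static DHCP entries.
# This emits dnsmasq-style "dhcp-host" lines from an illustrative mapping.

STATIC_MAP = {
    "d0:0d:01:02:03:04": "192.168.10.11",
    "d0:0d:01:02:03:05": "192.168.10.12",
}

def dhcp_entries(mapping):
    # One configuration line per MAC/IP pair, in the order given.
    return [f"dhcp-host={mac},{ip}" for mac, ip in mapping.items()]
```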

Advantages Of The Eucalyptus Cloud

1. Eucalyptus can be used to run both a Eucalyptus private cloud and
a Eucalyptus public cloud.
2. Amazon or Eucalyptus machine images can be run on either cloud.
3. Its API is fully compatible with the Amazon Web Services API.
4. Eucalyptus can be used with DevOps tools like Chef and Puppet.

OpenStack
OpenStack is a cloud OS that is used to control the large pools of computing,
storage, and networking resources within a data center. OpenStack is an open-
source and free software platform. This is essentially used and implemented as
an IaaS for cloud computing.

We can call the OpenStack a software platform that uses pooled virtual
resources to create and manage private and public cloud. OpenStack offers
many cloud-related services (such as networking, storage, image services,
identity, etc.) by default. This can be handled by users through a web-based
dashboard, a RESTful API, or command-line tools. OpenStack manages a lot of
virtual machines; this permits the usage of physical resources to be reduced.

Basic Principles of OpenStack

Open Source: Under the Apache 2.0 license, OpenStack is coded and
published. Apache allows the community to use it for free.

Open Design: For the forthcoming update, the development group holds a
Design Summit every 6 months.

Open Development: The developers maintain a source code repository that is
freely accessible throughout the development cycle, following the model of
projects like the Ubuntu Linux distribution.

Open Community: OpenStack allows open and transparent documentation for


the community.

Components of OpenStack

Major components of OpenStack are given below:

Compute (Nova): Compute is a controller that is used to manage resources in


virtualized environments. It handles several virtual machines and other
instances that perform computing tasks.
Object Storage (Swift): To store and retrieve arbitrary data in the cloud, object
storage is used. In Swift, it is possible to store the files, objects, backups,
images, videos, virtual machines, and other unstructured data.

Block Storage (Cinder): This works in the traditional way of attaching and
detaching an external hard drive to the OS for its local use. Cinder manages
adding, removing, and creating new disk space on the server.

Networking (Neutron): This component is used for networking in OpenStack.


Neutron manages all the network-related queries, such as IP address
management, routers, subnets, firewalls, VPNs, etc. It confirms that all the other
components are well connected with the OpenStack.

Dashboard (Horizon): This is the first component that the user sees in the
OpenStack. Horizon is the web UI (user interface) component used to access the
other back-end services. Through individual API (Application programming
interface), developers can access the OpenStack's components, but through the
dashboard, system administrators can look at what is going on in the cloud and
manage it as per their need.

Identity Service (Keystone): It is the central repository of all the users and
their permissions for the OpenStack services they use. This component manages
identity services such as authorization, authentication, AWS-style
(Amazon Web Services) logins, token-based systems, and checking other
credentials (username & password).

Image Service (Glance): The Glance component provides image services to
OpenStack, where an image means a virtual copy of a hard disk.
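Client tools reach these components by first authenticating against the Keystone identity endpoint. As a hedged illustration, here is a clouds.yaml configuration fragment of the kind used by OpenStack command-line clients; every value below is a placeholder, not a real endpoint or credential.

```yaml
# clouds.yaml - example configuration for OpenStack client tools.
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com:5000/v3   # Keystone identity endpoint
      username: demo
      password: secret
      project_name: demo-project
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne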
