
EXPLORING THE

COMPONENTS OF
AWS
INTRODUCTION
• Amazon Web Services (AWS) is a comprehensive,
evolving cloud computing platform provided by
Amazon. It provides a mix of infrastructure as a
service (IaaS), platform as a service (PaaS) and
packaged software as a service (SaaS) offerings.
• AWS launched in 2006 from the internal infrastructure
that Amazon.com built to handle its online retail
operations. AWS was one of the first companies to
introduce a pay-as-you-go cloud computing model
that scales to provide users with compute, storage or
throughput as needed.
• Amazon Web Services provides services from dozens
of data centers spread across availability zones(AZs)
in regions across the world. An AZ represents a
location that typically contains multiple physical data
centers, while a region is a collection of AZs in
geographic proximity connected by low-latency
network links. An AWS customer can spin up virtual
machines (VMs) and replicate data in different AZs to
achieve a highly reliable infrastructure that is resistant
to failures of individual servers or an entire data
center.
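As a rough illustration of this geographic layout, the regions and Availability Zones visible to an account can be listed with the AWS SDK for Python (boto3); the region name below is only an example and credentials are assumed to be configured.

```python
import boto3

# List all regions, then the Availability Zones within one of them.
ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

for region in ec2.describe_regions()["Regions"]:
    print("Region:", region["RegionName"])

for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print("AZ:", az["ZoneName"], "-", az["State"])
```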
ELASTIC COMPUTE CLOUD (EC2)
• Amazon Elastic Compute Cloud (EC2) forms a
central part of Amazon.com’s cloud-computing
platform, AWS, by allowing users to rent virtual
computers on which they run their own applications.
EC2 encourages scalable deployment of applications
by providing a web service through which a user can
boot an Amazon Machine Image (AMI) to configure a
virtual machine, which Amazon calls an "instance",
containing any software desired. A user can create,
launch, and terminate server instances as needed,
paying by the second for active servers. EC2 gives
users control over the geographical location of
instances, which allows for latency optimization and
high levels of redundancy.
Initially, EC2 used Xen virtualization exclusively. However, on
November 6, 2017, Amazon announced the new C5 family of
instances that were based on a custom architecture around
the KVM hypervisor, called Nitro. Each virtual machine, called
an "instance", functions as a virtual private server. Amazon
sizes instances based on "Elastic Compute Units". The
performance of otherwise identical virtual machines may
vary. On November 28, 2017, AWS announced a bare-metal
instance type offering, marking a notable departure from
exclusively offering virtualized instance types.
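As a minimal sketch of booting an AMI into an instance with boto3 (the AMI ID, key pair name, and region below are placeholders, not values from the slides):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Launch a single t2.micro instance from an AMI (placeholder ID).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # assumed existing key pair
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-instance"}],
    }],
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)

# Terminate it when no longer needed (billing for the instance stops).
# ec2.terminate_instances(InstanceIds=[instance_id])
```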
As of January 2019, the following instance types were offered:
•General Purpose: A1, T3, T2, M5, M5a, M4, T3a
•Compute Optimized: C5, C5n, C4
•Memory Optimized: R5, R5a, R4, X1e, X1, High Memory, z1d
•Accelerated Computing: P3, P2, G3, F1
•Storage Optimized: H1, I3, D2
As of April 2018, the following payment methods for instances
were offered:
•On-demand: pay by the hour with no long-term commitment.
•Reserved: rent instances with a one-time payment, receiving a
discount on the hourly charge.
•Spot: a bid-based service; instances run only while the spot price is
below the maximum price specified by the bidder.
AWS EC2 Instance Types:
General Purpose
T2
Burstable performance instances that offer a baseline level of
CPU performance with the capability to burst above the
baseline. The ability to burst above the baseline is governed by CPU
Credits. Every T2 instance regularly receives CPU Credits at an
established rate based on the size of the instance. These instances
accumulate CPU Credits while they are idle and consume CPU Credits
when they become active. They are a better option for workloads
that do not require the full CPU consistently but occasionally need
to burst. These instances are suitable for general purpose
workloads like developer environments, small databases, and
low-traffic web servers.
M3
The M3 instance type offers a balance of memory, network, and
compute resources. These instances are for general purpose
virtual machines, and most of the EC2 instances belong to this
category.
M3 instances are suitable for mid-size and small databases, data
processing jobs that require extra memory, running backend
servers for SAP, cluster computing, Microsoft SharePoint, and
several other applications.
M4
M4 instances are the most recent general-purpose instances.
The M4 family of instances offers a balance of memory,
network, and compute resources, and it is a better option for
several applications. They have custom Intel Xeon E5-2676 v3
Haswell processors that are optimized explicitly for EC2. The
clock rate for these instances can be in the range of 2.4 GHz to
3.0 GHz with the aid of Intel Turbo Boost.
M4 instances also provide Enhanced Networking that gives up
to four times the packet rate of instances without Enhanced
Networking, while guaranteeing reliable latency, even under
high network I/O. By default, these instances are EBS-
optimized and have dedicated network capacity for
Input/Output operations.
Compute-optimized instances
C4
These instances feature the highest-performing processors with the
lowest price/performance among EC2 instance types. They are suitable
for compute-bound applications that benefit
from high-performance processors. C4 instances are ideal for
media transcoding, massively multiplayer gaming servers, high-
traffic web servers, batch processing workloads, and high-
performance computing.
C4 instances are based on custom 2.9 GHz Intel Xeon E5-
2666 v3 processors, which are specifically optimized for EC2.
Intel Turbo Boost Technology helps the clock speed of C4
instances reach 3.5 GHz with one- or two-core Turbo Boost
on c4.8xlarge instances.
C3
When compared to C1, C3 instances offer much faster
processors, approximately twice the memory per vCPU, and
SSD-based instance storage. These instances are suitable for
applications that benefit from a higher amount of compute
capacity relative to memory, and are best fitted for high-
performing web servers and several compute-intensive
applications.
Compute-optimized instances are a recent introduction from
AWS. The instances are intended to provide maximum
performance at an affordable price. They offer per-core
performance that beats the other AWS EC2
instance types, with a price-performance ratio that is the
best fit for compute-intensive workloads.
Memory Optimized
X1
Best suited for enterprise-class, large-scale, in-memory
applications, these instances offer the lowest price for each GiB of
RAM among AWS EC2 instance types. They provide up to
1,952 GiB of DDR4-based memory and are best suited for executing
in-memory databases such as SAP HANA, big data processing engines
such as Presto or Apache Spark, and HPC (High-Performance
Computing) applications. These instances are SAP certified for running
production environments of the next-generation Business Suite on
HANA (SoH), Business Suite S/4HANA, Business Warehouse on
HANA (BW), and Data Mart Solutions on HANA on the AWS Cloud.
R3
R3 instances are well-suited for memory-intensive
applications and offer a low cost for each GiB of RAM. These
instances offer greater I/O performance, consistent memory
bandwidth, support for reduced latency, lower jitter,
maximum packet-per-second performance, and support for EBS
optimization. They are suitable for applications that require
high memory performance at a low price point for each
GiB of RAM.
These instances are best suited for in-memory analytics and
high-performance databases, including NoSQL databases and
in-memory caches such as MongoDB and Memcached/Redis.
These instances support HVM (Hardware Virtual Machine)
Amazon Machine Images only.
GPU
G2 instances are well-suited for general purpose GPU compute
and graphics applications. They belong to a GPU-powered family
and are used for molecular modeling, machine learning, rendering,
transcoding jobs, and game streaming, which require enormous
amounts of parallel processing power. These instances provide a
high-performing NVIDIA GPU with 4 GB of video memory and
1,536 CUDA cores, which makes them suitable for 3D
visualizations, video creation services, and graphics-intensive
applications. The NVIDIA GRID GPU contains dedicated,
hardware-accelerated video encoding that produces an H.264 video
stream, which can be displayed on any device with a compatible
video codec. These instances are suitable for 3D application
streaming and other GPU compute workloads.
Storage Optimized
I2
High storage instances that offer fast SSD-backed instance
storage, which is best for high random I/O performance and
provides maximum IOPS at the lowest cost. The primary data
storage in such instances is SSD-based instance storage. As with
all other instance storage, these volumes persist only for the life of
the instance. When an instance is stopped or terminated, the
applications as well as the data stored in the instance
store are wiped out. It is recommended to make backups at
regular intervals or replicate the data that has been stored in the
instance storage. The user can activate the TRIM command to
notify the SSD controller when data is no longer required. This
gives the controller more available free space, which can
decrease write amplification while increasing performance.
Dense-storage Instances
D2
Offer up to 48 TB of HDD-based storage, providing high disk
throughput and the lowest price per unit of disk throughput
on AWS EC2. D2 instances are intended for
workloads that need high sequential read and write access to
large data sets on local storage. They are best suited for MPP data
warehouses, Hadoop/MapReduce distributed computing, and data
or log processing. By default, these instances are EBS-
optimized and offer dedicated block storage throughput to Amazon
EBS, ranging from 750 Mbps up to 4,000 Mbps, at no additional
charge. This allows you to consistently achieve maximum performance
for EBS volumes by reducing contention between network traffic and
Amazon EBS I/O on the D2 instance.
INSTANCE LIFECYCLE
•Pending
• When the instance is first launched it enters into
the pending state.
•Running
• After the instance is launched, it enters into
the running state.
• Charges are incurred for every hour or partial
hour the instance is running even if it is idle.
•Start & Stop (EBS-backed instances only)
• Only an EBS-backed instance can be stopped and
started. An instance store-backed instance cannot be
stopped and started.
• An instance can be stopped & started in case the
instance fails a status check or is not running as
expected.
• Stop
• After the instance is stopped, it enters in stopping
state and then to stopped state.
• Charges are only incurred for the EBS storage and
not for the instance hourly charge or data transfer.
• While the instance is stopped, you can treat its
root volume like any other volume, and modify it.
• The root volume can be detached from the
stopped instance, attached to a running
instance, modified, detached from the running
instance, and then reattached to the stopped
instance. It should be reattached using the
storage device name that is specified as the root
device in the block device mapping for the instance.
• Start
• When the instance is started, it enters the pending
state and then the running state.
• An instance that is stopped and started is launched
on a new host.
• Any data on an instance store volume (not the root
volume) would be lost, while data on the EBS
volume persists.
• The EC2 instance retains its private IP address as well as
its Elastic IP address. However, the public IP address,
if assigned instead of an Elastic IP address, would be
released.
• Charges for a full hour are incurred for every transition
from stopped to running, even if multiple transitions
happen within a single hour.
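A minimal sketch of these stop/start transitions with boto3, assuming an EBS-backed instance and a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder EBS-backed instance

# Stop: instance goes stopping -> stopped (only EBS storage is billed).
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Start: instance goes pending -> running, typically on a new host.
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```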
•Instance reboot
• Both EBS-backed and Instance store-backed
instances can be rebooted
• An instance retains its public DNS name and its public and
private IP addresses during the reboot.
• Data on the EBS and instance store volumes is also
retained.
• Amazon recommends using Amazon EC2 (console, CLI, or API) to
reboot the instance instead of running
the operating system reboot command from the
instance, as EC2 performs a hard reboot if the
instance does not cleanly shut down within four
minutes, and it also creates an API record in CloudTrail, if
enabled.
Instance retirement
• An instance is scheduled to be retired when AWS
detects irreparable failure of the underlying
hardware hosting the instance.
• When an instance reaches its scheduled retirement
date, it is stopped or terminated by AWS.
• If the instance root device is an Amazon EBS volume,
the instance is stopped, and can be started again at
any time.
• If the instance root device is an instance store
volume, the instance is terminated, and cannot be
used again.
•Instance Termination
• An instance can be terminated, and it enters into the
shutting-down and then the terminated state
• After an instance is terminated, it can’t be
connected to, and no further charges are incurred.
• Instance Shutdown behaviour
• An EBS-backed instance supports the
InstanceInitiatedShutdownBehavior attribute,
which determines whether the
instance would be stopped or terminated when a
shutdown command is initiated from the instance
itself.
• The default behaviour is for the instance to be
stopped.
• Termination protection
• Termination protection (the DisableApiTermination
attribute) can be enabled on the instance to
prevent it from being accidentally terminated.
• The DisableApiTermination attribute can be set from the
Console, CLI, or API.
• While it is enabled, the instance cannot be terminated
through the Amazon EC2 Console, CLI, or API.
• Termination protection does not apply to instances
when they are
• part of an Auto Scaling group
• launched as Spot instances
• terminated by initiating shutdown
from within the instance
• Data persistence
• EBS volumes have a DeleteOnTermination attribute,
which determines whether a volume is
persisted or deleted when the instance it is
attached to is terminated.
• Data on instance store volumes does not
persist.
• EBS root volumes have
the DeleteOnTermination flag set to true by default,
so they are deleted on termination.
• Additional attached EBS volumes have
the DeleteOnTermination flag set to false by default, so they are not
deleted but simply detached from the instance.
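A hedged sketch of these termination-related attributes, using placeholder IDs and an assumed extra volume attached at /dev/sdf:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder instance

# Enable termination protection (DisableApiTermination).
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    DisableApiTermination={"Value": True},
)

# Keep an additional EBS volume after termination by clearing
# DeleteOnTermination on its block device mapping.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/sdf",                  # assumed device name
        "Ebs": {"DeleteOnTermination": False},
    }],
)

# terminate_instances would now fail until protection is disabled again.
```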
AMAZON S3
Amazon Simple Storage Service (Amazon S3) is a
scalable, high-speed, web-based cloud storage service
designed for online backup and archiving of data
and applications on Amazon Web Services. Amazon S3
was designed with a minimal feature set and created to
make web-scale computing easier for developers.
Amazon S3 is an object storage service, which differs
from block and file cloud storage. Each object is stored
as a file with its metadata included and is given an ID
number. Applications use this ID number to access an
object. Unlike file and block cloud storage, a developer
can access an object via a REST API.
Amazon S3 manages data with an object
storage architecture which aims to provide scalability, high
availability, and low latency with 99.999999999% (11 9's)
durability and between 99.95% to 99.99% availability. The basic
storage units of Amazon S3 are objects which are organized into
buckets. Each object is identified by a unique, user-assigned
key. Buckets can be managed using either the console provided
by Amazon S3, programmatically using the AWS SDK, or with the
Amazon S3 REST API. Objects can be managed using the AWS
SDK or with the Amazon S3 REST API and can be up to
five terabytes in size with two kilobytes of metadata. Additionally,
objects can be downloaded using the HTTP GET interface and
the BitTorrent protocol.
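For example, creating a bucket, uploading an object under a key, and reading it back with boto3 might look like the sketch below (the bucket name is a placeholder and must be globally unique; us-east-1 is assumed):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-bucket-name-1234"   # placeholder; bucket names are globally unique

s3.create_bucket(Bucket=bucket)       # no LocationConstraint needed in us-east-1

# Store an object under a developer-assigned key.
s3.put_object(Bucket=bucket, Key="backups/notes.txt", Body=b"hello from S3")

# Retrieve it again by the same key.
obj = s3.get_object(Bucket=bucket, Key="backups/notes.txt")
print(obj["Body"].read())
```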
The S3 cloud storage service gives a subscriber access
to the same systems that Amazon uses to run its own
websites. S3 enables customers to upload, store and
download practically any file or object that is up to five
terabytes (TB) in size, with the largest single upload
capped at five gigabytes (GB).
Advantages
Amazon S3 is intentionally built with a minimal feature set that
focuses on simplicity and robustness. Following are some of the
advantages of the Amazon S3 service:
•Create Buckets – Create and name a bucket that stores
data. Buckets are the fundamental container in Amazon S3 for
data storage.
•Store data in Buckets – Store an infinite amount of data in a
bucket. Upload as many objects as you like into an Amazon
S3 bucket. Each object can contain up to 5 TB of data. Each
object is stored and retrieved using a unique developer-
assigned key.
•Download data – Download your data or enable others to do
so. Download your data any time you like or allow others to do
the same.
•Permissions – Grant or deny access to others who want to
upload or download data into your Amazon S3 bucket. Grant
upload and download permissions to three types of users.
Authentication mechanisms can help keep data secure from
unauthorized access.
•Standard interfaces – Use standards-based REST and
SOAP interfaces designed to work with any Internet-
development toolkit.
Requests are authorized using an access control list associated
with each object and bucket. S3 also supports versioning, which is
disabled by default. Bucket names and keys are chosen so that
objects are addressable using HTTP URLs:
∙ http://s3.amazonaws.com/bucket/key
∙ http://bucket.s3.amazonaws.com/key
∙ http://bucket/key (where bucket is a DNS CNAME
record pointing to bucket.s3.amazonaws.com)
S3 can be used to replace significant existing web-hosting
infrastructure with HTTP-accessible objects. The AWS authentication
mechanism allows the bucket owner to create an authenticated
URL that is valid for a specified amount of time.
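A sketch of such a time-limited authenticated URL using a boto3 presigned URL (the bucket and key are the placeholders from the earlier example):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Generate a URL that grants GET access to one object for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket-name-1234", "Key": "backups/notes.txt"},
    ExpiresIn=3600,   # seconds the URL remains valid
)
print(url)
```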
AMAZON GLACIER VS S3
Lifecycle rules within S3 allow you to manage the life cycle of
the objects stored on S3. After a set period of time, you can
have your objects automatically deleted or archived off to
Amazon Glacier. Amazon Glacier is marketed by AWS as
“extremely low cost storage”. The cost per terabyte of storage
per month is only a fraction of the cost of S3. Amazon
Glacier is essentially designed as a write-once, retrieve-
never (or rather rarely) service. This is reflected in the pricing,
where extensive restores come at an additional cost and the
restore of objects requires lead times of up to 5 hours.
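A sketch of such a lifecycle rule via boto3, transitioning objects under an assumed logs/ prefix to Glacier after 30 days and expiring them after a year (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket-name-1234",        # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "logs/"},     # assumed key prefix
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)
```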
Let me highlight the difference between the ‘pure’ Amazon
Glacier service and the Glacier storage class within Amazon S3.
S3 objects that have been moved to Glacier storage using S3
Lifecycle policies can only be accessed (or shall I say restored)
using the S3 API endpoints. As such they are still managed as
objects within S3 buckets, instead of Archives within Vaults,
which is the Glacier terminology. This differentiation is important
when you look at the costs of the services. While Amazon
Glacier is much cheaper than S3 on storage, charges are
approximately ten times higher for archive and restore requests.
This is re-iterating the store once, retrieve seldom pattern.
Amazon also reserves 32 KB for metadata per Archive within
Glacier, instead of 8 KB per Object in S3, both of which are
charged back to the user. This is important to keep in mind for
your backup strategy, particularly if you are storing a large
number of small files. If those files are unlikely to require
restoring in the short term, it may be more cost effective to
combine them into an archive and store them directly within
Amazon Glacier.
ELASTIC BLOCK
STORE (EBS)
INTRODUCTION
● Amazon EBS is like a hard drive in the cloud that provides
persistent block storage volumes for use with Amazon EC2
instances.
● These volumes can be:
1. attached to your EC2 instances, allowing you to
create a file system on top of these volumes,
2. used to run a database server, or
3. used in any other way you would use a block device.
● EBS volumes are placed in an Availability Zone, where they are automatically
replicated to protect against data loss from the failure of a single component.
● Since they are replicated only within a single Availability Zone, you may lose data if
the whole Availability Zone goes down, which is really rare.
● BENEFITS:
1. Reliable and secure storage − Each EBS volume is automatically
replicated within its Availability Zone to protect against component failure.
2. Secure − Amazon’s flexible access control policies allow you to specify who can
access which EBS volumes.
3. Higher performance
4. Easy data backup − Data backup can be saved by taking point-in-time snapshots
of Amazon EBS volumes.
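A minimal sketch of creating a volume, attaching it to an instance, and taking a point-in-time snapshot (the AZ, instance ID, and device name are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a 20 GiB General Purpose SSD volume in one Availability Zone.
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=20, VolumeType="gp2")
volume_id = vol["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# Attach it to an instance in the same AZ (placeholder instance ID).
ec2.attach_volume(VolumeId=volume_id,
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")

# Easy data backup: take a point-in-time snapshot of the volume.
ec2.create_snapshot(VolumeId=volume_id, Description="point-in-time backup")
```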
TYPES
1) General Purpose SSD (gp2)
● This is the volume that EC2 chooses by default as the root volume of
your instance.
● It provides a balance of both price and performance.
● SSD stands for Solid State Drive which is multiple times faster than
HDD (Hard Disk Drive) for small input/output operations.
● Having it as the root volume for your instances can significantly
improve the performance of your server.
● By default, gp2 provides a baseline of 3 IOPS (Input/Output Operations Per
Second) per GB, which means a 1 GB volume gets 3 IOPS and a 10 GB volume
gets 30 IOPS.
TYPES
2) Provisioned IOPS SSD (io1)
● This is the fastest and most expensive EBS volume type.
● These volumes are designed for I/O-intensive applications such as large
relational or NoSQL databases.
● You are charged for the provisioned IOPS along with the storage space
of the volume.
● By default, Provisioned IOPS SSD supports 30 IOPS per GB, which means a
10 GB volume can deliver 300 IOPS. The storage capacity of one volume ranges
from 10 GB to 1 TB.
TYPES
3) Throughput Optimized HDD (st1)
● These are low-cost magnetic storage volumes which define
performance in terms of throughput.
● These are designed for large, sequential workloads like Big Data, data
warehouses, and log processing. You will probably use these volumes
for your Hadoop cluster.
● They provide throughput of up to 500 MB/s and cannot be used as the root
volume for an EC2 instance.
TYPES
4) Cold HDD (sc1)
● These are even cheaper magnetic storage volumes than Throughput Optimized.
● They are designed for large, sequential cold workloads like a file server.
● They are good for infrequently accessed workloads and provide
throughput of up to 250 MB/s.
● They also cannot be used as root volumes.
TYPES
5) Magnetic (standard)
● These are previous-generation magnetic drives that are suited for
workloads where data is accessed infrequently.
● Their size can be up to 1 TiB and on average they provide a throughput
of 100 MB/s.
● These can be used as root volumes for EC2 instances.
AMAZON VIRTUAL
PRIVATE CLOUD
AMAZON VIRTUAL PRIVATE CLOUD
● Amazon Virtual Private Cloud (VPC) allows the users to use AWS
resources in a virtual network.
● The users can customize their virtual networking environment as they
like, such as selecting their own IP address range and creating subnets.
● When you create a VPC, you must specify a range of IPv4 addresses for
the VPC in the form of a CIDR block; for example, 10.0.0.0/16.
● Amazon VPC supports IPv4 and IPv6 addressing, and has different
CIDR block size limits for each. By default, all VPCs and subnets must
have IPv4 CIDR blocks.
AMAZON VIRTUAL PRIVATE CLOUD
● Many connectivity options − Various connectivity options exist in
Amazon VPC, e.g., connecting the VPC directly to the Internet
via public subnets.
● Easy to use − Ease of creating a VPC in very simple steps by selecting
network setups as per requirement.
● Easy to backup data − Periodically backup data from the datacenter
into Amazon EC2 instances by using Amazon EBS volumes.
● Easy to extend network using Cloud − Move applications, launch
additional web servers and increase storage capacity by connecting it to
a VPC.
● A virtual private cloud (VPC) with a size /16 IPv4 CIDR block (example:
10.0.0.0/16). This provides 65,536 private IPv4 addresses.
● An Internet gateway. This connects the VPC to the Internet and to other AWS
services.
● An instance with a private IPv4 address in the subnet range (example:
10.0.0.6), which enables the instance to communicate with other instances in
the VPC, and an Elastic IPv4 address (example: 198.51.100.2), which is a
public IPv4 address that enables the instance to be reached from the Internet.
● A custom route table associated with the subnet. The route table entries
enable instances in the subnet to use IPv4 to communicate with other
instances in the VPC, and to communicate directly over the Internet. A subnet
that's associated with a route table that has a route to an Internet gateway is
known as a public subnet.
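A hedged boto3 sketch that builds the pieces listed above: a /16 VPC, a subnet, an Internet gateway, and a route table with a default route to the gateway (the CIDR ranges and AZ are examples):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.0.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]

igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Custom route table with a route to the Internet gateway -> public subnet.
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id,
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```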
SUBNETS
Subnets
● A subnet is a segment of an Amazon VPC’s IP address range where you can
launch Amazon EC2 instances, Amazon Relational Database
Service (RDS) instances, and other AWS resources.
● CIDR blocks define subnets (e.g., 192.168.0.0/24).
● The smallest subnet that you can create is a /28 (16 IP addresses).
● Amazon reserves the first 4 and the last IP address of every subnet for internal
networking purposes.
● For example, a subnet defined as a /28 has 16 IP addresses; subtracting the 5
reserved by Amazon yields 11 IP addresses left for the user.
● Subnets can be classified as public, private, or VPN-only.
Subnets
● Public Subnet: The associated route table directs the subnet’s traffic to
the Amazon VPC’s Internet Gateway.
● Private Subnet: The associated route table does not direct the subnet’s
traffic to the Amazon VPC’s Internet Gateway.
● VPN-only Subnet: The associated route table does not direct the subnet’s
traffic to the Amazon VPC’s Internet Gateway, but to the Virtual Private
Gateway (VPG).
● Each subnet maps to a single Availability Zone.
ROUTE TABLES
Route Tables
● A route table is a logical construct within an Amazon VPC that
contains a set of rules(called routes) that are applied to the subnet
and are used to determine where network traffic is directed.
● A route table’s routes are what permit Amazon EC2 instances
within different subnets within an Amazon VPC to communicate
with each other. You can modify route tables and add your own
custom routes.
● One can also specify which subnets are public and which are
private.
Route Tables
● Each route table contains a default route called the local route
which enables communication within an Amazon VPC and this
cannot be modified or removed.
● VPC automatically comes with a main route table that you can
modify.
● You can create additional custom route tables using VPC.
● Each subnet must be associated with a route table which controls
the routing for the subnet. If no explicit association with a subnet
takes place, the subnet will use the main route table.
● You can always replace the main route table with a custom table
that you have created so that the subnet automatically gets
associated with it.
ELASTIC IP
ADDRESS
Elastic IP Addresses
● An Elastic IP address is a static IPv4 address designed for
dynamic cloud computing. An Elastic IP address is associated
with your AWS account. With an Elastic IP address, you can
mask the failure of an instance or software by rapidly remapping
the address to another instance in your account.
● An Elastic IP address is a public IPv4 address, which is reachable
from the internet. If your instance does not have a public IPv4
address, you can associate an Elastic IP address with your
instance to enable communication with the internet; for example,
to connect to your instance from your local computer.
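A sketch of allocating an Elastic IP address and remapping it to an instance (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP address in the account (VPC scope).
alloc = ec2.allocate_address(Domain="vpc")
print("Elastic IP:", alloc["PublicIp"])

# Associate it with an instance; re-running this against a replacement
# instance is how a failure can be masked by remapping the address.
ec2.associate_address(AllocationId=alloc["AllocationId"],
                      InstanceId="i-0123456789abcdef0")  # placeholder
```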
AWS EXPLORING
CONTENTS
ELASTIC NETWORK
INTERFACES(ENI)
An Elastic Network Interface (ENI) is a virtual network
interface that you can attach to an instance in an Amazon
VPC. ENIs are only available within an Amazon VPC, and
they are associated with a subnet upon creation. They can
have one public IP address and multiple private IP
addresses. If there are multiple private IP addresses, one of
them is primary.
Assigning a second network interface to an instance via an
ENI allows it to be dual-homed (have network presence in
different subnets).
An ENI created independently of a particular instance
persists regardless of the lifetime of any instance to which
it is attached; if an underlying instance fails, the IP address
may be preserved by attaching the ENI to a replacement
instance.
ENIs allow you to create a management network, use
network and security appliances in your Amazon VPC,
create dual-homed instances with workloads/roles on
distinct subnets, or create a low-budget, high-availability
solution.
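A sketch of creating an ENI in a subnet and attaching it as a second interface to make an instance dual-homed (the subnet, security group, and instance IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the ENI in a chosen subnet with a security group.
eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",       # placeholder subnet
    Description="management interface",
    Groups=["sg-0123456789abcdef0"],           # placeholder security group
)["NetworkInterface"]

# Attach it as the second interface (DeviceIndex=1) -> dual-homed instance.
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",          # placeholder instance
    DeviceIndex=1,
)
```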
SECURITY GROUP
• A security group is a virtual stateful firewall that
controls inbound and outbound network traffic to AWS
resources and Amazon EC2 instances. All Amazon EC2
instances must be launched into a security group. If a
security group is not specified at launch, then the
instance will be launched into the default security group
for the Amazon VPC.
• The default security group allows communication
between all resources within the security group, allows
all outbound traffic, and denies all other traffic.
Security Group Rules:
For each security group, you add rules that control the
inbound traffic to instances and a separate set of rules that
control the outbound traffic
SNAPSHOT OF SECURITY GROUP
RULES OF A WEB SERVER
• You can create up to 500 security groups for each Amazon
VPC.
• You can add up to 50 inbound and 50 outbound rules to
each security group. If you need to apply more than 100
rules to an instance, you can associate up to five security
groups with each network interface.
• You can specify allow rules, but not deny rules. This is an
important difference between security groups and ACLs.
• You can specify separate rules for inbound and outbound
traffic. By default, no inbound traffic is allowed until you
add inbound rules to the security group.
• By default, new security groups have an outbound rule that
allows all outbound traffic. You can remove the rule and add
outbound rules that allow specific outbound traffic only.
• Security groups are stateful. This means that responses to
allowed inbound traffic are allowed to flow outbound
regardless of outbound rules and vice versa. This is an
important difference between security groups and network
ACLs.
• Instances associated with the same security group can’t
talk to each other unless you add rules allowing it (with the
exception being the default security group).
• You can change the security groups with which an instance
is associated after launch, and the changes will take effect
immediately.
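A sketch of creating a web-server style security group and adding allow rules for inbound HTTP and HTTPS (the VPC ID is a placeholder); outbound traffic is left at the default allow-all rule:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTP/HTTPS",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC
)

# Only allow rules can be specified; responses are permitted automatically
# because security groups are stateful.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)
```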
ACL( ACCESS CONTROL LISTS)
A network access control list (ACL) is another layer of
security that acts as a stateless firewall on a subnet level.
A network ACL is a numbered list of rules that AWS
evaluates in order, starting with the lowest numbered rule,
to determine whether traffic is allowed in or out of any
subnet associated with the network ACL. Amazon VPCs are
created with a modifiable default network ACL associated
with every subnet that allows all inbound and outbound
traffic.
When you create a custom network ACL, its initial
configuration will deny all inbound and outbound traffic
until you create rules that allow otherwise. You may set up
network ACLs with rules similar to your security groups in
order to add a layer of security to your Amazon VPC, or you
may choose to use the default network ACL that does not
filter traffic traversing the subnet boundary. Overall, every
subnet must be associated with a network ACL.
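A sketch of a custom network ACL with a numbered inbound allow rule for HTTP (the VPC ID is a placeholder); because ACLs are stateless, a matching outbound rule for the response ports would also be needed:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

acl = ec2.create_network_acl(VpcId="vpc-0123456789abcdef0")["NetworkAcl"]

# Rule 100: allow inbound TCP port 80 from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=acl["NetworkAclId"],
    RuleNumber=100,
    Protocol="6",              # protocol number 6 = TCP
    RuleAction="allow",
    Egress=False,              # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 80, "To": 80},
)
```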
COMPARISON
ELASTIC LOAD BALANCING
The Elastic Load Balancing service allows you to distribute
traffic across a group of Amazon
EC2 instances in one or more Availability Zones, enabling
you to achieve high availability in
your applications. Elastic Load Balancing supports routing
and load balancing of Hypertext
Transfer Protocol (HTTP), Hypertext Transfer Protocol Secure
(HTTPS), Transmission Control Protocol (TCP), and Secure
Sockets Layer (SSL) traffic to Amazon EC2 instances.
Elastic Load Balancing provides a stable, single Canonical
Name record (CNAME) entry point for Domain Name
System (DNS) configuration and supports both Internet-
facing and
internal application-facing load balancers. Elastic Load
Balancing supports health checks for Amazon EC2
instances to ensure traffic is not routed to unhealthy or
failing instances.
Also, Elastic Load Balancing can automatically scale based
on collected metrics.
There are several advantages of using Elastic Load
Balancing. Because Elastic Load Balancing is a managed
service, it scales in and out automatically to meet the
demands of increased application traffic and is highly
available within a region itself as a service. Elastic Load
Balancing helps you achieve high availability for your
applications by distributing traffic across healthy instances
in multiple Availability Zones.
Additionally, Elastic Load Balancing seamlessly integrates
with the Auto Scaling service to automatically scale the
Amazon EC2 instances behind the load balancer. Finally,
Elastic Load Balancing is secure, working with Amazon
Virtual Private Cloud (Amazon VPC) to route traffic
internally between application tiers, allowing you to expose
only Internet-facing public IP addresses. Elastic Load
Balancing also supports integrated certificate management
and SSL termination.
TYPES OF LOAD BALANCERS
Elastic Load Balancing provides several types of load
balancers for handling different kinds of connections
including Internet-facing, internal, and load balancers that
support encrypted connections.
Internet-Facing Load Balancers
An Internet-facing load balancer is, as the name implies, a load
balancer that takes requests from clients over the Internet and
distributes them to Amazon EC2 instances that are registered with
the load balancer. When you configure a load balancer, it receives a
public DNS name that clients can use to send requests to your
application. The DNS servers resolve the DNS name to your load
balancer’s public IP address, which can be visible to client
applications. An AWS recommended best practice is always to
reference a load balancer by its DNS name, instead of by the IP
address of the load balancer, in order to provide a single, stable
entry point. Because Elastic Load Balancing scales in and out to
meet traffic demand, it is not recommended to bind an application to
an IP address that may no longer be part of a load balancer’s pool of
resources. Elastic Load Balancing in Amazon VPC supports IPv4
addresses only. Elastic Load Balancing in EC2-Classic supports both
IPv4 and IPv6 addresses.
Internal Load Balancers
In a multi-tier application, it is often useful to load balance
between the tiers of the application. For example, an
Internet-facing load balancer might receive and balance
external traffic to the presentation or web tier whose
Amazon EC2 instances then send its requests to a load
balancer sitting in front of the application tier. You can use
internal load balancers to route traffic to your Amazon EC2
instances in VPCs with private subnets.
HTTPS Load Balancers
You can create a load balancer that uses the SSL/Transport Layer Security
(TLS) protocol for encrypted connections (also known as SSL offload). This
feature enables traffic encryption between your load balancer and the
clients that initiate HTTPS sessions, and for connections between your load
balancer and your back-end instances. Elastic Load Balancing provides
security policies that have predefined SSL negotiation configurations to
use to negotiate connections between clients and the load balancer. In
order to use SSL, you must install an SSL certificate on the load balancer
that it uses to terminate the connection and then decrypt requests from
clients before sending requests to the back-end Amazon EC2 instances.
You can optionally choose to enable authentication on your back-end
instances. Elastic Load Balancing does not support Server Name Indication
(SNI) on your load balancer. This means that if you want to host multiple
websites on a fleet of Amazon EC2 instances behind Elastic Load Balancing
with a single SSL certificate, you will need to add a Subject Alternative
Name (SAN) for each website to the certificate to avoid site users seeing a
warning message when the site is accessed.
Listeners
Every load balancer must have one or more listeners
configured. A listener is a process that checks for
connection requests—for example, a CNAME configured to
the A record name of the load balancer. Every listener is
configured with a protocol and a port (client to load
balancer) for a front-end connection and a protocol and a
port for the back-end (load balancer to Amazon EC2
instance) connection. Elastic Load Balancing supports the
following protocols:
• HTTP
• HTTPS
• TCP
• SSL
Elastic Load Balancing supports protocols operating at two
different Open System Interconnection (OSI) layers. In the
OSI model, Layer 4 is the transport layer that describes the
TCP connection between the client and your back-end
instance through the load balancer. Layer 4 is the lowest
level that is configurable for your load balancer. Layer 7 is
the application layer that describes the use of HTTP and
HTTPS connections from clients to the load balancer and
from the load balancer to your back-end instance. The SSL
protocol is primarily used to encrypt confidential data over
insecure networks such as the Internet. The SSL protocol
establishes a secure connection between a client and the
back-end server and ensures that all the data passed
between your client and your server is private.
CONFIGURING ELB
Before You Begin
• Prepare Your VPC and EC2 Instances.
• Launch the EC2 instances that you plan to register with your
load balancer. Ensure that the security groups for these
instances allow HTTP access on port 80.
• Install a web server, such as Apache or Internet Information
Services (IIS), on each instance, enter its DNS name into the
address field of an Internet-connected web browser, and verify
that the browser displays the default page of the server.
Step 1: Select a Load Balancer Type
Elastic Load Balancing supports three types of load balancers:
Application Load Balancers, Network Load Balancers, and Classic
Load Balancers.
To create a Classic Load Balancer
• Open the Amazon EC2 console at 
https://console.aws.amazon.com/ec2/.
• On the navigation bar, choose a region for your load balancer. Be
sure to select the same region that you selected for your EC2
instances.
• On the navigation pane, under LOAD BALANCING, choose Load
Balancers.
• Choose Create Load Balancer.
• For Classic Load Balancer, choose Create.
Step 2: Define Your Load Balancer
You must provide a basic configuration for your load
balancer, such as a name, a network, and a listener.
A listener is a process that checks for connection requests.
It is configured with a protocol and a port for front-end
(client to load balancer) connections and a protocol and a
port for back-end (load balancer to instance) connections.
In this tutorial, you configure a listener that accepts HTTP
requests on port 80 and sends them to your instances on
port 80 using HTTP.
To define your load balancer and listener
• For Load Balancer name, type a name for your load balancer.
• The name of your Classic Load Balancer must be unique within your set
of Classic Load Balancers for the region, can have a maximum of 32
characters, can contain only alphanumeric characters and hyphens,
and must not begin or end with a hyphen.
• For Create LB inside, select the same network that you selected for
your instances: EC2-Classic or a specific VPC.
• [Default VPC] If you selected a default VPC and would like to choose
the subnets for your load balancer, select Enable advanced VPC
configuration.
• Leave the default listener configuration.
• [EC2-VPC] For Available subnets, select at least one available public
subnet using its add icon. The subnet is moved under Selected
subnets. To improve the availability of your load balancer, select more
than one public subnet.
• Choose Next: Assign Security Groups.
Step 3: Assign Security Groups to Your Load Balancer in a
VPC
If you selected a VPC as your network, you must assign your load
balancer a security group that allows inbound traffic to the ports
that you specified for your load balancer and the health checks
for your load balancer.
To assign security group to your load balancer
• On the Assign Security Groups page, select Create a new
security group.
• Type a name and description for your security group, or leave
the default name and description. This new security group
contains a rule that allows traffic to the port that you configured
your load balancer to use.
• Choose Next: Configure Security Settings.
• Choose Next: Configure Health Check to continue to the next
step.
Step 4: Configure Health Checks for Your EC2
Instances
• Elastic Load Balancing automatically checks the health of
the EC2 instances for your load balancer. If Elastic Load
Balancing finds an unhealthy instance, it stops sending
traffic to the instance and reroutes traffic to healthy
instances. In this step, you customize the health checks
for your load balancer.
To configure health checks for your instances
• On the Configure Health Check page, leave Ping Protocol set to HTTP
and Ping Port set to 80.
• For Ping Path, replace the default value with a single forward slash
("/"). This tells Elastic Load Balancing to send health check queries to
the default home page for your web server, such as index.html.
• For Advanced Details, leave the default values.
• Choose Next: Add EC2 Instances.
Step 5: Register EC2 Instances with Your Load
Balancer
Your load balancer distributes traffic between the instances
that are registered to it.
To register EC2 instances with your load balancer
• On the Add EC2 Instances page, select the instances to
register with your load balancer.
• Leave cross-zone load balancing and connection draining
enabled.
• Choose Next: Add Tags.
Step 6: Create and Verify Your Load Balancer
Before you create the load balancer, review the settings
that you selected. After creating the load balancer, you can
verify that it's sending traffic to your EC2 instances.
To create and test your load balancer
• On the Review page, choose Create.
• After you are notified that your load balancer was
created, choose Close.
• Select your new load balancer.
• On the Description tab, check the Status row.
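The same steps can also be scripted; a hedged boto3 sketch for a Classic Load Balancer with an HTTP:80 listener, a health check on "/", and one registered instance (the subnet, security group, and instance IDs are placeholders):

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

elb.create_load_balancer(
    LoadBalancerName="demo-classic-lb",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    Subnets=["subnet-0123456789abcdef0"],      # placeholder public subnet
    SecurityGroups=["sg-0123456789abcdef0"],   # placeholder security group
)

# Health check: ping HTTP:80 on "/" as in Step 4.
elb.configure_health_check(
    LoadBalancerName="demo-classic-lb",
    HealthCheck={"Target": "HTTP:80/", "Interval": 30, "Timeout": 5,
                 "UnhealthyThreshold": 2, "HealthyThreshold": 2},
)

# Register the back-end instance(s) as in Step 5.
elb.register_instances_with_load_balancer(
    LoadBalancerName="demo-classic-lb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # placeholder
)
```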
CLOUD WATCH
• Amazon CloudWatch is a component of Amazon Web
Services that provides monitoring for AWS resources and
the customer applications running on the Amazon
infrastructure.
• CloudWatch enables real-time monitoring of AWS
resources such as Amazon EC2 instances, Amazon EBS
(Elastic Block Store) volumes, Elastic Load Balancers,
and Amazon RDS database instances.  
• The application automatically
provides metrics for CPU utilization, latency, and request
counts; users can also stipulate additional metrics to be
monitored, such as memory usage, transaction
volumes, or error rates.
• Users can access CloudWatch functions through an API,
command-line tools, one of the AWS SDK (software
development kits) or the AWS Management Console. The
CloudWatch interface provides current statistics that can
be viewed in graph format. Users can set notifications
(called “alarms”) to be sent when something being
monitored surpasses a specified threshold. The app can
also detect and shut down unused or underused EC2
instances.
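As an illustrative sketch, a CloudWatch alarm on EC2 CPU utilization that notifies an SNS topic when the average exceeds 80% (the instance ID and topic ARN are placeholders):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute periods
    EvaluationPeriods=2,        # two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder SNS topic
)
```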
AUTO SCALING
Auto Scaling is a service that allows you to scale your
Amazon EC2 capacity automatically by scaling out and
scaling in according to criteria that you define. With Auto
Scaling, you can ensure that the number of running
Amazon EC2 instances increases during demand spikes or
peak demand periods to maintain application performance
and decreases automatically during demand lulls or
troughs to minimize costs.
Auto Scaling Plans
Auto Scaling has several schemes or plans that you can
use to control how you want Auto
Scaling to perform.
1. Maintain Current Instance Levels
You can configure your Auto Scaling group to maintain a
minimum or specified number of
running instances at all times. To maintain the current
instance levels, Auto Scaling performs
a periodic health check on running instances within an
Auto Scaling group. When Auto Scaling finds an unhealthy
instance, it terminates that instance and launches a new
one.
2. Manual Scaling
Manual scaling is the most basic way to scale your resources. You
only need to specify the change in the maximum, minimum, or
desired capacity of your Auto Scaling group. Auto Scaling manages
the process of creating or terminating instances to maintain the
updated
capacity.
3. Scheduled Scaling
Sometimes you know exactly when you will need to increase or
decrease the number of instances in your group, simply because
that need arises on a predictable schedule. Examples include
periodic events such as end-of-month, end-of-quarter, or end-of-year
processing, and also other predictable, recurring events. Scheduled
scaling means that scaling actions are performed automatically as a
function of time and date.
4. Dynamic Scaling
Dynamic scaling lets you define parameters that control
the Auto Scaling process in a scaling policy. For example,
you might create a policy that adds more Amazon EC2
instances to the web tier when the network bandwidth,
measured by Amazon CloudWatch, reaches a certain
threshold.
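A hedged sketch of an Auto Scaling group with a dynamic, CPU-based scaling policy; it uses target tracking rather than the bandwidth-threshold example in the text, and the launch configuration name and subnet ID are placeholders assumed to exist:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Maintain between 2 and 6 instances in the web tier.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-launch-config",   # assumed to exist
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder subnet
)

# Dynamic scaling: track average CPU and keep it near 60%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```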
THANK YOU