AWS Notes Material
AWS Fundamentals
Datacenter
We may have one or more Data centres, depending on how large the
customer base is.
Cost of Building
Cost of Administration
Cost of Power Generators
Cost of Cooling
Cost of Cabling
Physically securing the place
Virtualization
You probably know a little about Virtualization if you have ever divided your
hard drive into different partitions. A partition is the logical division of a hard
disk drive to create, in effect, two separate hard drives.
KiranReddy. A
Hypervisor
The first step of the virtualization process is installing the hypervisor onto a
server.
In the second image above, a bare-metal server installed with a hypervisor
provides the user with a management suite to create virtual machines on the
server.
This is where the hypervisor comes in: the hypervisor is able to distribute the
underlying resources based on what each VM needs. Resources (like
memory, storage, processors, and networking) are pooled together so that
every VM can get exactly what it needs for its ideal performance.
You can think of the hypervisor as the traffic cop that controls
processor, memory, networking and storage management.
If one VM needs more memory than other apps, the hypervisor can allocate
more memory for that VM. If another needs more storage, the hypervisor can
allocate more storage. And so on.
Virtualization lets you run more applications on fewer physical servers. Rather
than one application running on one server with one operating system,
multiple VMs run multiple applications and operating systems on one physical
server.
Just in case this is still muddled or confusing, here’s how I would explain
virtualization.
Virtualization is like a school bus. Before the school bus was invented, every
parent used their own car to drive their kid to school, using extra gas and
resources; putting all of the kids into one vehicle wasn’t an option.
One day, the school bus was introduced, exposing the inefficiency of every
parent driving their kid to school separately. By using the school bus, parents
could use less gas and fewer vehicles, all while transporting more kids.
Benefits of Virtualization
Power Savings
Cooling Savings
Hardware Savings
Network Savings: no need for extra network cables
Space Savings: fewer physical servers
Resource Sharing: multiple machines can be created on a single server,
which saves money by reducing cost
Deploy multiple applications and OSs
Full utilization of hardware resources
Isolation: VMs are isolated from each other as if they were physically
separate
VMs can be migrated between different hosts
With a virtualization solution, you can reduce IT costs while increasing the
efficiency, utilization and flexibility of your existing computer hardware, i.e.
simplified management of the data centre.
Experts predict that shipping hypervisors on bare metal will impact how
organizations purchase servers in the future. Instead of selecting an OS, they
will simply have to order a server with an embedded hypervisor and run
whatever OS they want.
Cloud Computing
Testing of your new ideas for the applications becomes much easier.
IaaS (Infrastructure as a Service)
PaaS (Platform as a Service)
SaaS (Software as a Service)
The AWS Free Tier enables you to gain free, hands-on experience with
the AWS platform, products, and services.
These free tier offers are only available to new AWS customers, and are
available for 12 months following your AWS sign-up date.
The free tier features are listed on the AWS Free Tier sign-up page, and you
can register an account from the same page.
On the next page, provide your Email Address, Password, and AWS
Account Name (you can change this name in your account settings after
sign up).
Complete the remaining fields with your information. Then click “Create
Account and Continue” to proceed.
Next, you’ll be asked to provide a credit card for your AWS Account.
AWS then verifies your identity with an automated phone call: once you
receive the call, you’ll input the number shown on your screen using your
dial-pad.
Amazon cloud computing resources are available across the world; in simple
terms, Amazon data centres are available in different geographical locations.
Organizations can register their presence and launch their products using
these data centres in any location.
AWS Regions
AWS Availability Zones
AWS Edge Locations
Regions
Regions are designed to serve AWS customers (or your users) from the
location closest to them.
When viewing a region in the console you will only view resources in
one region at a time.
The availability of multiple regions allows architects to design applications
that conform to specific laws and regulations.
Some AWS services work "globally" while some work within a specific
region only
When you provision an EC2 instance or an S3 bucket, you select a
region, and that is where the resource is provisioned or stored.
One AWS region is a combination of multiple Availability Zones (AZs).
Availability Zone
As per AWS infrastructure, each geographical area is known as an AWS
Region, which is a logical data centre.
Each Region has multiple Physical Data Centres and these Physical Data
Centres are known as AVAILABILITY ZONE or AZs.
The Availability zone is where the actual data centres are located.
So within a Region there can be multiple Availability zones which are
physically separated but are connected through low latency and high
speed internet connections.
Edge Locations
The higher the number of edge locations, the better the content is
distributed all over the world / region.
VPC (Virtual Private Cloud)
It is similar to having your own data centre inside AWS. The resources
are completely isolated from other VPCs on AWS.
A variety of connectivity options exist for your Amazon VPC. You can
connect your VPC to the Internet, to your data center, or other
VPCs, based on the AWS resources that you want to expose publicly
and those that you want to keep private.
Layered security
o Instance level - Security Groups (firewall on instance level)
o Subnet level - Network ACLs (firewall on the subnet level)
NOTE : A VPC within a region spans multiple Availability Zones, and
because of that it spans multiple data centres.
Default VPC
The default VPC is meant to allow the user easy access to VPC without
having to configure it from scratch.
Default VPC has CIDR, Security Group, NACL and Route Table
settings
Each instance launched in the default VPC (by default) has a private
and public IP address (defined on the subnet settings).
Internet Gateway
Route Tables
Subnets
NACL's
Security Groups
Internet Gateway
In the above diagram, Subnet 1 in the VPC is associated with a custom route
table that points all internet-bound(0.0.0.0/0) traffic to an Internet gateway.
The instance has an Elastic IP address, which enables communication with
the internet.
Router
Each subnet is associated with a route table, and the implicit VPC router
uses it to forward traffic within the VPC (this mapping is called a
SUBNET ASSOCIATION)
Route tables have entries that map destinations to targets
Route Tables
A route table contains a set of rules, called routes, that are used to
determine where network traffic is directed.
You can create additional custom route tables for your VPC.
Each subnet must be associated with a route table, which controls the
routing for the subnet.
If you don't explicitly associate a subnet with a particular route table, the
subnet is implicitly associated with the main route table.
When you create a VPC, it automatically has a main route table. On the
Route Tables page you can view the main route table for a VPC by
looking for Yes in the Main column.
The main route table controls the routing for all subnets that are not
explicitly associated with any other route table.
Your VPC can have route tables other than the default table.
Custom route tables ensure that you explicitly control how each subnet
routes outbound traffic.
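The lookup a route table performs can be sketched in Python. This is a minimal simulation, not an AWS API: the route table and the gateway ID are made up for illustration. In a VPC, the most specific (longest-prefix) matching route wins, which is why VPC-internal traffic follows the local route rather than the 0.0.0.0/0 route.

```python
from ipaddress import ip_address, ip_network

def route_lookup(route_table, dest_ip):
    """Pick the target for dest_ip. As in a VPC route table, the most
    specific (longest-prefix) matching route wins."""
    matches = [(ip_network(cidr), target)
               for cidr, target in route_table.items()
               if ip_address(dest_ip) in ip_network(cidr)]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Hypothetical route table for a public subnet in a 10.0.0.0/16 VPC
routes = {
    "10.0.0.0/16": "local",                   # traffic inside the VPC stays local
    "0.0.0.0/0":   "igw-1234567890abcdef0",   # everything else -> Internet gateway
}

print(route_lookup(routes, "10.0.2.15"))      # local
print(route_lookup(routes, "93.184.216.34"))  # igw-1234567890abcdef0
```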
Subnets
When you create a VPC, it spans across all of the Availability Zones in
the region.
After creating a VPC, you can add one or more subnets in each
Availability zone.
Each subnet must reside entirely within one availability zone and cannot
span zones
NACL
They support allow and deny rules for traffic traveling into or out of a
subnet.
Rules are evaluated in order, starting with the lowest rule number -
o for Example: if traffic is denied at a lower rule number and
allowed at a higher rule number, the allow rule will be ignored and
the traffic will be denied.
Default NACL
The default network ACL is configured to allow all traffic to flow in and
out of the subnets to which it is associated.
NACL Rules
The first rule found that applies to the traffic type is immediately applied,
regardless of any rules that come after it
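The first-match behaviour described above can be sketched in Python. This is a toy simulation under assumed data, not an AWS API; the rule numbers, CIDRs and ports are made up. Note how rule 300's ALLOW is never reached because rule 200 matches the same traffic first.

```python
from ipaddress import ip_address, ip_network

def evaluate_nacl(rules, src_ip, port):
    """Evaluate NACL rules in ascending rule-number order;
    the first rule that matches the traffic is applied immediately."""
    for rule in sorted(rules, key=lambda r: r["number"]):
        if ip_address(src_ip) in ip_network(rule["cidr"]) and port in rule["ports"]:
            return rule["action"]
    return "DENY"  # the implicit catch-all '*' rule denies unmatched traffic

# Hypothetical inbound rule set for a subnet
rules = [
    {"number": 100, "cidr": "0.0.0.0/0",   "ports": {80}, "action": "ALLOW"},
    {"number": 200, "cidr": "10.0.0.0/16", "ports": {22}, "action": "DENY"},
    {"number": 300, "cidr": "10.0.0.0/16", "ports": {22}, "action": "ALLOW"},
]

print(evaluate_nacl(rules, "10.0.1.5", 22))  # DENY (rule 200 matches first; rule 300 is ignored)
print(evaluate_nacl(rules, "10.0.1.5", 80))  # ALLOW (rule 100)
```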
Security Groups
A security group acts as a virtual firewall at the instance level (covered in
more detail under EC2 below).
VPC CIDR Blocks
When you create a VPC, you must specify a CIDR block for the VPC.
The allowed block size is between a /16 netmask (65,536 IP addresses)
and a /28 netmask (16 IP addresses).
A common practice is to create a large CIDR, such as 10.0.0.0/16, to
leave room for future growth.
The CIDR blocks of the subnets cannot overlap.
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html
The first four IP addresses and the last IP address in each subnet CIDR
block are not available for you to use, and cannot be assigned to an instance.
For example, in a subnet with CIDR block 10.0.0.0/24, the following five IP
addresses are reserved:
10.0.0.0 - Network address
10.0.0.1 - Reserved by AWS for the VPC router
10.0.0.2 - Reserved by AWS for the DNS server
10.0.0.3 - Reserved by AWS for future use
10.0.0.255 - Network broadcast address (broadcast is not supported in a
VPC, but the address is still reserved)
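The reserved addresses follow a fixed pattern (first four plus the last), so they can be computed for any subnet with Python's standard `ipaddress` module, as a quick sanity check:

```python
from ipaddress import ip_network

def reserved_addresses(cidr):
    """The five addresses AWS reserves in every subnet:
    the first four and the last address of the block."""
    addresses = list(ip_network(cidr))  # all addresses in the block, in order
    return [str(a) for a in addresses[:4] + [addresses[-1]]]

print(reserved_addresses("10.0.0.0/24"))
# ['10.0.0.0', '10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.255']

# Usable addresses in a /24 = 256 - 5 reserved = 251
print(ip_network("10.0.0.0/24").num_addresses - 5)  # 251
```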
CIDR Chart
Subnet Calculator
https://www.site24x7.com/tools/ipv4-subnetcalculator.html
VPC Requirement
Network Requirement Given by CST
> CST is going to set up its environment, i.e. servers, on AWS; their
clients are in London
> CST needs capacity for roughly 8k servers
> These 8k servers are grouped into two subnets, each with 4k servers
> In the future, they might add more subnets, e.g. application server
subnets and a load balancer subnet
> Coming to the security of the database servers, they should have only
SSH access from the web servers
> SSH traffic enabled from web servers only
Solution
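One way to size the network for this requirement can be sketched with Python's `ipaddress` module. This is a hedged illustration, not the only valid design: it assumes a 10.0.0.0/16 VPC (chosen to leave room for the future subnets) and shows why each 4k-server subnet needs at least a /19 once AWS's five reserved addresses are accounted for.

```python
import math
from ipaddress import ip_network

def smallest_prefix(hosts_needed):
    """Largest prefix (smallest subnet) that still leaves room for
    hosts_needed instances after AWS's 5 reserved addresses."""
    return 32 - math.ceil(math.log2(hosts_needed + 5))

vpc = ip_network("10.0.0.0/16")   # a /16 leaves room for future subnets
prefix = smallest_prefix(4096)    # 4k servers per subnet
web, db = list(vpc.subnets(new_prefix=prefix))[:2]

print(prefix)   # 19 -- a /20 has only 4091 usable addresses, not enough
print(web, db)  # 10.0.0.0/19 10.0.32.0/19
```

The SSH restriction on the database subnet would then be expressed as a security group rule allowing port 22 only from the web servers' security group.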
You can use Amazon EC2 to launch as many or as few virtual servers
as you need, configure security and networking, and manage storage.
Amazon EC2 enables you to scale up or down to handle changes in
requirements or spikes in popularity, reducing your need to forecast
traffic.
I want you to picture EC2 like a computer, with the components that
make it up: OS, CPU, HDD, network, firewall, RAM, etc.
EC2 - Features
Secure login information for your instances using Key Pairs (AWS
stores the public key and you store the private key in a secure place)
Storage volumes for temporary data that's deleted when you stop or
terminate your instance, known as Instance Store Volumes
The instance store is ideal for temporary storage, because the data
stored in instance store volumes is not persistent through instance
stops, terminations or hardware failures.
Persistent storage volumes for your data using Amazon Elastic Block
Store, known as Amazon EBS volumes
A firewall that enables you to specify the protocols, ports, and source IP
ranges that can reach your instances using security groups
EC2 - Configuration
EC2 Facts
On-Demand Instances – Pay, by the second, for the instances that you
launch.
Savings Plans – Reduce your Amazon EC2 costs by making a
commitment to a consistent amount of usage, in USD per hour, for a
term of 1 or 3 years.
Reserved Instances – Reduce your Amazon EC2 costs by making a
commitment to a consistent instance configuration, including instance
type and Region, for a term of 1 or 3 years.
Spot Instances – Request unused EC2 instances, which can reduce
your Amazon EC2 costs significantly.
Instance Lifecycle
On-Demand Instances
You have full control over its lifecycle—you decide when to launch,
stop, hibernate, start, reboot, or terminate it.
You pay only for the seconds that your On-Demand Instances are in the
running state.
The price per second for a running On-Demand Instance is fixed, and is
listed on the Amazon EC2 Pricing, On-Demand Pricing page
Reserved Instances
Spot Instances
Amazon EC2 Spot Instances are spare EC2 compute capacity in the AWS
Cloud that are available to you at savings of up to 90% off compared to On-
Demand prices.
Because Spot Instances enable you to request unused EC2 instances
at steep discounts, you can lower your Amazon EC2 costs significantly.
The hourly price for a Spot Instance is called a Spot price. The Spot
price of each instance type in each Availability Zone is set by Amazon
EC2, and is adjusted gradually based on the long-term supply of and
demand for Spot Instances.
Your Spot Instance runs whenever capacity is available and the
maximum price per hour for your request exceeds the Spot price.
Spot Instances are a cost-effective choice if you can be flexible about
when your applications run and if your applications can be interrupted.
For example, Spot Instances are well-suited for data analysis, batch
jobs, background processing, and optional tasks.
Dedicated Hosts
With a Dedicated Host, you have visibility and control over how
instances are placed on the server.
Dedicated Instances
Installing Gitbash
Once Git Bash Windows installer is downloaded, run the executable file and
follow the steps.
SSH
Instance Setup
> Login to AWS > Services > Compute Section > EC2 > Launch Instance >
Select Amazon Linux 2 AMI > Choose t2.micro > Config Instance Details
{keep all default values} > Add Storage {default} > Add Tags {default} >
Configure Security Group > Review & Launch > Launch Instance > In Keypair
Section > Create new keypair (cst) > Launch Instance
> Select your machine type and click Next Configure Instance details.
In our case we will select the t2.micro instance as it is free tier eligible.
> Leave the defaults in Configure Instance Details, Add Storage and Add
Tags
> In the drop down menu select create a new key pair, give the key pair a
name and Download the Key Pair, then click launch Instances.
Now, in order to communicate with the servers, we need an SSH client like
PuTTY or Git Bash.
SSH Syntax
chmod 400 first.pem
ssh -i <file.pem> <username>@public-ip-address
ssh -i first.pem ec2-user@public-ip-address
Note: for Amazon Linux 2 AMIs the default username is ec2-user; for
CentOS AMIs it is centos.
Use the uname command to verify; if the output is Linux, the login was
successful.
Download Putty
https://the.earth.li/~sgtatham/putty/latest/w64/putty.exe
Download Puttygen
https://the.earth.li/~sgtatham/putty/latest/w64/puttygen.exe
PuTTY uses .ppk files instead of .pem files. If you haven't already generated a
.ppk file, do so now. For more information, see To prepare to connect to a
Linux instance from Windows using PuTTY.
> Navigate to where you downloaded your key, click all files, click on your key
and click open.
> Now click Save Private key, when prompted click yes you want to save
without a passphrase.
> Now open PuTTY and enter your public IP into the Host Name (or IP
address) field, then expand SSH on the left-hand side.
> Click auth and then browse, navigate to where you saved your key and
select it.
Web Server
A web server’s main purpose is to store web site files and serve them
over the internet for your site visitors to see. In essence, a web server is
simply a powerful computer that stores and transmits data via the internet.
Web servers are the gateway between the average individual and the world
wide web.
All computers that host websites must have web server programs.
Apache Web Server
An open source web server used mostly for Unix and Linux platforms.
It is fast, secure and reliable.
Package - httpd
Port - 80
Protocol - http
Server Root - /etc/httpd
Main config file - /etc/httpd/conf/httpd.conf
Configuration Test - httpd -t
LAB - Setup
> Launch Linux instance with AMI :: Amazon Linux 2 in web subnet
> Generally the code in the organizations will be stored in Source Code
Management Tools and for us it is Github
EC2 - IP Address
Private IP Address
All EC2 instances are automatically created with a PRIVATE
IP address.
The private IP address is used for internal (inside the VPC)
communication between instances.
Public IP Address
When creating an EC2 instance, you have the option to
enable (or auto-assign) a public IP address.
A public IP address is required if you want the EC2
instance to have direct communication with resources
across the open internet, i.e if you want to directly SSH
into the instance or have it directly serve web traffic.
Auto-assigning is based on the setting for the selected
subnet that you are provisioning the instance in.
Services -> EC2 -> Left pane -> NETWORK & SECURITY -
> Click Elastic IP's -> Allocate New Address -> Amazon Pool ->
Select/Checkmark EIP -> Actions > Associate Address > Select Web Server
Instance > Associate
> Now stop and start the server again, and check whether the Elastic IP
changed. As you can see, it is the same, which is useful for DNS.
EC2 Storage
EBS - Elastic Block Store (network-attached drives)
Instance Store (ephemeral/temporary store)
EBS - Performance
EBS - Types
Amazon EBS provides the following volume types, which differ in performance
characteristics and price, so that you can tailor your storage performance and
cost to the needs of your applications. The volume types fall into these
categories:
Solid state drives (SSD) — Optimized for transactional workloads
involving frequent read/write operations with small I/O size, where the
dominant performance attribute is IOPS.
Hard disk drives (HDD) — Optimized for large streaming workloads
where the dominant performance attribute is throughput.
Previous generation — Hard disk drives that can be used for workloads
with small datasets where data is accessed infrequently and
performance is not of primary importance. We recommend that you
consider a current generation volume type instead.
o https://aws.amazon.com/ebs/volume-types/
o https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
Instance Store
Snapshots
EBS - Snapshot
> Generally the code in the organizations will be stored in Source Code
Management Tools and for us it is Github
-> Now your goal is to launch another instance with the same ecomm website
from Snapshot, in another availability zone, let's say the first instance was
launched in 1A now the new instance we are launching should be in 1B with
the ecomm website up and running.
Limitations Of EBS
AMI
You can launch multiple instances from a single AMI when you need
multiple instances with the same configuration.
You can use different AMIs to launch instances when you need
instances with different configurations.
The following diagram summarizes the AMI lifecycle. After you create
and register an AMI, you can use it to launch new instances.
You can copy an AMI to different AWS Regions for Disaster Recovery.
When you no longer require an AMI, you can deregister it.
The root storage device of the instance determines the process you
follow to create an AMI.
The AWS Marketplace is an online store where you can buy software
that runs on AWS, including AMIs that you can use to launch your EC2
instance.
Amazon Linux 2 and the Amazon Linux AMI are supported and maintained
Linux images provided by AWS. The following are some of the features of
Amazon Linux 2 and Amazon Linux AMI:
LAB - AMI's
> AMI - OS | Apps | Additional S/W's
LAB - Setup
> Launch Linux instance with AMI :: Amazon Linux 2 in public subnet
> Generally the code in the organizations will be stored in Source Code
Management Tools and for us it is Github
AMI Process
-> EC2 Dashboard -> Left side we got AMI's -> Click AMI's
-> select Instance -> Right click -> Image -> Create Image { keep
all default }
-> EC2 Dashboard -> Left side we got AMI's -> Click AMI's
-> EC2 Dashboard -> Left side we got AMI's -> Click AMI's -> Select
AMI -> Launch Instance
Bootstrapping
Refers to a self-starting process, i.e. running a set of
commands without external input.
With EC2, we can bootstrap the instance (during the
creation process) with custom commands (such as
installing software packages, running updates and
configuring other various settings).
User Data
If you are familiar with shell scripting, this is the easiest and most
complete way to send instructions to an instance at launch.
Adding these tasks at boot time adds to the amount of time it
takes to boot the instance.
You should allow a few minutes of extra time for the tasks to
complete before you test that the user script has finished
successfully.
User data shell scripts must start with the #! characters and the
path to the interpreter you want to read the script (commonly
/bin/bash).
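A user data script is passed to the instance as a launch-time configuration fragment and runs once, as root, on first boot. Below is a minimal hypothetical example that installs and starts Apache (the httpd package discussed earlier); the index.html content is made up for illustration:

```shell
#!/bin/bash
# Hypothetical user-data script for an Amazon Linux 2 instance:
# update packages, install Apache (httpd), and start it on boot.
yum update -y
yum install -y httpd
systemctl enable httpd
systemctl start httpd
echo "Deployed via user data" > /var/www/html/index.html
```

Remember that everything here runs before the instance is reachable, so the more you add, the longer the first boot takes.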
Benefits Of EFS
The EFS file system can be accessed by one (or more)
EC2 instances at the same time
Shared file access across all your EC2 instances.
Applications that span multiple EC2 instances can
access the same data.
You pay only for the amount of storage you are using.
EFS
You need an NFS client to mount the file system on EC2 instances
Multiple EC2 instances in the same region and same VPC, across
different AZ's, can access an Amazon EFS file system at the same time.
This provides a common data source for workloads and applications
running on more than one instance
EFS uses port 2049, the standard NFS port, for the file system mount
targets
To access EFS file system in VPC, you can create one or more mount
targets in the VPC
You can create only one mount target in each availability zone
If there are multiple subnets in an AZ, you can create a mount target in
one of the subnets, then all the instances in that AZ will share the mount
target
AWS recommends that you create mount targets in all the AZ's, so that
you can easily mount the file system on EC2 instances that you might
launch in any zone in future, as there are no charges for mount targets
EFS Use-Cases
Amazon EFS provides a durable, high throughput file system for content
management systems and web serving applications.
Amazon EFS offers two storage classes: the Standard storage class, and the
Infrequent Access storage class (EFS IA).
Infrequent Access : It's a lower cost storage class that's designed for
infrequently accessed files(not accessed everyday), IA provides cost-
optimization for files not accessed every day.
By simply enabling EFS Lifecycle Management on your file system, files not
accessed according to the lifecycle policy you choose will be automatically
and transparently moved into EFS IA.
LAB - EFS
> Shared access to multiple instances
> Launch Linux instance with AMI :: Amazon Linux 2 in public subnet
tag it as PRIMARY
> Create EFS from Storage Section i.e in Services -> Storage -> EFS
> EFS works on port 2049(NFS), create a security group to allow NFS
-> Now launch another Linux instance with AMI :: Amazon Linux 2 in public
subnet tag it as SECONDARY
Types Of Storage
AWS provides three popular storage services: block storage (EBS), file
storage (EFS) and object storage (S3).
Simple Storage Service (S3)
Object Storage
S3 Essentials
Buckets
Data is stored in buckets; buckets are the main
storage containers of S3.
Objects
Managing Access
By default, all Amazon S3 resources are private.
Only the resource owner can access the
resources.
Bucket Policy
For your bucket, you can add a bucket policy to grant other AWS
accounts or IAM users access to the bucket and the objects in it.
Bucket ACLs (Legacy)
A bucket ACL is a legacy access-control mechanism that predates IAM
and bucket policies.
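For illustration, a bucket policy is a JSON document attached to the bucket. The sketch below (with a hypothetical bucket name) grants anonymous read access to all objects, as is commonly done for static website hosting:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}
```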
Storage Classes
A Storage Class represents the classification assigned to each
object in S3. Amazon S3 offers a range of storage classes
designed for different use cases, including S3 Standard, S3
Intelligent-Tiering, S3 Standard-IA (infrequent access), S3 One
Zone-IA and the S3 Glacier archive classes.
S3 Glacier (Archive)
To keep costs low yet suitable for varying needs, S3 Glacier provides
three retrieval options that range from a few minutes to hours. You
can upload objects directly to S3 Glacier, or use S3 Lifecycle
policies to transfer data between any of the S3 Storage Classes for
active data (S3 Standard, S3 Intelligent-Tiering, S3 Standard-IA,
and S3 One Zone-IA) and S3 Glacier. For more information, visit
the Amazon S3 Glacier page »
S3 Lifecycle policies
An object lifecycle policy is a set of rules that automate the
migration of an object's storage class to a different storage
class (or deletion) based on specified time intervals.
By default, lifecycle policies are disabled on a bucket.
Are customizable to meet your company's data retention
policies.
Great for automating the management of object storage and to
be more cost efficient.
Example:
I have a work file that I am going to access everyday for
the next 30 days.
After 30 days, I may only need to access that file once a
week for the next 60 days.
After which (90 days total) I will probably never access
the file again but want to keep it just in case.
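The 30/60/90-day example above could be expressed as a lifecycle configuration roughly like the following sketch (the rule ID and the "work/" prefix are hypothetical): keep the object in Standard for the first 30 days, transition it to Standard-IA for the weekly-access period, then move it to Glacier at day 90 for long-term retention.

```json
{
  "Rules": [
    {
      "ID": "work-file-archiving",
      "Status": "Enabled",
      "Filter": { "Prefix": "work/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```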
S3 Versioning
S3 versioning is a feature to manage and store versions of
an object
S3 versioning protects your data against accidental
deletion.
By default, versioning is disabled on all buckets.
Once versioning is enabled, you can only "suspend"
versioning. It cannot be fully disabled.
Suspending versioning only prevents new versions from
being created. All objects with existing versions will
maintain their older versions.
Versioning can only be set on the bucket level and
applies to ALL objects in the bucket.
Versioning and lifecycle policies can both be enabled on a
bucket at the same time.
Versioning can be used with lifecycle policies to create a
great archiving and backup solution in S3.
S3 Web Hosting
https://github.com/Akiranred/ecomm
Root User
The user created when you first create your AWS account is called the
"root" user.
It's credentials are the email address and password used when signing
up for an AWS account.
By default, the root user has FULL administrative rights and access
to every part of the account.
This represents an AWS users that you may create (in IAM), who will
have varying degrees of access to the AWS account
We also have a different set of users like Developer users that have
access to the dev user account.
Features
o Administer your AWS account
o Finding Services: use the Recently visited services section, or
expand the All services list to see every service, either grouped
by category or arranged alphabetically
IAM Components
IAM is where you manage your AWS users, groups, roles and their
access to AWS accounts and services:
IAM is global to all AWS regions, creating a user account will apply to all
the regions.
By default, any new IAM user you create in an AWS account is
created with NO access to any AWS services. This is a non-explicit
deny rule set on all new IAM users.
For all users (besides the root user), permissions must be granted
that allow access to AWS services
Security Checks
Creating an IAM User
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. In the navigation pane, choose Users and then choose Add user.
3. Type the user name for the new user. This is the sign-in name for AWS. If you
want to add more than one user at the same time, choose Add another user
for each additional user and type their usernames. You can add up to 10
users at one time.
Note
User names can be a combination of up to 64 letters, digits, and these
characters: plus (+), equal (=), comma (,), period (.), at sign (@), underscore
(_), and hyphen (-). Names must be unique within an account. They are not
distinguished by case. For example, you cannot create two users named
TESTUSER and testuser.
4. Select the type of access this set of users will have. You can select
programmatic access, access to the AWS Management Console, or both.
Select Programmatic access if the users require access to the API,
AWS CLI, or Tools for Windows PowerShell. This creates an access
key for each new user. You can view or download the access keys
when you get to the Final page.
Select AWS Management Console access if the users require access
to the AWS Management Console. This creates a password for each
new user.
a. For Console password, choose one of the following:
Autogenerated password. Each user gets a randomly generated
password that meets the account password policy in effect (if any). You
can view or download the passwords when you get to the Final page.
Custom password. Each user is assigned the password that you type
in the box.
5. Choose Next: Permissions.
6. On the Set permissions page, specify how you want to assign permissions to
this set of new users. Choose one of the following three options:
a. Add user to group. Choose this option if you want to assign the users
to one or more groups that already have permissions policies.
b. Copy permissions from existing users. Choose this option to copy all
of the group memberships and attached managed policies from an existing
user to the new users.
c. Attach existing policies to users directly. Choose this option to see
a list of the AWS managed and customer managed policies in your
account. Select the policies that you want to attach to the new user
7. (Optional) Set a permissions boundary. This is an advanced feature.
8. Choose Next: Tags.
9. (Optional) Add metadata to the user by attaching tags as key-value pairs.
10. Choose Next: Review to see all of the choices you made up to this point.
When you are ready to proceed, choose Create user.
11. To view the users' access keys (access key IDs and secret access keys),
choose Show next to each password and access key that you want to see. To
save the access keys, choose Download .csv and then save the file to a safe
location.
Important
This is your only opportunity to view or download the secret access keys, and
you must provide this information to your users before they can use the AWS
API. Save the user's new access key ID and secret access key in a safe and
secure place. You will not have access to the secret keys again after this
step.
12. Provide each user with his or her credentials. On the final page you can
choose Send email next to each user. Your local mail client opens with a
draft that you can customize and send. The email template includes the
following details to each user:
User name
AWS strongly recommends that you do not use the root user for your
everyday tasks, even the administrative ones.
Instead, adhere to the best practice of using the root user only to
create your first IAM user.
So let's create a user called admin and will use this user as our daily
driver.
o Services → IAM → Users → Add User( name: admin) → Check
✅ both Programmatic access and Management console access
→ Custom Password → Next → Review → Says User has no
permissions → Create User
o I'll not set the permissions right away, will set the permissions
later on
Groups
Groups let you specify permissions for multiple users, which can make it
easier to manage the permissions for those users.
For example, you could have a group called Admins and give that group
the types of permissions that administrators typically need.
Any user in that group automatically has the permissions that are
assigned to the group. If a new user joins your organization and needs
administrator privileges, you can assign the appropriate permissions by
adding the user to that group.
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. In the navigation pane, click Groups and then click Create New Group.
3. In the Group Name box, type the name of the group and then click Next
Step.
Note
Group names can be a combination of up to 64 letters, digits, and these
characters: plus (+), equal (=), comma (,), period (.), at sign (@), underscore
(_), and hyphen (-). Names must be unique within an account. They are not
distinguished by case. For example, you cannot create groups named both
ADMINS and admins.
4. In the list of policies, select the check box for each policy that you want to
apply to all members of the group. Then click Next Step.
IAM Policies
It is not necessary for you to understand the JSON syntax. You can use
the visual editor in the AWS Management Console to create and edit
customer managed policies without ever using JSON.
Resource – If you create an IAM permissions policy, you must specify a list of
resources to which the actions apply. If you create a resource-based policy,
this element is optional. If you do not include this element, then the resource
to which the action applies is the resource to which the policy is attached.
Condition (Optional) – Specify the circumstances under which the policy
grants permission.
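To make the Resource and Condition elements concrete, here is a sketch that builds a minimal identity-based policy as a Python dict and renders the JSON. The bucket name is a made-up example:

```python
import json

# Minimal identity-based policy: Action and Resource are required,
# Condition is optional. The bucket name is illustrative.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {
                # Grant only when the request comes over HTTPS.
                "Bool": {"aws:SecureTransport": "true"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

This is the same document the visual editor produces for you, so you can build it in the console and only drop to JSON when you need a Condition the editor does not expose.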
Programmatic access: The IAM user might need to make API calls, use the
AWS CLI, or use the SDK Tools. In that case, create an access key (access
key ID and a secret access key) for that user.
IAM Roles
AWS CloudFormation
AWS CloudFormation is a service that helps you model and set up your AWS
resources so that you can spend less time managing those resources and
more time focusing on your applications that run in AWS.
You create a template that describes all the AWS resources that you want
(like Amazon EC2 instances or Amazon S3 Buckets), and AWS
CloudFormation takes care of provisioning and configuring those resources.
You don't need to individually create and configure AWS resources.
When you use AWS CloudFormation, you can reuse your template to set up
your resources consistently and repeatedly.
For example, you can use a version control system with your templates so
that you know exactly what changes were made, who made them, and when.
If at any point you need to reverse changes to your infrastructure, you can use
a previous version of your template.
Titbits
Templates
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-formats.html
For example, if you created a stack with the following template, AWS
CloudFormation provisions an instance and lets the user choose the Key
Pair name, instance type, and so on
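A template prompts the user for these choices through a Parameters section. Since CloudFormation also accepts JSON templates, a sketch of such a section can be generated from a Python dict; the AMI ID and the list of allowed instance types below are illustrative placeholders:

```python
import json

# Sketch of a CloudFormation template whose Parameters section prompts
# the stack creator for a key pair and an instance type. The AMI ID and
# AllowedValues are placeholders - adjust them for your own account.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "KeyName": {
            "Type": "AWS::EC2::KeyPair::KeyName",
            "Description": "Name of an existing EC2 key pair",
        },
        "InstanceType": {
            "Type": "String",
            "Default": "t2.micro",
            "AllowedValues": ["t2.micro", "t2.small", "t2.medium"],
        },
    },
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                # Ref pulls in whatever the user selected at stack creation.
                "KeyName": {"Ref": "KeyName"},
                "InstanceType": {"Ref": "InstanceType"},
                "ImageId": "ami-12345678",  # placeholder AMI ID
            },
        }
    },
}

print(json.dumps(template, indent=2))
```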
Stacks
Templates Anatomy
JSON
YAML
LAB - VPC
AWSTemplateFormatVersion: "2010-09-09"
Description: A VPC Template
Resources:
  VPC: # IBM VPC Resource
    Type: "AWS::EC2::VPC"
    Properties:
      CidrBlock: 10.0.0.0/16
      InstanceTenancy: default
      Tags:
        - Key: Name
          Value: IBM
  PvtSubnet1: # subnet resource header reconstructed; CidrBlock assumed
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.2.0/24 # assumed value; the original was lost from these notes
      AvailabilityZone: "us-east-1b"
      MapPublicIpOnLaunch: 'false'
      Tags:
        - Key: Name
          Value: IBM-Pvt-Subnet1
  PrivateRouteTable: # route table resource reconstructed; it is referenced below
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: IBM-Pvt-RT
  PvtSubnet1RTAssoc:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId:
        Ref: PvtSubnet1
      RouteTableId:
        Ref: PrivateRouteTable
VPC Peering
Peering Basics
Rules
Peering Limitations
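One well-known peering limitation is that two VPCs with overlapping CIDR blocks cannot be peered. Python's ipaddress module makes the overlap check easy:

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    # VPC peering requires the two VPCs' CIDR blocks not to overlap.
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "192.168.0.0/16"))  # True - no overlap
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))     # False - 10.0.1.0/24 sits inside 10.0.0.0/16
```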
Bastion Host
Taking a look at the diagram, traffic from users on the open internet
comes in over SSH, down through the IGW, and into the Bastion Host.
Because the Bastion Host sits in our public subnet, which is associated
with a route table that has the IGW attached, it can act as a portal for
us to reach any other internal resources once we are inside the VPC
network. Recall that all the instances within a VPC, regardless of
whether they are in public or private subnets, can communicate with
each other.
Internet Gateway
An Internet Gateway (IGW) is a logical connection between a VPC and the internet
> Allow only SSH from the DL-Infra network to the Bastion, i.e. in the
Bastion's security group permit SSH only from the DL network (search "what is my IP" in Google to find your public IP)
> Launch instance in public subnet using Amazon Linux 2 tag it as Web
Server
> Web Server works on port 80, Allow port 80 from anywhere
> Launch instance in private subnet using Amazon Linux 2 and tag it as DB
Server
> This fails; create a NAT gateway in the public subnet and add the route in
the private route table
> Steps to create NAT Gateway: VPC Dashboard > NAT Gateways > Create
NAT Gateway > Select the Public Subnet > Elastic IP allocation: create new
EIP > Create NAT Gateway
Once the NAT Gateway is created, add the route in the private route table, i.e.
0.0.0.0/0 -> NAT-GW-ID
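The private route table after this step can be modeled as a longest-prefix-match lookup: traffic addressed inside the VPC stays local, and everything else matches 0.0.0.0/0 and goes to the NAT gateway. A sketch, with a made-up NAT gateway ID:

```python
import ipaddress

# Private route table after adding the NAT route. The NAT gateway ID
# is an illustrative placeholder.
routes = {
    "10.0.0.0/16": "local",      # traffic staying within the VPC
    "0.0.0.0/0": "nat-0abc123",  # everything else goes via the NAT gateway
}

def lookup(dest_ip):
    # Route tables pick the matching entry with the longest prefix.
    ip = ipaddress.ip_address(dest_ip)
    best = max(
        (ipaddress.ip_network(cidr) for cidr in routes
         if ip in ipaddress.ip_network(cidr)),
        key=lambda net: net.prefixlen,
    )
    return routes[str(best)]

print(lookup("10.0.2.15"))  # local - another instance in the VPC
print(lookup("52.1.2.3"))   # nat-0abc123 - outbound internet traffic
```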
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#NATInstance
RDS
A Database is a store for datasets where :
Relational DB
Non Relational DB
RDS
PostgreSQL
MySQL
MariaDB
Oracle
RDS Essentials
RDS is a Database as a Service.
AWS manages the underlying operating system (fully-managed).
You do not get OS-level access to the instance (no SSH, no command
line).
Patching and upgrades happen during a scheduled maintenance
window.
Multi-AZ deployments keep a synchronous standby in another AZ, with
automatic failover between them.
RDS Benefits
Automatic backups
Multi-AZ
> Like we installed the Apache Web Server to deploy the website, we need
to install Apache Tomcat to serve dynamic applications
> wget https://archive.apache.org/dist/tomcat/tomcat-7/v7.0.94/bin/apache-tomcat-7.0.94.tar.gz
> tar -xzf apache-tomcat-7.0.94.tar.gz
> cd apache-tomcat-7.0.94
> cd bin
> ./startup.sh { Hit enter }
> Also apply a custom TCP rule for port 8080 from anywhere in the
security group, as Tomcat works on port 8080 by default
> Go back to app server where tomcat is installed and perform below tasks
> cd /home/ec2-user
> cd aws-rds-java
Edit the JDBC connection string in the source so it matches your database,
for example:
Connection con =
DriverManager.getConnection("jdbc:mysql://localhost:3306/jwt",
"Akiranred", "Admin123*");
Now you can Register a user and verify the same by logging in
-> Databases -> Create Database -> Select MySQL -> Scroll down and
check "Only enable options eligible for RDS Free Usage Tier" -> Give a
DB instance identifier > username : Akiranred > password : Admin123*
> Select VPC -> select the subnet group created earlier > Public accessibility : no >
uncheck Enable deletion protection at the end -> Create Database
You now have options to select your engine. For this tutorial, click the
MySQL icon, select any 5.6.x edition and engine version, and select the
Free Tier template.
You will now configure your DB instance. The list below shows the
example settings you can use for this tutorial:
Settings:
DB instance identifier: Type a name for the DB instance that is unique
for your account in the Region that you selected. For this setup, we will
name it lamp.
Master username: Type a username that you will use to log in to your
DB instance. We will use root as the username for this setup.
Master password: Type a password that contains from 8 to 41 printable
ASCII characters (excluding /, ", and @) for your master user password.
Confirm password: Retype your password
DB instance class: Select db.t2.micro, which equates to 1 vCPU and
1 GiB of memory.
Storage type: Select General Purpose (SSD).
Allocated storage: Select the default of 20 to allocate 20 GB of storage
for your database. You can scale up to a maximum of 64 TB with
Amazon RDS for MySQL.
Enable storage autoscaling: If your workload is cyclical or unpredictable,
you would enable storage autoscaling to enable RDS to automatically
scale up your storage when needed. This option does not apply to this
tutorial.
Multi-AZ deployment: Note that you will have to pay for Multi-AZ
deployment. Using a Multi-AZ deployment will automatically provision
and maintain a synchronous standby replica in a different Availability
Zone.
VPC security groups: Select Create new VPC security group. This
creates a security group that allows connections to the database from
the IP address of the device (the web server) that you are currently
using.
Keep everything else default
Click Create Database.
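The master password rule above (8 to 41 printable ASCII characters, excluding /, " and @) can be checked with a small sketch; whitespace is treated as disallowed here, which is an assumption of this sketch:

```python
import string

def valid_master_password(pw):
    # 8-41 printable ASCII characters, excluding '/', '"' and '@'.
    # Whitespace is treated as disallowed in this sketch.
    if not (8 <= len(pw) <= 41):
        return False
    allowed = set(string.printable) - set(string.whitespace) - set('/"@')
    return all(ch in allowed for ch in pw)

print(valid_master_password("Admin123*"))   # True
print(valid_master_password("short"))       # False - fewer than 8 characters
print(valid_master_password("pass@word1"))  # False - '@' is not allowed
```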
After the database is created, change the JDBC connection string from
localhost to the RDS endpoint (copy it from the database's Connectivity &
security tab):
Connection con =
DriverManager.getConnection("jdbc:mysql://<your-rds-endpoint>:3306/jwt",
"Akiranred", "Admin123*");
use jwt;
> cp /home/ec2-user/aws-rds-java/target/LoginWebApp.war
/home/ec2-user/apache-tomcat-7.0.105/webapps
Now this is how the VPC with high availability and fault tolerance
looks.
The difference between the two diagrams is that we have now introduced
an ELB and an Auto Scaling Group.
Load Balancing
An ELB has its own DNS record set that allows for
direct access from the open internet.
AWSTemplateFormatVersion: "2010-09-09"
Description: A VPC Template
Resources:
  VPC: # IBM VPC Resource
    Type: "AWS::EC2::VPC"
    Properties:
      CidrBlock: 10.0.0.0/16
      InstanceTenancy: default
      Tags:
        - Key: Name
          Value: IBM
  PubSubnet2: # subnet resource header reconstructed; logical name assumed
    Type: "AWS::EC2::Subnet"
    Properties:
      VpcId: !Ref VPC
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: "us-east-1b"
      MapPublicIpOnLaunch: 'true'
      Tags:
        - Key: Name
          Value: IBM-Pub-Subnet2
  PrivateRouteTable: # route table resource header reconstructed
    Type: "AWS::EC2::RouteTable"
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: IBM-Pvt-RT
SNS - Components
Topic
The group of subscriptions that you send a message to.
Subscriptions
An endpoint to which a message is sent. Available endpoints
include:
HTTP
HTTPS
Email
Email-JSON
SQS
Application, mobile app notifications
(iOS/Android/Amazon/Microsoft)
SMS (Cellular text message)
Publisher
The "entity" that triggers the sending of a message
Examples include:
Human
S3 Event
CloudWatch Alarm.
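The topic/subscription/publisher relationship above can be sketched as a tiny in-memory pub/sub model. This is conceptual only, not the SNS API; the topic name matches the lab's "mail" topic:

```python
# Conceptual pub/sub model of SNS: a publisher sends one message to a
# topic, and the topic fans it out to every subscribed endpoint.
class Topic:
    def __init__(self, name):
        self.name = name
        self.subscriptions = []  # (protocol, deliver-callback) pairs

    def subscribe(self, protocol, deliver):
        self.subscriptions.append((protocol, deliver))

    def publish(self, message):
        # Fan out: every subscription receives the same message.
        for protocol, deliver in self.subscriptions:
            deliver(message)

inbox, queue = [], []
mail = Topic("mail")
mail.subscribe("email", inbox.append)  # e.g. an Email endpoint
mail.subscribe("sqs", queue.append)    # e.g. an SQS queue endpoint

mail.publish("CPU alarm fired")        # the publisher triggers the send
print(inbox)  # ['CPU alarm fired']
print(queue)  # ['CPU alarm fired']
```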
These services allow you to effectively keep tabs on the status of your
environments and who is taking what actions inside them.
CloudWatch
Monitoring Levels
Auto Scaling
Recall how the VPC with high availability and fault tolerance
looked.
The difference between the two diagrams is that we have now introduced
an ELB and an Auto Scaling Group.
But if more than 100 clients come, this server cannot
handle the extra requests and becomes slow or unstable.
Now imagine your server gets huge traffic, perhaps due to
promotional offers.
What Auto Scaling does is analyze the incoming load and
deploy new servers to meet that demand; say around 300
people are coming in, it will spin up new servers and set up
the application for us automatically.
-> Services -> SNS -> create Topic -> Name { mail } ->
AMI
-> Launch an Amazon Linux 2 instance and set up the food website with the
service enabled
-> Launch another instance by selecting the above AMI and browse the
IP, the food site should load
Launch Configuration
-> Create a security group called Food-SG and allow the SSH and
HTTP traffic from anywhere
-> Services -> EC2 -> Auto Scaling Section -> Click Launch
Configurations -> Create Launch Configuration -> Select the food AMI -> In
Configuration details step, under Advanced Details, IP Address Type, Select
Assign a public IP address to every Instance -> In security groups select the
existing security group Food-SG to allow the SSH & HTTP traffic -> Review ->
Create Launch Configuration
-> Services -> EC2 -> Auto Scaling Section -> Click Auto Scaling
Groups -> Create Auto Scaling group from the launch configuration we
created earlier -> Group name : ASG -> Group Size : launch with two instances ->
Network : choose the VPC -> Subnets : select the public subnets to launch
the instances in -> Configure Scaling Policies -> Select "Use scaling policies
to adjust the capacity" -> Scale between a MIN and MAX number of
instances, so select between 2 & 5 -> Scroll down and
click the link "Scale the Auto Scaling group using step or simple scaling
policies" ->
Increase group size and Decrease Group Size
-> In Increase group size -> Add New Alarm -> Send notification to :
select the topic(email) -> Whenever the Average of CPU Utilization is >=70 ->
Create Alarm
-> In Decrease group size -> Add New Alarm -> Send notification to :
select the topic -> Whenever the Average of CPU Utilization is <=20 -> Create
Alarm
-> Next Configure Notification -> Configure Tags -> Review -> Create
Auto Scaling Group
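The two alarms above amount to a simple control rule: add capacity when average CPU is >= 70, remove capacity when it is <= 20, and always stay within the group's min (2) and max (5). A sketch of that rule, with the lab's thresholds:

```python
def desired_capacity(current, avg_cpu, min_size=2, max_size=5):
    # Step-scaling sketch: thresholds mirror the lab's two alarms.
    if avg_cpu >= 70:
        current += 1   # "Increase group size" alarm fires
    elif avg_cpu <= 20:
        current -= 1   # "Decrease group size" alarm fires
    # The ASG never leaves its [min, max] bounds.
    return max(min_size, min(max_size, current))

print(desired_capacity(2, 85))  # 3 - scale out under load
print(desired_capacity(3, 10))  # 2 - scale in when idle
print(desired_capacity(5, 90))  # 5 - capped at the max of 5
```

The real service adds cooldowns and health checks on top of this, but the core decision the alarms drive is exactly this bounded step up or down.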