Amazon Web Services - Practical Scenarios: C.V.Udayasankar
By
C.V.UDAYASANKAR
www.udaytutorials.com 1
1. VPC (Virtual Private Cloud)
Practical Scenario 1:
(i) I want to launch two EC2 instances. One will be used as an application server and will be accessed over the Internet.
(ii) The second instance will be used as a database server; access from the outside world is restricted.
(iii) My application instance will connect to the database instance internally over the VPC subnet.
(iv) The DB instance needs to get updates through the internet.
VPC stands for Virtual Private Cloud. It is a virtual network dedicated to your AWS account and is logically isolated from other virtual networks in the AWS Cloud. It's similar to an on-premises data centre.
An Internet Gateway (IGW) in AWS is a logical connection between an Amazon VPC and the Internet. If a VPC does not have an Internet Gateway, the resources in the VPC cannot be accessed from the Internet.
You can have only one Internet Gateway per VPC.
An Internet Gateway allows resources within your VPC to access the internet, and vice versa. For this to happen, there needs to be a route table entry allowing the subnet to reach the IGW.
IGW allows resources within your public subnet to access the internet
NAT Gateway:
A NAT Gateway allows resources in a private subnet to initiate connections to the internet (think yum updates, external database connections, wget calls, and so on), with two key properties:
1. It lets private instances reach the internet for OS patches and similar outbound tasks.
2. It only works one way. The internet at large cannot get through your NAT to your private resources.
Step 1: Create the VPC
Log in to your AWS account. From the Services tab, select VPC → Your VPCs → click on "Create VPC".
Specify your VPC name and CIDR (Classless Inter-Domain Routing) block. In my case I am using the following.
Step 2: Create private and public subnets
In this step we will create two subnets, Pub_Subnet1 (10.1.0.0/24) and Pvt_Subnet2 (10.1.10.0/24), across the availability zones.
We call a subnet private because instances that get their IP addresses from it cannot be reached from the Internet; once a route to an Internet Gateway is added, instances in the subnet become reachable over the internet.
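The CIDR arithmetic behind this layout can be sanity-checked with Python's standard ipaddress module. A minimal sketch, assuming a VPC CIDR of 10.1.0.0/16 (the actual value appears only in the screenshots); it also shows why a /24 subnet yields 251 usable hosts, since AWS reserves 5 addresses per subnet:

```python
import ipaddress

# Assumed VPC CIDR for illustration (the actual value appears only in the screenshots).
vpc = ipaddress.ip_network("10.1.0.0/16")
pub_subnet1 = ipaddress.ip_network("10.1.0.0/24")   # Pub_Subnet1
pvt_subnet2 = ipaddress.ip_network("10.1.10.0/24")  # Pvt_Subnet2

# Both subnets must fall inside the VPC CIDR and must not overlap each other.
assert pub_subnet1.subnet_of(vpc) and pvt_subnet2.subnet_of(vpc)
assert not pub_subnet1.overlaps(pvt_subnet2)

# Each /24 provides 256 addresses; AWS reserves 5 per subnet (network address,
# VPC router, DNS, one reserved for future use, broadcast), leaving 251 usable.
usable = pub_subnet1.num_addresses - 5
print(usable)  # 251
```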
From the VPC Dashboard click on Subnets, then click on Create Subnet and specify the following.
Similarly, create Pvt_Subnet1 with IPv4 CIDR "192.168.0.128/25".
Step 3: Create a route table and associate it with your VPC
From the VPC Dashboard, click on "Create Route Table".
Specify the route table name Public_routetable and select your VPC. In my case the VPC is udaytutorials-vpc.
Step 4: Create an Internet Gateway (IGW) and attach it to your VPC
From the VPC dashboard there is an option to create an Internet Gateway. Specify the name of the Internet Gateway.
Now add a route for the Internet to your route table: go to Route Tables, select your route table (in my case "Public-RouteTable"), click the Routes tab, click Edit, and then click "Add another route".
Enter the Internet destination "0.0.0.0/0" and, in the Target field, your Internet Gateway will be populated automatically as shown below.
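Route selection in a VPC route table follows longest-prefix match, which is why the 0.0.0.0/0 route catches internet-bound traffic while the implicit "local" route still wins for in-VPC destinations. A small illustration; the VPC CIDR and the igw-xxxx identifier are placeholder assumptions:

```python
import ipaddress

# Hypothetical contents of Public-RouteTable: the implicit "local" route for
# the VPC CIDR plus the default route we just added pointing at the IGW.
routes = {
    ipaddress.ip_network("10.1.0.0/16"): "local",    # assumed VPC CIDR
    ipaddress.ip_network("0.0.0.0/0"):  "igw-xxxx",  # hypothetical IGW id
}

def next_hop(dest_ip):
    """Pick the matching route with the longest prefix, as a VPC route table does."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [net for net in routes if ip in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop("10.1.10.25"))  # local    (stays inside the VPC)
print(next_hop("8.8.8.8"))     # igw-xxxx (goes out via the Internet Gateway)
```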
Step 5: Change the route table of your VPC subnet
I am going to change the route table of Pub_Subnet1. From the VPC Dashboard, click on Subnets, select Pub_Subnet1, click the "Route Table" tab and then click Edit.
Change the default route table to "Public-RouteTable" and then click Save.
Step 6: Launch the APP and DB server instances in your VPC
Launch the APP server in Pub_Subnet1 and the DB server in Pvt_Subnet2.
APP server Instance :
DB server Instance :
Verify whether you can access the APP server and DB server via their public IPs. By default the DB server is not accessible through a public IP.
Log in to the DB server through the App server's private IP 192.168.0.53:
2. NAT Gateway
Step 7: Configure the NAT Gateway
If we want the DB server to access the internet, we need to configure a NAT Gateway.
To do this we need an Elastic IP.
Allocate a new Elastic IP from the VPC dashboard.
Note: the subnet selected for the NAT Gateway must be the public subnet (Pub_Subnet1), not the private subnet.
After creating the NAT Gateway, we need to create a route table for the private subnet.
Create a new route table, Pvt_routingtable.
Edit Pvt_routingtable to allow internet access (destination 0.0.0.0/0, target the NAT Gateway).
Edit the Pvt_Subnet1 to route through Pvt_routingtable
Finally, the DB server can access the internet to download patches and updates.
3. VPC Peering between two different regions in the same account
Practical Scenario 2:
A company has an AWS account that contains three VPCs: Dev (Red) and Prod (Green) in the same region, and Test (Blue) in another region but the same account.
Test is peered to both Prod and Dev. All VPCs have non-overlapping CIDR blocks. The company wants to push minor code releases from Dev to Prod to speed up time to market. How would you help the company accomplish this?
Solution:
Create a new peering connection between Dev and Prod, along with the appropriate routes. (VPC peering is not transitive, so traffic cannot flow from Dev to Prod through Test; a direct Dev-Prod peering is required.)
VPC- Peering
A VPC peering connection is a networking connection between two VPCs that enables routing of traffic
between them using private IP addresses.
Instances in either VPC can communicate with each other as if they are within the same network
A VPC peering connection can be established between your own VPCs, or with a VPC in another AWS account; the two VPCs can be in the same region or in different regions.
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway
nor a VPN connection, and does not rely on a separate piece of physical hardware.
There is no single point of failure for communication or a bandwidth bottleneck.
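A peering connection requires the VPCs to have non-overlapping CIDR blocks. A quick check with the standard ipaddress module; the three CIDRs are assumptions inferred from the instance IPs used later in this walkthrough:

```python
import ipaddress
from itertools import combinations

# CIDRs assumed from the walkthrough's instance IPs (Red, Green, Blue).
vpcs = {
    "Red":   ipaddress.ip_network("192.168.0.0/16"),
    "Green": ipaddress.ip_network("10.1.0.0/16"),
    "Blue":  ipaddress.ip_network("172.16.0.0/16"),
}

# Peering is only possible between VPCs whose CIDR blocks do not overlap.
for (name_a, net_a), (name_b, net_b) in combinations(vpcs.items(), 2):
    assert not net_a.overlaps(net_b), f"{name_a} and {name_b} overlap"
print("all CIDR pairs are non-overlapping; peering is possible")
```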
Step 1: Configure the 3 VPCs
Step 3: Configure the 3 Internet Gateways
Blue's Instance :
Red Instance :
All 3 Instances :
Step 5: Configure the peering connections
We need to peer (i) VPC Green to VPC Red.
(ii) Create a peering connection from VPC Green to VPC Blue.
Green's Route Table Configuration:
Red's Routing Table Configuration :
Step 6: Finally, log in to each instance and test the connectivity
Red's Instance IP : 192.168.1.100
Green's Instance IP : 10.1.1.100
Blue's Instance IP : 172.16.1.98
From Red's Instance :
Blue's Instance :
4. VPC Peering between two different regions and different accounts
Practical Scenario 3:
A company has two AWS accounts. The first contains two VPCs, Dev (Green) and Prod (Red), in the same region; a third VPC, Test (Blue), is in another region and in the other account.
Task: create a new peering connection between Prod and Test along with the appropriate routes.
VPC Peering Between Two Different Accounts
Step 1: Create the two VPCs in the two different regions separately
Creating a VPC is the same as in the previous practical scenario.
Step 2: Create a VPC peering request from N.Virginia
Step 3: In Mumbai, accept the VPC peering request from N.Virginia
Step 4: Configure the routing tables in both VPCs
1) N.Virginia: Red's routing table configuration:
2) Mumbai: Blue's routing table for the peering with N.Virginia's Red:
Step 5: Test the connectivity from both regions
Red's Instance IP : 192.168.1.100
Blue's Instance IP : 172.16.0.10
5. STORAGE
S3 = Simple Storage Service = object storage (similar to Google Drive)
S3 Glacier = archive storage
EFS = Elastic File System = network file system = Linux shared file system
FSx = managed Lustre or Windows shared folders
Storage Gateway = a mediator between your data centre and AWS S3 storage
AWS Backup = centralized backup for EBS, EC2, S3, Storage Gateway, and more
5a. Simple Storage Service (S3) Bucket
Amazon simple storage service (Amazon S3) is an object storage service that offers
industry-leading scalability, data availability, security, and performance. An object is simply a
piece of data in no specific format: it could be a file, an image, a piece of seismic data or some
other kind of unstructured content.
It's similar to Google Drive.
It is used as storage for the internet. It has a simple web services interface that lets developers store and retrieve data from anywhere around the globe. It is highly scalable, fast, inexpensive and reliable storage. Buckets are region specific.
Is the S3 bucket free?
When you first start using Amazon S3 as a new customer, you can take advantage of a free usage tier. This gives you 5 GB of S3 Standard storage, 2,000 PUT requests, 20,000 GET requests, and 15 GB of data transfer out of your storage "bucket" each month, free for one year.
How many buckets can I have in S3?
By default, customers can provision up to 100 buckets per AWS account; however, you can increase your Amazon S3 bucket limit by visiting AWS Service Limits. An object can be from 0 bytes to 5 TB. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
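The multipart limits can be made concrete: parts must be between 5 MiB and 5 GiB (only the last part may be smaller), and a single upload may have at most 10,000 parts. A sketch of the arithmetic:

```python
import math

MiB = 1024 ** 2
GiB = 1024 ** 3

# S3 multipart limits: parts of 5 MiB - 5 GiB (the last part may be smaller),
# at most 10,000 parts per upload.
MIN_PART, MAX_PART, MAX_PARTS = 5 * MiB, 5 * GiB, 10_000

def part_count(object_size, part_size):
    """Number of parts needed to upload object_size bytes in part_size chunks."""
    if not MIN_PART <= part_size <= MAX_PART:
        raise ValueError("part size must be between 5 MiB and 5 GiB")
    parts = math.ceil(object_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; choose a larger part size")
    return parts

# A 5 TiB object (the maximum) in 1 GiB parts:
print(part_count(5 * 1024 * GiB, 1 * GiB))  # 5120
# A 200 MiB object in 5 MiB parts:
print(part_count(200 * MiB, 5 * MiB))       # 40
```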
Is S3 a file system?
S3 is not a distributed file system. It's an object store that keeps data as key-value pairs. Each bucket is a new "database", with keys being your "folder path" and values being the binary objects (files). It's presented like a file system and people tend to use it like one.
How much does S3 really cost?
(A pricing table comparing storage and request charges for S3 Standard and S3 Standard – Infrequent Access appeared here.)
Additional Features of S3 Bucket
1) Versioning:
Versioning in AWS S3 can be described simply as keeping incremental copies of the same file as you modify it. Versioning, or version control, is tracking the changes made to files so that you can revert to any earlier point in time.
It can be used to preserve, recover and restore every earlier version of every object you store in your Amazon S3 bucket. Objects that are unintentionally deleted or overwritten can easily be recovered with versioning.
AWS S3 versioning stores multiple copies of the same resource instead of plainly replacing it. So if you had a file abc.txt and you re-uploaded abc.txt, the old file would no longer be visible; but with versioning enabled you can go back and check all the previous versions of your abc.txt.
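The abc.txt behaviour described above can be sketched with a toy in-memory model. This is an illustration of the versioning semantics only, not the real S3 API:

```python
# A toy in-memory model of S3 versioning semantics (illustration only,
# not the real S3 API): every PUT to the same key keeps the old versions.
class VersionedBucket:
    def __init__(self):
        self._versions = {}  # key -> list of bodies, oldest first

    def put(self, key, body):
        self._versions.setdefault(key, []).append(body)

    def get(self, key):
        """Like S3, a plain GET returns only the latest version."""
        return self._versions[key][-1]

    def list_versions(self, key):
        return list(self._versions[key])

bucket = VersionedBucket()
bucket.put("abc.txt", "first draft")
bucket.put("abc.txt", "revised draft")  # re-upload does not destroy the old copy

print(bucket.get("abc.txt"))            # revised draft
print(bucket.list_versions("abc.txt"))  # ['first draft', 'revised draft']
```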
2) Static Website Hosting
Static website hosting is one of the most popular use cases of Amazon S3. It allows you to host an entire static website at very low cost; Amazon S3 is a highly available and scalable hosting solution.
Step 1: Upload the HTML files
Step 2: Create the bucket policy under the bucket's Permissions tab
The policy is written in JSON:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::udaytutorrials-s3-bucket/*"
    }
  ]
}
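Since the policy is plain JSON, it can also be generated for any bucket name rather than edited by hand. A small sketch; the bucket name is the one used in this example:

```python
import json

def public_read_policy(bucket_name):
    """Build the public-read bucket policy shown above for any bucket name."""
    return {
        "Version": "2008-10-17",
        "Statement": [
            {
                "Sid": "PublicReadForGetBucketObjects",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

policy = public_read_policy("udaytutorrials-s3-bucket")
# json.dumps gives the text to paste into the bucket's Permissions tab.
print(json.dumps(policy, indent=2))
```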
Step 3: Select Static Website Hosting
Website URL: http://udaytutorrials-s3-bucket.s3-website-us-east-1.amazonaws.com
Step 4: Access the website in a browser
3) Server access logging:
Server access logging provides detailed records of the requests that are made to a bucket. Server access logs are useful for many applications; for example, access log information is useful in security and access audits.
Step 1: Select 'Enable Logging', provide the target bucket name and target prefix, and click 'Save'.
Step 2: Navigate to 'Permissions', select the S3 log delivery group, and grant access for log delivery. Click 'Save'.
Step 3: To view the logs, navigate to 'Overview'. The server access logs have been delivered to the target S3 bucket.
6. S3 Glacier
Amazon S3 Glacier and Amazon S3 Glacier Deep Archive are cold data storage services within Amazon's popular S3 cloud storage platform. "Cold data storage" refers to any data stored in the cloud that is rarely or infrequently accessed and retrieved; the data may not be needed for months, years or even decades.
S3 Glacier is an extremely low-cost storage service that provides durable storage with security features for
data archiving and backup.
Glacier and Glacier Deep Archive are primarily designed as long-term backup solutions for individuals and businesses. These platforms provide a comprehensive solution to securely store your data at an affordable price; Amazon claims 99.999999999% (eleven nines) durability.
AWS Glacier is widely used for scientific data, digital media, regulatory and compliance data, healthcare data, and many other purposes.
Depending on the retrieval option, archived data can be retrieved within minutes (expedited) or within hours (standard and bulk retrievals).
Step 1: Before using S3 Glacier we need a Glacier client application (here, FastGlacier) on our laptop, so we first create a user in Identity and Access Management (IAM).
Step 2: Log in to the FastGlacier application using the IAM user's access key and secret key.
Step 3: Go to Storage services → S3 Glacier → Create a vault.
Step 4: In FastGlacier, check whether the vault has been created.
Step 5: You can now upload, download and manage archives in the vault.
7. EFS - Elastic File System
It's a kind of network file system (which means it may have higher latency, but it can be shared across several instances, even between regions).
It is more expensive than EBS, but it gives extra features.
It's a highly available, fully managed service.
You can attach EFS storage to an EC2 instance, and it can be accessed by multiple EC2 instances simultaneously.
Since December 2016 it has been possible to attach your EFS storage directly to on-premises servers via Direct Connect.
Amazon VPC security groups and network access control lists allow you to control network access to your EFS resources.
SSD-based storage that grows or shrinks as needed; it can grow to petabyte scale, with throughput and IOPS scaled accordingly.
Amazon EFS supports the Network File System version 4 (NFSv4) protocol.
Standard file and directory permissions (chown and chmod) control access to the directories and files.
Step 1: Go to the Storage services and select EFS
Step 2: Configure the EFS file system
Step 3: On each EC2 instance, add an /etc/fstab entry to mount the NFS share from AWS EFS
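The fstab entry follows a fixed pattern. A sketch that builds the line, assuming a hypothetical file-system DNS name (fs-12345678...) and the commonly recommended EFS NFS mount options; substitute your own EFS mount target:

```python
# Sketch of the /etc/fstab line for an EFS NFSv4 mount. The file-system DNS
# name passed in below is a hypothetical placeholder; use your own EFS mount
# target. Mount options follow the commonly recommended EFS NFS settings.
def efs_fstab_line(fs_dns, mount_point):
    opts = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    return f"{fs_dns}:/ {mount_point} nfs4 {opts} 0 0"

line = efs_fstab_line("fs-12345678.efs.us-east-1.amazonaws.com", "/mnt/efs")
print(line)
```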
Step 4: Create a file on any one of the servers and watch the shared folder on the other two servers. The file created on one server appears on the other two.
8. AWS Backup
AWS Backup is a centralized backup service that allows you to back up your application data in the AWS Cloud and on premises in an easier and more cost-effective manner. AWS Backup is a fully managed, policy-based backup solution. It simplifies and automates backup management and enables you to fulfil your regulatory and business backup compliance requirements.
With AWS Backup, the customer will be able to configure the policies for data backup and track the backup
process for AWS resources such as Amazon RDS databases, Amazon EFS file systems, AWS Storage Gateway volumes, Amazon DynamoDB tables, and Amazon EBS volumes. AWS Backup lets you protect your AWS resources in just a few clicks in the AWS Backup console.
When a customer builds an application in the AWS cloud, the application data can be distributed across
various AWS services such as block storage, database services, file systems, and object storage. Though
these AWS services have built-in backup capabilities, customers otherwise have to write scripts to implement retention policies, automate backup scheduling, and consolidate backup activity across AWS services.
HOW IT WORKS
The process is shown in the following figure:
1. Create a backup plan
2. Assign resources to the plan
3. Monitor the backup process
4. Restore the backup
Backup Creation
Step 1: To start, Open the AWS Backup Console and click on the Create backup plan.
Step 2: You will find three start options under Create backup plan. Here we'll start from scratch and build a new plan; let's name the backup plan Udaytutorialsbackup1.
Step 3: Configure the backup rule
Manual Backup / On Demand Backup
Backup Restoration
Step 1: Choose Restore backup from the dashboard
The restore is delivered in the form of a snapshot.
9. AWS Storage Gateway
We can use the AWS Storage Gateway (ASG) service to connect our local infrastructure for files etc. with Amazon cloud storage services. Three storage solutions are available: 1) volume based, 2) file based, 3) tape based.
The AWS Storage Gateway can run either on-premises as a virtual machine (VM) or as an EC2 instance directly in AWS.
1) File Gateway
A file gateway supports a file interface into Amazon Simple Storage Service (Amazon S3) and combines a
service and a virtual software appliance.
By using this combination, you can store and retrieve objects in Amazon S3 using industry-standard file
protocols such as Network File System (NFS) and Server Message Block (SMB).
The software appliance, or gateway, is deployed into your on-premises environment as a virtual machine
(VM) running on VMware ESXi or Microsoft Hyper-V hypervisor. The gateway provides access to objects
in S3 as files or file share mount points. With a file gateway, you can do the following:
You can store and retrieve files directly using the NFS protocol.
You can store and retrieve files directly using the SMB protocol.
You can access your data directly in Amazon S3 from any AWS Cloud application or service.
You can manage your Amazon S3 data using lifecycle policies, cross-region replication, and
versioning. You can think of a file gateway as a file system mount on S3.
Step 2: Choose the file gateway
Step 4: Launch the AWS Storage Gateway appliance EC2 instance and log in to configure it
Step 5: Configure Network
Step 6: Do the network connectivity test
Step 7: Name the Storage Gateway
The file share has been created; now mount it.
Step 8: Mount the shared folder to access the S3 bucket
Step 9: Create some files in the shared folder; they are replicated to the S3 bucket
Adding a VMware Virtual Machine as a File Gateway
Step 1: Select VMware ESXi and click Next to configure the appliance. Since we don't have the appliance in our environment, we download the image and deploy it in our virtual environment.
Let's stop here, deploy the appliance, and then continue.
Step2: Deploy the AWS Storage Gateway
Open the vSphere Web Client, right-click the cluster/host and select the Deploy OVF Template option.
Step 3: Review the OVA details and click Next
Step 4: Enter a name for the appliance, select the folder where the VM will be placed, and click Next
Step 5: Select the storage where the VM will reside and the disk format, then click Next
Step 6: Select the network and click Next
Step 7: Once the appliance is deployed, you can power it on and configure the IP address
Step 8: Add a new 150 GB hard drive from Virtual Hardware, on a new SCSI controller of type VMware Paravirtual.
Step 9: Now power on the AWS appliance virtual machine and log in to the console using the default username and password (admin / password)
Step 10: Configure the network by selecting option 2.
Select the Configure Static IP option (option 3), add the details, and enter "Y" to save the configuration.
Press Return to continue; the appliance will restart networking and you can see the configuration has been updated.
Next you have to add the DNS servers (option 6) so the appliance can access the internet. Provide the details and enter "Y" to apply them.
Enter "x" to complete the configuration.
Step 11: Now we can continue configuring the AWS Storage Gateway from the AWS service console.
Enter the gateway IP (the appliance IP) and click the Connect to gateway option.
Next we have to activate the gateway: provide a gateway name and click Activate gateway.
You will see the message "Gateway is now active", and the console will look for the disks available on the gateway. If you have more than one disk, choose the disk required for the cache, then click Save and Continue.
You will see the message "Successfully created gateway"; select the gateway to view its details.
We have successfully configured the AWS gateway. Next we have to create the S3 bucket required for the file share.
The remaining steps are the same as for the file gateway using Amazon EC2.
2) Volume Gateway
We will deploy the storage gateway on the VMware ESXi server; the volume type we are going to use is a gateway-cached volume. So let's begin:
From the AWS Management Console, select "Storage Gateway"; this starts the process to set up and activate the gateway.
As you can see, there are four steps to set up and activate the gateway. We will choose the option that stores the most recently accessed data locally while also storing the data in Amazon S3:
Continue to the next step and choose to deploy the VM on a VMware ESXi server:
Now download the storage gateway VM:
Once the VM is downloaded, you can proceed with deploying it. The wizard explains at each step what you need to do and how to configure the ESXi server. First you will need to connect to the ESXi server using the vSphere client as shown below:
Then deploy the VM as explained:
Once the VM is deployed, you should see it in the list of your VMs:
Continue with synchronizing the time between the guest and the host; the exact steps for doing this are shown:
Next it is time to make some adjustments to the VM by adding two hard drives and configuring the SCSI controller.
Right-click the VM and choose "Edit Settings":
You will get a list of the hardware that the VM has, and we need to add more. First we will add the hard disks required for the cache storage and the upload buffer; each of these two disks will be used later when we activate the storage gateway. We add the upload-buffer disk first. Click on "Add":
Confirm that you want to create a new disk:
Enter the size that you want. Because this is a testing environment, we will go with a small disk:
As you can see, the hard disk was added. At the same time, another piece of hardware was added, which we will discuss later:
Add the second hard disk, which will be used for cache storage, and you should have something like this:
It's time to change the SCSI controller type to VMware Paravirtual. This is a prerequisite, and you will be asked to do it during the storage gateway deployment step.
Again edit the VM, select the SCSI controller and click on "Change Type":
And select the required controller type:
At the end of all these changes, you should have something like this:
Back in the AWS Management Console, the next step asks you to provide the local disk storage and notes that you need to use a paravirtualized controller on the VM. We did this already, so we can move on:
The last thing that you are asked to do is to create the disks for cache storage and upload buffer. We did this as well:
Once you are done with this step, you will need to activate the gateway.
In order to do this, we first need to power on the VM. Once the VM is powered on, it needs an IP address. The VM's network interface is configured to obtain an IP address through DHCP, so you must have a DHCP server somewhere in the network that can assign an IP address to your VM. You can also wait for the VM to boot, connect to the console, and configure the IP address manually.
In my case, the VM got its IP address through DHCP. The IP address is required when you get to the step of activating the storage gateway:
After the VM has booted, you can continue the wizard from the AWS Management Console. At the last step, you will be required to enter the IP address that was assigned to the VM. Click on "Proceed to Activation" to continue:
Fill in the details such as the timezone where the gateway is running and the name of the gateway. In order to activate the gateway, the browser must run on a machine with network connectivity to the gateway host. Click on "Activate my Storage Gateway" to activate the gateway:
Next you will see a list of your storage gateways. Right now we have only one; with the "Gateway" tab selected, you can see a few of its details.
Now the storage gateway is activated, but you cannot yet use it the way you intend to, because no storage volume has been added. As you might remember, the storage volume is what will hold your data.
Next we will see how to create a storage volume and how to access it from a computer.
By this point, you should have a good understanding of how to deploy a storage gateway.
To create a storage volume, select the “Volumes” tab and then click on “Create Volume”:
Next you will go through a few steps. Some of them can be skipped, but we will come back later and make real use of them.
The first step is to configure the local storage. Remember that earlier we added two hard disks to the storage VM: one for cache storage and one for the upload buffer. We will make use of them here:
You can configure alarms for when the upload buffer and cache storage disks fill above a threshold that you configure; you can either set the alarms or skip them. This one is for the upload buffer utilization:
And this one is for the cache storage:
Now the interesting part follows: configuring the storage volume. This is the virtual hard disk to which you will be backing up your data; you can configure volumes of up to 32 TB. You will also get the iSCSI target name (iqn.1997-05.com.amazon:myvolume), which you will need later to attach the volume to your server. The host IP address and port cannot be modified; the host IP address is the one that was assigned to the storage gateway VM. Click on "Create Volume" to finish:
Optionally, you can configure the CHAP authentication as part of volume creation:
You will find the volume just created in the “Volumes” tab where you can see the size and the status:
Select the volume and then choose the “Details” tab from below to get detailed information about the volume:
More interesting information is contained in the "iSCSI Target Info" tab; basically, this is the information that will be required when you start using the iSCSI initiator software on the client.
And you are done with the volume creation. Next you will need to use iSCSI initiator software to connect to the gateway's volume.
An iSCSI initiator allows a host to connect to an external iSCSI-based storage array through an Ethernet NIC.
The software needed depends on the operating system you are using: it may already be installed, as on Windows 7, or you may need to install it, as on Ubuntu.
For our testing we will use Windows 7 because the software is already installed.
Start the iSCSI Initiator software, select the “Discovery” tab and then click on “Discover Portal”:
The Discover Portal dialog takes the IP address of the storage gateway and the port on which the iSCSI initiator should connect. We will use the information that we got during the storage gateway deployment and volume creation:
Once you click on OK and go to the "Targets" tab, you will see that our target was auto-discovered but is inactive:
Select the target to which you want to connect and click on “Connect”:
You will be asked for confirmation and you can enable extra options:
Now you are connected to the target:
Now it's time to use the volume. Go to Control Panel → Computer Management → Storage. You should see a new disk that is not initialized:
Right click on this disk and initialize it:
Now that the disk is initialized, it's time to create a new volume and assign it a drive letter, as for any other partition you might have. Right-click the disk and select "New Simple Volume":
Follow the process where you will specify the volume size, the drive letter and optionally you can format the new drive and eventually you
should get something like this:
As you can see, during volume creation, I assigned drive letter “E” and now along with the initial drive that I had, I can see the new drive in “My
Computer”:
Now you can copy data to the new drive as if it were a local hard disk, but the data is actually written to the storage gateway and eventually
to the Amazon S3 bucket.
Let’s see quickly how you connect using the iSCSI initiator from Ubuntu.
10. VPC Endpoints
Practical Scenario 4 :
You have an application running on an Amazon EC2 instance that uploads 10 GB
video objects to Amazon S3. Video uploads are taking longer than expected, despite
using multipart upload, because of limited Internet bandwidth, resulting in poor application
performance. Which action can help improve the upload performance?
A VPC endpoint enables a private connection between your VPC and supported AWS services,
as well as VPC endpoint services powered by AWS PrivateLink, using private IP addresses.
A VPC endpoint does not require a public IP address, access over the Internet, a NAT device, a
VPN connection, or AWS Direct Connect.
Traffic between VPC and AWS service does not leave the Amazon network
Endpoints are virtual devices that are horizontally scaled, redundant, and highly
available VPC components. They allow communication between instances in the VPC and
AWS services without imposing availability risks or bandwidth constraints on your network
traffic.
Endpoints do not support cross-region requests; ensure that the endpoint is created
in the same region as your bucket.
AWS currently supports two types of Endpoints
VPC Interface Endpoints
VPC Gateway Endpoints
A VPC endpoint policy is an IAM resource policy attached to an endpoint to control
access from the endpoint to the specified service. By default, the endpoint policy allows full
access to the service.
Endpoint policy does not override or replace IAM user policies or service-specific policies
(such as S3 bucket policies).
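To make the default-allow behavior concrete, here is a minimal Python sketch of simplified IAM-style policy evaluation. The policy documents and the bucket name are hypothetical, and real IAM evaluation has many more features (principals, conditions, policy intersection with user policies):

```python
from fnmatch import fnmatch

# Default VPC endpoint policy: full access to the service.
DEFAULT_POLICY = {
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"}
    ]
}

# A custom policy restricting the endpoint to reads of one (made-up) bucket.
RESTRICTED_POLICY = {
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-bucket/*"}
    ]
}

def is_allowed(policy, action, resource):
    """Simplified evaluation: explicit Deny wins, then explicit Allow,
    otherwise implicit deny."""
    decision = False
    for stmt in policy["Statement"]:
        matches = (fnmatch(action, stmt["Action"]) and
                   fnmatch(resource, stmt["Resource"]))
        if not matches:
            continue
        if stmt["Effect"] == "Deny":
            return False          # an explicit deny always wins
        decision = True           # an explicit allow (unless denied later)
    return decision

print(is_allowed(DEFAULT_POLICY, "s3:PutObject", "arn:aws:s3:::any-bucket/key"))       # True
print(is_allowed(RESTRICTED_POLICY, "s3:GetObject", "arn:aws:s3:::example-bucket/a"))  # True
print(is_allowed(RESTRICTED_POLICY, "s3:PutObject", "arn:aws:s3:::example-bucket/a"))  # False
```

Note how the default policy allows every action on every resource, which is why tightening the endpoint policy (or the bucket policy) is still your responsibility.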
A VPC Gateway Endpoint is a gateway that is a target for a specified route in the route table,
used for traffic destined to a supported AWS service.
VPC Gateway Endpoint currently supports S3 and DynamoDB services.
VPC Interface Endpoints:
A VPC interface endpoint enables connectivity to services powered by AWS PrivateLink.
These include some AWS services (e.g. CloudTrail, CloudWatch), services hosted
by other AWS customers and partners in their own VPCs (referred to as endpoint services),
and supported AWS Marketplace partner services.
Step1: Create the VPC endpoint from the dashboard
Step2: Choose the VPC containing the private subnet
Step3: Log in to the private server through the public server and access the S3 bucket
11. Security Groups vs NACL
Firewall: A firewall is software or firmware that prevents unauthorized access to a network.
It can allow or deny inbound and outbound traffic based on:
1) port numbers
2) source and destination IPs
3) security groups
Stateless firewall: you must explicitly allow both inbound and outbound traffic; every packet is evaluated against the rules.
Stateful firewall: you only need to open the inbound side; the firewall remembers the connection and
automatically allows the return traffic.
SECURITY GROUPS:
i) It is a stateful firewall.
ii) It applies at the resource level: EC2 instances, load balancers, RDS, etc.
iii) By default a security group denies all inbound traffic; we must explicitly allow the traffic
that is needed.
NACL (Network Access Control List):
i) It is a stateless firewall.
ii) We need to allow both inbound and outbound traffic explicitly.
iii) It applies to VPC subnets only.
iv) By default (in the default NACL) all traffic is allowed.
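The stateful/stateless distinction above can be sketched in a few lines of Python. This is a toy model — real firewalls track connections by full 5-tuples and ephemeral ports, not just a port number:

```python
def stateless_check(rules, direction, port):
    """NACL-style check: every packet is evaluated against the rules
    for its direction; nothing is remembered between packets."""
    return (direction, port) in rules

class StatefulFirewall:
    """Security-group-style check: connections that matched an inbound
    rule are tracked, and the matching return traffic is allowed
    automatically, with no outbound rule required."""
    def __init__(self, inbound_ports):
        self.inbound_ports = inbound_ports
        self.tracked = set()

    def check(self, direction, port):
        if direction == "in" and port in self.inbound_ports:
            self.tracked.add(port)   # remember the connection
            return True
        if direction == "out" and port in self.tracked:
            return True              # reply traffic rides the tracked state
        return False

sg = StatefulFirewall(inbound_ports={80})
print(sg.check("in", 80))    # True: inbound rule matches
print(sg.check("out", 80))   # True: reply allowed automatically

nacl = {("in", 80)}          # inbound rule present, outbound rule missing
print(stateless_check(nacl, "in", 80))    # True
print(stateless_check(nacl, "out", 80))   # False: stateless, reply blocked
```

This is exactly why a NACL that allows inbound port 80 but forgets the outbound (ephemeral-port) rule silently breaks web traffic, while a security group does not.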
SECURITY GROUPS:
Step1: Create a new security group for the Green EC2 instance by choosing its VPC
Step2: Create inbound and outbound rules to allow the traffic
Step3: Assign the newly created Security Group to Green-EC2 Instances
Practical Scenario 5:
You have an NGINX web application running on an Amazon EC2 instance that needs to be
accessible on port 80, reachable via SSH on port 22, and answer ping requests from any IP.
Solution:
Step1: Create rule in Security group .
Step2: Allow the required IP in the Inbound Rule of Security group
Step3: Now validate it by accessing the server over SSH and launching the web application
in the browser
NETWORK ACCESS CONTROL LIST (NACL):
In a NACL we need to define both the inbound and outbound rules.
Note: Be careful when modifying an existing NACL in a production environment, because its rules are imposed on
subnets and therefore affect every EC2 instance in those subnets.
12.IPSec VPN -VPG - OPEN VPN
To establish a VPN (Virtual Private Network) connection in AWS, we need:
i) Virtual Private Gateway (VPG)
ii) Customer Gateway (CG)
It is similar to a traditional site-to-site VPN.
The customer gateway represents the VPN router on the customer's premises, e.g. a Cisco
ASA router.
The virtual private gateway is the virtual router on the AWS side. We don't need in-depth
knowledge of this device: just create the VPG and attach it to your required VPC, and AWS
will automatically assign two public tunnel IPs.
The VPN tunnel is established once traffic is generated from the customer's side of the VPN connection.
SITE TO SITE VPN
Practical Scenario 5:
You have deployed a database server in AWS and need to access it from your home or
office private network. How will you do it?
Step1: Launch the instance without a public IP and select a VPC that does not have
public access.
Step2: Create the VPN gateway and attach it to the required VPC
Step3: Create Customer Gateway
Log in to the Cisco VPN router located in the home/office premises
Put the Public IP of Customer's VPN Router in the Customer Gateway
Step4 : Create a VPN Connection
We need to download the VPN configuration details in txt format and use them to
configure the two tunnels on the Cisco VPN router in the home/office premises
Edit the VPC route tables to allow the home/office private network range by selecting the VPG
Validate it by logging in to the AWS server from home
OPEN VPN ---> POINT TO SITE VPN
It is a cheap VPN option in AWS; no VPN router is needed at home or in the office.
Step1: Launch an OpenVPN EC2 instance.
Let the Security Group be default
Step2: Login to the Open VPN Ec2-Instance : [email protected]
Answer all the questions to configure the Open VPN
Step3: Change the password of the user - openvpn
Step4: Connect to the VPN from your laptop using the OpenVPN client software
Step5: Access the AWS server from your Home / office Laptop
13. Transit Gateway
AWS Transit Gateway is a service that enables customers to connect their Amazon Virtual
Private Clouds (VPCs) and their on-premises networks to a single gateway. Any new VPC is
simply connected to the Transit Gateway and is then automatically available to every other
network that is connected to the Transit Gateway.
Simply , A transit gateway is a network transit hub that you can use to interconnect your virtual
private clouds (VPC) and on-premises networks.
Practical Scenario 6:
You have 3 VPCs, and all 3 need to communicate with each other. How can you accomplish
this without a complicated configuration?
Components of transit gateway:
Attachment: You can attach a VPC or VPN connection to a transit gateway.
Route table: A transit gateway has a default route table and can optionally have additional route tables. A
route table includes dynamic and static routes that decide the next hop based on the destination IP address of
the packet. The target of these routes could be a VPC or a VPN connection. By default, the VPCs and VPN
connections that you attach to a transit gateway are associated with the default transit gateway route table.
Route Associations: Each attachment is associated with exactly one route table. Each route table can be
associated with zero to many attachments.
Route propagation: A VPC or VPN connection can dynamically propagate routes to a transit gateway route
table. With a VPC, you must create static routes to send traffic to the transit gateway. With a VPN
connection, routes are propagated from the transit gateway to your on-premises router using Border
Gateway Protocol (BGP).
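The route-table behavior described above boils down to longest-prefix matching: the most specific route containing the destination IP wins. A minimal Python sketch, with made-up attachment names and CIDRs:

```python
import ipaddress

# Toy transit gateway route table: destination CIDR -> attachment.
routes = {
    "10.1.0.0/16": "tgw-attach-vpc1",
    "10.2.0.0/16": "tgw-attach-vpc2",
    "0.0.0.0/0":   "tgw-attach-vpn",   # default route to on-premises via VPN
}

def next_hop(dst_ip):
    """Pick the most specific (longest-prefix) route containing dst_ip."""
    ip = ipaddress.ip_address(dst_ip)
    best = None
    for cidr, target in routes.items():
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

print(next_hop("10.2.3.4"))       # tgw-attach-vpc2 (matches the /16)
print(next_hop("192.168.1.10"))   # tgw-attach-vpn  (falls through to 0.0.0.0/0)
```

In the real service the propagated VPC CIDRs and BGP-learned on-premises prefixes populate this table for you; the lookup logic is the same idea.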
If we ping across VPCs, there is no response, since no connection exists between the VPCs.
We have 3 instances, one in each of the 3 VPC networks
Step1: Create Transit Gateway
Step2: Create Transit Gateway attachments for all 3 available VPCs
AWS will automatically handle the routing part for these attachments
Step3: Edit the route tables of all VPCs
Step4: Now validate the ping requests from any one of the servers
Step5: Create a customer gateway if access from the home/office network is required.
Step6: Create a transit gateway attachment, choosing the VPN
14. EBS- Elastic Block Storage
Amazon Elastic Block Store is the block storage system of AWS. A block storage volume works
similarly to a hard drive: you can store any type of file on it or even install a whole operating system on it.
Amazon EBS volumes are network-attached block storage. They work with EC2 instances: you create an
Amazon EBS volume and attach it to an instance, where it resembles a hard drive on your computer.
A volume can be attached to only one instance at a time, but it is
possible to detach an Amazon EBS volume from one EC2 instance and attach it to another.
Now let’s understand what is AWS EBS Volume in technical terms.
Amazon does offer local (instance store) storage for every EC2 instance that you can use while the
instance is running, but as soon as the instance is shut down, the data in that local storage is lost. Therefore, if you
need to keep the data, you need Amazon EBS with your EC2 instance.
EBS volumes are backed by either a solid-state drive (SSD) or a hard disk drive (HDD).
SSD (Solid-state drive): SSD backed EBS Volumes are optimized specially for transactional workloads
where the volume is supposed to perform a lot of small read and write operations.
HDD (Hard disk drive): HDD backed volumes are specifically designed for large workloads.
There are five types of EBS volumes. You can use whatever works best for your use case at
the time of launching a new instance.
1. General Purpose SSD (gp2)
This is the volume that EC2 chooses by default as the root volume of your instance. It provides a balance of
both price and performance. SSD stands for Solid State Drive, which is many times faster than an HDD (Hard
Disk Drive) for small input/output operations. Having it as the root volume for your instances can
significantly improve the performance of your server.
Its performance is measured in IOPS (Input/Output Operations Per Second), i.e. how many input and
output operations the volume can perform per second. These EBS volumes provide a ratio of 3 IOPS per GB, with the
ability to burst up to 3,000 IOPS for extended periods of time. They support up to 10,000 IOPS and 160
MB/s of throughput. IOPS refers to operations on blocks that are up to 256 KB in size.
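The gp2 baseline figures above (3 IOPS per GB, a 100 IOPS floor, and a 10,000 IOPS cap, per this text) reduce to a one-line formula. A small Python sketch:

```python
def gp2_baseline_iops(size_gib):
    """Baseline gp2 performance as described above: 3 IOPS per GiB,
    with a 100 IOPS floor and a 10,000 IOPS cap."""
    return min(max(3 * size_gib, 100), 10_000)

print(gp2_baseline_iops(10))     # 100:   small volumes sit at the floor
print(gp2_baseline_iops(500))    # 1500:  3 IOPS x 500 GiB
print(gp2_baseline_iops(5000))   # 10000: large volumes hit the cap
```

Volumes below the floor can still burst to 3,000 IOPS using burst credits; this formula covers only the sustained baseline.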
2. Provisioned IOPS SSD (io1)
These volumes are designed for I/O-intensive workloads; provisioned performance ranges from 100 IOPS to 32,000 IOPS.
They support up to 500 MB/s of throughput and can be used as the root
volume for an EC2 instance. Here, you are charged for the provisioned IOPS along with the storage space of
your volume.
3. Throughput Optimized HDD (st1)
These are low-cost magnetic storage volumes that define performance in terms of throughput.
They are designed for large, sequential workloads like big data, data warehouses, and log processing; you
will probably use these volumes for a Hadoop cluster. They provide throughput of up to 500 MB/s and
cannot be used as the root volume for an EC2 instance.
These can be used as root volumes for EC2 instances.
Step2: Provide the size of the new volume and choose the availability zone matching the instance's
availability zone. Scroll down and click ‘Create Volume’
Before Disk Adding
Create a Snapshot from EBS
What Is an EBS Snapshot?
An EBS snapshot is a point-in-time backup of your EBS volume. It is a “copy” of the data on your EBS
volume. If you are looking for a disaster-recovery solution for your EBS volume, this is the solution.
If you want to “backup” your EC2 instance, then you want to create EBS snapshots of the EBS volumes
attached to the instance.
Is an EBS Snapshot a Full or Incremental Backup?
Yes. An EBS snapshot is actually both a full backup and an incremental backup.
When an EBS snapshot is created, only the data on the EBS volume that has changed since the last EBS
snapshot is stored in the new EBS snapshot. In this way, it’s an incremental backup. Internally, the EBS
snapshots chain together. When an EBS snapshot is used to restore data, all data from that EBS snapshot can
be restored as well as the data from the previous snapshots. In this way, the snapshot is a full backup.
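The incremental-chain idea can be sketched as a dictionary merge: each snapshot holds only the blocks that changed since the previous one, and a restore layers them oldest-to-newest. The block names and contents below are made up for illustration:

```python
# Each snapshot stores only the blocks that changed since the previous one.
snapshots = [
    {"block0": "A", "block1": "B", "block2": "C"},  # snap-1: full data at t1
    {"block1": "B2"},                                # snap-2: only block1 changed
    {"block2": "C3", "block3": "D3"},                # snap-3: block2 changed, block3 added
]

def restore(chain):
    """Rebuild the full volume by walking the chain oldest-first;
    newer blocks overwrite older versions of the same block."""
    volume = {}
    for snap in chain:
        volume.update(snap)
    return volume

print(restore(snapshots))
# {'block0': 'A', 'block1': 'B2', 'block2': 'C3', 'block3': 'D3'}
```

This is why restoring from the latest snapshot yields the complete volume even though that snapshot physically stores only a fraction of the data: the missing blocks are pulled from earlier snapshots in the chain.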
3) You can share an EBS snapshot with another AWS account.
4) You can copy an EBS snapshot from another account that has shared the EBS snapshot with your account.
And most importantly, you can create a fresh EBS volume from your EBS snapshot.
What Can’t I Do with an EBS Snapshot?
1) You cannot access data directly from the snapshot.
2) You cannot copy the EBS snapshot to Glacier for cheaper storage.
3) You cannot restore an EBS snapshot into or onto an existing EBS volume.
Note: Plan ahead. Backup early and backup often. EBS snapshots are invaluable when disaster happens.
Practical Scenario 7:
One of your EC2 instances was accidentally deleted. What precautionary steps would you take, and
how would you restore the instance?
Step1 : Create a Snapshot for the Volume
Create a snapshot for the instance
You can assign permissions for the snapshot
Snapshot completion status
Restore a Snapshot from EBS
We can restore an instance/volume from a snapshot if it was deleted accidentally. Let's create some files on the
Green server and then delete the instance.
Step1: Create An AMI Image From The EBS Snapshot
Step2 : Delete the Instance
Step3 : Validate the files are present even after restoring
Schedule snapshot of EBS volume using Lifecycle Manager
Step 1: Login to AWS console and click EC2 under compute.
Step 2: Navigate to the Lifecycle Manager under ELASTIC BLOCK STORE and click ‘Create
Snapshot Lifecycle Policy’
Warning: Additional charges apply to EBS volume and snapshot.
Step 3: Provide schedule name and description and select your volume using the related tag. You can
also provide schedule time and interval.
Step 4: Provide a snapshot retention period and click Create Policy.
You will be redirected to the Lifecycle Manager.
If you wait for a bit, your snapshots should be taken; you will notice that any of your EBS volumes that
were properly tagged have been snapshotted.
15. Elastic Load Balancer-Network, Application
What Is Load balancer?
A load balancer serves as the single point of contact for clients. The load balancer distributes
incoming application traffic across multiple targets, such as EC2 instances, in multiple
Availability Zones. This increases the availability of your application.
A load balancer accepts incoming network traffic from a client, and based on some criteria in
the traffic it distributes those communications out to one or more backend servers.
Load balancing is efficiently distributing the incoming traffic across a group of backend servers.
Load balancers are key to building great internet applications, because they give your
application the following benefits:
Redundancy (One application server could die, but as long as there is at least one application
server left the load balancer can still direct client traffic to the remaining working application
server.)
Scalability (By running two servers behind a load balancer you can now handle 2x the traffic
from clients. Load balancers make it easy to add more and more backend servers as your traffic
increases.)
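The redundancy benefit above can be illustrated with a toy round-robin balancer that skips unhealthy targets. This is a sketch of the concept, not how ELB is actually implemented:

```python
class RoundRobinLB:
    """Distribute requests across targets in turn, skipping any target
    that has been marked unhealthy by (simulated) health checks."""
    def __init__(self, targets):
        self.targets = targets
        self.healthy = set(targets)
        self.i = 0

    def mark_unhealthy(self, target):
        self.healthy.discard(target)

    def route(self):
        for _ in range(len(self.targets)):
            t = self.targets[self.i % len(self.targets)]
            self.i += 1
            if t in self.healthy:
                return t
        raise RuntimeError("no healthy targets")

lb = RoundRobinLB(["192.168.1.10", "192.168.2.10"])
print([lb.route() for _ in range(4)])   # alternates between the two targets
lb.mark_unhealthy("192.168.2.10")       # one server dies...
print([lb.route() for _ in range(2)])   # ...traffic keeps flowing to the survivor
```

Adding a third target to the list is all it takes to absorb more traffic, which is the scalability point made above.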
Elastic Load Balancers in AWS are classified as: i) Application Load Balancer
ii) Network Load Balancer iii) Classic Load Balancer
Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and
operates at both the request level and connection level. Classic Load Balancer is intended for
applications that were built within the EC2-Classic network
An Application LB is referred to as a “layer 7” load balancer, while a Network LB is referred to as a
“layer 4” load balancer. These layers refer to the Open Systems Interconnection (OSI)
model:
Network Load Balancer
To accomplish the Network Load Balancer setup, we need the following:
1) A domain name registered in GoDaddy.
2) The name server details taken from AWS Route 53, registered in godaddy.com.
3) Three subnets (192.168.1.0, 192.168.2.0, 192.168.3.0) and 1 VPC.
4) 3 EC2 instances with the NGINX web server installed.
5) 192.168.1.10/192.168.2.10/192.168.3.10:8080 --> Dev/Test environment IPs of the web servers for the load
balancer.
6) 192.168.1.10/192.168.2.10/192.168.3.10:80 --> Prod environment IPs of the web servers for the load balancer.
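As a quick sanity check on the addressing plan above, the `ipaddress` module can tell which subnet a backend IP belongs to. The /24 masks are an assumption, since the list does not state them explicitly:

```python
import ipaddress

# The three subnets from the requirements list, assuming /24 masks.
subnets = {
    "subnet-1": ipaddress.ip_network("192.168.1.0/24"),
    "subnet-2": ipaddress.ip_network("192.168.2.0/24"),
    "subnet-3": ipaddress.ip_network("192.168.3.0/24"),
}

def subnet_of(ip):
    """Return the name of the subnet containing ip, or None."""
    addr = ipaddress.ip_address(ip)
    for name, net in subnets.items():
        if addr in net:
            return name
    return None

print(subnet_of("192.168.2.10"))  # subnet-2
print(subnet_of("10.0.0.1"))      # None: outside all three subnets
```

Spreading the three targets across three subnets (one per Availability Zone) is what lets the load balancer survive the loss of an entire zone.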
Copy the name servers and enter them in godaddy.com, where we registered our domain.
After updating the name servers in GoDaddy, the name server information looks as below.
Step1 : Create Target groups from EC2 Dashboard
Step2 : Add the Instance from Targets
Step3 : Now Create the Load Balancer from the EC2 Dash board
Step4: Verify that the registered targets are healthy in the target group
Step5: Publish the website to the world by mapping the load balancer's DNS name in Route 53
Step6: Validate by accessing our website through a browser.
Step7: Stop the NGINX service on server 192.168.2.10 and check that the website still works.
Step8: Let's add one more instance as a registered target in the target group
to improve the load balancer's capacity.
Step9: Let's add subnet 3 to the load balancer.
You can see that the newly added subnet is still in the process of registering.
Step10: Stop the NGINX service on servers 192.168.2.10 and 192.168.1.10 and check that the website still works.
Step11: Let's create a target group for the Test/Dev environment to access the web server through port 8080
Step12: Let's register the targets by adding the 3 instances on port 8080
Step13: Add a listener on the load balancer for port 8080 pointing to the Dev target group
Step14: To make our website secure, we need to create an SSL certificate from AWS Certificate Manager.
Step15: Copy the CNAME and value from AWS Certificate Manager and paste them into the hosted zone in Route 53
Step16: Check the status of the certificate
Step17: Add the listener on the load balancer
APPLICATION LOAD BALANCER
Suppose an end user accesses the different links shown below; each link should be routed to its assigned web server.
Behind the scenes, the load balancer routes the requests to the corresponding web servers.
Step1a : Create a target group
Step1b: Edit the health check of each target group to assign the root path for each link
Step2: Register the instances in the required target groups so that the following paths are routed:
Home-> Blue NLB1-> 192.168.1.10
Songs -> Blue NLB2->192.168.2.10
Movies-> Blue NLB3-> 192.168.3.10
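The path-to-target-group mapping above is what ALB listener rules express. A toy Python sketch using the target-group names from this example; the URL path prefixes (`/songs`, `/movies`) are assumptions for illustration:

```python
# Listener rules: URL path prefix -> target group, with a default group.
rules = [
    ("/songs",  "Blue-NLB2"),   # -> 192.168.2.10
    ("/movies", "Blue-NLB3"),   # -> 192.168.3.10
]
default_group = "Blue-NLB1"     # Home -> 192.168.1.10

def route(path):
    """Return the target group for a request path: first matching
    prefix rule wins, otherwise fall back to the default group."""
    for prefix, group in rules:
        if path.startswith(prefix):
            return group
    return default_group

print(route("/songs/top10"))  # Blue-NLB2
print(route("/movies/new"))   # Blue-NLB3
print(route("/"))             # Blue-NLB1 (default: home)
```

This per-path routing is exactly what a layer-7 (Application) load balancer adds over a layer-4 (Network) one, which can only see IPs and ports.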
Step3: Create an Application Load Balancer
Step4: Add the Listener from Load Balancer
Step5: Create an Alias target in the Route53 for the Application Load Balancer
Step6: Modify the root path on each web server and start the NGINX web service.
Step7: Validate that name resolution is working after about 15 minutes.
Step8: Validate by accessing the website through a browser
To put the web server in secure mode: https://udayawstest.xyz
Step 2: Assign the SSL certificate generated from AWS Certificate Manager
Step 3: Validate it by accessing the website through https mode
16. Auto Scaling
AWS Auto Scaling helps us set up application scaling for multiple resources across multiple services in a short time. It
is a service provided by AWS that allows you to scale your infrastructure horizontally, automatically
adding or removing EC2 instances based on user-defined policies, health status checks, or schedules.
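A user-defined scaling policy of this kind can be sketched as a simple threshold function. The 70%/30% CPU thresholds and the group size limits below are arbitrary assumptions for illustration, not values from this scenario:

```python
def desired_capacity(current, cpu_percent, scale_out_at=70, scale_in_at=30,
                     min_size=1, max_size=4):
    """Toy scaling policy: add an instance when average CPU is above the
    high threshold, remove one when below the low threshold, and always
    stay within [min_size, max_size]."""
    if cpu_percent > scale_out_at:
        return min(current + 1, max_size)   # scale out, capped at max_size
    if cpu_percent < scale_in_at:
        return max(current - 1, min_size)   # scale in, floored at min_size
    return current                          # within the band: no change

print(desired_capacity(2, 85))  # 3: scale out under load
print(desired_capacity(2, 10))  # 1: scale in when idle
print(desired_capacity(4, 95))  # 4: already at max_size
```

In AWS the CloudWatch alarm plays the role of the threshold comparison, and the Auto Scaling group applies the capacity change within its min/max bounds.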
Practical Scenario:
An online shopping website is deployed behind an AWS load balancer. Suddenly 1 lakh (100,000) customers access the website at the same
time. The load balancer's backend performance degrades and end users are annoyed by the slow response. How do you improve efficiency?
Solution
Prerequisites:
1. An AMI with your applications pre-configured.
2. A load balancer with no instances registered.
3. An email notification for scale-out/scale-in (optional).
4. The CloudWatch monitoring tool.
The instance that you want to attach must meet the following criteria:
The instance is in the running state.
The AMI used to launch the instance must still exist.
The instance is not a member of another Auto Scaling group.
The instance is in the same Availability Zone as the Auto Scaling group.
If the Auto Scaling group is associated with a load balancer, the instance and the load balancer must both be in EC2-
Classic or the same VPC.
Step1: Create an image of an instance which has the complete Application configuration
Step2: Create Auto Scaling group
Step3: Create an alarm for Increase & Decrease Group Size
We can see in the Auto Scaling group's activity history that the instance was launched automatically
Step4: Generate CPU load with the stress command to test Auto Scaling
The Instance will be automatically created whenever the CPU load goes beyond the defined limit
The Instance will be automatically deleted whenever the CPU load goes below the defined limit
17.Simple Systems Manager
AWS Systems Manager lets you remotely and securely manage the configuration of your managed
instances. It helps you automate management tasks.
AWS Systems Manager is a management service that helps you automatically collect software
inventory, apply OS patches, create system images, and configure Windows and Linux operating
systems.
It also manages on-premises servers and virtual machines, and other AWS resources at scale.
AWS Systems Manager lets you easily automate complex and repetitive tasks such as applying OS
patches across a large group of instances, making regular updates to AMIs, and enforcing configuration
policies.
Systems Manager has a simple interface to define your management tasks and then select a specific set
of resources to manage.
Systems Manager is available now at no cost to manage both your EC2 and on-premises resources.
Benefits
MANAGE HYBRID CLOUD SYSTEMS
With Systems Manager you can manage systems running on AWS and in your on-premises data center through a single
interface. Systems Manager uses a light-weight agent installed on your EC2 instances and on-premises servers that
communicates securely with the Systems Manager service and executes management tasks. This helps you manage resources
for Windows and Linux operating systems running on Amazon EC2 and in data center infrastructure such as VMware ESXi,
Microsoft Hyper-V, and other platforms.
EASY TO USE AUTOMATION
AWS Systems Manager lets you easily automate complex and repetitive tasks such as applying OS patches across a large
group of instances, making regular updates to AMIs, and enforcing configuration policies. Systems Manager has a simple
interface to define your management tasks and then select a specific set of resources to manage. Tasks can be configured to
run automatically based either on the results of software inventory collection or on events registered by Amazon CloudWatch
Events.
IMPROVE VISIBILITY AND CONTROL
Systems Manager helps you easily understand and control the current state of your EC2 instance and OS configurations.
With Systems Manager, you can collect software configuration and inventory information about your fleet of instances and
the software installed on them. You can track detailed system configuration, OS patch levels, application configurations, and
other details about your deployment. Integration with AWS Config lets you easily view changes as they occur over time.
MAINTAIN SECURITY AND COMPLIANCE
Systems Manager helps keep your systems compliant with your defined configuration policies. You can define patch
baselines, maintain up-to-date anti-virus definitions, and enforce firewall policies. With Systems Manager, you can maintain
software compliance and improve your security posture.
REDUCE COSTS
Systems Manager helps you reduce costs by providing easy to use, automated tools for tracking, updating and maintaining
your software and OS configurations. With Systems Manager, you can automatically maintain systems that are compliant so
you don’t waste time on manual updates, or add risk associated with non-compliant systems.
SECURE ROLE-BASED MANAGEMENT
Systems Manager helps improve your security posture in several ways. Through integration with AWS Identity and Access
Management (IAM), you can apply granular permissions to control the actions users perform. All actions taken by Systems
Manager are recorded by AWS CloudTrail, allowing you to audit changes throughout your environment.
What is AWS Systems Manager - State Manager?
AWS Systems Manager State Manager is a secure and scalable configuration management service that automates the
process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define.
What is AWS Systems Manager - Patch Manager?
Patch Manager scans your instances for missing patches, or scans for and installs missing patches. You can install patches
individually or on large groups of instances by using Amazon EC2 tags.
What is AWS Systems Manager- Parameter Store ?
AWS System Manager Parameter store provides secure, hierarchical storage for configuration data management and
secrets management. We can store data such as,
a. passwords
b. database strings
c. license codes
What is AWS Systems Manager - Run Command?
Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale.
You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows
PowerShell, or the AWS SDKs. Run Command is offered at no additional cost.
What is AWS Systems Manager - Maintenance Windows?
A maintenance window has a schedule, a maximum duration, a set of registered targets (the instances that are acted upon), and a set of registered tasks. You can add
tags to your maintenance windows when you create or update them.
1) Setting up IAM Role for System Manager
Step1 : Go to Service Dashboard, Select IAM( Identity Access Management) and then create SSM Role
Step2 : Choose the privileges for the new role
Step3: Attach the policies for this role
2) Installing SSM Agent:
Step1: Deploy Linux and Windows servers and choose the IAM role --> Udaytutorials-SSM
Step2: By default the SSM Agent is installed on both the Windows server and the Linux server
SYSTEMS MANAGER WORKING PROCESS-Public Subnet
Step1: Connect to the servers using Session Manager from the dashboard
SYSTEMS MANAGER WORKING PROCESS-Private Subnet
My Amazon Elastic Compute Cloud (Amazon EC2) instance doesn't have internet access. How can I manage my
instance using AWS Systems Manager?
Follow these steps:
Step1: Launch an Instance in Private Subnet
Step 2: Create an AWS Identity and Access Management (IAM) instance profile for Systems Manager. You can
either create a new role, or add the needed permissions to an existing role.
Step 3: Attach the IAM role to your private EC2 instance.
Step 4: The security group must allow inbound traffic from your instance on port 443.
Step 5: Create a VPC endpoint for Systems Manager. For Service Name, select com.amazonaws.region.ssm
Be sure to create the endpoint in all subnets in the VPC.
For Enable Private DNS Name, select Enable for this endpoint.
Step 6: Start a Session
AWS Systems Manager- Run Command
We can use the following steps to list all services running on the instance by using Run Command from the Amazon EC2
console.
Step2: For Command document, choose AWS-RunPowerShellScript for Windows instances, and AWS-RunShellScript for Linux instances.
Step3: For Commands, type Get-Service for Windows, or ps aux | less for Linux. (Optional) For Working Directory, specify a path to the folder on our EC2 instances where we want to run the command.
For Target instances, choose the instance we created. If we don't see the instance, verify that we are currently in the same
region as the instance we created. Also verify that we configured the IAM role and trust policies as described earlier.
(Optional) For Execution Timeout, specify the number of seconds the EC2Config service or SSM agent will attempt to run
the command before it times out and fails.
For Comment, we recommend providing information that will help us identify this command in our list of commands.
For Timeout (seconds), type the number of seconds that Run Command should attempt to reach an instance before it is
considered unreachable and the command execution fails.
Choose Run to execute the command. Run Command displays a status screen. Choose View result.
Step4:
To view the output, choose the command invocation for the command, choose the Output tab.
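The same output can be retrieved from the CLI with get-command-invocation; the command ID and instance ID below are placeholders.

```shell
#!/bin/bash
# Sketch: fetch the output of a Run Command invocation from the CLI instead
# of the console. The command ID and instance ID are placeholders.
show_command_output() {
  local command_id="$1" instance_id="$2"
  aws ssm get-command-invocation \
    --command-id "$command_id" \
    --instance-id "$instance_id" \
    --query '{Status:Status,Output:StandardOutputContent}' \
    --output json
}
# Example: show_command_output 11111111-2222-3333-4444-555555555555 i-0123456789abcdef0
```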
AWS Systems Manager- Automation
Automation allows you to safely automate common and repetitive IT operations and management tasks across AWS
resources.
The below real time scenario explains in detail about automation
An outdated Windows OS AMI needs to be updated automatically using Systems Manager
Automation
Step2 : Choose Document name prefix, AWS-UpdateWindowsAMI
Step3 : Now execute the created document (listed under documents Owned by me)
Step4 : Monitor the progress of the Automation and Automatic creation of AMI
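A rough CLI equivalent of steps 3 and 4 is sketched below; the source AMI ID is a placeholder, and AWS-UpdateWindowsAMI accepts further parameters (such as an instance profile and subnet) that are omitted here for brevity.

```shell
#!/bin/bash
# Sketch: start the AWS-UpdateWindowsAMI automation from the CLI and check
# its status. The source AMI ID is an illustrative placeholder.
update_windows_ami() {
  local source_ami="$1" exec_id
  exec_id=$(aws ssm start-automation-execution \
    --document-name "AWS-UpdateWindowsAMI" \
    --parameters "SourceAmiId=${source_ami}" \
    --query AutomationExecutionId --output text)
  aws ssm get-automation-execution \
    --automation-execution-id "$exec_id" \
    --query AutomationExecution.AutomationExecutionStatus --output text
}
# Example: update_windows_ami ami-0123456789abcdef0
```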
AWS Systems Manager- Patch Manager & Maintenance Window
Patch Manager scans your instances for missing patches, or scans for and installs missing patches. You can install patches
individually or to large groups of instances by using Amazon EC2 tags.
Maintenance Windows give us the option to Run Tasks on EC2 Instances on a specified schedule.
Step 01: Select EC2 --> Select Patch Baselines (under the Systems Manager Services section)
Step 02: Click on Create Patch Baseline
Step 03: Fill in the details of the baseline and click on Create
Go to Patch Baseline and make the newly created baseline your default.
At this point, the instances to be patched are configured and we have also configured the patch policies. In the next section we
provide AWS with the when (date and time) and the what (task) of the patching cycle.
Maintenance Windows Configuration
As the name specifies, Maintenance Windows give us the option to Run Tasks on EC2 Instances on a specified schedule.
What we wish to accomplish with Maintenance Windows is to Run a Command (AWS-ApplyPatchBaseline), but on a
given schedule and on a subset of our servers. This is where all the above configurations gel together to make patching work.
Configuring Maintenance Windows consists of the following tasks.
IAM role for Maintenance Windows
Step 1: Create a Role in IAM with the policy AmazonSSMMaintenanceWindowRole
Step 02: Enter the Role Name and Role Description
Step 03: Click on Role and copy the Role ARN and Click on Edit Trust Relationships
Step 05: Add the following values under the Principal section of the JSON file as shown below
"Service": "ssm.amazonaws.com"
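For reference, the resulting trust policy should look roughly like the following (a sketch of the standard SSM trust relationship):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ssm.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```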
Step 06: Click on Update Trust Relationships (on the bottom of the page)
Step 02: Enter the details of the maintenance window and click on Create Maintenance Window
Register Targets and Register Tasks for this maintenance window.
Step 01: Select the Maintenance Window created and click on Actions
Step 02: Select Register Targets
Step 03: Enter Owner Information and select the Tag Name and Tag Value
Step 04: Select Register Targets
At this point the targets for the maintenance window have been configured. This leaves us with the last activity in the
configuration which is to register the tasks to be executed in the maintenance window.
Step 03: Select AWS-ApplyPatchBaseline from the Document section
Step 04: Click on Registered targets and select the instances based on the Patch Group Tag
Step 05: Select Operation Scan or Install based on the desired function (keep in mind that Install will result in a server restart).
Step 06: Select the MaintenanceWindowsRole
Step 07: Click on Register Tasks
After completing the configuration, the Registered Task will run on the Registered Targets based on the schedule
specified in the Maintenance Window
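For reference, the window/target/task registration above maps to three CLI calls, sketched below with a placeholder schedule, tag value, and service role ARN.

```shell
#!/bin/bash
# Sketch: create a maintenance window, register targets by Patch Group tag,
# and register AWS-ApplyPatchBaseline as a Run Command task. The schedule,
# tag value, and service role ARN are placeholders.
setup_patch_window() {
  local role_arn="$1" win_id target_id
  win_id=$(aws ssm create-maintenance-window \
    --name "WeeklyPatching" \
    --schedule "cron(0 2 ? * SUN *)" \
    --duration 3 --cutoff 1 \
    --no-allow-unassociated-targets \
    --query WindowId --output text)
  target_id=$(aws ssm register-target-with-maintenance-window \
    --window-id "$win_id" --resource-type INSTANCE \
    --targets "Key=tag:Patch Group,Values=Production" \
    --query WindowTargetId --output text)
  aws ssm register-task-with-maintenance-window \
    --window-id "$win_id" \
    --task-arn "AWS-ApplyPatchBaseline" \
    --task-type RUN_COMMAND \
    --targets "Key=WindowTargetIds,Values=${target_id}" \
    --service-role-arn "$role_arn" \
    --max-concurrency 2 --max-errors 1
}
# Example: setup_patch_window arn:aws:iam::111122223333:role/MaintenanceWindowsRole
```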
The status of the Maintenance Window can be seen in the History section (as Shown below)
AWS Systems Manager- Inventory
Step 1: Click on Setup Inventory in the SSM panel and, if needed, choose to collect inventory for all instances
Step 2: Select the instances
Step3: Select the frequency to collect the inventory and the inventory type
Step4: Go to Particular instances to view the collected inventory details
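Collected inventory can also be queried from the CLI; the instance ID below is a placeholder, and AWS:Application is one of several built-in inventory types.

```shell
#!/bin/bash
# Sketch: list the installed-application inventory collected for an instance.
# The instance ID is an illustrative placeholder.
list_app_inventory() {
  aws ssm list-inventory-entries \
    --instance-id "$1" \
    --type-name "AWS:Application"
}
# Example: list_app_inventory i-0123456789abcdef0
```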
AWS Systems Manager- Parameter Store
AWS System Manager Parameter store provides secure, hierarchical storage for configuration data management and secrets
management. We can store data such as (a) passwords, (b) database connection strings, and (c) license codes.
Parameter Store is very simple to set up.
Now we can create our first parameter. In this case we’ll make believe that this is a service account password for use in my
lab. We’ll first give it a name which I’ve called /hollowlab/Example. If you’re wondering about the slashes in that name,
they’re used as hierarchies.
If you have to manage a giant list of parameters in a flat list, it might be too cumbersome to sort through. A better way
might be to organize these into hierarchies (think of a folder structure) so you can group parameters, perhaps by
department, division, application version, or environment. Again, the complexities we'll leave up to you. For now I've
got a root hierarchy of hollowlab and a parameter named "Example".
After this, select the parameter type. I’ve selected a basic string here, but if you’ve got a sensitive password, it might be a
better idea to use a "Secure String" to obfuscate the actual value from your users. After all, the password should be secret,
right?
Lastly, enter in the value(s) of the parameter and click the “Create Parameter” button.
Now you’ve got a parameter stored in the service and are ready to either create additional parameters or to start using that
parameter in your code.
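If you prefer the CLI, the same parameter can be created with put-parameter; the name and value below mirror this walkthrough and are otherwise arbitrary.

```shell
#!/bin/bash
# Sketch: create a Parameter Store entry from the CLI. Use SecureString
# instead of String for real secrets.
create_parameter() {
  local name="$1" value="$2" type="${3:-String}"
  aws ssm put-parameter --name "$name" --value "$value" --type "$type"
}
# Example: create_parameter /hollowlab/Example 'not-a-real-password' SecureString
```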
How to Access Your Parameters
There are several ways in which to access the parameter that we’ve just created. You can use the other EC2 Systems
Manager services such as Run Command, State Manager and Automation, or other services such as AWS Lambda or AWS
EC2 Container service. In this example we’ll just use a familiar service such as Run Command to see if we can access that
parameter successfully.
Open up the Run Command service from the EC2 console and create a new command to execute. I’m running my command
on a Linux host deployed in EC2 with the EC2SystemsManager role as described in this post. Since it’s a Linux machine
I'll execute a shell script, but you could also do this from a PowerShell script if you're partial to Windows for your operating
system.
The next step is to select which instances we'll be executing our command on. I've selected my Linux instance with the
SSM Agent and role installed. After that comes the critical piece: the command. In the commands box, we'll enter "echo
{{ssm:/hierarchy/ParameterName}}", which will simply print out the parameter value. In my example I've used
"echo {{ssm:/hollowlab/Example}}". Now clearly this is a silly exercise because all it does is print to the screen, but it
should give you an idea of how it can be leveraged for those really important scripts that you're dreaming up as you read
this post.
When you're ready, run the command.
You'll see your command run in the "Run Command" console within EC2, and then you'll see a link that shows "View
Output". If you click that link, we can see what happened when the command ran on our Linux instance.
And as we’d hoped, the fictitious password was output to the screen.
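Outside of Run Command, scripts can also read the value directly with get-parameter; the parameter name below is the one from this walkthrough.

```shell
#!/bin/bash
# Sketch: read a parameter value directly from the CLI. --with-decryption is
# harmless for plain String parameters but required for SecureString values.
get_parameter() {
  aws ssm get-parameter --name "$1" --with-decryption \
    --query Parameter.Value --output text
}
# Example: get_parameter /hollowlab/Example
```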
18. CloudWatch
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon
CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.
You can use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
Create IAM Role with relevant permission and attach to Linux instance.
Install the CloudWatch agent in the instance.
Prepare the configuration file in the instance.
CloudWatch Metrics
In addition to the metrics that AWS services publish by default, custom metrics can be configured for applications, services, and event log
files created by your applications.
The following concepts are important for understanding CloudWatch metrics:
Namespaces: a container for CloudWatch metrics. Metrics in different namespaces are isolated from each other, so that metrics from
different applications are not accidentally aggregated when computing statistics.
Metrics: a time-ordered set of data points published to CloudWatch. A metric can be thought of as a variable that we need to
monitor, with the data points being the values of that variable over time. Metrics exist only in the region in which they are created.
Dimensions: a name/value pair that uniquely identifies a metric. You can assign a maximum of 10 dimensions to a metric. Dimensions
help you design a structure for your statistics plan.
Statistics: aggregations of metric data over time periods specified by the user. Aggregations are made using the namespace, metric name,
dimensions, and the data point unit of measure within the time period you specify.
Percentiles: as the name suggests, a percentile indicates the relative standing of a value in a dataset. It helps you get a better understanding
of the distribution of your metric data. Percentiles are used to detect irregularities.
Alarms: used to initiate actions on your behalf. An alarm monitors a metric over a specified interval of time and performs the assigned
actions based on the value of the metric relative to a threshold over time.
Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances
Step1: Install the Linux server with IAM role and Enable the Detailed monitoring
Step2: The CloudWatch monitoring scripts should be installed on the Linux box.
The monitoring scripts demonstrate how to produce and consume custom metrics for Amazon CloudWatch.
These sample Perl scripts comprise a fully functional example that reports memory, swap, and disk space
utilization metrics for a Linux instance.
The following steps show you how to download, uncompress, and configure the CloudWatch Monitoring
Scripts on an EC2 Linux instance.
curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O
Run the following commands to install the monitoring scripts you downloaded:
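A sketch of the download/install steps on Amazon Linux follows. The Perl package names track the AWS CloudWatch monitoring-scripts documentation; other distributions use different package managers and names.

```shell
#!/bin/bash
# Sketch: install Perl dependencies and unpack the monitoring scripts on
# Amazon Linux. Package names follow the AWS documentation for this distro.
install_mon_scripts() {
  sudo yum install -y perl-Switch perl-DateTime perl-Sys-Syslog \
    perl-LWP-Protocol-https perl-Digest-SHA
  curl -O https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip
  unzip CloudWatchMonitoringScripts-1.2.2.zip
  rm CloudWatchMonitoringScripts-1.2.2.zip
  cd aws-scripts-mon
}
```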
The package for the monitoring scripts contains the following files:
CloudWatchClient.pm – Shared Perl module that simplifies calling Amazon CloudWatch from other
scripts.
mon-put-instance-data.pl – Collects system metrics on an Amazon EC2 instance (memory, swap, disk
space utilization) and sends them to Amazon CloudWatch.
mon-get-instance-stats.pl – Queries Amazon CloudWatch and displays the most recent utilization statistics
for the EC2 instance on which this script is executed.
awscreds.template – File template for AWS credentials that stores your access key ID and secret access key.
mon-put-instance-data.pl
This script collects memory, swap, and disk space utilization data on the current system. It then makes a
remote call to Amazon CloudWatch to report the collected data as custom metrics.
Step3: CloudWatch monitoring scripts execution
The following examples assume that you provided an IAM role or awscreds.conf file. Otherwise, you must
provide credentials using the --aws-access-key-id and --aws-secret-key parameters for these commands.
The following example performs a simple test run without posting data to CloudWatch.
./mon-put-instance-data.pl --mem-util --verify --verbose
The following example collects all available memory metrics and sends them to CloudWatch, counting
cache and buffer memory as used:
./mon-put-instance-data.pl --mem-used-incl-cache-buff --mem-util --mem-used --mem-avail
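To report these metrics continuously rather than on demand, the AWS documentation suggests a cron entry along these lines (the path assumes the scripts were unpacked in the user's home directory; adjust the flags to taste):

```
*/5 * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-space-util --disk-path=/ --from-cron
```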
To get utilization statistics for the last 12 hours, run the following command:
./mon-get-instance-stats.pl --recent-hours=12
CPU Utilization
Average: 1.06%, Minimum: 0.00%, Maximum: 15.22%
Memory Utilization
Average: 6.84%, Minimum: 6.82%, Maximum: 6.89%
Swap Utilization
Average: N/A, Minimum: N/A, Maximum: N/A
Viewing Your Custom Metrics in the Console
Step 4: After you successfully run the mon-put-instance-data.pl script, you can view your custom metrics in
the Amazon CloudWatch console.
To view custom metrics
1. Run mon-put-instance-data.pl as described previously.
2. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/.
3. Choose View Metrics.
4. In the Viewing list, your custom metrics posted by the script are displayed with the prefix System/Linux.
Step 5: Send Notifications
Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages the delivery or sending of messages
to subscribing endpoints or clients.
SNS can also deliver messages by SMS to 200+ countries. SNS uses the publish/subscribe model for push delivery of messages.
SNS supports several transports, such as HTTP/S, SMS and email, and can deliver push messages to multiple recipients at once.
SNS is often used to push messages directly to other supported AWS services, such as Lambda or Simple Queue Service (SQS). SNS is
integrated with AWS CloudTrail, so SNS actions are captured, logged, and delivered to an S3 bucket.
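For the notification step, a topic and an email subscription can be set up roughly as follows; the topic name and address are placeholders, and the subscription must be confirmed from the email before messages are delivered.

```shell
#!/bin/bash
# Sketch: create an SNS topic and subscribe an email endpoint to it. The
# topic name and email address are illustrative placeholders.
create_alarm_topic() {
  local topic_arn
  topic_arn=$(aws sns create-topic --name "$1" \
    --query TopicArn --output text)
  aws sns subscribe --topic-arn "$topic_arn" \
    --protocol email --notification-endpoint "$2"
  echo "$topic_arn"
}
# Example: create_alarm_topic cloudwatch-alarms ops@example.com
```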
Step 6: Setting Alarm
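An alarm on the custom memory metric published by the scripts above could be created from the CLI along these lines; the threshold, periods, instance ID, and SNS topic ARN are placeholders.

```shell
#!/bin/bash
# Sketch: alarm when average MemoryUtilization (custom System/Linux metric)
# stays above 80% for two 5-minute periods. Instance ID and SNS topic ARN
# are illustrative placeholders.
create_memory_alarm() {
  aws cloudwatch put-metric-alarm \
    --alarm-name "HighMemoryUtilization" \
    --namespace "System/Linux" \
    --metric-name MemoryUtilization \
    --dimensions Name=InstanceId,Value="$1" \
    --statistic Average --period 300 \
    --threshold 80 --comparison-operator GreaterThanThreshold \
    --evaluation-periods 2 \
    --alarm-actions "$2"
}
# Example: create_memory_alarm i-0123456789abcdef0 arn:aws:sns:us-east-1:111122223333:cloudwatch-alarms
```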
Step 7: Create some load
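One portable way to generate CPU load without installing extra packages is a busy loop under timeout; the duration and worker count below are arbitrary.

```shell
#!/bin/bash
# Sketch: generate CPU load for a fixed duration so the alarm threshold is
# crossed. DURATION (seconds) and NPROC (workers) are illustrative values.
burn_cpu() {
  local duration="${1:-60}" nproc="${2:-2}"
  for _ in $(seq 1 "$nproc"); do
    # Each worker spins until timeout kills it.
    timeout "$duration" sh -c 'while :; do :; done' &
  done
  wait   # returns 0 once all workers have been reaped
}
# Example: burn_cpu 300 2   # 5 minutes of load on 2 workers
```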
Step 8: Monitor the notifications for the generation of the alarm
19. Databases
AWS offers 14 databases that support diverse data models and include the following types of databases:
relational, key-value, document, in-memory, graph, time series, and ledger databases.
You can also operate your own database in EC2 and EBS.
There are many options to choose from, depending on your specific needs.
Relational Databases
AWS has the Relational Database Service (RDS), which supports 6 different engines:
Microsoft SQL Server
Oracle
MySQL
PostgreSQL
Amazon Aurora
MariaDB
Amazon Aurora is MySQL- and PostgreSQL-compatible, and AWS positions it as substantially faster than standard
MySQL and PostgreSQL deployments.
Data Warehousing
Data warehousing is used for business intelligence to pull and analyze very large and complex data sets.
Redshift
Amazon Redshift is a petabyte-scale data-warehouse service which provides fast query performance.
Non-Relational Databases (NoSQL)
These databases store data without structured linking mechanisms (NoSQL), which allows them to hold
exceptionally large amounts of data.
DynamoDB
ElastiCache
Neptune