Class Notes (AutoRecovered)

The document outlines steps for setting up and managing various AWS services including Maven builds, Tomcat server deployment, Apache HTTP server configuration, VPC creation, and EBS file system administration. It details user data scripts, load balancer setup, VPC peering, and AWS networking concepts, along with IAM, S3, and RDS management. Additionally, it covers practical labs and examples for implementing these services effectively.

PS1="Prompt String Name" customizes the appearance of the terminal prompt.
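For example, a minimal sketch of a customized prompt (the format string below is illustrative, not from the notes):

export PS1="[\u@\h \W]\$ "    # \u = user, \h = host, \W = current working directory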

MAVEN Build:

1. Create an instance and install Java and Maven:
a. apt-get update
b. apt-get install openjdk-11-jdk
c. apt-get install maven
2. Clone the repository to the instance and run mvn install to build the project.
3. The WAR file is produced in the project's target/ directory.
4. Push everything to the remote Git repository.
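A minimal end-to-end sketch of the build flow above (the repository URL and project name are placeholders, not from the notes):

sudo apt-get update
sudo apt-get install -y openjdk-11-jdk maven
git clone https://github.com/example/gamutkart.git   # placeholder repository
cd gamutkart
mvn install            # compiles, runs tests, and packages the WAR
ls target/*.war        # the deployable artifact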

Tomcat Server Steps (Project Testing):

1. Create Test Environment Instance.


2. Install Java and Tomcat:
a. apt-get update and apt-get install openjdk-11-jdk
b. wget <weblink of tomcat tar.gz file> to download the archive
c. tar -zxvf <tomcat file name> to extract it (a portable installation, no installer needed)
3. Place the WAR file inside the apache-tomcat/webapps folder.
4. Go to the apache-tomcat/bin folder and run sh startup.sh to start the Tomcat server.
5. Make sure an inbound rule allows port 8080, then test the application in a browser:
a. <public-ip of test env>:8080/<projectname>, e.g. http://3.84.157.192:8080/gamutkart/
6. Go to the apache-tomcat/bin folder and run sh shutdown.sh to stop the Tomcat server.
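Put together, the deployment looks roughly like this (the archive version and WAR name are placeholders):

wget <weblink-of-tomcat-tar.gz>                      # download the Tomcat archive
tar -zxvf apache-tomcat-<version>.tar.gz             # extract the portable install
cp gamutkart.war apache-tomcat-<version>/webapps/    # deploy the WAR
sh apache-tomcat-<version>/bin/startup.sh            # serve on port 8080
sh apache-tomcat-<version>/bin/shutdown.sh           # stop when testing is done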

Establish a connection between the two instances:

1. Run ssh-keygen in the root home directory of the current instance. A .pub file is created in the /root/.ssh/ folder.
2. On the test instance, paste the key line from that .pub file into /root/.ssh/authorized_keys.
3. Run systemctl restart ssh on both instances to restart the SSH service.
4. ssh root@<private-ip-of-other-instance> to connect.
5. scp <file-with-path> root@<private-ip-of-destination>:<path> to copy files.
6. exit to disconnect from the remote.
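The same flow as a command transcript (IPs and paths are placeholders):

ssh-keygen                                    # on the source instance, accept the defaults
cat /root/.ssh/id_rsa.pub                     # append this line to the target's /root/.ssh/authorized_keys
systemctl restart ssh                         # on both instances
ssh root@<private-ip-of-other-instance>       # key-based login
scp /path/to/file root@<private-ip>:/dest/    # copy files over the same trust
exit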

Apache httpd Server on AMAZON Linux EC2:

1. Create EC2 using AmazonLinux OS.


2. Install the httpd server:
a. sudo su ; yum update -y ; yum install -y httpd
3. Run the httpd server:
a. systemctl start httpd
b. systemctl enable httpd to start Apache on boot
c. Add an inbound rule for port 80 (HTTP, IPv4).
d. The public IP of the EC2 instance is used directly to test the httpd server.
e. index.html from the /var/www/html directory is shown in the browser.
4. Stop the httpd server: systemctl stop httpd
User Data:

#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "print me" > /var/www/html/index.html

EBS File System Administration:

Volume file system types: ext4, xfs, btrfs, vfat, tmpfs

lsblk : lists the block devices (volumes)

df : file system disk space usage

df -Th : human-readable disk space usage with file system types

file -s <file or device> : determines the file (or file system) type

mkfs -t xfs <volume file> : creates an XFS file system on the EBS volume

mount <source xfs volume> <mountpoint directory> : mounts the file system volume at the specified folder

cat /etc/mtab : contains entries for the currently mounted file systems

vim /etc/fstab : list of file systems to be mounted at EC2 boot time

mount -a : mounts all the file systems listed in /etc/fstab

yum install xfsprogs : administration and debugging tools for XFS

xfs_growfs -d <mount point> : grows the XFS file system to use the newly allocated space

Unmount Volume:

sudo umount <source volume> (example: sudo umount /dev/xvdf) or
sudo umount <mount point>

fdisk <volume file> : partition the volume
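A worked sketch tying the commands together for a freshly attached volume (the device name /dev/xvdf and mount point /data are assumptions):

lsblk                        # find the new device, e.g. /dev/xvdf
file -s /dev/xvdf            # "data" means no file system yet
mkfs -t xfs /dev/xvdf        # create an XFS file system
mkdir /data                  # mount point
mount /dev/xvdf /data        # mount it
df -Th                       # verify the mount and its type
echo '/dev/xvdf /data xfs defaults,nofail 0 2' >> /etc/fstab   # survive reboots
umount /data                 # detach cleanly when done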

AWS Server:

1. Create an httpd server on EC2, then stop the instance.


2. AMI Concept:
a. Create an image from the EC2 section.
b. Create a new instance from the image:
i. Select Launch Instance from the AMI panel, or
ii. Select the required AMI while launching a new instance.
c. We can make the image available in another region by copying it.

EC2 User Data : a script of commands that runs while creating the EC2 instance, used to launch services or apps.

LoadBalancer Concept:

1. Set up two instances at least:


a. Create two instances (using a user-data script with the httpd server; see the sketch at the end of this section).
b. Individually differentiated webpages should be set in /var/www/html.
2. Create a Classic Load Balancer:
a. Select internet-facing, as we are testing from outside the cloud.
b. Select the required Availability Zones.
c. Select a security group with port 80 enabled for HTTP over IPv4.
d. The health check runs on port 80 (HTTP) against index.html.
e. Select the required instances.
f. Check the summary and create.
3. Test the Load Balancer:
a. The health check responds as soon as the load balancer is created.
b. The DNS name of the load balancer is the test URL; it shows how requests are distributed across the instances.
c. Try every possible way to make the health check fail, to see how the load balancer handles it.
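For step 1, a hedged user-data sketch for the two backends, based on the user-data script earlier in these notes; only the echoed message differs per instance:

#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "Response from instance A" > /var/www/html/index.html   # use "instance B" on the second instance

Hitting the load balancer's DNS name repeatedly should then alternate between the two messages.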

Basic Networking, VPC:

Public and private IPs are differentiated. CIDR (Classless Inter-Domain Routing) notation divides the 32-bit IP address into 4 octets and adds a prefix length. It is explained with the subnet concept. Created VPCs and subnets with CIDR notations.
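As a quick worked example of the prefix arithmetic (the numbers follow from the CIDR definitions above; the reserved-address count is standard AWS behavior):

192.168.0.0/16  -> 32 - 16 = 16 host bits -> 2^16 = 65,536 addresses (the VPC)
192.168.100.0/24 -> 32 - 24 = 8 host bits -> 2^8 = 256 addresses (a subnet)
AWS reserves 5 addresses per subnet, so a /24 subnet has 251 usable IPs.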

LAB: Creating a VPC structure and connecting it to the outside world

Create VPC

Create Public Subnet

Create Instance in it

Create IGW

Create RouteTable and add a route to the IGW

Security Group rule ssh port 22

Created a VPC with CIDR 192.168.0.0/16. Created a Public_Subnet with CIDR 192.168.100.0/24 and a Private_Subnet with 192.168.200.0/24. Launched a Public_Instance in the Public_Subnet and a Private_Instance in the Private_Subnet. An IGW (Internet Gateway) was created and attached to the VPC. A route table was created and routed to that IGW. A security rule allowing SSH on port 22 was added for the Public_Instance. Now, using the public IP, the Public_Instance is connected from PuTTY (on my laptop). Finally, using the pem key, the Private_Instance is connected from the Public_Instance with the command: ssh -i <pem_Key_of_Private_Instance> <ip_of_Private_Instance>

Created a VPC and 2 subnets, one for a webserver and one for a database machine. Configured the IGW, route table, and security group so the webserver can talk to the internet, and established an SSH connection to the database machine from the webserver.
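The same lab expressed as a hedged AWS CLI sketch (all resource IDs are placeholders):

aws ec2 create-vpc --cidr-block 192.168.0.0/16
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 192.168.100.0/24
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id>
aws ec2 create-route-table --vpc-id <vpc-id>
aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 0.0.0.0/0 --gateway-id <igw-id>
aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 22 --cidr 0.0.0.0/0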

VPC Peering:

Default VPC (172.31.0.0/16):

1. The default VPC and subnet already exist.
2. An EC2 instance (private IP 172.31.3.233) is launched in that subnet, with a security group and the route table as mediator.
3. Connected.

LAB_Custom_VPC (192.168.0.0/16):

1. IGW_Lab is created and attached.
2. Subnet_Lab_a is created (192.168.100.0/24).
3. RouteTable_Lab is created as mediator.
4. EC2_Lab (public IP 18.61.35.148) is launched in it (private IP 192.168.100.154).

Created the peering connection:

1. Selected Requester and Accepter.
2. Accepted the request.
3. Added route table rules in both VPCs for this peering. (A CLI sketch follows.)
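A hedged CLI sketch of those peering steps (all IDs are placeholders; the CIDRs are the ones from this lab):

aws ec2 create-vpc-peering-connection --vpc-id <requester-vpc-id> --peer-vpc-id <accepter-vpc-id>
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id <pcx-id>
# each VPC's route table needs a route pointing at the other VPC's CIDR
aws ec2 create-route --route-table-id <default-vpc-rtb> --destination-cidr-block 192.168.0.0/16 --vpc-peering-connection-id <pcx-id>
aws ec2 create-route --route-table-id <lab-vpc-rtb> --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id <pcx-id>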

Testing:

1. Connected to EC2 @ Default VPC from MobaXterm


2. EC2_Lab is connected from there using its private IP.

Finally, both VPCs are connected within the same region.

Created an EC2 instance in Subnet_A of the Default_VPC (172.31.0.0/16) with proper security group and route rules. Created an EC2_Lab instance in Subnet_Lab_A (10.0.0.0/10) of VPC_Lab (10.0.0.0/8) with proper security group and route rules. Then created a peering connection with Default_VPC as Requester and VPC_Lab as Accepter, and added the proper routing between the two VPCs. Connected to the EC2 in Default_VPC from MobaXterm (my PC) using its public IP. From there, successfully connected to EC2_Lab using its private IP with the command ssh -i <pem_file> <private_IP>. Thus, using the peering concept, instances in different VPCs are connected without traversing the internet.

VPC_Database 10.0.0.0/16

Subnet_Database_a 10.0.0.0/18

NACL:

Folks forgot my interactive presentation on NACL and SG given 2 months back; I gave the interactive presentation on NACL vs SG, with a practical, on 8th July.

Created an EC2 instance in a subnet of a Custom_VPC with proper security group and route rules. Connected to the EC2 from MobaXterm (laptop). Created and manipulated the NACL rules, and tested how SSH and HTTP traffic from outside the infrastructure is allowed.
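A hedged sketch of one NACL rule pair from that test (the ACL ID and rule numbers are placeholders). Unlike a security group, a NACL is stateless, so return traffic on ephemeral ports needs its own egress rule:

aws ec2 create-network-acl-entry --network-acl-id <acl-id> --rule-number 100 --protocol 6 --port-range From=22,To=22 --cidr-block 0.0.0.0/0 --rule-action allow --ingress     # 6 = TCP
aws ec2 create-network-acl-entry --network-acl-id <acl-id> --rule-number 100 --protocol 6 --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow --egress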

Situation

Task

Action

Result

Explained how to answer in an interview using the STAR methodology. Explained how a Transit Gateway overcomes the normal N:N peering relation among VPCs. Explained and differentiated how companies use AWS networking with VPN and Direct Connect via AWS co-locations for maximum security.

STAR Methodology. TGW. AWS Co-Locations. Homework: I have studied the related documentation

Created 3 VPCs with one subnet each. Launched 3 instances in those subnets individually. Instance_A is configured so that its traffic can flow through the IGW; the rest remain private. Created a Transit Gateway for those VPCs, and the attachments were individually configured. Then, for each subnet, a rule pointing at the superset CIDR was added with the respective TGW attachments to form a hub-and-spoke peering model. Now everything is ready to test traffic flow from Instance_A and even among the instances.
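A hedged CLI sketch of the hub-and-spoke setup (IDs are placeholders; 10.0.0.0/8 stands in for the superset CIDR):

aws ec2 create-transit-gateway --description "hub for 3 VPCs"
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id <tgw-id> --vpc-id <vpc-a-id> --subnet-ids <subnet-a-id>
# repeat the attachment for VPC B and VPC C, then in each VPC's route table:
aws ec2 create-route --route-table-id <rtb-id> --destination-cidr-block 10.0.0.0/8 --transit-gateway-id <tgw-id>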

Transit Gateway (TGW):

1. Created 3 VPCs

VGW – Virtual Private Gateway

CGW – Customer Gateway: formed from an IP and an ASN number

Site-to-site VPN: needs a VGW and a CGW

Direct Connect gateway (co-locations): resides in a region

VPN types. Direct Connect. Homework: Direct Connect FAQ. Draw the diagram. Practiced past labs.

High-level explanation of VPN (client-to-site and site-to-site) and Direct Connect, including the VGW (Virtual Private Gateway) and CGW (Customer Gateway). The CGW is formed using an IP and an ASN number. Direct Connect resides in a region and is connected to the VGW from the corporate data center.
IAM. User. Group. I gave a presentation on the Complex_Diagram.

Identity and Access Management is explained at a high level with authorization and authentication. A user is created to practice how policies play their role. A group is created and the user is attached to it. The union of the user's policies and the group's policies is considered when it comes to authorization.

Authentication and Authorization

user created with policy attached

Created a group with ec2 full access
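A hedged CLI sketch of that user/group lab (the names are placeholders; the managed policy ARN is the standard EC2 full-access one):

aws iam create-user --user-name lab-user
aws iam create-group --group-name ec2-admins
aws iam attach-group-policy --group-name ec2-admins --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam add-user-to-group --group-name ec2-admins --user-name lab-user
# effective permissions = union of the user's own policies and the group's policies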

AWS-CLI:

API endpoint involvement in the console is explained. Installed the AWS CLI and configured it with an access key, secret key, region, and output format to launch an EC2 instance using the documented commands. When configured, the credentials and config files are created in the /home/ec2-user/.aws folder. Created a role for the EC2 service and attached it to the instance; that instance can now call AWS APIs irrespective of the user policy.

We can automate using the CLI.
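For example (a hedged sketch; every value below is a placeholder):

aws configure                  # prompts for AccessKey, SecretKey, Region, OutputFormat
cat ~/.aws/credentials         # keys are stored here
cat ~/.aws/config              # region and output format are stored here
aws ec2 run-instances --image-id <ami-id> --instance-type t2.micro --key-name <key-pair> --security-group-ids <sg-id> --subnet-id <subnet-id>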

S3:

Explained Simple Storage Service with its characteristics, public access, and how it is encrypted. Created S3 via the console and differentiated the types of its URLs, e.g. https://<bucket-name>.s3.<region>.amazonaws.com

The use case of EBS is explained. Differentiated the S3 storage classes using the documentation. Explained how the S3 lifecycle flows from S3 Standard to Glacier for cost-effectiveness.

AWS pricing Calculator

S3 - Lifecycle Rule. Object Versioning:

Added a lifecycle rule for objects in S3 by selecting the scope, rule actions, and transitions, mentioning the respective days. Versioning is shown in practice by uploading an updated object and restoring the old versions.
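A hedged CLI sketch of both features (the bucket name is a placeholder; 30 days is an arbitrary illustrative number):

aws s3api put-bucket-versioning --bucket <bucket-name> --versioning-configuration Status=Enabled
aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> --lifecycle-configuration '{"Rules":[{"ID":"to-glacier","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":30,"StorageClass":"GLACIER"}]}]}'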

S3 Security Policy:

- User-based security with IAM policy.
- A custom policy is created for "Resource": "arn:aws:s3:::bucket11111ven/*" by adding the actions "s3:DeleteObject" and "s3:GetObject". Then public access is enabled for that bucket to view the objects with the public URL.

Encryption:

- Server-side encryption with S3-managed keys (SSE-S3), KMS-managed keys (SSE-KMS), and dual-layer encryption (DSSE-KMS).


KMS:

- I practiced creating a custom policy for "Resource": "arn:aws:s3:::bucket11111ven/*" by adding the actions "s3:DeleteObject" and "s3:GetObject", then enabled public access for that bucket to view the objects with the public URL. Understood how to secure S3 with policies and encryption.

Static Website on S3:

- Create a meaningful bucket name.
- Upload the files for the static webpage.
- Enable static website hosting.
- Enable public access and a bucket policy.
- Test the website. (A CLI sketch follows.)
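A hedged CLI sketch of those steps (bucket and file names are placeholders):

aws s3 mb s3://my-static-site-bucket
aws s3 cp index.html s3://my-static-site-bucket/
aws s3 cp error.html s3://my-static-site-bucket/
aws s3 website s3://my-static-site-bucket/ --index-document index.html --error-document error.html
# then disable Block Public Access and attach a public-read bucket policy;
# the site is served at http://<bucket>.s3-website-<region>.amazonaws.com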

Cross-Region Replication (CRR): automatically replicates S3 objects to another region for data redundancy, better latency, and compliance.

- Enable versioning on both buckets.
- An IAM role for replication between the source and destination buckets.

I hosted a static website on S3 by creating a meaningful bucket name, uploading the website files, and enabling static website hosting and public access with policies. I understood and practiced how Cross-Region Replication (CRR) works by enabling versioning and IAM roles for the source and destination buckets.
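A hedged sketch of enabling CRR with the CLI (both buckets already versioned; the role ARN and bucket names are placeholders):

aws s3api put-bucket-replication --bucket <source-bucket> --replication-configuration '{"Role":"arn:aws:iam::<account-id>:role/<replication-role>","Rules":[{"ID":"crr-rule","Status":"Enabled","Prefix":"","Destination":{"Bucket":"arn:aws:s3:::<destination-bucket>"}}]}'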

Amazon EFS: a shared file system; unlike EBS, it can be mounted by instances across Availability Zones with data concurrency.

NFS: NFS port (2049)

SAMBA:

Fully managed

Elastic

Concurrent Access

Durability, Availability

Lab: EFS - sharing a file system between instances.

Created Instance_A and Instance_B in Availability Zones A and B respectively, with Ubuntu machines. Added a security group rule for NFS port 2049. Created an EFS within the same VPC. Installed NFS on both machines using apt-get install -y nfs-common. Mounted the EFS on both machines at a specific mount-point directory using mount <dns_of_EFS>:/ <directory>. Then created a file from Instance_A to check that it can be edited from Instance_B. Finally, enabled automatic mounting on boot by adding <dns_of_EFS>:/ <mount_directory> nfs4 defaults,_netdev 0 0 (matching the tab format) to the /etc/fstab file.
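The per-instance commands, as a hedged sketch (the EFS DNS name and mount point are placeholders):

sudo apt-get update && sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 <dns_of_EFS>:/ /mnt/efs
echo '<dns_of_EFS>:/ /mnt/efs nfs4 defaults,_netdev 0 0' | sudo tee -a /etc/fstab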
-------------------------- EFS - NFS (removing mask) --------------------

sudo systemctl status nfs-common
sudo systemctl status nfs-client.target
sudo systemctl enable nfs-common
file /lib/systemd/system/nfs-common.service

I was able to unmask the service by removing the file:

sudo rm /lib/systemd/system/nfs-common.service
sudo systemctl daemon-reload
sudo systemctl start nfs-common
sudo systemctl is-enabled nfs-common

-------------------- RDS ----------------------

Showed how to start creating a database using the engine options: engine type, edition, version, availability and durability, username, credentials management, instance configuration, storage settings, connectivity options, tags, and monitoring. And explained the 3-tier architecture.

RDS Pricing link: Managed Relational Database - Amazon RDS Pricing - Amazon Web Services

POC: proof of concept.

Downtime in AWS. Types of DB instances:

Explained how to overcome downtime in AWS with different stages of designing architectures. Those architectures were differentiated by DB instance type: Single DB instance, Multi-AZ DB instance, and Multi-AZ DB cluster. Discussed the importance of read replicas in Multi-AZ DB clusters with the master/slave architecture.

LAB: RDS

Using RDS, created a MySQL DB instance. Created an EC2 server and installed the MySQL client using apt-get install mysql-client. Connected this EC2 server to the DB instance using mysql -h mydb.c3vbx2fl014h.ap-south-1.rds.amazonaws.com -P 3306 -u admin -p, entered the required password, and performed DB operations successfully.

Endpoint port: 3306
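The client-side flow as a hedged sketch (the endpoint is the one from the notes; the example SQL is illustrative):

sudo apt-get update && sudo apt-get install -y mysql-client
mysql -h mydb.c3vbx2fl014h.ap-south-1.rds.amazonaws.com -P 3306 -u admin -p
# once connected, ordinary SQL works, e.g.: CREATE DATABASE gamutkart; SHOW DATABASES;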

RDS Snapshot. Blue/Green Deployment:

We can send a snapshot to another region with Copy Snapshot, and it can be restored using Restore Snapshot. We can share a snapshot publicly or with a specific account using Share Snapshot. Migrate Snapshot is explained with the instance engine type. Upgrading a snapshot can change the server engine version. Blue/green deployment is used to eliminate downtime when we deploy the application.
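A hedged CLI sketch of the copy/restore path (identifiers and regions are placeholders):

aws rds copy-db-snapshot --source-db-snapshot-identifier arn:aws:rds:ap-south-1:<account-id>:snapshot:<snapshot-name> --target-db-snapshot-identifier <snapshot-copy> --region us-east-1
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier <new-db-instance> --db-snapshot-identifier <snapshot-copy>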
DATABASE Types:

SQL and NoSQL databases were explained with examples and use cases. Document DB, column DB, key-value DB, and graph DB were differentiated with examples. Ultra-fast data retrieval is achieved using an in-memory DB. Timestamp-based data retrieval is achieved using a time-series DB. Explained data warehousing.

ELASTIC BEANSTALK:

Elastic Beanstalk is PaaS (Platform as a Service) from AWS. I launched a Beanstalk instance by configuring the Beanstalk settings with the sample Python code, free-tier eligible. In this web server environment, I selected Python as the platform with the default sample code, selected the Elastic Beanstalk service role, and selected the Availability Zone with respect to the VPC. We can observe the currently running events on the Events page. Finally, by opening the assigned domain in the web browser, we can surf the sample website.

Project_1:
The assigned project is "Build and Secure a Scalable Blogging Platform on AWS".

https://github.com/discover-devops/AWS_project/blob/main/Project_1/Build_Blogging_platform.md

Elastic Beanstalk - Custom Python Code:
Launched an instance to test the custom Python code. Connected via MobaXterm and created the application.py and requirements.txt files. Installed the Python package, opened the Python environment, installed the dependencies, and ran the app locally with python application.py; the website runs on http://<Instance_Public_IP>:8080. The same code was copied to S3 and its link was used when creating the Elastic Beanstalk application, which was then tested using the domain name.
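A hedged sketch of the local-test-then-upload flow (the bucket name is a placeholder; the file names are from the notes):

pip install -r requirements.txt
python application.py                            # serves on port 8080 per the notes
zip app.zip application.py requirements.txt
aws s3 cp app.zip s3://<bucket-name>/app.zip     # this S3 URL is supplied to Elastic Beanstalk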

------------- EB CLI ------------

curl https://pyenv.run | bash     # install pyenv, used by the EB CLI installer

git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git
python ./aws-elastic-beanstalk-cli-setup/scripts/ebcli_installer.py     # recommended installer route

pip install awsebcli     # alternative: install directly with pip

eb --version     # verify the install

You might also like