Class Notes (AutoRecovered)
MAVEN Build (SSH setup between instances):
1. Run ssh-keygen as root on the current instance. A .pub file (e.g. id_rsa.pub) is created in the /root/.ssh/ folder.
2. On the test instance, paste the key line from that .pub file into the file /root/.ssh/authorized_keys
3. systemctl restart sshd (the unit may be named ssh on Ubuntu) on both instances to restart the SSH service
4. ssh root@<private-ip-of-other-instance> to connect
5. scp <file-with-path> root@<private-ip-of-destination>:<path> to copy the files
6. exit to disconnect from the remote.
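A minimal end-to-end sketch of steps 1-6 (the IP and file path are placeholders; assumes root login is permitted in sshd_config):
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa      # generate a key pair non-interactively
cat /root/.ssh/id_rsa.pub                         # paste this line into the other instance's /root/.ssh/authorized_keys
systemctl restart sshd                            # restart SSH on both instances
ssh root@10.0.0.25                                # placeholder private IP of the other instance
scp /tmp/app.war root@10.0.0.25:/opt/             # placeholder file and destination path
exit                                              # disconnect from the remote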
EC2 user-data script (installs and starts Apache, writes a test page):
#!/bin/bash
yum update -y                                   # update installed packages
yum install -y httpd                            # install the Apache web server
systemctl start httpd                           # start it now
systemctl enable httpd                          # start it on every boot
echo "print me" > /var/www/html/index.html      # simple test page
Volume file system types: ext4, xfs, btrfs, vfat, tmpfs
mkfs -t xfs <volume-device> : creates an XFS file system on the EBS volume
mount <xfs-volume-device> <mountpoint-directory> : mounts the file system at the specified folder
xfs_growfs -d <mountpoint-directory> : grows the file system into the newly allocated (unused) volume space
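The full EBS flow as a hedged sketch (the device name /dev/xvdf and mount point /data are assumptions; check lsblk for the real device):
lsblk                         # find the attached EBS device (assumed /dev/xvdf here)
file -s /dev/xvdf             # output "data" means no file system yet
mkfs -t xfs /dev/xvdf         # create the XFS file system
mkdir -p /data                # create the mount point
mount /dev/xvdf /data         # mount the volume
# after resizing the EBS volume in the console:
xfs_growfs -d /data           # grow the file system into the new space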
Unmount Volume:
umount <mountpoint-directory> : detaches the file system (note: fdisk <volume-device> inspects/edits partitions, it does not unmount)
AWS Server:
EC2 User data : a script of commands that runs at launch to install and start services or apps while creating the EC2 (like the httpd script above)
LoadBalancer Concept:
Public and private IPs are differentiated. CIDR (Classless Inter-Domain Routing) notation divides an address into 4 octets, forming a 32-bit IP, followed by a prefix length. It is explained with the subnet concept. Created a VPC and subnets with CIDR notations.
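A quick worked example of the prefix arithmetic, using the lab's own CIDRs:
192.168.0.0/16 -> 32 - 16 = 16 host bits -> 2^16 = 65,536 addresses (the VPC)
192.168.100.0/24 -> 32 - 24 = 8 host bits -> 2^8 = 256 addresses (a subnet; AWS reserves 5 addresses per subnet)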
Create VPC
Create Instance in it
Create IGW
Created a VPC with CIDR 192.168.0.0/16. Created a Public_Subnet with CIDR 192.168.100.0/24 and a Private_Subnet with 192.168.200.0/24. Launched a Public_Instance in the Public_Subnet and a Private_Instance in the Private_Subnet. An IGW (Internet Gateway) was created and attached to the VPC. A route table was created and routed to that IGW. A security rule allowing SSH on port 22 was added for the Public_Instance. Now, using the public IP, the Public_Instance is connected from PuTTY (on My_Laptop). Finally, using the pem key, the Private_Instance is connected from the Public_Instance with the command: ssh -i <pem_Key_of_Private_Instance> <user>@<ip_of_Private_Instance>
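The same lab as a hedged AWS CLI sketch (resource IDs like vpc-xxxx are placeholders returned by each call):
aws ec2 create-vpc --cidr-block 192.168.0.0/16
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 192.168.100.0/24    # Public_Subnet
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 192.168.200.0/24    # Private_Subnet
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx
aws ec2 create-route-table --vpc-id vpc-xxxx
aws ec2 create-route --route-table-id rtb-xxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx
aws ec2 associate-route-table --route-table-id rtb-xxxx --subnet-id subnet-xxxx    # public subnet only
aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 22 --cidr 0.0.0.0/0    # SSH rule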
Created a VPC and 2 subnets, one for the webserver and one for the database machine. Configured the IGW, route table and security group so the webserver can talk to the internet, and set up an SSH connection from the webserver to the database.
VPC Peering:
1. .233 is the private IP (see step 3)
2. The default VPC and subnet are created.
3. An EC2 (172.31.3.233) is launched in that subnet, with a security group and route table as mediator
4. Connected
LAB_Custom_VPC: (192.168.0.0/16)
Testing:
Finally, both VPCs got connected within the same zone.
Created an EC2 in Subnet_A of Default_VPC (172.31.0.0/16) with proper security group and route rules. Created an EC2_Lab instance in Subnet_Lab_A (10.0.0.0/10) of VPC_Lab (10.0.0.0/8) with proper security group and route rules. Then created a Peering_Connection with Default_VPC as Requester and VPC_Lab as Accepter. Added the proper routes between the two VPCs. Connected to the EC2 in Default_VPC from MobaXterm (my PC) using the public IP. From there, successfully connected to EC2_Lab via its private IP using the command ssh -i <pem_file> <user>@<private_IP>. Thus, using the peering concept, the instances from different VPCs are connected without going over the internet.
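A hedged CLI sketch of the peering setup (IDs are placeholders; the route CIDRs mirror the lab's VPC ranges):
aws ec2 create-vpc-peering-connection --vpc-id vpc-default --peer-vpc-id vpc-lab      # Default_VPC is the requester
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-xxxx            # VPC_Lab side accepts
aws ec2 create-route --route-table-id rtb-default --destination-cidr-block 10.0.0.0/8 --vpc-peering-connection-id pcx-xxxx
aws ec2 create-route --route-table-id rtb-lab --destination-cidr-block 172.31.0.0/16 --vpc-peering-connection-id pcx-xxxx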
VPC_Database 10.0.0.0/16
Subnet_Database_a 10.0.0.0/18
NACL:
Folks forgot my interactive presentation given 2 months back on NACL and SG.
Created an EC2 in a subnet of Custom_VPC with proper security group and route rules. Connected to the EC2 from MobaXterm (laptop). Created and manipulated the NACL rules, and tested how SSH and HTTP traffic is allowed from outside the infrastructure.
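A hedged sketch of adding NACL rules from the CLI (acl-xxxx and the rule numbers are placeholders; NACLs are stateless, so the return traffic needs an egress rule too):
aws ec2 create-network-acl-entry --network-acl-id acl-xxxx --ingress --rule-number 100 --protocol tcp --port-range From=22,To=22 --cidr-block 0.0.0.0/0 --rule-action allow      # allow SSH in
aws ec2 create-network-acl-entry --network-acl-id acl-xxxx --ingress --rule-number 110 --protocol tcp --port-range From=80,To=80 --cidr-block 0.0.0.0/0 --rule-action allow      # allow HTTP in
aws ec2 create-network-acl-entry --network-acl-id acl-xxxx --egress --rule-number 100 --protocol tcp --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow  # ephemeral return ports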
Situation
Task
Action
Result
Explained how to answer interview questions using the STAR methodology. Explained how a Transit Gateway overcomes the normal N:N peering mesh among VPCs. Explained and differentiated how companies use the AWS network with VPN and Direct Connect via AWS co-locations to get maximum security.
STAR Methodology. TGW. AWS Co-Locations. Homework: I have studied the related documentation.
Created 3 VPCs and one subnet in each of them. Launched 3 instances in those subnets individually. Instance_A is configured so that traffic can flow through the IGW; the rest remain private. Created a Transit Gateway for those VPCs and configured its attachments individually. Now, for each subnet, a security group rule is added (pointing at the superset CIDR) along with routes to the respective TGW attachments, to make the hub-and-spoke peering model. Everything is ready to test the traffic flow from Instance_A and even among them (see the CLI sketch below).
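A hedged CLI sketch of the hub-and-spoke setup (IDs are placeholders; repeat the attachment and route steps for each of the 3 VPCs):
aws ec2 create-transit-gateway --description "hub for 3 VPCs"
aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id tgw-xxxx --vpc-id vpc-a --subnet-ids subnet-a
aws ec2 create-route --route-table-id rtb-a --destination-cidr-block 10.0.0.0/8 --transit-gateway-id tgw-xxxx    # superset CIDR routed via the TGW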
VPN Types. DirectConnect. Homework: DirectConnect FAQ. Draw diagram. Practiced past Labs.
High-level explanation of VPN (client-to-site and site-to-site) and Direct Connect, including the VGW (Virtual Private Gateway) and CGW (Customer Gateway). A CGW is formed using an IP and an ASN number. Direct Connect resides in a region and is connected to the VGW from the corporate data center.
IAM. User. Group. I gave a presentation on Complex_Diagram.
Identity and Access Management is explained at a high level, covering authorization and authentication. A user is created and we practiced how policies play their role. A group is created and the user is attached to it. The union of the user's policies and the group's policies is considered when it comes to authorization.
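A hedged CLI sketch of the user/group flow (the names and the managed policy ARN are example choices, not from the class):
aws iam create-user --user-name demo-user
aws iam create-group --group-name demo-group
aws iam add-user-to-group --user-name demo-user --group-name demo-group
aws iam attach-group-policy --group-name demo-group --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
# effective permissions = union of policies attached to the user and to its groups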
AWS-CLI:
The API endpoint involvement behind the console is explained. Installed the AWS CLI and configured it with the Access Key, Secret Key, region and output format to launch an EC2 instance using the documented commands. When configured, the credentials and config files are created in the /home/ec2-user/.aws folder. Created a role for the EC2 instance service and attached it to the instance. Now that instance can be manipulated irrespective of the user policy.
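A hedged sketch of the configure-and-launch flow (the AMI ID and key name are placeholders):
aws configure                      # prompts for Access Key, Secret Key, region, output format
cat ~/.aws/credentials             # the file written by aws configure
aws ec2 run-instances --image-id ami-xxxx --instance-type t2.micro --key-name my-key --count 1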
S3.
Explained Simple Storage Service with its characteristics, public access, and how it is encrypted. Created S3 via the console and differentiated the types of its URLs, e.g. https://<BUCKET_NAME>.s3.<REGION>.amazonaws.com
The use case of EBS was explained. Differentiated the S3 storage classes using the documentation. Explained how the S3 lifecycle flows from S3 Standard down to Glacier in order of cost-effectiveness.
Added a lifecycle rule for objects in S3 by selecting the scope, rule actions and transitions, mentioning the respective days. Versioning was shown in practice by uploading an updated object and restoring the old versions.
S3 Security Policy:
Encryption:
- I practiced creating a custom policy for "Resource": "arn:aws:s3:::bucket11111ven/*" by adding the actions "s3:DeleteObject" and "s3:GetObject". Then enabled public access for that bucket to view the objects with the public URL. Understood how to secure S3 with policies and encryption. (A policy sketch follows below.)
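A hedged sketch of that bucket policy applied from the CLI (the Principal "*" makes the objects public, so this is for a lab bucket only):
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject", "s3:DeleteObject"],
    "Resource": "arn:aws:s3:::bucket11111ven/*"
  }]
}
EOF
aws s3api put-bucket-policy --bucket bucket11111ven --policy file://policy.json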
Cross Region Replication (CRR): automatically replicates objects in S3 to another region for data redundancy, better latency and compliance.
- enable versioning (required on both buckets)
- an IAM role that allows S3 to replicate between the source and destination buckets
I hosted a static website using S3 by creating a meaningful bucket name and the files related to the website, and enabled static website hosting and public access with policies as well. I understood and practiced how Cross Region Replication (CRR) works by enabling versioning and IAM roles for the source and destination buckets.
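A hedged CLI sketch of the static-site steps (the bucket name and local folder are placeholders):
aws s3 mb s3://my-meaningful-site-bucket                     # create the bucket
aws s3 website s3://my-meaningful-site-bucket --index-document index.html --error-document error.html
aws s3 sync ./site s3://my-meaningful-site-bucket            # upload the website files
# then allow public reads with a bucket policy like the one above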
Amazon EFS: a file system that can be shared across Availability Zones with data concurrency (unlike EBS, which stays within one AZ).
SAMBA (mentioned as a comparable shared file system). EFS characteristics:
Fully managed
Elastic
Concurrent access
Durability, availability
Created Instance_A and Instance_B in Availability Zones A and B respectively, with Ubuntu machines. Added a security group rule for NFS port 2049. Created an EFS within the same VPC. Installed NFS on both machines using apt-get install -y nfs-common. Mounted the EFS on both machines at a specific mount-point directory using mount -t nfs4 <dns_of_EFS>:/ <directory>. Then created a file from Instance_A to check that it can be edited from Instance_B. Finally added automated mounting on boot by adding <dns_of_EFS>:/ <mount_directory> nfs4 defaults,_netdev 0 0 (matching the tab format) to the /etc/fstab file.
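The whole flow as a hedged sketch on Ubuntu (<dns_of_EFS> is the placeholder from the notes, e.g. fs-xxxx.efs.<region>.amazonaws.com; the mount point /mnt/efs is an assumption):
sudo apt-get update && sudo apt-get install -y nfs-common       # NFS client on both instances
sudo mkdir -p /mnt/efs                                          # mount point
sudo mount -t nfs4 -o nfsvers=4.1 <dns_of_EFS>:/ /mnt/efs       # mount the shared file system
echo '<dns_of_EFS>:/ /mnt/efs nfs4 defaults,_netdev 0 0' | sudo tee -a /etc/fstab   # remount on boot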
-------------------------- EFS - NFS (removing the mask) --------------------
sudo systemctl status nfs-common              # shows whether the unit is masked
sudo systemctl status nfs-client.target
sudo systemctl enable nfs-common              # fails while the unit is masked
file /lib/systemd/system/nfs-common.service   # a symlink to /dev/null means the unit is masked; remove it and run systemctl daemon-reload
RDS Pricing link: Managed Relational Database - Amazon RDS Pricing - Amazon Web Services
Explained how to overcome downtime in AWS with different stages of designing architectures. Those architectures were differentiated by DB instance type: Single DB instance, Multi-AZ DB instance, and Multi-AZ DB cluster. The importance of read replicas in Multi-AZ DB clusters was covered with the master/slave architecture.
LAB: RDS
Using RDS, created a MySQL DB instance. Created an EC2_Server and installed the MySQL client using apt-get install mysql-client. Connected this EC2 server to the DB instance using mysql -h mydb.c3vbx2fl014h.ap-south-1.rds.amazonaws.com -P 3306 -u admin -p. Entered the required password to get connected. Performed DB operations successfully.
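A hedged sketch of the sort of DB operations run after connecting (the database and table names are examples, not from the class):
mysql -h mydb.c3vbx2fl014h.ap-south-1.rds.amazonaws.com -P 3306 -u admin -p <<'SQL'
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
CREATE TABLE notes (id INT PRIMARY KEY, txt VARCHAR(100));
INSERT INTO notes VALUES (1, 'hello rds');
SELECT * FROM notes;
SQL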
We can send a snapshot to another region with Copy Snapshot, and it can be restored using Restore Snapshot. We can share a snapshot publicly or with a specific account using Share Snapshot. Migrate Snapshot is explained with the instance engine type. Upgrade Snapshot can change the server engine version. Blue/green deployment is used to eliminate downtime when we deploy the application.
DATABASE Types:
SQL databases and NoSQL databases were explained with examples and use cases. Document DB, column DB, key-value DB and graph DB were differentiated with examples. Ultra-fast data retrieval is achieved using an in-memory DB. Timestamp-based data retrieval is achieved using a time-series DB. Explained data warehousing.
ELASTIC BEANSTALK:
Elastic Beanstalk is PaaS (Platform as a Service) from AWS. I launched a Beanstalk instance by configuring the Beanstalk settings with sample Python code, free-tier eligible. In this web server environment, I selected Python as the platform with the default sample code. Selected the Elastic Beanstalk service role. Selected the Availability Zone with respect to the VPC. We can observe the currently running events on the Events page. Finally, by opening the assigned domain in the web browser, we can surf the sample website.
Project_1:
Project assigned: “Build and Secure a Scalable Blogging Platform on AWS”.
https://github.com/discover-devops/AWS_project/blob/main/Project_1/Build_Blogging_platform.md
ElasticBeanstalk - CustomPythonCode:
Launched an instance to test the custom Python code. Connected via MobaXterm and created the application.py and requirements.txt files. Installed the Python packages. Opened the Python environment, installed the dependencies, and ran the Python app locally with python application.py. The website runs on http://<Instance_Public_IP>:8080. The same code file was copied to S3 and its link was used when creating the Elastic Beanstalk application. The same was then tested using the domain name.
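A hedged sketch of the local test and handoff (the zip step and bucket name are assumptions; Beanstalk application versions are usually uploaded as a source bundle):
pip install -r requirements.txt                  # install the app's dependencies
python application.py                            # local test: http://<Instance_Public_IP>:8080
zip app.zip application.py requirements.txt      # bundle the code (assumption)
aws s3 cp app.zip s3://my-eb-bucket/app.zip      # use this S3 URL when creating the Beanstalk application version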
python ./aws-elastic-beanstalk-cli-setup/scripts/ebcli_installer.py   # installs the EB CLI
eb --version                                                          # verify the installation