Notes - Guvi
Difference:
1.apt, apt-get
apt-get may be considered lower-level and "back-end", and supports other APT-based tools.
apt is designed for end users (humans) and its output may change between versions.
2.update, upgrade
Software updates (patches) modify an existing program. Upgrades replace a program with its
next major version.
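A quick illustration of both differences on a Debian/Ubuntu machine (nginx is only an example package):
# refresh the package index - metadata only, nothing is installed or changed
sudo apt update
# install the newer versions of packages that are already installed
sudo apt upgrade -y
# apt-get is the scriptable, lower-level front end with stable output
sudo apt-get install -y nginx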
Web Search
● Nginx
● OWASP top 10
● Sans top 25
Users in Linux
Group, owner/root, others
Modes in Linux
Read -4, write-2, execute-1 (total-7)
Flavours of Linux
● Fedora - red hat family
● Ubuntu
● Debian
● CentOS (till version 8), CentOS Stream (after version 8)
● OpenSUSE, Arch Linux - Create own version of Linux
● Linux Mint - Used for Media Site applications
● Gentoo - Highly customized Linux variant on hardware level
● Slackware - The oldest distribution of Linux. Very simple, highly customisable
● Alpine Linux - Lightweight distribution, highly secure. Used for highly containerized
applications(recommended)
● Kali Linux - Used for ethical hacking
Commands
2.touch
Creates an empty file
3. ps -ef | grep nginx
Check process
Search for a pattern
4.cat
Gives/displays content in a file
cat > file.txt - you can type content into it, then press Ctrl+D to save and exit
5.top
Lists a lot of processes
6.echo
Print content
Put content in a file - echo "hello" > file1.txt
7.vim
Create a file with CLI editor (vim editor, vi editor, nano editor)
8. whoami
Tells the current user
9.pwd
Tells current directory
10.curl
Access URL of websites
11.wget
Access URL of websites and download
12.man
Tells purpose of a command (e.g. man grep)
13.chmod
chmod 500 abc.com
Read and execute - 4+1=5. The owner now has these permissions; the group and others have no
permissions.
-rw-r--r-x - the owner has read and write permission, the group has read permission and others
have read and execute permissions (6,4,5).
d at the start represents a directory
- at the start represents a file
Order of the three permission sets: owner, group, others (worked example after this command list)
14. ls -l or ll
Long list
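Worked example for the chmod notes above (the file names are placeholders):
# owner: read + execute (4+1=5); group and others: no permissions
chmod 500 script.sh
ls -l script.sh     # -r-x------ 1 root root ... script.sh
# owner rw- (6), group r-- (4), others r-x (5), i.e. the (6,4,5) case above
chmod 645 notes.txt
ls -l notes.txt     # -rw-r--r-x 1 root root ... notes.txt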
Linux
Announced - August 1991
Released - Sept 1991
3.Application
Involves all user applications
Unix OS List
● BSD or FreeBSD (Berkeley software distributions)
● Oracle Solaris (formerly Sun Solaris)
● AIX - A proprietary OS from IBM(high end and mainframe)
● HP-UX - Mainframe hardware from HP
Linux flavours
● Ubuntu(based on Debian) - most popular especially for those who are new to linux
● Fedora (Sponsored by RedHat) - Known for its cutting edge innovations
● Debian - Community driven project, very stable and it’s a foundation for many other
distributions(like ubuntu)
● RedHat Enterprise Linux(RHEL) - Distribution from RedHat designed mainly for
enterprise. Known for their long term support. They are known for their enterprise level
features (like security).
● CentOS - Free and open source clone of RHEL. Not all RHEL utilities are available
● Arch Linux - Highly known for its simplicity and customization. Can customize from
scratch (eg: manjaro)
● OpenSUSE - Comes with 2 main flavours: 1. Tumbleweed - a rolling release; 2. Leap -
This is a regular release
● Linux Mint - based on ubuntu. It will give detailed and traditional desktop experience. It
comes with a built-in special tool for media.
● Gentoo - Source based distribution. It is highly customizable and it is optimized for user
specific hardware.
● Slackware - One of the oldest distribution known for its simplicity and minimalism
● Alpine Linux - Lightweight distribution with security, simplicity and resource efficiency.
Popular for containerized applications.
● Kali Linux - Designed for digital forensics and penetration testing.
ssh -p 22 [email protected]
ifconfig
ip addr
File system
System used to manage files
- -> file
d -> directory
l -> symbolic link (points to another file/directory)
Type of root:
1. Root account - acc/username
2. Root as (/) - root as file directory
Change password
sudo su -
passwd //change password for root user
passwd joshna //change password for a specific user
Create a file
touch file1
Add user
useradd test_user
chgrp test_user file1
The group ownership of file1 will change to test_user
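A minimal sketch of the add-user flow above (run as root; names follow the notes):
>>>useradd test_user
>>>touch file1
>>>chgrp test_user file1
>>>ls -l file1      # the group column now shows test_user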
Listing
ls - Lists files and directories
ls -l - Long list with details like permission, type of group etc
ls -lrt - Long list sorted by modification time, in reverse order (newest last)
ls -a - Lists also the hidden files
Display processes
ps -ef, top - Display all processes
ps -ef | grep joshna - display all the processes with the name joshna
df -h - Shows the file system disk usage (human readable)
du -sh * - displays the size of the files/directories
Zip files
zip -r file3.zip file3
unzip file3.zip
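Sketch of the zip/unzip usage (file3 is assumed to be a directory):
# recursively zip the directory file3 into file3.zip
zip -r file3.zip file3
# extract; -d chooses the output directory (unzip has no -r option)
unzip file3.zip -d extracted/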
Cloud
Features
1.On demand resource provisioning (Scalability) - according to the requirement, the services
automatically scale up and down
2.Global Availability
Regions - Specify a geographical location (Eg:mumbai)
Data Centers - Number of data centers available within the region (Eg: 3 data centers in
mumbai)
App Deployment
Static vs Dynamic
Layers(top to bottom)
Source code
Middleware
EC2 instance(hardware)
Middleware - Packages or product files that are required for the application to run
1. Web server - If application is static go with this
a. Apache - Works well with linux(free tier)
b. IIS(Internet information services) - Works well with windows
c. IHS - Chargeable, need license to use this(not free)
2. App server - If application is dynamic go with this
a. WAS(Websphere application server) - Chargeable
b. Weblogic - It is an oracle product. Chargeable
c. JBoss - Free tier
Make sure to do this to the EC2 instances of both Windows and Linux
● This is done so that anyone with any IP address can come in
● EC2 -> security -> Edit inbound rules -> add rule ->HTTP 0.0.0.0/0
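The same inbound rule can also be added from the CLI; a sketch with a placeholder security group ID:
# allow HTTP (port 80) from any IP - fine for learning, too open for production
>>>aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0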
Static IP address (used by companies, chargeable) - the IP address will not change every time
the system restarts
Dynamic IP address (free, not preferred) - the IP address will change every time the system
restarts
Propagation time - high for dynamic IP addresses, so a static IP address is preferred. DNS propagation
for a changed (dynamic) IP address can take up to 72 hours.
Ami
Actions -> Instance settings->create image - >
Task: Take a snapshot or ami. Delete existing machine. Bring a new machine using ami or
snapshot. Check if data is still there.
Monolithic -> one server for the entire application. Works well with classic load balancer
Microservices ->Split modules and assign servers for each. Split your application into multiple
servers.
CloudWatch -> monitoring service. Depending on how critical the application is, we can
increase the capacity of a machine (e.g. at a 70% threshold).
2. Create launch template (this is a template for the autoscaling group to create ec2
whenever necessary)
ec2-> auto scaling group -> create auto scaling group - create launch template- (same steps as
creating an ec2 instance), give some template description - password - volume (10gb) - select
existing security group(that ALL TCP-its not recommended still for learning it’s fine) - user data -
type the following
#! /bin/bash
yum install httpd -y
service httpd start
echo "Hello all from $(hostname) $(hostname -i)" > /var/www/html/index.html
If the desired capacity (2), i.e. the minimum number of servers to run at all times, is not met,
auto scaling will automatically create the missing instances.
Connection:The template is taken care of by the autoscaling group. We connected auto scaling
with load balancer. So all are interlinked. So load balancer automatically connects the ec2
instance.
Deletion: first delete the auto scaling group (ASG). Once you delete the ASG, the EC2 instances will
automatically be deleted. Next delete the template and then delete the load balancer.
OSI layers
● Physical layer- encoding signals, physical specifications
● Data Link layer- local address(communication within the system)
● Network layer- Global address(connect to different networks)
● Transport layer-Transmit data using the transmission protocols(TCP,UDP)
● Session layer- manage the connection
● Presentation layer- Encrypts, compress, encodes
● Application layer- Near to user in order to perform application service
Network layer(layer 3)
● IP -> transfer bits and bytes
● Unreliable -> information is sent directly through the network (it is fast, but there is a
high possibility of data loss)
2. Plot Loan
#! /bin/bash
yum install httpd -y
service httpd start
mkdir /var/www/html/plotloan/
echo "This is my Plotloan instance" > /var/www/html/plotloan/index.html
Select the load balancer -Listener and routes - HTTP 80- Add rule - NAME: homeloan -Add
condition - Select path - Path: /homeloan* - forward to target group - target group -select
homeloan-target - Priority: 70(weightage to that particular path) - next- create
Do the same for plotloan
Web url:
1. load balancer url/homeloan
2. load balancer url/plotloan
Deletion
1. Delete load balancer
2. Delete target group
3. Delete instances
Screenshots- ALB
1. Output 2 ss (WEB)
2. Listeners and groups
3. Instances
4. Target groups
5. Bin bash script
1. Create two instances nlb_instance1 and nlb_instance2 and add the following bash script
○ #! /bin/bash
yum install httpd -y
service httpd start
echo "Hello all from $(hostname) $(hostname -i)" > /var/www/html/index.html
2. Create a target group and add path (/index.html). Add the 2 EC2 instances to the target
group.
3. Create a network load balancer and add the target group to it.
4. Once the network load balancer is active, paste the dns of it to the web browser and the
output will be displayed.
Functions
● Storage unit.
● Collections of objects.
● Single level container - contains multiple files/folders.
● Upload/download very easily.
● The name of the bucket must be globally unique. Access data in the bucket using URL
(unique one).
● Bucket creation(default size)
○ Bucket per account = 100 buckets
○ Bucket per region = 20 buckets
○ Can increase this using technical centre
● Number of Objects that could be stored inside a bucket
○ Any number of objects could be stored in a bucket
○ Size per object - 5TB
Upload a file
● Open bucket - upload - add files - upload
● When you try to upload a file in a public bucket, it will ask for access control (Object level
permissions)
● This is because it is a public bucket and sometimes you don’t want the public to access
all the files in the bucket. Eg: A file(object) with key pairs information.
● Give public access - then it will ask for encryption(don’t encrypt as of now) -
Upload
● (Reason: when a file is private, only you can read it; when you encrypt it, the people
with the key can access that file). Create an encryption key and give the decryption key
to the people who you want to access the file.
● PUBLIC FILE - Open the object(file) - Private object url will be present - paste in
browser - access denied
● Reason: You are accessing the object using a web browser. AWS won't know that it is
the admin accessing it through the browser.
● PRIVATE FILE - Open the object(file) - Private object url will be present - paste in
browser - access denied
● Reason: Because it is a private file. There is an Open button in AWS itself; only through
that can you access the file. This Open button is enabled only for the owner and disabled
for others.
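One hedged alternative for sharing a private object without making it public is a pre-signed URL (bucket and key names below are placeholders):
# generate a URL that grants temporary read access to the object (here: 1 hour)
>>>aws s3 presign s3://my-private-bucket/keypairs.txt --expires-in 3600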
DELETION
● Delete object
● Delete bucket
Object Lock(chargeable)
○ Stores using Write once read many(WORM)
○ Helps prevent objects from being deleted or overwritten by someone for a fixed
amount of time or indefinitely.
○ Object Lock works only in versioned buckets.
○ Log file stored with object lock - To ensure that no one changes it.
Encryption(chargeable)
○ SSE-S3 - S3 server-side encryption (AWS encrypts the data and supports decrypting
the data)
○ SSE-KMS - Key Management Service (encryption keys managed manually through KMS)
○ DSSE-KMS - (KMS encryption with a double layer of encryption)
1. URL
a. IAM -> create alias (right side bar) ->name: proj name, team name and my name
->Sign in url for IAM users in this account
b. Name format for alias
i. Client name - module - environment (test, production, stage) - eg: hdfc-
cc-stage
2. Group
a. user group -> create user group -> name: ec2_admin(meaning: admin will have
full access to the EC2 instances) -> Permission policies: search ec2 - select
AmazonEC2FullAccess -> create.
b. Now we have created a group. In it, we can add users.
3. Users
a. IAM -> users -> create user -> Provide user access to the AWS management
Console(check that box) -> I want to create an IAM user ->autogenerated or
custom - admin@123 ->check the box “Users must create new password for next
sign in” ->next
b. Set Permissions: Add user to group - check the group -> click next -> create user
c. Retrieve password: Will have info about console sign in details(sign in URL,
username, password). Save it
d. Go to that link - give username and password - change password (Password
Reset) - logged into the AWS console as a user
e. Changes made by the user will be reflected in the root user’s account. So in
general the users are given very little permissions.
4. Policies
a. Scenario: Task given by the client to the ec2_admin: create the same number of ec2
instances as there are S3 storages. But this ec2_admin has access only to ec2 instances;
when he goes to S3, it shows access denied. So now the ec2_admin asks the overall admin for
access. Now the overall admin creates a policy (Read - list
buckets) and associates it with the ec2_admin user, or if needed associates the policy with a
group. (See the CLI sketch after this step.)
b. Create policy:IAM - Policies (left bar) - create policy - S3 - actions allowed (List -
list buckets) - resources(all) - next - name:s3-listbuckets -add description - create
c. Associate policy with user: User - Permissions - Add permissions - attach
policies directly - Filter(Custom managed) - Select s3-listbuckets - save
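A CLI sketch of steps 4b and 4c (the account ID and user name are placeholders, not values from these notes):
# write the policy document - list buckets only
cat > s3-listbuckets.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:ListAllMyBuckets", "s3:ListBucket"], "Resource": "*" }
  ]
}
EOF
>>>aws iam create-policy --policy-name s3-listbuckets --policy-document file://s3-listbuckets.json
# attach the policy directly to the user (replace 123456789012 with your account ID)
>>>aws iam attach-user-policy --user-name ec2-admin-user --policy-arn arn:aws:iam::123456789012:policy/s3-listbuckets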
5. Roles
a. Create role: IAM - roles - create role - select the s3-listbuckets policy (customised policy) - give
a name - create
b. Role is assigned to a service(EG: assigned to ec2 server)
c. In order to differentiate, create 2 instances and we can associate role to one
instance. - ec2withoutrole, ec2withrole
d. One instance - advances details - IAM instance profile - ec2-s3role -launch
instance
e. Another instance - don’t associate any role.
f. Check ec2withrole instance -> click on connect -> connect and bring remote
desktop directly
g. >>>aws s3 ls //lists the buckets in aws
h. Create a bucket (on the side, in the AWS console)
i. Will get output in ec2withrole instance and get error message in ec2withoutrole
Random how to do
Attach role after creation of instance
Attach role - check instance- actions -security -modify IAM role
2. Create a budget
● Click account profile - Billing and cost management - budgets - create budget - use a
template -monthly budget - budget amt: 3$ - email:[email protected] - create
● You will receive notification when
○ When your actual spend reaches 85% of the budget
○ Your actual spend reaches 100%
○ If your forecasted spend is expected to reach 100%
● Billing and cost management - cost explorer - new cost and usage report (analyse how
much you spent in the past month through visualizations)
Notification workflow
● Sender -> sends a message/notification to the queue (checks if the receiver is available); if yes,
it is sent to the receiver
● Sender -> sends a message to the queue (checks if the receiver is available. If not, it waits for
a specific amount of time (however much we set up). After that it sends the message to the dead letter
queue (DLQ), a chargeable service)
● If DLQ is not there, then the message will automatically be deleted
● DLQ - DLQ also checks if the receiver is available or not. If available, it sends the
message. If not active, it can wait for a retention period(14 days). After that the message
gets deleted
● The DLQ can only hold 1000 messages. After the 1001st message, the 1st message will be
deleted and the newest message will join.
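A rough CLI sketch of wiring a queue to a DLQ (queue names, region, account ID and the maxReceiveCount of 3 are assumptions):
# create the dead letter queue first
>>>aws sqs create-queue --queue-name my-dlq
# look up its ARN
>>>aws sqs get-queue-attributes --queue-url https://sqs.us-east-2.amazonaws.com/123456789012/my-dlq --attribute-names QueueArn
# create the main queue with a redrive policy pointing at the DLQ
>>>aws sqs create-queue --queue-name my-queue --attributes '{"RedrivePolicy":"{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-2:123456789012:my-dlq\",\"maxReceiveCount\":\"3\"}"}'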
Cloud Trail
● Continuously log your AWS account activity
● Why use it?
○ Auditing Purpose
○ Used to identify Unauthorised access
○ Identify misuse
○ Troubleshoot
● Even without a trail, when you go to CloudTrail - Event history, you can see the logs
● Then why set up trail?
○ Event history shows you the last 90 days of management events
○ For auditing, we need information/logs from the past 1 year
Setting up trail
● Cloud trail - dashboard - create trail - name: mylogs - let it create a bucket automatically
- uncheck encryption - uncheck log file validation - cloudwatch (uncheck) - log events:
check management events - create trail
● Go inside cloud trail(mylogs) - AWS logs - Account id - CloudTrail
● You’ll get output after cloud watch is enabled (next task)
Cloud watch
● It is a monitoring tool.
● Highly used by Support/delivery teams.
● Monitor the infrastructure and trigger alarms - by this we can troubleshoot and fix issues.
● It is a chargeable service
○ Basic - Free, monitor resources 5 minutes once
○ Detailed - Paid, monitor resources every 1 minute
● Go to cloudwatch - alarms (alarm will be triggered any minute. First it will show
insufficient data) - STATE: In Alarm
● Check alarm details in EC2 - STATE: In Alarm
● Will get mail and SQS from SNS
Deletion
● Delete cloud trail
● Delete bucket and its contents
● Delete alarm
● Delete Instance
● Delete SNS-Role
● Delete topic
● Delete subscriptions
● Delete SQS queue
● Create an EC2 Instance
● Create an EBS volume and attach to the instance.
● Create SNS topic for Notifications
● Add subscribers for the SNS topic
● Create Cloudwatch Alarm for EBS Volume of any Metric and Select the SNS topic.
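The alarm from the last step can also be created from the CLI; a sketch, assuming a placeholder volume ID, SNS topic ARN and the VolumeReadOps metric (any EBS metric works):
# alarm when read ops on the EBS volume exceed 1000 in a 5-minute period
>>>aws cloudwatch put-metric-alarm --alarm-name ebs-read-alarm --namespace AWS/EBS --metric-name VolumeReadOps --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 --statistic Sum --period 300 --evaluation-periods 1 --threshold 1000 --comparison-operator GreaterThanThreshold --alarm-actions arn:aws:sns:us-east-2:123456789012:mytopic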
Documentation:
HANDS ON
● Cloudformation - stacks -create stack
● Prepare template
○ Choose from existing template (select this)
○ Build from infrastructure composer (Create a template using visual builder)
■ Infrastructure composer - Resource: search for ec2 instance and search
keypair
■ Then you can change the yaml code as needed.
● Specify template
○ Amazon S3 URL
○ Upload a template file
○ Sync from Git
● Change ami id (of amazon linux) and ssh key name(key pair password) when you use
existing template (line 11 and line 15)
● Run the template using command prompt
○ >>>aws --version
○ If it is not recognised, install the AWS CLI from the documentation as an MSI, run the
installer and install it.
(https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
● Below is the command to create a stack in cloud formation and upload a template from
local
● >>>aws cloudformation create-stack --stack-name joshna --template-body
"file://give/full/path/to/infra-1.yml" --parameters "file://give/full/path/to/parameters.yml"
● Now our desktop Windows CLI does not know where (in which region) to create this stack, so we get a
region-related error.
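A sketch of the usual fix - give the CLI a default region (us-east-2 here is only an assumption):
# set the access key, secret key and default region once
>>>aws configure
# or pass the region explicitly on the call itself
>>>aws cloudformation create-stack --stack-name joshna --template-body "file://infra-1.yml" --region us-east-2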
13-02-25 Route53
● I give the domain name to Route53 and Route53 gives me back the corresponding address - a 2-
way handshake
HANDS ON
What are we doing?
● Create EC2 instances
● Create a domain (Route53: hosted zone)
● Add record in it with simple or weighted policy
● Wait for it to be active
● Check if domain works through CLI and browser
AWS STEPS:
● Go to AWS - create 3 ec2 machines - make sure that the security group has all TCP -
Add the below script in the advanced settings: user data
○ #! /bin/bash
○ yum install httpd -y
○ service httpd start
○ echo "This is my Route53 Application" > /var/www/html/index.html
● Route53 - get started - dashboard
○ DNS management - create hosted zone - domain name: joshnaavsha.site -
public hosted zone - create
● Hostinger - domain - manage - leftside (DNS/Nameservers) - Change nameservers -
Copy paste value/route traffic from (Hosted zones in Route53) in Hostinger
● Simple routing policy
○ Route53 - hosted zones - joshnaacsha.site - create record - paste public ip
address of any one instance in value field - Record type: Simple routing policy -
create record
○ Remember: Every time you restart the instance, the public IP address
changes( dynamic IP address). So make sure to change the value in the
record every time.
○ Here, we can create only one record (as it is a simple routing policy)
○ Wait for the domain name status to change from PENDING -> INSYNC
○ Now the domain name will work. But it may take up to 24 hours sometimes
○ Open browser and search for the domain name
● Weighted routing policy
○ Can host more than 1 record
○ Route53 - hosted zones - joshnaacsha.site - create record - paste public ip
address of any one instance in value field - Record type: Weighted routing policy
- weight: 80 - create record
○ Remember: Every time you restart the instance, the public IP address
changes( dynamic IP address). So make sure to change the value in the
record every time.
○ Like this create another record with weight 20
○ Wait for the domain name status to change from PENDING -> INSYNC
○ Now the domain name will work. But it may take up to 24 hours sometimes
○ Open cmd
○ >>>nslookup kloudevops.online
● Routing policy: Geolocation - route based on location
○ Route53 - hosted zones - joshnaacsha.site - create record - paste public ip
address of any one instance in value field -Add location (United States) - record
ID (Oregon) - create record
○ Create another record with location as India, mumbai
○ When you use the command,
■ >>>nslookup kloudevops.online
■ You get this output
● Non-authoritative answer:
● Name: kloudevops.online
● Address: 3.128.168.224
■ This means, you are routed to the instance in mumbai( India) and not
routed to the instance in the United States.
● Routing policy: Latency based
○ Scenario: There are 2 servers. One in Mumbai and the other in America. Where
do you think you will be routed? You will be routed to the instance which has
less traffic, so the latency is lower.
○ Route53 - hosted zones - joshnaacsha.site - create record - paste public ip
address of any one instance in value field Select policy: Latency - select region
(mumbai) - create record
○ Create another with location as hyderabad
○ When you use the command,
■ >>>nslookup kloudevops.online
■ You get this output
● Non-authoritative answer:
● Name: kloudevops.online
● Address: 3.128.168.252
■ Which means I am routed to the Hyderabad server, meaning the Hyderabad server
is free.
■ If Mumbai is free, we will be connected to the Mumbai server. Nearby or far
doesn't matter.
● Failover policy
○ There will be 2 instances.
■ Primary and secondary server
■ If the primary server fails, the secondary server will come into play.
○ Create health check for the primary server instance - Name: - then configure the
details
■ Protocol: HTTP
■ Ip address
■ Domain name
■ Port: 80
■ Path: /index.html
○ If health check fails, send notification through sns(disable for now) -create
○ Wait to get status : healthy
○ Create record - paste ip address of primary instance - Policy: Failover - select
health check id - create
○ Create another record with secondary instance ip address - Policy: Failover - no
need health check id - create
○ You can check that using
■ >>>nslookup kloudevops.online
■ You get this output
● Non-authoritative answer:
● Name: kloudevops.online
● Address: 3.128.168.124
■ Which means the primary server is currently running
○ Health check changes to unhealthy. So server will be changed to secondary
server ip address
○ You can check that using
■ >>>nslookup kloudevops.online
■ You get this output
● Non-authoritative answer:
● Name: kloudevops.online
● Address: 3.128.168.252
■ Which means the instance has been changed to secondary server
NS - name server
SOA - Start of Authority
Routing policy
1. Simple routing policy - when you use only one webserver (mostly not used in
companies)
2. Weighted routing policy - More than one webserver could be added
WWW BUCKET
● Go inside www bucket - edit bucket policy - put the below code
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::http://www.kloudevops.site/*"
]
}
]
}
○ This policy is to get all the objects with no restrictions (like all types of
file : html, js etc)
○ Have to replace arn with your bucket’s arn. arn: get from properties in the
bucket
● Go inside the www bucket - properties -scroll all the way down - edit - enable -
hosting type: host static website - index document: index.html
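The same two steps can also be done from the CLI; a sketch assuming the policy JSON above is saved locally as policy.json and the bucket is named www.kloudevops.site to match the ARN:
>>>aws s3api put-bucket-policy --bucket www.kloudevops.site --policy file://policy.json
# enable static website hosting with index.html as the index document
>>>aws s3 website s3://www.kloudevops.site/ --index-document index.html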
3.ROUTE53
5.Creating a distribution
FOR WWW
● AWS - cloudfront - create a cloudfront distribution - origin domain: (S3 - wwwbucket -
properties - scroll all the way down - static website hosting - url will be present - copy
and paste here)
● Default cache behavior - viewer protocol policy: Redirect HTTP to HTTPS
● Settings - add name: www domain name for the cname -Attach certificate - leave others
default - create distribution
WORKING:
Words in the diagram: (because it is not clear)
● Kloudevops.site
● Hostinger
● Route53
● Cloudfront
● S3
● Source code folder
● Application
DELETION
● Cloudfront - disable (takes a long time)
● Then delete distribution
● Then delete certificates
● Route53 - delete records
● Mandatory:
○ Enable protection against accidental termination
○ Must have username and password
● Works on vertical scaling
● What data is stored in the database?
○ Application data
○ User data
● Our database must always have a replica DB (secondary database).
● These 2 databases must always be in sync with each other.
○ How is the sync done?
■ Automatically by AWS
■ No manual intervention required
● This replica is placed in a different region (for safety: if the database in one region
goes down, the database in another region can take its position)
● Database Permission:
○ It only has read and write permission
○ No execute permission
● The replica will only have read permission
○ Whatever data written in the primary db is automatically replicated in the
secondary database.
○ So there is no need for write permission
● When the primary db goes down and the secondary db takes position as primary db
○ It will automatically get write permission as well.
● Is database the same as DBMS?
● DB - Container to store data
● DBMS - Software used to manage the database.
Relational database
● Used to store structured data
● SQL
○ Structured query language
○ Developed by IBM
○ Declarative language - you describe what data you want, not how to fetch it
○ Invented in the 1970s
Creation
● Create private network: RDS - manage relational database service - left dashboard
(subnet groups) - Create DB subnet group (take default network as private network as of
now, because we have not learnt VPC yet) - name: pvt-network - Description: Network for DB -
VPC (select default) - select us-east-2a and 2b -select 2 subnets - create
● RDS - databases - create database - Standard create - MySQL - Templates: free tier -
Db instance identifier(used to identify db in RDS area): primary-db - admin - self
managed - password: admin123 - DB size: db.t3.micro - Additional Storage: uncheck
autoscaling - Don’t connect to EC2 instance - select default VPC - subnet group: pvt-
network(network u created above) - Public access: Yes - VPC: existing - default (make
sure you have All TCP or MYSQL enabled in security group) - availability zone: us-east-
2a (replicated db must be in another availability zone-us-east-2b. But as of now, we are
not replicating the db as it is chargeable. In real time, we will replicate)- Additional
configuration: initial database name: mydb(this name is used to connect via instance) -
enable automated backup - backup retention period: 1 - Backup Window: no preference
- Uncheck encryption (needed in real time, now chargeable) - Check backup replication -
uncheck maintenance (Used to update mysql automatically - like weekly once or based
on preferance) - no preference - uncheck enable deletion protection - create database.
Can take up to 10 minutes. Status: available
● Create an instance in same region - storage 30 GB
● Open db - connectivity and security: endpoint (this is the point where the read and write
take place)
● Connect the ec2 instance - username: root - connect
○ Terminal opens
○ >>>mysql --version
○ >>>yum install mysql -y
○ >>>mysql --version
○ mysql -h <paste endpoint here> -P 3306 -u admin -p
■ -h refers to host
■ -u refers username
■ -p refers to password
■ -P refers to port
○ Now you will be connected with mysql db
○ Open workbench in desktop
■ Connection name: myapp-db
■ Hostname: <paste endpoint>
■ Username: admin
■ Password: admin123
■ Port: 3307
○ You can run queries through the desktop application or the remote linux machine
(we connected in both)
○ Application
■ >>>Create database movies;
■ >>>Show databases
○ Remote Linux
■ >>>Create database author
■ Try to create a table, add rows, alter, update, delete etc. (see the sketch after this section)
○ Now check for replica
■ Go to database in aws
● Actions - create read replica - create read replica - DB instance
identifier:replica-db - 20 GB - uncheck autoscaling - Additional
configuration:port number: 3307 - create read replica
■ ROLE:
● name: replica-db Role: Replica
● name: primary-db Role: Primary
■ You will get replica under primary db
■ The replica gets its own endpoint (different from the primary's endpoint)
■ Same username and password for replica and primary
■ Delete primary-db - Role:Instance
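For the run-queries step above, a non-interactive sketch from the Linux machine (the endpoint is a placeholder; user and password follow the notes):
# run a few test statements against the RDS endpoint in one shot
>>>mysql -h primary-db.abcdefgh1234.us-east-2.rds.amazonaws.com -P 3306 -u admin -p -e "CREATE DATABASE IF NOT EXISTS author; USE author; CREATE TABLE IF NOT EXISTS books (id INT PRIMARY KEY, title VARCHAR(100)); INSERT INTO books VALUES (1, 'First Book'); SELECT * FROM books;"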
AURORA DB(chargeable)
● Cluster based db
● Has separate endpoint for read and write operations
○ This will result in faster operations
○ Performance will be high
● Create aurora
○ RDS - create database - standard - aurora(mysql) - Templates: Dev/Test - DB
cluster identifier: aurora - username: admin - password: admin123 - Aurora
Standard - db.t3.medium - Don’t create replica - pvt-network - default VPC -
disable everything - create db
● Role:
○ A Cluster and the writer instance will be created.
○ When you go inside you will get 2 endpoints. Type: Writer and Reader endpoints
○ In mysql only one instance will be created
Stopping db
● What to do if you want to stop your db and still want to retain existing data
● DB - can stop temporarily but it will start after a while (chargeable)
● What to do?
○ Take a snapshot (less charge but if you don’t even want that do the below 2
steps)
○ Exports in Amazon S3
○ Move to archive mode in S3
○ Archive mode in S3 will hardly incur any charges
○ Similarly you can restore from S3 (RDS - database - restore from S3)
Deletion
● Delete db
● Delete subnet group
● Delete ec2
18-2-25 EFS
● EFS - Elastic file system
● Shared storage that works only on linux
● Uses NFS - Network File System
● Default port number- 2049
● Mount shared volume in all the machines - create in one machine and this file will be
mounted to the remaining machines
● It will not be mounted to the entire machine, instead only to a shared directory (Eg:
sending link of a shared folder)
Creation:
● Create 2 ec2 machines - create a security group (Add rule:NFS, SSH) -number of
instances: 2
● Connect both the instances
● AWS - Elastic file system - Create file system - customize - name:myefs - regional (If
one data center goes down, another will come into position) - uncheck automatic backup
- uncheck encryption - Performance settings: go with default - next - use default VPC -
Availability zones: change to the security group we created
● Review and create
● Go inside the file system
● Mount -> mount via dns -> copy dns name
● EC2:
○ >>>sudo su - -> copy the command
○ >>>df -h
○ We need a dedicated share volume directory
○ So let us create a directory in both the instance
○ Instance 1
■ mkdir test
○ Instance 2
■ mkdir new
○ Paste the mount command - at the end, replace the target with your directory name
■ Eg: sudo mount -t nfs4 <dns name>:/ new
○ Go inside the directory
■ >>>cd new/
■ Try to create a file inside the directory. And give ls in the other instance
■ The file will be there
■ So the same content is mounted in both the instances
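A fuller sketch of the mount step (the file system DNS name is a placeholder; the options follow the standard NFSv4 mount that the EFS console suggests):
# instance 1
mkdir test
sudo mount -t nfs4 -o nfsvers=4.1 fs-0123456789abcdef0.efs.us-east-2.amazonaws.com:/ test
# instance 2
mkdir new
sudo mount -t nfs4 -o nfsvers=4.1 fs-0123456789abcdef0.efs.us-east-2.amazonaws.com:/ new
# create a file on one instance and list it on the other
touch test/shared.txt     # run on instance 1
ls new/                   # run on instance 2 - shared.txt appears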
● Deletion:
○ File system
○ Instances
TASK
● Create lambda and event (using the above steps, assign roles)
● Sprint - Splitting tasks into many
● Cloud watch - set trigger to inform lambda on sunday 12 at night to delete resources
(python code)
● Paste code in editor(lambda) -code to delete all the resources
● Inside function -> configuration -> timeout: change from 3 sec to 15 min
● Create an empty CLB
● Run the code(It will take some time) Status: succeeded
● Once you get output, Left side - Click Deploy and then Test
● Now when you check CLB, it will be gone
● Now automate the process using cloudwatch
○ EventBridge - rule - name - Schedule - continue in EventBridge scheduler -
recurring schedule - Cron based schedule(https://crontab.guru/)
■ Setup trigger after a min (for now to check output)
■ Check if you get output
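A CLI sketch of the Sunday-midnight trigger (rule name and Lambda ARN are placeholders; EventBridge cron fields are minute, hour, day-of-month, month, day-of-week, year):
# fire every Sunday at 00:00 UTC
>>>aws events put-rule --name weekly-cleanup --schedule-expression "cron(0 0 ? * SUN *)"
# point the rule at the cleanup Lambda (the function also needs lambda add-permission for events.amazonaws.com)
>>>aws events put-targets --rule weekly-cleanup --targets "Id"="1","Arn"="arn:aws:lambda:us-east-2:123456789012:function:mycleanupfunction"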
API
● Middleman between user and backend
● Fully manage service - create, manage and secure API
● API - front door for application
● Free tier - can make up to 1 million API calls
● How does it work?
○ User sends request
○ API gateway receives the request
○ Forward to backend
○ API Processes the response from db
○ User gets data from API
○ Flow: User -> API gateway -> lambda EC2 ->API gateway ->user
● Why use API gateway?
○ API gateway also helps in authentication, monitoring, security etc
○ API is cost effective. It saves a lot of money when you use a serverless
connection (like Lambda)
○ Helps with scalability
○ Rate limiting and throttling
● Types of API
○ HTTP API - low latency, cost effective, has built in features like OIDC (Open ID
connect), OAuth2 (Authentication protocol to approve one application to
communicate with another) , Native CORS (Cross Origin Resource Sharing - a
security mechanism that allows web pages to access resources from external
APIs while preventing malicious sites from accessing data without permission.)
○ WebSocket API (Eg:Chat application)
○ REST API (full control)
○ REST API Private
● Steps:
○ Create role -> lambda_role (Add Administrator Role) ->
○ Use the same lambda function.
○ Left side - Click Deploy and then Test
○ API gateway ->REST API -> New API - name:myapi - create API
○ Go inside the API -> create method -> type: GET - Integration type: Lambda
function - select lambda function - create method
○ Deploy this via browser: go inside the api - Deploy API (Top right corner) - Stage:
new stage - stage name: prod - give some description - deploy
○ After deploying, you will get the invoke URL
○ Copy paste in browser to verify the same output in browser
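The same check can be done from a terminal; the invoke URL below only shows the shape, not a real endpoint:
# call the deployed stage; the response should match the console Test output
>>>curl https://abc123xyz0.execute-api.us-east-2.amazonaws.com/prod/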
● Deletion
○ Delete stage
○ Delete API
○ Delete lambda function
○ Delete IAM Role
○ Delete cloudwatch loggroup (Cloudwatch -log- loggroups)
● Lambda Function
a. Serverless computing service
b. Run code without managing server
● Setup
a. Create bucket - name: event-lambda-projects - default - disable encryption -
create
b. SNS - Create topic - standard - name: myeventtopic - create topic
■ Go inside topic - create subscription - protocol: email
■ Confirm email subscription in mail
c. Lambda - create function - name: mylambdafunction -runtime: Python(Latest) -
create function
■ Create event - name: myemailtest - save
■ Paste code - change arn of your sns
■ Deploy (left side)
■ Test - Got error (No permission for S3)
■ Add permission(Trigger) - 2 ways to create a trigger
● Go to S3 bucket - Properties - event notification - event name:
myevents3 - Object creation: Put - leave others default -
destination: Lambda function -Choose from lambda function -
create
OR
● Lambda - go inside lambda function - add triggers - select S3 -
choose bucket - All object create events - acknowledge and add -
confirm by refreshing lambda
d. Communication between the 2 services is still pending - create role
● IAM - roles - mylambdafunction-role-zdnj - Add permissions -
SNSfullAccess and S3fullAccess - add
● Note: This role is a default role which was already created by
lambda when we created that function. So no need to manually
add this role to lambda.
e. Upload a file in S3 and check if you are getting the notification through SNS (see the verification sketch at the end of this list)
f. Log groups
● Cloudwatch - log events - log groups - check logs
g. Deletion:
● Delete buckets and objects
● SNS
○ Delete topic
○ Delete subscriptions
● Delete lambda function
● Delete IAM role
● Delete cloudwatch event logs
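For step (e) above, a quick CLI way to fire the event and watch the result (the bucket name follows the notes; the log group name assumes Lambda's default /aws/lambda/<function-name> naming):
# upload any test object to trigger the S3 event notification
>>>aws s3 cp test.txt s3://event-lambda-projects/
# tail the function's logs to confirm it ran (aws logs tail needs AWS CLI v2)
>>>aws logs tail /aws/lambda/mylambdafunction --follow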
Workflow
1. Create amazon S3 bucket
a. Create bucket - name: amazon-s3-bukket-lambda- create
b. Upload a test object
2. Create a policy in IAM
a. IAM - policies - create policies - paste json script - next - name: s3-trigger-
ptutorial - create policy
3. Create role
a. IAM - role - create a role - use case: Lambda - choose policy which we created -
role name: lambda-s3-trigger-role - create role
4. Create lambda function
a. Lambda - create function - author from scratch - name:s3-lambda-trigger-function
-attach role - create function
b. Paste code - deploy
5. Create amazon S3 trigger
a. Lambda - go inside lambda function - add triggers - select S3 - choose bucket -
All object create events - acknowledge and add - confirm by refreshing lambda
6. Test with dummy event
a. Lambda - go inside lambda function - create event - event: json: paste code
(change AWS region, S3 bucket name(line23), object key (go inside bucket -copy
key) line 30) - save - Test (in the test, event page not code)
b. Go to cloudwatch - log groups - check log(log will be created for the S3 file)
c. Every time a file is uploaded in the S3 bucket, a log is created (this is what the
code is for)
Deletion
1. Delete lambda function
2. Delete role
3. Delete bucket
4. Delete cloudwatch logs
5. Delete policy
20-02-25 VPC
IP address
1. Unique identifier of a system/server
2. Internet protocol address helps to communicate via Internet
3. Two version:
a. IPv4 - 32 bit
b. IPv6 - 128 bits
4. IPv4 - 2 Types
a. Network ID
b. Host ID
c. Each byte got 8 bits - totally 4 bytes - So 4 x 8 = 32 bits
5. IPv4 range - (0 to 255)
Architecture:
STEPS:
1. Purchase VPC
a. AWS - VPC - create VPC - name:myvpc - IPv4 CIDR: 10.0.0.0/16 - Tenancy:
Default - create VPC
2. Divide this network into two
a. VPC: 10.0.0.0/16
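A minimal CLI sketch of dividing the 10.0.0.0/16 network into two subnets (the /24 CIDRs, VPC ID and availability zones are illustrative assumptions):
>>>aws ec2 create-vpc --cidr-block 10.0.0.0/16
>>>aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24 --availability-zone us-east-2a
>>>aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.2.0/24 --availability-zone us-east-2b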
16. Deletion
a. EC2 instances
b. NAT gateway
c. VPC
d. Elastic IP
21-02-25 CI/CD
● A practice developers use to release software faster with higher quality and fewer errors
● Automates: coding -> testing -> deployment
Continuous Integration (CI) -> Helps integrate code done by many developers into one shared
codebase
Benefits of CI
● Catch bugs early in the development process itself
● Ensure that code is always in working state
● Helps reduce time spent on manual testing
CD (Continuous Delivery)
● Automate the process for preparing the code for deployment
Benefits of CD
● Code is ready to be deployed
● Speeds the release with minimal manual work
Continuous Deployment(CD)
How does CD work?
● Post testing: Once code passed all the tests, it will automatically be deployed to the
production environment
● Monitoring: The deployed application is monitored using tools to make sure it is working as
expected
● Rollback if needed: If issues are detected, an automated rollback to a stable version is performed
Benefits of CD:
● Faster delivery of new features - bug fixes
● Developers get immediate feedback
● Reduce manual work: human error reduced
CI/CD steps:
● Source stage: Detects code changes in the repository
● Build stage: Compile code and build analysis
WORKING:
1. GITHUB
a. Clone the repository and do some changes
>>>git -v
>>>git clone "github-link"
>>>pwd
>>>cd aws-elastic-beanstalk-express-js-example
>>>ls
>>>vi app.js
b. Do some changes in the code
c. Open terminal in VS code
>>>git add .
>>> git commit -m "changes made"
>>>git push
DELETION
● Delete application
● Code pipeline
● S3 bucket
● Code build
● Queue
● Sns topic and subscriber
24-02-25
CD/CP Project
● CD -Continuous Deployment
● CP -Continuous Pipeline
● Services used in this project
○ EC2 (2 machines - one for developer to write code and one for production)
○ S3 (Acts as repository, like github in the previous project)
○ CD - code deploy
○ CP - code pipeline (Automates)
○ SNS - notification
○ CloudWatch - monitoring
IAM - set up roles/permissions
STEPS:
CD STEPS:
1. Create 2 IAM roles
a. EC2 - S3
b. CD -Role
2. IAM user
a. Developer
3. 2 EC2 server
a. Developer machine
b. Production machine
4. Configure developer to the dev machine (So that developer can write code in it)
5. Install the CD agent in the production machine (because CodeDeploy needs its agent to be running in the
production server)
6. Sample code in dev machine
7. S3 bucket
8. Code deploy application on developer machine (Dev machine -> S3)
9. Deployment group (prod server) -> Destination location
10. Deployment (Pick code from S3 bucket) -> Pickup location
11. Test my output
9 and 10 steps are given to CD
CP STEPS
1. Create code pipeline
2. Change source code
3. Zip file
4. Cp file to S3
5. Check for output
Workflow:
1. Create 2 IAM roles
a. EC2 - S3
i. AWS - IAM - Roles - Rolename: EC2-S3 - Use caseEC2 - S3FullAccess -
create
b. CD -Role
i. AWS - IAM - Roles - Rolename:cd-role - Use case: Code deploy - default
permission - create
2. IAM user - Developer
a. AWS - IAM - User - User name: joshna-developer - next - attach policies directly
- S3FullAccess and CodeDeployFullAccess
b. Give User CLI access to configure ec2 machine as a developer (>>>aws
configure): Go inside the user - create access key - get credentials
3. 2 EC2 server
a. Developer machine
i. Dev-Machine - Amazon linux - 30GB - launch instance
b. Production machine
i. Name - additional tags - Name: AppName - Value: SampleApp
ii. Amazon linux - t2.micro - select default security group - 30 GB -
Advanced details: attach IAM instance profile: EC2-S3-Role - Launch
instance
4. Configure developer to the dev machine
a. Open putty - Hostname: public ip address of Dev-machine
b. Connection- SSH-Auth-Credentials: upload ppk file - open
c. Accept - ec2-user - Logged in
i. >>> sudo su -
ii. >>> aws configure
iii. Paste Access key, secret key, region, enter
5. Install CD agent in production machine
a. Connect Prod Machine using Putty: Open putty - Hostname: public ip address of
the Prod machine
b. Connection- SSH-Auth-Credentials: upload ppk file - open
c. Search in google: installing CD agent
(https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-
operations-install-cli.html)
i. >>> sudo su -
ii. yum update -y
iii. sudo yum install ruby (because the CodeDeploy agent is written in Ruby)
iv. sudo yum install wget -y
v. wget
https://aws-codedeploy-us-east-2.s3.us-east-2.amazonaws.com/latest/
install
vi. ls -> give execute permission to install because we need to execute
vii. chmod +x ./install
viii. sudo ./install auto
ix. systemctl status codedeploy-agent (make sure you get active and
running)
x. If you get error, start the service and check again - systemctl start
codedeploy-agent
xi. vi httpd_start.sh
#!/bin/bash
systemctl start httpd
systemctl enable httpd
xii. vi httpd_stop.sh
#!/bin/bash
systemctl stop httpd
systemctl disable httpd
xiii. ll - we notice that these three install, start and stop sh files don't have
execute permission. So give execute permission to these files
xiv. chmod 755 * -> give execute permission to all files
xv. ll - check if these files got execute permission
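A CodeDeploy bundle also needs an appspec.yml at the root of the app directory; a minimal sketch that reuses the two httpd scripts above (the index.html layout is an assumption):
vi appspec.yml
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  ApplicationStop:
    - location: httpd_stop.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: httpd_start.sh
      timeout: 300
      runas: root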
7. S3 bucket
a. Create public S3 bucket: AWS-S3-Create bucket - name: gir-sampleapp-24 -
uncheck block all public access - ACLs enabled - disable encryption - enable
bucket versioning - create bucket
8. Bring Code deploy application on developer machine (Dev machine -> S3)
a. Go to developer machine
b. Go inside sampleapp directory(where source code is available): Create a code
deploy application:
i. >>>aws deploy create-application --application-name sampleapp
ii. You would get the application ID as the output
iii. Check if it is there in Code Deploy AWS (Can directly create application
through GUI or CLI as we did here)
c. How to bring source code from ec2 machine to S3 bucket using the
application
i. >>>aws deploy push --application-name sampleapp --s3-location
s3://gir-sampleapp/sampleapp.zip
ii. sampleapp.zip - refers to the application name we created in the dev
machine(html file)
iii. Now the files in ec2 dev machine will be uploaded in the s3 machine as a
zip file
iv. Go to AWS - buckets: Go inside your bucket and check if you see these
files in zip format
9. Deployment group (prod server) -> Destination location
a. AWS - Code deploy - Go inside your application - create deployment group -
group name: mycdgrp - Attach service role: cd-role - check Amazon EC2 -
Production server - Matching instance: 1 - disable load balancer - create
deployment group
10. Deployment (Pick code from S3 bucket) -> Pickup location
a. Go inside mycdgrp - Create deployment - Amazon S3 - Select bucket location -
select zip file - create deployment
b. Note: First time deployment is a manual process
c. Note: We might get an error: Too many instances are running. Delete instances
and leave console for 5-6 hours
d. If no error: Success
11. Test my output
a. Paste IP of code deploy in browser
b. Check for output
Code Pipeline
● AWS - CodePipeline - create new pipeline - Category: Build from custom pipeline -
Pipeline name: mypipeline Execution mode: queued - Service role: New service role -
next - source provider: s3 bucket - select bucket name and object key (Go inside object
and copy object key) - next - skip build stage - skip test stage - deploy: Code Deploy -
Application name: sampleapp - choose ec2 group name - review and create pipeline
● Wait for pipeline to succeed
Check working
● Change code in index.html
● Come outside sample app
● Zip the sampleapp file
○ >>>zip -r sampleapp.zip .    (the "." means all files inside the directory)
● Push the zipped file into s3
○ >>>aws s3 cp sampleapp.zip s3://gir-sampleapp24
● Check pipeline
● Go to S3 - toggle show versions (see both versions)
● Paste IP of code deploy in browser
● Check for output
Deletion
● Code pipeline
● Code deploy - applications - delete
● Delete cloud trails
● Cloud watch - delete logs and rules
● EC2 instances
● Delete user
● Delete roles
● Delete policies
● S3 - empty buckets and delete them
25-02-25 - PROJECT 9
Terms:
● Requester VPC: The VPC that initiates the peering request
● Acceptor VPC: VPC that accepts the peering request
● CIDR: Groups IP addresses together; the ranges of the two VPCs must not overlap
● Route tables: Update the route tables of both VPCs so they know how to reach each other
● DNS resolution: Private DNS names to avoid unwanted confusion
● Transitive Peering (Not supported): We need to bring in direct connection. Transitive
communication is not possible.
ZONE B
1. Create VPC in Mumbai and give IP address - 20.200.0.0/16
a. AWS - VPC - VPC only - name: vpc-b - IPv4 CIDR: 20.200.0.0/16 - create VPC
2. Create 2 subnets - Public and Private
a. Select VPC - create subnets- create subnets - subnet name: pubsub-b -
availability zone: us-east-1a / Mumbai- IPv4 subnet CIDR block - 20.200.10.0/24
b. Add new subnet - subnet name: privsub-b - availability zone:us-east-1b /
Mumbai- IPv4 subnet CIDR block - 20.200.20.0/24
3. Create IGW and attach to VPC
a. Internet gateways - name: myigw-b - create internet gateway
b. Attach myigw-b to the VPC
4. Create 2 route tables (1 for public subnet and 1 for private subnet)
a. VPC - Left side - route table - Name: pubrt-b- Select VPC - create route table
b. VPC - Left side - route table - Name: privrt-b - Select VPC - create route table
5. Set up internet gateway(IGW) for the public route table (Connect IGW and public
route table)
a. Go inside the public route table - down click on edit for routes
b. IGW (0.0.0.0/0) and choose IGW (This is for customers to connect to our
application) - save
6. Subnet Association
a. Connect the public route table through subnet association
i. Go inside the public route table - down, click on edit subnet association
ii. Click on associate with the public subnet
b. Connect the private route table through subnet association
i. Go inside the private route table - down, click on edit subnet association
ii. Click on associate with the private subnet
7. NAT - Network Address Translator (To connect private subnet through the public
subnet)
a. VPC - left side - NAT gateways - create NAT gateway - name: mynat-b - subnet:
select public subnet - Elastic IP address Allocation ID: Allocate - create
b. Check Elastic IP - it will be active
c. Wait for NAT status to be available
8. Connect NAT and private subnet
a. VPC - Go inside the private route table - down, click on routes - edit routes
b. Destination 0.0.0.0/0 and choose the NAT gateway (this lets the private subnet reach the
internet through the public subnet) - save changes
9. Security Groups
a. Create security group - name: pubsg-b - Description: public security group of B-
select myvpc - (Inbound rules) Add rule - ALL TCP - 0.0.0.0/0
b. Create security group - name: privsg-b - Description: private security group of B-
select myvpc - (Inbound rules) Add rule -Type: rdp - Source type:
custom:10.100.20.0/24 (IP of private subnet in ZONE A)
c. Add rule : Type: All ICMP Source: 10.100.20.0/24
10. Create Application - 2 EC2 machines
a. EC2 - Instances - Launch an instance - name: pubec2-b- RedHat - Network
settings: edit - VPC: select myvpc - select public subnet - Auto assign public IP:
Enable - key: pem - Select existing security group: pubsg-b - Launch Instance
b. EC2 - Instances - Launch an instance - name: privec2-b -RedHat - Network
settings: edit - VPC: select myvpc - select private subnet - Auto assign public IP:
Disable- key: pem - Select existing security group: privsg-b - Launch Instance
c. PUT CODE IN ADVANCED SETTINGS:
#! /bin/bash
yum install httpd -y
service httpd start
echo "Hello all from $(hostname) $(hostname -i)" >
/var/www/html/index.html
S3 - private and public bucket, Versioning and reverting back to old version
URL, groups(2 groups ), users(4 users) - single user in 2 groups and shuffle like that, create
user with multi factor , customize policies, attach role, user -> URL and CLI
User-mfa-
Purchase a domain
https://www.hostinger.com/
11-02-2025
1. Create budget
2. Create cloud trail check bucket
3. Create queue and test
4. Create SNS (Add 3 subscribers: 3rd one using phone)
5. Cloud watch - Detailed documentation and screenshots
6. Follow documentation and finish project (metric from outside)
Customize the script from github for any 3 services and bring iaas
Tasks - day 11
1. Create db - Take replica - Run 10 unique queries -Check for output
2. Snapshot - delete db - export to s3 - delete snapshot - restore from s3
3. Project
1. Using AWS documentation link
Detailed documentation - for 2 & 3
TASKS - DAY 12
● Create
22.0.0.0/16
Pub - 22.0.30.0/24
Private - 22.0.40.0/24
Day 16 - CD/CP
● Add test in CP (Manual Approval)
● Enable cloud watch and setup notifications for CD and CP
● Go to Code deploy - selection mysampleapp - top, create notification for all stages
● Go to code pipeline and set notifications
● SS - success and failure notification
VPC Peering
● Pub - Homepage
● Priv a - Login
● Priv b - Dashboard
CAPSTONE PROJECT
● If 3 tier architecture, multi tier and full stack -must implement VPC peering
Mandatory
● VPC is mandatory
● SNS
● Database
● Route53 (domain name)
● Create using Cloudformation (scripts) - attach scripts
● Own architecture diagram
● POC
By today:
● Architecture diagram
● POC
Project 5
● Scalable wordpress deployment with auto healing and load balancer
● EC2, RDS, EFS, ALB, Auto Scaling, R53, cloudformation, cloudwatch - healing and
high availability.
Today
● Put code in github
● Detailed documentation
Team
Joshna
Ragavi - manager
Yogesh
Janardhan
Harini
Shalwin
Joshna
Ajay
Janani
Bhargav
Joshna
Yogesh
Topics
● Reverse proxy
● 3 tier architecture
● CIDR blocks and IP addressing