
Notes

Difference:
1.apt, apt-get
apt-get is the lower-level, "back-end" tool and supports other APT-based tools and scripts.
apt is designed for end users (humans); its output format may change between versions.

2.update, upgrade
Software updates (patches) modify an existing program. Upgrades replace a program with its
next major version.
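For apt specifically, these commands look like the following; a minimal sketch on Debian/Ubuntu (nginx is only an example package):

sudo apt update              # refresh the package index (list of available versions)
sudo apt upgrade             # install newer versions of packages already installed
sudo apt install nginx       # apt: human-friendly output for interactive use
sudo apt-get install nginx   # apt-get: stable output, better suited to scripts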

Web Search
● Nginx
● OWASP Top 10
● SANS Top 25

Users in Linux
Owner (user/root), group, others

Modes in Linux
Read -4, write-2, execute-1 (total-7)

Flavours of Linux
● Fedora - red hat family
● Ubuntu
● Debian
● CentOS (till version 8), CentOS Stream (after version 8)
● OpenSUSE, Arch Linux - let you build your own version of Linux
● Linux Mint - desktop distribution with built-in media tools
● Gentoo - highly customisable variant, optimised at the hardware level
● Slackware - The oldest distribution of Linux. Very simple, highly customisable
● Alpine Linux - Lightweight distribution, highly secure. Recommended for highly containerized
applications
● Kali Linux - Used for ethical hacking

Difference between Windows and Linux

Commands

1.sudo apt update


sudo - run with elevated privileges
apt - package manager on Debian/Ubuntu
update - refreshes the package index (the list of available package versions)

2.touch
Creates an empty file
3. ps -ef | grep nginx
Lists processes
grep searches the output for a pattern (here: nginx)

4.cat
Gives/displays content in a file
cat > file.txt - type content into it, then press Ctrl+D to save and exit

5.top
Shows running processes and their resource usage, updated live

6.echo
Print content
Put content in a file - echo "hello" > file1.txt

7.vim
Create/edit a file with a CLI editor (vim, vi, nano)

8.whoami
Tells the current logged-in user

9.pwd
Tells current directory

10.curl
Fetch the content of a URL (prints it to the terminal)

11.wget
Fetch a URL and download/save it as a file

12.man
Tells purpose of a command (eg; man grep)

13.chmod
chmod 500 abc.com
read and execute - 4+1=5. Now the owner (root) has these permissions; group and others have no
permissions.
-rw-r--r-x - here the owner has read and write, group has read, and others have read and execute
permissions (6,4,5).
d as the first character represents a directory
- as the first character represents a regular file
Order of the permission sets: owner (root), group, others

14. ls -l or ll
Long listing
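A short sketch of how the numeric modes map onto the long listing (the file name is only an example):

chmod 645 notes.txt
ls -l notes.txt
# -rw-r--r-x 1 root root 400 Feb 4 20:14 notes.txt
# owner: 6 = read(4)+write(2), group: 4 = read, others: 5 = read(4)+execute(1)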

Linux
Announced - August 1991
Released - September 1991

Fast processing, community support

Major contributor of Linux Operating system


1.Kernel (boot c)
It is the lowest layer in the operating system

Two ways to start the system


Boot loader - the process that loads the kernel image via BIOS or UEFI

1.Via basic input output system (bios)


Gathers the basic information to start the process

2.UEFI (Unified Extensible Firmware Interface)

2.System User Space (Administrator Layer)


● Space where the admin user talks to the system (connects user and hardware)
● We’ll have configurations, software installations here
● Shell, Command interpreter are present here
● Here we'll have daemons (background processes/subsystems that make sure things
work - eg: a compatible Java version for Jenkins)

3.Application
Involves all user applications

Layers Architecture: (from top to bottom)


Application, processes etc
OS
Kernel Image
Boot loader
Bios or UEFI

Package managers for packages in Linux


● Redhat- uses yum or dnf
● Ubuntu or Debian - uses apt
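A minimal sketch of installing the same package on each family (nginx is only an example):

sudo yum install nginx -y   # RHEL / CentOS / Amazon Linux
sudo dnf install nginx -y   # Fedora / newer RHEL
sudo apt install nginx -y   # Ubuntu / Debian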

Libraries - Collections of software used by computer programs


Ubuntu - based on Debian
RedHat (RHEL) - derived from Fedora; CentOS was a downstream rebuild of RHEL. Components from Unix, commands from Linux
OpenSUSE - comes from the SUSE project

Unix OS List
● BSD or FreeBSD (Berkeley software distributions)
● Oracle Solaris (formerly Sun Solaris)
● AIX - A proprietary OS from IBM (high-end and mainframe systems)
● HP-UX - Proprietary OS for mainframe/high-end hardware from HP

macOS- based on BSD but heavily modified

Linux flavours
● Ubuntu(based on Debian) - most popular especially for those who are new to linux
● Fedora (Sponsored by RedHat) - Known for its cutting edge innovations
● Debian - Community driven project, very stable and it’s a foundation for many other
distributions(like ubuntu)
● RedHat Enterprise Linux(RHEL) - Distribution from RedHat designed mainly for
enterprise. Known for their long term support. They are known for their enterprise level
features (like security).
● CentOS - Free and open-source clone of RHEL; not all RHEL utilities are available
● Arch Linux - Highly known for its simplicity and customization. Can customize from
scratch (eg: manjaro)
● OpenSUSE- Comes with 2 main flavours 1. Tumbleweed -it is a rolling release 2. Leap -
This is a regular release
● Linux Mint - based on Ubuntu. It gives a polished, traditional desktop experience and
comes with built-in media tools.
● Gentoo - Source based distribution. It is highly customizable and it is optimized for user
specific hardware.
● Slackware - One of the oldest distributions, known for its simplicity and minimalism
● Alpine Linux - Lightweight distribution with security, simplicity and resource efficiency.
Popular for containerized applications.
● Kali Linux - Designed for digital forensics and penetration testing.

Linux System Access


Console
Remote
ssh -l username <ip address> (via PuTTY or cmd)

ssh -p 22 [email protected]

ifconfig
ip addr
File system
System used to manage files

File system structure


/boot - contains files used by boot loader (eg: grub.cfg)
/root- Represents root user home directory.
/dev - Shows the system devices(eg: speaker, keyboard, etc connected to the linux machine)
/etc - Contains all the configuration files of various applications
/bin -> /usr/bin - Contains the everyday user commands
/sbin -> /usr/sbin - Contains the system/file-system commands
/opt - Optional additional applications
/proc - Virtual files for all running processes. It lives only in memory (when you
restart, the data is cleared)
/lib -> /usr/lib - C library files used by commands and applications
/tmp - Directory for temporary files
/home - Home directories of regular users
/var - Contains the system logs
/run - Stores only temporary runtime files
/mnt - Mount point for external file systems
/media - Mostly for CD-ROMs; in virtual machines, the ISO image will be shown here

Linux file properties

Example of a long listing (ls -l) entry:

Type  Permissions  # of links  Owner  Group  Size  Month Date Time
-     rw-r--r--    21          root   root   400   Feb 4  20:14

-  -> regular file
d  -> directory
l  -> symbolic link (points to another file/directory)

Type of root:
1. Root account - acc/username
2. Root as (/) - root as file directory

Change password

sudo su -
passwd //change password for root user
passwd joshna //change password for a specific user
Create a file
touch file1
Add user
useradd test_user
chgrp test_user file1
The group of file1 will change to test_user

Similarly, chown -> change owner
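A minimal sketch putting the above together (run as root; test_user and file1 are the examples from these notes):

useradd test_user        # create the user
passwd test_user         # set that user's password
touch file1
chgrp test_user file1    # group of file1 becomes test_user
chown test_user file1    # owner of file1 becomes test_user
ls -l file1              # verify the owner and group columns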

Listing
ls - Lists files and directories
ls -l - Long list with details like permission, type of group etc
ls -lrt - Long list sorted by modification time, oldest first (reverse time order)
ls -a - Lists also the hidden files

Show available memory in MB, KB and GB


free -m
free -k
free -g

Display processes
ps -ef, top - Display all processes
ps -ef | grep joshna - display all the processes matching the name joshna
df -h - Shows file system disk usage (human-readable)
du -sh * - display the size of the files/directories in the current directory

Zip files
zip -r file3.zip file3
unzip file3.zip

Compress the file in place (no separate copy kept), producing a .gz extension


gzip file1
gunzip file1

Cloud
Features
1. On-demand resource provisioning (Scalability) - the services scale up and down automatically
according to the requirement

2.Global Availability
Regions - Specify a geographical location (Eg:mumbai)
Data Centers - Number of data centers available within the region (Eg: 3 data centers in
mumbai)

3.Secure, flexible, pay as you go

4. Saves storage and does not affect the application performance

Six different types of engineering


1. Server Engineering
2. Storage Engineering
3. Database Engineering
4. Security Engineering
5. Network Engineering
6. Application management Engineering

Physical server drawbacks


1. 1 server, 1 application - Only one application could run on one server
2. Resource wastage - Even when there is spare capacity, it cannot be fully utilised
3. To eradicate the above 2 problems, VMs were introduced. Even though resource wastage
is reduced, it made the application slow (eg: oracle ubuntu).
4. Then came cloud - Saves storage and does not affect the application performance

App Deployment
Static vs Dynamic

Layers(top to bottom)
Source code
Middleware
EC2 instance(hardware)

Middleware - Packages or product files required for the application to run
1. Web server - If application is static go with this
a. Apache - Works well with linux(free tier)
b. IIS(Internet information services) - Works well with windows
c. IHS - Chargeable, need license to use this(not free)
2. App server - If application is dynamic go with this
a. WAS(Websphere application server) - Chargeable
b. Weblogic - It is an oracle product. Chargeable
c. JBoss - Free tier

Windows - Server manager - Download applications


Linux - Package manager - Download packages

systemctl/service - This command will do 4 jobs


1. Start
2. Stop
3. Restart
4. Check status(Whether application is running)
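A minimal sketch of those four jobs with either command (httpd on Amazon Linux is only an example service):

sudo systemctl start httpd
sudo systemctl stop httpd
sudo systemctl restart httpd
sudo systemctl status httpd
# older syntax, same jobs:
sudo service httpd status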

Windows Application Deployment


● Create the ec2 instance with windows OS
● Connect to the EC2 instance with Remote Desktop Connection (Public IP address,
Name: Administrator, Password: decrypt the password in AWS using the key file (connect -> RDC
-> decrypt password))
● Windows->server manager-> roles and features-> next,next -> select webserver(IIS) in
server roles -> then install
● Copy paste source code folder from desktop to remote desktop
● Then put that folder inside the below directory
○ C:\inetpub\wwwroot\
● Copy paste the public ip address of the ec2 instance in the web and see the application
deployed

Linux Application Deployment


● Create an EC2 instance with Linux
● Open putty and put public IP address as Hostname
● Then in the left bar navigate to
○ Connection -> SSH -> Auth -> Credentials
● Upload private key as ppk
● Give user name and log in
○ >>>sudo su -
○ >>>yum install httpd
● httpd -> apache name in amazon linux
○ >>>service httpd status
○ >>>service httpd start

● Copy paste source code to linux - USE WinSCP


○ Uses SFTP - Secure file transfer protocol
■ Host name: AWS EC2 public IP address
■ Username: ec2-user
■ Password: ppk file
○ Now WinSCP will display both desktops (remote and my desktop)
○ Just drag and drop the source code folder
○ Now in the remote desktop, put the folder inside the following directory:
/var/www/html/
○ You will get Permission denied. This is because Linux only gives read
permission to regular users here, but we are trying to write. So we use chmod to
give all permissions on the path
■ >>>chmod 777 /var/www/html/
○ Now you would be able to copy paste the folder in that directory.
○ Copy paste the public ip address of the ec2 instance in the web and see the
application deployed
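The same Linux deployment, sketched as one command sequence (Amazon Linux; assumes the source folder was already copied over with WinSCP):

sudo su -
yum install httpd -y
service httpd start
chmod 777 /var/www/html/      # for learning only; far too permissive for real servers
# move the uploaded source folder into /var/www/html/, then open the instance's public IP in a browser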

Make sure to do this to the EC2 instances of both Windows and Linux
● This is done so that anyone with any IP address can come in
● EC2 -> security -> Edit inbound rules -> add rule ->HTTP 0.0.0.0/0

Customization on ec2 machine/vm

Static IP address (used by companies, chargeable) - The IP address will not change every time
the system restarts
Dynamic IP address (free, not preferred) - The IP address will change every time the system
restarts

Propagation time - high for a dynamic IP address, so a static IP address is preferred. Dynamic IP
addresses can take up to 72 hours to propagate.

Public IP address- connected to the router. So it (DHCP) assigns IP address dynamically


(Dynamic IP address)

Private IP address - not connected to the router. It has a static IP address.

Static IP address - Elastic IP(AWS)

1. Dynamic IP address to static IP address


2. Add volume, Modify volume - can only increase, not decrease, a volume; then mount it.
Volume - associate, release
3. Termination protection (in actions of an instance)
4. RAM (change instance type) - Actions -> Instance settings -> Change instance type -> t2.nano
-> Change. Similarly the processor can be changed.
5. Snapshot - Takes a copy of your volume(not entire machine) (in actions of an instance
or left bar)
6. AMI - Copy of entire OS and data (left bar)

Snapshot -> volume -> select -> create snapshot


Snapshot is cheaper than using volumes. Instead of running many volumes in the background for a long
period of time, we can take a snapshot of the instance, attach it to a new instance and
continue from where we left off.

Ami
Actions -> Instance settings->create image - >

Task: Take a snapshot or ami. Delete existing machine. Bring a new machine using ami or
snapshot. Check if data is still there.

Ami -> deregister


7.2.25 Load Balancing
Manages the traffic. Even a simple application can receive high traffic.
ELB - Elastic Load Balancer
ASG - Auto Scaling Group
Classic load balancer - works with a round-robin mechanism
Does 2 things
● Manage traffic
● Do a health check - load balancer pings the application, waits to get response. No
response after 2 pings? Then the server is down. Now a notification is sent to the
engineers team to check and fix this issue.

Application load balancer - path based routing


Network load balancer - protocol based routing
Gateway load balancer - IP based routing

Above 3 works well with microservices architecture

Monolithic -> one server for the entire application. Works well with classic load balancer
Microservices ->Split modules and assign servers for each. Split your application into multiple
servers.

Classic Load Balancer


1. Create load balancer
(Here application / server / EC2 instance all mean the same thing.)
AWS - Load balancers - Classic load balancer - Create - give a name - internet-facing - enable all the
availability zones (if one goes down, we have the other data centers to back us up) - security
group (inbound rules -> All TCP, delete the existing one) - Listeners and routing - listener
protocol (HTTP) - ping path (/index.html; change is needed) - health check - advanced health
check settings - response timeout (how long the LB waits for the server to respond): 5 - interval
(time between health checks sent to the server): 10 - unhealthy threshold (number of times the LB
pings the application before marking it down): 2 - healthy threshold (how many continuous responses I need
from the server to determine that it is healthy): 3
Create load balancer (this is an empty load balancer)

Autoscaling -> scales up and down automatically.

Cloud watch -> monitoring service. According to the seriousness of the application, we can
set a threshold on a machine's capacity (e.g. 70%).

● 3 machines default.Traffic high or low, 24/7 these machines will run.


● After this, I send some conditions to cloud watch(threshold)
● I gave a template to autoscaling - it has system configuration and source code
● Now when the threshold is met, cloudwatch informs autoscaling to create more
machines. Similarly when traffic decreases, the machines will be terminated.
Horizontal and vertical scaling
Horizontal scaling - it will create a new server
Vertical scaling- increase resources in existing server

Ec2 -> works with horizontal scaling


Database -> works in vertical scaling

2. Create launch template (this is a template for the autoscaling group to create ec2
whenever necessary)
ec2 -> Auto Scaling group -> Create auto scaling group - create launch template - (same steps as
creating an EC2 instance), give some template description - key pair - volume (10 GB) - select
existing security group (the ALL TCP one - not recommended, but fine for learning) - user data -
type the following

#! /bin/bash
yum install httpd -y
service httpd start
echo "Hello aLL from $(hostname) $(hostname -i)" > /var/www/html/index.html

(hostname -i) -> gives ip address of the machine


hostname -> prints the host name
> /var/www/html/index.html -> that content will be printed in this html file

3. Create auto scaling group


Name - attach template - version 1 - select all availability zones - attach to an existing load
balancer - choose from classic load balancers - select it by name - leave others by default - next
- configure group size and scaling - Desired capacity (2): the number of instances the group tries to
keep running at all times - Minimum capacity (1): the smallest number of instances the group can shrink
to - Maximum capacity (4): the largest number of instances the group is allowed to create - others leave
default - don't add notification as of now - next - create auto scaling group

If the desired capacity (2) is set but that number of instances does not exist, auto scaling will
automatically create them.

Copy paste dns address form load balancer in url


See the output -> every time you reload, you will get 2 different IP addresses because 2
machines are running at all times and the page prints each machine's private IP address.

Connection:The template is taken care of by the autoscaling group. We connected auto scaling
with load balancer. So all are interlinked. So load balancer automatically connects the ec2
instance.
Deletion: first delete the auto scaling group (ASG). Once you delete the ASG, the EC2 instances will
automatically be deleted. Next delete the template and then delete the load balancer.

OSI layers
● Physical layer- encoding signals, physical specifications
● Data Link layer- local address(communication within the system)
● Network layer- Global address(connect to different networks)
● Transport layer-Transmit data using the transmission protocols(TCP,UDP)
● Session layer- manage the connection
● Presentation layer- Encrypts, compress, encodes
● Application layer- Near to user in order to perform application service

Network layer(layer 3)
● IP -> transfer bits and bytes
● Unreliable -> informations are sent directly through the network (it is fast, but there is
high possibility of data loss)

Transport layer (layer 4)


● Highly reliable
● TCP - Transmission control protocol - highly reliable - makes sure that the data is
received properly in the right order
● TLS - Transport Layer Security - a secure layer on top of TCP (it encrypts data)
● UDP - Performance over reliability. Eg: video live streaming. It can glitch a little, but it is
very fast

Application layer (Layer 7)


● HTTP
○ Hypertext transfer protocol.
○ Stateless request response cycle
● HTTPS
○ Secure form of HTTP
○ Many certificates are there - SSL certificates etc
● SMTP
○ Simple mail transfer protocol
○ Responsible for sending emails
● FTP
○ File transfer protocol
○ Used to transfer files between machines

Application load balancer


Create dedicated directory for each service in the application
1. Create 2 ec2 instances.
User data -> type the below code
1. Home loan
#! /bin/bash
yum install httpd -y
service httpd start
mkdir /var/www/html/homeloan/
echo "This is my homeloan instance" > /var/www/html/homeloan/index.html

2. Plot Loan
#! /bin/bash
yum install httpd -y
service httpd start
mkdir /var/www/html/plotloan/
echo "This is my Plotloan instance" > /var/www/html/plotloan/index.html

Once created -> in the browser, open: public IP address/directory name


Check if you are getting output

2. Create 2 target groups


For the application load balancer, the target group will do the health check. Give it 2 pieces of information:
1. Application path
2. Server information(ec2 instance)
Load balancer -> target groups -> create 2 target for 2 ec2 instances
Instance - target group name (homeloan -target) - HTTP - IPV4 - healthcheck path -
/homeloan/index.html - select homeloan ec2 instance - add (make sure it is shown below)- then
create target group.

3. Create the Application Load balancer


Name: ALB - internet facing -ipv4 - default VPC -all subnets-default security group -80 - All
availability zone - select target group (homeloan) - create

Wait until you get available state(PROVISIONING->AVAILABLE)

Select the load balancer -Listener and routes - HTTP 80- Add rule - NAME: homeloan -Add
condition - Select path - Path: /homeloan* - forward to target group - target group -select
homeloan-target - Priority: 70(weightage to that particular path) - next- create
Do the same for plotloan

Web url:
1. load balancer url/homeloan
2. load balancer url/plotloan
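A quick way to check both paths from any terminal; a sketch where <alb-dns-name> is a placeholder for your load balancer's DNS name:

curl http://<alb-dns-name>/homeloan/index.html
curl http://<alb-dns-name>/plotloan/index.html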

Deletion
1. Delete load balancer
2. Delete target group
3. Delete instances

Screenshots- ALB
1. Output 2 ss (WEB)
2. Listeners and groups
3. Instances
4. Target groups
5. Bin bash script

Network Load Balancer

1. Create two instances nlb_instance1 and nlb_instance2 and add the following bash script
○ #! /bin/bash
yum install httpd -y
service httpd start
echo "Hello all from $(hostname) $(hostname -i)" > /var/www/html/index.html
2. Create a target group and add path (/index.html). Add the 2 EC2 instances to the target
group.
3. Create a network load balancer and add the target group to it.
4. Once the network load balancer is active, paste the dns of it to the web browser and the
output will be displayed.

10.2.25 S3 - Simple Storage Service


● Store and retrieve data
● EC2 already gives storage - 100 volumes per instance. 1 volume= 16 terabytes
● So why S3?
○ Unstructured format of data
■ Can store data of any format except .exe file.
■ Reason: Storing .exe is basically bringing in an application. But S3 is a
storage service and not a place to deploy an application. That is why they
don't allow it.
■ But if you really want - store as a zip file
○ Unrestricted/Unlimited
■ Can store any amount of data (Paid account)
■ But Storage for free tier - 5 GB data
■ There are numbers for default size
■ But no one has ever reached that default size because it is enough for all
companies.
○ Access through Internet - access from anywhere
■ Global service - lists servers from all locations in aws.
■ Access anytime anywhere as it is running on the internet.
○ Versioning(chargeable)
■ Versioning means keeping multiple variants of an object in the same
bucket.
■ You can use versioning to preserve, retrieve and restore every version of
the applications.
■ Eg: Imagine having 5 different application files (objects) with the same name
and you want to put them in a bucket. Since all of them have the same name,
it is not possible. That's when versioning comes into play.
■ When you enable versioning for a bucket. It will have a toggle saying,
“Show versions”
■ When you upload a file with the same name of an existing file it will be
uploaded. You will be able to view the latest version inside the bucket.
■ When you click on show versions, it will show all previous versions
■ The private object URL is the same for all the versions.
■ How does it work in real time?
● S3 is the controller of the source code
● EC2 will take the latest version
● The latest version will be used by the user
● If there is an error in the latest version, you can delete the latest
version in S3
● Automatically EC2 will switch to the previous version and it will
start running
● EC2 <—----------------- S3

For understanding (this is not exact)


Bucket - folder
Object - file

Functions
● Storage unit.
● Collections of objects.
● Single level container - contains multiple files/folders.
● Upload/download very easily.
● The name of the bucket must be globally unique. Access data in the bucket using URL
(unique one).
● Bucket creation(default size)
○ Bucket per account = 100 buckets
○ Bucket per region = 20 buckets
○ Can increase these limits via a request to AWS support
● Number of Objects that could be stored inside a bucket
○ Any number of objects could be stored in a bucket
○ Size per object - 5TB

● When do we use S3?


○ Current version of the application - stored in ec2
○ S3 - You can store previous version of the instances
○ General user data, project information etc

Creating a bucket in S3(Private)


● Search for S3
● Create bucket - general purpose bucket - general purpose - give a name - object
ownership (ACLs disabled) - Block all public access - Versioning(disabled) - disable
encryption (Bucket key)- Object Lock(disable) - create bucket
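Roughly the same bucket operations can also be done from the AWS CLI; a hedged sketch (the bucket name is a placeholder and must be globally unique):

aws s3 mb s3://my-example-bucket-12345              # create a bucket
aws s3 cp notes.txt s3://my-example-bucket-12345/   # upload an object
aws s3 ls s3://my-example-bucket-12345/             # list the objects in the bucket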

Creating a bucket in S3(Public)


● Do below things differently, rest same
● Bucket owner (ACLs enabled)- Block all public access(Uncheck)

Upload a file
● Open bucket - upload - add files - upload
● When you try to upload a file in a public bucket, it will ask for access control (Object level
permissions)
● This is because it is a public bucket and sometimes you don’t want the public to access
all the files in the bucket. Eg: A file(object) with key pairs information.
● Give public access - then it will ask for encryption(don’t encrypt as of now) -
Upload
● (Reason: when you create a file as private, only you can read it; but when you encrypt it, the
people with the key can access that file). Create an encryption key and give the decryption key
to the necessary people who you want to be able to access the file.
● PUBLIC FILE - Open the object(file) - Private object url will be present - paste in
browser - access denied
● Reason: You are accessing the object using its URL in a web browser. AWS won't know that it is
the admin accessing through the browser.
● PRIVATE FILE - Open the object(file) - Private object url will be present - paste in
browser - access denied
● Reason: Because it is a private file. There is an open button in AWS itself. Through that
only you can access the file. This open button is only enabled for the owner and disabled
for others.

DELETION
● Delete object
● Delete bucket

Object Lock(chargeable)
○ Stores using Write once read many(WORM)
○ Helps prevent objects from being deleted or overwritten by someone for a fixed
amount of time or indefinitely.
○ Object Lock works only in versioned buckets.
○ Log file stored with object lock - To ensure that no one changes it.

Encryption(chargeable)
○ SSE-S3 - S3 server-side encryption (AWS encrypts the data and handles decrypting it)
○ SSE-KMS - Key Management Service (encryption keys managed manually)
○ DSSE-KMS - (manually managed encryption with a double verification step)

IAM - Identity and access management


● Root user - owner of the account/ unrestricted access
● IAM user - Restricted access

Use separate browsers for creating alias and groups

1. URL
a. IAM -> create alias (right side bar) ->name: proj name, team name and my name
->Sign in url for IAM users in this account
b. Name format for alias
i. Client name - module - environment (test, production, stage) - eg: hdfc-
cc-stage
2. Group
a. user group -> create user group -> name: ec2_admin(meaning: admin will have
full access to the ec2 instances) -> Permission policies: Search ec2 - select
AmazonEC2FullAccess -> create.
b. Now we have created a group. In it, we can add users.
3. Users
a. IAM -> users -> create user -> Provide user access to the AWS management
Console(check that box) -> I want to create an IAM user ->autogenerated or
custom - admin@123 -> check the box "Users must create new password at next
sign-in" -> next
b. Set Permissions: Add user to group - check the group -> click next -> create user
c. Retrieve password: Will have info about console sign in details(sign in URL,
username, password). Save it
d. Go to that link - give username and password - change password (Password
Reset) - logged into AWS console as a user
e. Changes made by the user will be reflected in the root user’s account. So in
general the users are given very little permissions.
4. Policies
a. Scenario : Task given by client to Ec2_admin: create the same level of ec2
instances as S3 storages. But this ec2_admin has access only to ec2 instances
when he goes to S3, it shows access denied. So now the ec2_admin asks the overall
admin for access. Now the overall admin creates a policy (Read - list
buckets) and associates it with the ec2_admin user, or if needed associates the policy with a
group.
b. Create policy:IAM - Policies (left bar) - create policy - S3 - actions allowed (List -
list buckets) - resources(all) - next - name:s3-listbuckets -add description - create
c. Associate policy with user: User - Permissions - Add permissions - attach
policies directly - Filter(Custom managed) - Select s3-listbuckets - save
5. Roles
a. Create role: IAM - roles - create role - select s3-listbuckets (customised policy) - give
a name - create
b. Role is assigned to a service(EG: assigned to ec2 server)
c. In order to differentiate, create 2 instances and we can associate role to one
instance. - ec2withoutrole, ec2withrole
d. One instance - advanced details - IAM instance profile - ec2-s3role - launch
instance
e. Another instance - don’t associate any role.
f. Check ec2withrole instance -> click on connect -> connect and bring remote
desktop directly
g. >>>aws s3 ls //lists the buckets in aws
h. Create a bucket (separately, in the AWS console)
i. Will get output in ec2withrole instance and get error message in ec2withoutrole

Access Key(CLI Login)


● Now I want output from ec2_withoutrole. So we use access key to configure the user
● IAM- users -click user - security permissions - Access key - accept -next - you will get
info such as key ID and access key
● >>>aws configure
● Give the Key ID and access key from above
● Default region name: check your console region and give its code (eg: us-east-2)
● >>>aws s3 ls (see the prompt sketch after this list)
● Deletion: IAM is free, still delete
○ Delete bucket
○ Delete roles(IAM - roles)
○ Delete policy(filter : custom managed)
○ Delete user
○ Delete group
○ Delete URL
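A sketch of what the aws configure prompts look like for the access-key step above (values are placeholders):

$ aws configure
AWS Access Key ID [None]: <key ID from the step above>
AWS Secret Access Key [None]: <secret access key>
Default region name [None]: us-east-2
Default output format [None]: json
$ aws s3 ls     # should now list the buckets visible to that user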

Random how to do
Attach role after creation of instance
Attach role - check instance- actions -security -modify IAM role

11-02-25 Setup Budget


1. Configure an account - Create an alias
2. Create a budget
1.Configure an alias account
● Create alias
● Click on the profile
● Go to account
● Click on Bahrain region - click enable (through this way we can enable and block region)
● Click account profile - Billing and cost management - Billing preferences - alert
preferences - edit receive aws free tier alerts - update email address (here we will
receive mail for alerts )
● Invoice delivery preferences -edit - activate (we will get pdf about how much hours of
free tier we have exhausted and other relevant details)

2. Create a budget
● Click account profile - Billing and cost management - budgets - create budget - use a
template - monthly budget - budget amount: $3 - email: [email protected] - create
● You will receive notification when
○ When your actual spend reaches 85% of the budget
○ Your actual spend reaches 100%
○ If your forecasted spend is expected to reach 100%
● Billing and cost management - cost explorer - new cost and usage report (analyse how
much you spent in the past month through visualizations)

Setting up Monitoring and notifications

● SQS - Simple Queue Service


○ One to one communication
● SNS - Simple notification service
○ One to many communication
○ Highly used
● Cloud watch
● Cloud trail

Basic info about SNS and SQs


● Works with A2A, A2P, S2S, S2P, S2A (A - application, P - person, S - service)
○ Left side - always A or S, never P. This is because it is a fully managed message
queuing service by Amazon. Eg: a notification when the charge goes below 20%.
○ All these notifications come automatically from a service or application and not a
person, so the left side is always A or S and not P.
● Send, save, receive between software components at any volume.
● Message information: subject, body, metadata, timestamp
● Queue -> refers to a buffer or a temporary location where the message stays until the
receiver picks it up (like a mailbox) [send/receive]

Two modes of communication


● Synchronous - Both sender and receiver must be active (eg: calls)
● Asynchronous - Sender is always active but it doesn't care whether the receiver is active
(eg: WhatsApp) —> SQS and SNS

SQS: Two types of queue


● Standard - Non sequential manner of communication (Eg: Whatsapp)
○ You send a video and text one after another. But text is received first
○ So size matters when it comes to standard queuing
● FIFO - Sequential manner of communication (FIFO)
○ Follows order in which the data was sent

Notification workflow
● Sender -> sends a message/notification to the queue (checks if the receiver is available); if yes,
it is sent to the receiver —---> receiver
● Sender —---> sends a message to the queue (checks if the receiver is available. If not, it waits for
a specific amount of time (however long we set up). After that it sends the message to the dead letter
queue (DLQ) (chargeable service)
● If DLQ is not there, then the message will automatically be deleted
● DLQ - DLQ also checks if the receiver is available or not. If available, it sends the
message. If not active, it can wait for a retention period(14 days). After that the message
gets deleted
● DLQ can only hold 1000 messages. After 1001 message, the 1st message will be
deleted and the last message will join.

Hands on: Creating SQS - Sending message to myself


● SQS - create queue - standard(queue) - name:myqueue - configuration: visibility
timeout: 45 seconds ; message retention period: 4 days ; max message size: 256 KB ;
delivery delay and receive message wait time : 0
○ visibility timeout: time the queue will keep the message and wait for the
receiver to be available
○ message retention period: how long the DLQ will have the message
○ delivery delay: Delay the message sent to the receiver
○ receive message wait time:
● encryption (disabled) - access policy (1st is receiver and 2nd is sender) - DLQ (disabled) -
remaining all disabled - create queue
● Go inside the SQS(myqueue) - send and receive message - type something - send
message - scroll below - click poll for messages (To view messages in queue). - you will
receive your message (don’t open message until it reaches 100%)

SNS - Pub/sub messaging


● Sender - publisher
● Receiver - Subscriber
● Between them, we have Topic
● Topic
○ Replicates the message for however many subscribers are present (as the
message is published only once)
○ Message formatted according to the subscriber’s format (Eg; mail format, SQS
format, etc)

HANDS ON - CREATING SNS

● CREATE TOPIC: Search for sns - create topic - name:ec2-Team - standard -


description: This topic has been created for EC2 team to track the instances - leave
other as default - create topic
● We have created a topic. Now we will be creating subscribers.
● CREATE SUBSCRIPTION: Go inside the topic - create subscription - protocol: Email -
Endpoint: [email protected] - create subscription
○ Status: pending confirmation (Have to confirm email manually, go to mail and
confirm)
● Create another subscriber with protocol : myqueue(queue name) - create
○ Status: no need to manually confirm as SQS is a service within AWS
● Go inside ec2-Team - Publish message to the topic EC2-Team
● Email will be sent but SQS message we didn't receive even after polling
● Reason: Have to create a role for services to communicate with each other in AWS.
Create a role and add it to the queue(SQS).
● Create role - Service : SNS - Amazon SNS role -sns-role - create role
● Go inside the role, there will be something called ARN role
● Create a new queue(SQS) - standard(queue) - name:myqueue - others default - Access
policy: Define who can send messages to the queue: Only the specified AWS accounts,
IAM users and roles Only the specified AWS account IDs( IAM users and roles can send
messages to the queue) - copy paste the ARNrole -create queue
● Create another subscriber with protocol : SQS -EC2-MANAGER(queue name) - create
● Go inside ec2-Team - Publish message to the topic EC2-Team
● Email will be sent but SQS message we didn't receive even after polling even now
● Reason: Have to mention the topic for SQS as it is a one to one communication service.
Have to mention from which topic the message is coming from.
● Go inside queue ec2-manager - subscribe to amazon sns topic - choose the arn - save
● Go inside ec2-Team - Publish message to the topic EC2-Team
● Email will be sent but SQS will also be sent after polling
● HOW TO CHECK FOR MESSAGE IN SQS: Go inside the SQS(myqueue) - send and
receive message - type something - send message - scroll below - click poll for
messages (To view messages in queue). - you will receive your message (don’t open
message until it reaches 100%)
● You will receive a message in SQS in JSON format
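A hedged CLI equivalent of publishing to the topic and polling the queue (the topic ARN and queue URL are placeholders):

aws sns publish --topic-arn arn:aws:sns:us-east-2:123456789012:ec2-Team --subject "Test" --message "Hello from SNS"
aws sqs receive-message --queue-url https://sqs.us-east-2.amazonaws.com/123456789012/EC2-MANAGER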

Cloud Trail
● Continuously log your AWS account activity
● Why use it?
○ Auditing Purpose
○ Used to identify Unauthorised access
○ Identify misuse
○ Troubleshoot
● Already when you go to cloudtrail-event history, you can see the logs
● Then why set up trail?
○ Event history shows you the last 90 days of management events
○ For auditing, we need information/logs from the past 1 year
Setting up trail
● Cloud trail - dashboard - create trail - name: mylogs - let it create a bucket automatically
- uncheck encryption - uncheck log file validation - cloudwatch (uncheck) - log events:
check management events - create trail
● Go inside cloud trail(mylogs) - AWS logs - Account id - CloudTrail
● You’ll get output after cloud watch is enabled (next task)

Cloud watch
● It is a monitoring tool.
● Highly used by Support/delivery teams.
● Monitor the infrastructure and trigger alarms - by this we can troubleshoot and fix issues.
● It is a chargeable service
○ Basic - Free, monitors resources once every 5 minutes
○ Detailed - Paid, monitors resources every 1 minute

HANDS ON - MINI PROJECT


● Create an ec2 instance with amazon linux name mini-project
● Check instance - below monitoring - 14 metrics will be seen - change time to 1 min -
manage detailed monitoring - enable -confirm (additional charges will come)

How does it work?

● EC2, CW, SNS, Email, SQS


● I'm telling CW to monitor the EC2 instance
● When CPU utilization goes above 80%, CloudWatch alerts SNS

● Cloudwatch - create alarms - metrics - paste ec2 instance ID - select metrics(CPU


utilization) - Graphed metrics (1) - select metrics - Conditions: Greater/Equal than 0 - next
- In alarm - select SNS - for notification (ec2 team) - next - Alarm name: Alarm
triggered:::::: Warning Alert::::: Immediate Action Required - Description: Machine reached
capacity of 80%. Adjust the configuration. - next (OR)
● Cloudwatch - metrics - paste ec2 instance ID - select metrics (CPU utilization) - Graphed
metrics (1) - go inside it - in CPU utilization, on the right side there is a bell symbol in actions
- click on it - Conditions: Greater/Equal than 0 - next - In alarm - next - Alarm name:
Alarm triggered:::::: Warning Alert::::: Immediate Action Required - Description: Machine
reached capacity of 80%. Adjust the configuration. - next

Alarm states:
OK - below the limit
Insufficient data - going to reach the limit
In alarm - reached the limit

● Go to cloudwatch - alarms (alarm will be triggered any minute. First it will show
insufficient data) - STATE: In Alarm
● Check alarm details in EC2 - STATE: In Alarm
● Will get mail and SQS from SNS
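The same alarm can also be sketched with the CLI, using the 80% threshold from the scenario (the console walkthrough above uses 0 so it triggers immediately; the instance ID and topic ARN are placeholders):

aws cloudwatch put-metric-alarm \
  --alarm-name "Warning-Alert-CPU" \
  --namespace AWS/EC2 --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0abc123def456 \
  --statistic Average --period 60 \
  --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-2:123456789012:ec2-Team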

Deletion
● Delete cloud trail
● Delete bucket and its contents
● Delete alarm
● Delete Instance
● Delete SNS-Role
● Delete topic
● Delete subscriptions
● Delete SQS queue
● Create an EC2 Instance
● Create an EBS volume and attach to the instance.
● Create SNS topic for Notifications
● Add subscribers for the SNS topic
● Create Cloudwatch Alarm for EBS Volume of any Metric and Select the SNS topic.

How-to Monitor EBS Volume Performance by Generating an AWS CloudWatch Alarm

Documentation:

● Create an ec2 instance with amazon linux name mini-project


● Check instance - below monitoring - 14 metrics will be seen - change time to 1 min -
manage detailed monitoring - enable -confirm (additional charges will come)
● For the instance, go to storage and get the volume ID
● Cloudwatch - create alarms - metrics - paste volume ID - select metrics(Volume Queue
Length) -Graphed metrics(1) - select metrics - Conditions: Greater/Equal than 0 - next -
In alarm - select sns - for notification (ec2 team) - next - Alarm name: Alarm triggered::::::
Warning Alert::::: Immediate Action Required - Description: total number of read and
write operation requests waiting has reached the threshold- next
● Go to cloudwatch - alarms (alarm will be triggered any minute. First it will show
insufficient data) - STATE: In Alarm
● Check alarm details in volume of EC2 - STATE: In Alarm
● Will get mail , SMS and SQS from SNS to take immediate action

12-02-25 IaC - Infrastructure as Code


● Terraform - available for free (By HashiCorp)
● Same service in AWS- CloudFormation - uses Json/Yaml
● Why code instead of GUI?
○ Automate easily(update all at once)
○ Customise existing templates according to requirement.
○ Unlike GUI, where you’d have to change the entire design.
● Code tech stack used in terraform
○ HCL - Hashicorp Configuration language

HANDS ON
● Cloudformation - stacks -create stack
● Prepare template
○ Choose from existing template (select this)
○ Build from infrastructure composer (Create a template using visual builder)
■ Infrastructure composer - Resource: search for ec2 instance and search
keypair
■ Then you can change the yaml code as needed.
● Specify template
○ Amazon S3 URL
○ Upload a template file
○ Sync from Git
● Change ami id (of amazon linux) and ssh key name(key pair password) when you use
existing template (line 11 and line 15)
● Run the template using command prompt
○ >>>aws --version
○ If it is not recognised, install the AWS CLI from the documentation as an MSI, run the
installer and install it.
(https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
● Below is the command to create a stack in cloud formation and upload a template from
local
● >>>aws cloudformation create-stack --stack-name joshna --template-body
"file://give/full/path/to/infra-1.yml" --parameters "file://give/full/path/to/parameters.yml"
● Now our desktop windows CLI does not know where to create this stack. So we get the
below error,

● IAM - users - create user - Permissions: vpcfullaccess,cloudformationfullaccess,


ec2fullacess - create user
● Create access key for this user
● In desktop CLI ,
○ >>>aws configure
■ Enter access key ID, secret Access key, region name: us-east-2, enter
● Now a stack will be created in your account.
● Search for VPC - A VPCwould have been created
● Search for EC2 - an ec2 machine will be created
● Now using the code/template, a VPC and an EC2 instance have been created.
● This is building Infrastructure as a code (using code).
● CLI command delete
○ >>>aws cloudformation delete-stack --stack-name joshna

CREATION AND DELETION OF STACK COMMAND


● aws cloudformation create-stack --stack-name akshivlab-ec2 --template-body
"file://D:/Documents/Learning Materials/AWS/A-Square Technologies Docs/CloudFront
& CloudFormation Doc/infra-1.yml" --parameters "file://D:/Documents/Learning
Materials/AWS/A-Square Technologies Docs/CloudFront & CloudFormation
Doc/parameters.json"
● aws cloudformation delete-stack --stack-name akshivlab-ec2

CODE FOR THE TEMPLATE(infra.yml)

LINKS TO CREATE SAMPLE TEMPLATES

CloudFront - CDN(Content delivery network)

● Latency - response time


● Scenario: First, I had customers only in India. Now I have customers from all over the
world. Now the latency for people outside India will be high. In order to tackle this issue, I
should have data centers all over the world. But for that the expenditure will be
exponentially high.
● How to solve this issue? Using cloudfront.
● Use cache server: Instead of creating data centers all over the world, I enable cache
servers all over the world.
● Cache server works only with static application
● A cache server works by temporarily storing copies of frequently accessed data, like web
pages or files, so that it can quickly deliver this data to clients without retrieving
it from the original source every time. It essentially acts as a middleman that speeds up
access by reducing network load and improving response times. When a client requests
data, the cache server checks whether it has a copy stored: if so, it delivers it immediately (a
"cache hit"); if not, it fetches the data from the original source and then stores it in the
cache for future requests (a "cache miss").

13-02-25 Route53

● DNS is globally unique


● Translates domain names into IP addresses
● Service used for domain name system- Route53
● Route53 is a global service
● 53 - DNS port number

● I will give the domain name to Route53 and Route53 will give me back the IP address - 2-
way handshake

HANDS ON
What are we doing?
● Create EC2 instances
● Create a domain (Route53: hosted zone)
● Add record in it with simple or weighted policy
● Wait for it to be active
● Check if domain works through CLI and browser

AWS STEPS:
● Go to AWS - create 3 ec2 machines - make sure that the security group has all TCP -
Add the below script in the advanced settings: user data
○ #! /bin/bash
○ yum install httpd -y
○ service httpd start
○ echo "This is my Route53 Application" > /var/www/html/index.html
● Route53 - get started - dashboard
○ DNS management - create hosted zone - domain name: joshnaavsha.site -
public hosted zone - create
● Hostinger - domain - manage - leftside (DNS/Nameservers) - Change nameservers -
Copy paste value/route traffic from (Hosted zones in Route53) in Hostinger
● Simple routing policy
○ Route53 - hosted zones - joshnaacsha.site - create record - paste public ip
address of any one instance in value field - Record type: Simple routing policy -
create record
○ Remember: Every time you restart the instance, the public IP address
changes( dynamic IP address). So make sure to change the value in the
record every time.
○ Here, we can create only one record (as it is a simple routing policy)
○ Wait for the domain name status to change from pending -> INSYNC
○ Now the domain name will work. But it may take up to 24 hours sometimes
○ Open browser and search for the domain name
● Weighted routing policy
○ Can host more than 1 record
○ Route53 - hosted zones - joshnaacsha.site - create record - paste public ip
address of any one instance in value field - Record type: Weighted routing policy
- weight: 80 - create record
○ Remember: Every time you restart the instance, the public IP address
changes( dynamic IP address). So make sure to change the value in the
record every time.
○ Like this create another record with weight 20
○ Wait for the domain name status to change from pending -> INSYNC
○ Now the domain name will work. But it may take up to 24 hours sometimes
○ Open cmd
○ >>>nslookup kloudevops.online
● Routing policy: Geolocation - route based on location
○ Route53 - hosted zones - joshnaacsha.site - create record - paste public ip
address of any one instance in value field -Add location (United States) - record
ID (Oregon) - create record
○ Create another record with location as India, mumbai
○ When you use the command,
■ >>>nslookup kloudevops.online
■ You get this output
● Non-authoritative answer:
● Name: kloudevops.online
● Address: 3.128.168.224
■ This means, you are routed to the instance in mumbai( India) and not
routed to the instance in the United States.
● Routing policy: Latency based
○ Scenario: There are 2 servers, one in Mumbai and the other in America. Where
do you think you will be routed? You will be routed to the instance which has
less traffic, so the latency is lower.
○ Route53 - hosted zones - joshnaacsha.site - create record - paste public ip
address of any one instance in value field Select policy: Latency - select region
(mumbai) - create record
○ Create another with location as hyderabad
○ When you use the command,
■ >>>nslookup kloudevops.online
■ You get this output
● Non-authoritative answer:
● Name: kloudevops.online
● Address: 3.128.168.252
■ Which means I am routed to the Hyderabad server. Meaning hyd server
is free.
■ If mumbai is free, we will be connected to mumbai server. Nearby or far
doesn't matter.
● Failover policy
○ There will be 2 instances.
■ Primary and secondary server
■ If the primary server fails, the secondary server will come into play.
○ Create health check for the primary server instance - Name: - then configure the
details
■ Protocol: HTTP
■ Ip address
■ Domain name
■ Port: 80
■ Path: /index.html
○ If health check fails, send notification through sns(disable for now) -create
○ Wait to get status : healthy
○ Create record - paste ip address of primary instance - Policy: Failover - select
health check id - create
○ Create another record with secondary instance ip address - Policy: Failover - no
need health check id - create
○ You can check that using
■ >>>nslookup kloudevops.online
■ You get this output
● Non-authoritative answer:
● Name: kloudevops.online
● Address: 3.128.168.124
■ Which means the primary server is currently running

○ Health check changes to unhealthy. So server will be changed to secondary
server ip address
○ You can check that using
■ >>>nslookup kloudevops.online
■ You get this output
● Non-authoritative answer:
● Name: kloudevops.online
● Address: 3.128.168.252
■ Which means the instance has been changed to secondary server
NS - name server
SOA - Start of Authority (record)
Routing policy
1. Simple routing policy - when you use only one webserver (mostly not used in
companies)
2. Weighted routing policy - More than one webserver could be added

Elastic Beanstalk (abbreviated EBS in these notes; not the same as Elastic Block Store)


● End to end web application management
● PaaS - Platform as a service
● Create application - web server environment - Application name: whatsapp-env -
Customize environment as needed - add description - Managed platform (Custom
platform is not available in free tier) - Platform (Tomcat) - Platform branch (Tomcat 10) -
Platform (5.4.3) - Sample application (or upload your code) - Preset (Single instance) -
Role: Create a new service role - Role name: change if needed - Choose key pair - EC2
instance profile: (Open new tab -> go to AWS -> IAM -> Role -> Use case: EC2 ->
Permission: AWSElasticBeanstalkWebTier -> next -> Role name: ELB-Role - create role) -
select it (ELB-Role) - next - leave as default - Root volume type: General Purpose
3 (SSD) - Uncheck IMDSv1 (because we want version 1 and not 2) - next - next - review
and submit.
● What will it create?
○ S3 storage
○ Security group
○ It will create elastic IP address (COSTS) for EC2 (static IP address)
○ EC2 instance
○ And many more
● Wait for Health -> OK and success message
● Open Domain
● What if I want my code and not the sample code?
● Download sample tomcat war file
● Click on upload and deploy(top right corner)
● Select that war file
● Upload
● Open the domain
DELETION
● Delete application (EBS)
● Delete policy
● Delete s3 bucket

14-02-25 Cloudfront application deployment project

1.Creating react project



Region: Northern Virginia

Download Node.js and npm

Create a folder named “react-app” in desktop

Open cmd inside the react-app folder
○ >>>npm install -g create-react-app
○ >>>npm install [email protected] //update to latest version
○ >>>npx create-react-app demo-app
○ >>>npm run build
2.AWS STEPS
● Location: Northern Virginia
● Non WWW
○ AWS - S3 - buckets - bucket name: joshnaacsha.site - Private bucket
(Block all public access) - disable encryption - create
● WWW bucket
○ AWS - S3 - bucket name: www.joshnanacsha.site - Public bucket
(uncheck block all public access) - disable encryption - create
○ Go inside www bucket - Upload folder of source code
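An alternative to uploading the build folder through the console is the CLI; a sketch (the bucket name is a placeholder for your www bucket, and it assumes the React build from step 1):

cd demo-app
npm run build
aws s3 sync build/ s3://<www-bucket-name>/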

WWW BUCKET
● Go inside www bucket - edit bucket policy - put the below code
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::http://www.kloudevops.site/*"
]
}
]
}
○ This policy is to get all the objects with no restrictions (like all types of
file : html, js etc)
○ Have to replace arn with your bucket’s arn. arn: get from properties in the
bucket
● Go inside the www bucket - properties -scroll all the way down - edit - enable -
hosting type: host static website - index document: index.html

NON WWW BUCKET


● What we are doing in the below step is that,
○ Whenever someone types joshnaacsha.site we are redirecting them to
www.joshnacsha.site
● Go inside the non-www bucket - properties - scroll all the way down - edit - enable - hosting
type: redirect requests for an object - host name: www.joshnaacsha.site - protocol: http

3.ROUTE53

● Route53 - get started - dashboard


○ DNS management - create hosted zone - domain name: joshnaavsha.site -
public hosted zone - create
● Add name server to domain provider: Hostinger - domain - manage - leftside
(DNS/Nameservers) - Change nameservers - Copy paste value/route traffic from
(Hosted zones in Route53) in Hostinger
● Record for www bucket: Create record - switch wizard (top right corner) - next - simple
route - define simple routing - record name: www - route traffic: Alias to S3 website
endpoint - region: northern virginia - select the www S3 bucket - target health: no - save
● Create a record for non www as well: define simple record - record name: empty -
select S3 for endpoint - region: northern virginia - choose non www S3 bucket - target
health: yes - save

4.Request Certificate - for https


● Location: Virginia (Certificate only available in Virginia)
● AWS Certificate manager - certificates - request certificate - Domain name: add
www.joshnaacsha.site and joshnaacsha.site
● Others default - Request
● Status: Pending validation
● Go inside the certificate - Create record in Route53 - check both sites(www and non
www) - create record
● Confirm that the records are created in the route53 records dashboard
● Wait for Status: Issued. Might take up to 30 minutes.
● Don’t go to next step until you get issued and success

Confirm output via browser: joshnaacsha.site


It will redirect to www.joshnaacsha.site
But protocol is still not https
For that we have to create a distribution

5.Creating a distribution

FOR WWW
● AWS - cloudfront - create a cloudfront distribution - origin domain: (S3 - wwwbucket -
properties - scroll all the way down - static website hosting - url will be present - copy
and paste here)
● Default cache behavior - viewer protocol policy: Redirect HTTP to HTTPS
● Settings - add name: www domain name for the cname -Attach certificate - leave others
default - create distribution

FOR NON WWW


● AWS - cloudfront - create a cloudfront distribution - origin domain: (S3 - bucket -
properties - scroll all the way down - static website hosting - url will be present - copy
and paste here)
● Default cache behavior - viewer protocol policy: Redirect HTTP to HTTPS
● Settings - add name : domain name without www (joshnaacsha.site) for the cname -
Attach certificate - leave others default - create distribution

Confirming through browser - protocol changed from http to https


● In distribution dashboard - status: enabled
● Open www distribution - copy domain name and open in browser
● Now we have https
● But the domain name changed. It is not joshnaacsha.site
● So do the below steps

6. Change the CloudFront distribution in the records - www and non www records


● Go to route53 - www.joshnaacsha.site - edit record - ENDPOINT: Instead of S3, select
the cloudfront distribution
● Go to route53 - joshnaacsha.site - edit record - ENDPOINT: Instead of S3, select the
cloudfront distribution
● Reason: We gave S3 bucket to cloudfront and connected S3 to record through
cloudfront.
● Wait for status: INSYNC

Confirmation that the site works


● Confirm output via browser: joshnaacsha.site
● Correct https, domain name
● Ctrl+Shift+I -> Network tab

WORKING:
Words in the diagram: (because it is not clear)
● Kloudevops.site
● Hostinger
● Route53
● Cloudfront
● S3
● Source code folder
● Application

DELETION
● Cloudfront - disable (takes a long time)
● Then delete distribution
● Then delete certificates
● Route53 - delete records

Blue Green Deployment


● Deployment strategy to reduce downtime

● Create an Elastic Beanstalk environment for wa (same as before)


● Clone the environment of blue server
○ Go inside wa - actions - clone environment - clone - status: OK
● Update green server (wa-1)
○ Connect ec2 instance of wa-1 - CLI
■ >>>cd /var/www/tomcat
■ >>>ls
■ >>> rm index.html
■ >>>vi index.html
● Put some content in this file to see the changed output
■ Paste the domain in the browser and check if the page content has changed
○ Now we updated
● Swap URL of green server(wa-1 to wa)
○ wa-1 -Actions - swap environment domain
○ Domain will be swapped
● Deletion
○ Go inside application - delete
○ Delete policy in bucket
○ Delete the bucket

17-02-24 Database (Chargeable service)


● Platform as a service(PaaS)
● It is a container filled with information/data which is electronically stored in a
server/computer.
● Used to store customer data.
● Store data in various formats in DBMS: Types of DBMS:
○ Structured format
■ RDS - schema based storage
○ Semi structured format
■ Type of structured data but does not have table format
■ It will follow some structure
■ Eg:MongoDB, email, json etc
○ Unstructured format
■ No structure
■ Manual upload
■ eg: hadoop, tableau
● Collection of data.
● Data can be of any format - text, image, video, audio etc
● This is chargeable. S3 is free. Why not use S3 for everything?
○ Better usage of CRUD operations -> read, update, insert, alter, drop, delete etc
■ Eg: Customer data (login details) are verified and updated when
necessary.
○ Handles customer’s sensitive information - So DB must always be in the
private network. (public network will have homepage and other things the
public can view)

● Mandatory:
○ Enable accidental termination
○ Must have username and password
● Works on vertical scaling
● What data is stored in the database?
○ Application data
○ User data
● Our database must always have a replica DB (secondary database).
● These 2 databases must always be in sync with each other.
○ How is the sync done?
■ Automatically by AWS
■ No manual intervention required
● This replica is placed in a different region (For safety. If the databases in one region
goes down the database in another region can take position)
● Database Permission:
○ It only has read and write permission
○ No execute permission
● The replica will only have read permission
○ Whatever data written in the primary db is automatically replicated in the
secondary database.
○ So there is no need for write permission
● When the primary db goes down and the secondary db takes position as primary db
○ It will automatically get write permission as well.
● Is database the same as DBMS?
● DB - Container to store data
● DBMS - Software used to manage the database.

How was data stored before?


● FFD (Flat file database) -> stored in a simple structure (.csv, .txt, etc.)
● Hierarchical DB/ Network DB ->Storing complex data
How is data stored now?
● RDS - Relational database - widely used: Oracle, MySQL
● Non - relational database - widely used: mongodb

Relational database
● Used to store structured data
● SQL
○ Structured query language
○ Developed by IBM
○ Declarative language - because they maintain well defined standards
○ Invented in the 1970s

Creation
● Create private network: RDS - manage relational database service - left dashboard
(subnet groups) - Create DB subnet group (take default network as private network as of
now - bc we did not learn VPC yet.) - name: pvt-network - Description: Network for DB -
VPC (select default) - select us-east-2a and 2b -select 2 subnets - create
● RDS - databases - create database - Standard create - MySQL - Templates: free tier -
Db instance identifier(used to identify db in RDS area): primary-db - admin - self
managed - password: admin123 - DB size: db.t3.micro - Additional Storage: uncheck
autoscaling - Don’t connect to EC2 instance - select default VPC - subnet group: pvt-
network(network u created above) - Public access: Yes - VPC: existing - default (make
sure you have All TCP or MYSQL enabled in security group) - availability zone: us-east-
2a (replicated db must be in another availability zone-us-east-2b. But as of now, we are
not replicating the db as it is chargeable. In real time, we will replicate)- Additional
configuration: initial database name: mydb(this name is used to connect via instance) -
enable automated backup - backup retention period: 1 - Backup Window: no preference
- Uncheck encryption (needed in real time, now chargeable) - Check backup replication -
uncheck maintenance (Used to update mysql automatically - like weekly once or based
on preference) - no preference - uncheck enable deletion protection - create database.
Can take up to 10 minutes. Status: available
● Create an instance in same region - storage 30 GB
● Open db - connectivity and security: endpoint (this is the point where the read and write
take place)
● Connect the ec2 instance - username: root - connect
○ Terminal opens
○ >>>mysql --version
○ >>>yum install mysql -y
○ >>>mysql --version
○ >>>mysql -h <paste endpoint here> -P 3306 -u admin -p
■ -h refers to host
■ -u refers username
■ -p refers to password
■ -P refers to port
○ Now you will be connected with mysql db
○ Open workbench in desktop
■ Connection name: myapp-db
■ Hostname: <paste endpoint>
■ Username: admin
■ Password: admin123
■ Port: 3307
○ You can run queries through the desktop application or the remote linux machine
(we connected in both)
○ Application
■ >>>Create database movies;
■ >>>Show databases;
○ Remote Linux
■ >>>Create database author;
■ Try to create a table, add rows, alter, update, delete etc (a sample set of queries is sketched after this list)
○ Now check for replica
■ Go to database in aws
● Actions - create read replica - create read replica - DB instance
identifier:replica-db - 20 GB - uncheck autoscaling - Additional
configuration:port number: 3307 - create read replica
■ ROLE:
● name: replica-db Role: Replica
● name: primary-db Role: Primary
■ You will get replica under primary db
■ The replica gets its own endpoint (here exposed on port 3307), separate from the primary
■ Same username and password for replica and primary
■ Delete primary-db - Role:Instance
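For the "try to create a table" step above, a minimal set of MySQL queries you could run from either Workbench or the remote Linux session (the table and column names here are only illustrative, not part of the class exercise):

    USE author;
    CREATE TABLE books (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(100),
        year_published INT
    );
    INSERT INTO books (title, year_published) VALUES ('Sample Book', 2024);
    UPDATE books SET year_published = 2025 WHERE id = 1;
    ALTER TABLE books ADD COLUMN genre VARCHAR(50);
    SELECT * FROM books;
    DELETE FROM books WHERE id = 1;
    DROP TABLE books;

Running only SELECTs against the replica's endpoint is a quick way to confirm its read-only role.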

AURORA DB(chargeable)
● Cluster based db
● Has separate endpoint for read and write operations
○ This will result in faster operations
○ Performance will be high
● Create aurora
○ RDS - create database - standard - aurora(mysql) - Templates: Dev/Test - DB
cluster identifier: aurora - username: admin - password: admin123 - Aurora
Standard - db.t3.medium - Don’t create replica - pvt-network - default VPC -
disable everything - create db
● Role:
○ A Cluster and the writer instance will be created.
○ When you go inside you will get 2 endpoints. Type: Writer and Reader endpoints
○ In mysql only one instance will be created

Stopping db
● What to do if you want to stop your db and still want to retain existing data
● DB - can be stopped temporarily, but AWS automatically restarts it after 7 days (storage is still chargeable while stopped)
● What to do?
○ Take a snapshot (less charge but if you don’t even want that do the below 2
steps)
○ Exports in Amazon S3
○ Move to archive mode in S3
○ Archive mode in S3 will hardly incur any charges
○ Similarly you can restore from S3 (RDS - database - restore from S3)
Deletion
● Delete db
● Delete subnet group
● Delete ec2

18-2-25 EFS
● EFS - Elastic file system
● Shared storage that works only on linux
● Uses NFS - Network File System protocol
● Default port number- 2049
● Mount shared volume in all the machines - create in one machine and this file will be
mounted to the remaining machines
● It will not be mounted to the entire machine, instead only to a shared directory (Eg:
sending link of a shared folder)

● Not used much, instead we use 3rd party tools

Creation:
● Create 2 ec2 machines - create a security group (Add rule:NFS, SSH) -number of
instances: 2
● Connect both the instances
● AWS - Elastic file system - Create file system - customize - name:myefs - regional (If
one data center goes down, another will come into position) - uncheck automatic backup
- uncheck encryption - Performance settings: go with default - next - use default VPC -
Availability zones: change to the security group we created
● Review and create
● Go inside the file system
● Mount -> mount via dns -> copy dns name
● EC2:
○ >>>sudo su - -> copy the command
○ >>>df -h
○ We need a dedicated share volume directory
○ So let us create a directory in both the instance
○ Instance 1
■ mkdir test
○ Instance 2
■ mkdir new
○ Paste the mount command copied from the console and, at the end, give the directory name (the full command is sketched after this list)
■ Eg: sudo mount <dns name>:/ new
○ Go inside the directory
■ >>>cd new/
■ Try to create a file inside the directory. And give ls in the other instance
■ The file will be there
■ So the same content is mounted in both the instances
● Deletion:
○ File system
○ Instances
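For reference, the mount command copied from the EFS console's "Attach" dialog (mount via DNS) generally looks like the sketch below; <file-system-dns> is a placeholder for your file system's DNS name and "new" is the shared directory created above:

    # mount options as shown in the EFS console; adjust the directory name per instance
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport <file-system-dns>:/ new
    df -h    # the EFS file system should now appear mounted on the 'new' directory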

Lambda (optional/enhancement service)


● Sleepyhead server - wakes up only when a request comes
● Serverless computing - So completely managed by Amazon
● Highly efficient and higher performance than EC2 instances
● You can directly deploy code; no need to manage servers
● There is no charge when your code is not running
● Scenario: Sending an image in whatsapp.
○ Person sends image -> The image will be compressed (code through lambda to
EC2) -> store in db -> received by the person
○ So, how does lambda know when to run the source code to compress the
image?
■ That is when Cloudwatch comes into place
■ We set up cloudwatch to trigger when an image is sent
■ Whenever a person sends an image, cloudwatch triggers lambda
■ Lambda will run the code to compress code
■ Now this compressed image will be sent to EC2
■ Then EC2 sends to the database
■ This prevents the code from running 24/7 and runs it only when needed.

○ Why can’t we use lambda to run primary source code if it is so efficient?


■ Primary source code needs to run constantly (Eg: every time a person
sends a message)
■ So cost will be higher than ec2 instance
● Creation
○ Lambda - create function - Author from scratch - function name: myfunction -
Runtime: Python(latest) - Architecture: x86_64 - attach role to function - create
lambda
○ Create role (for service to service communication): lambda_role
■ CloudWatchFullAccess
■ ec2FullAccess
■ VPCFullAccess
○ Go inside the function - test (bottom) - create an event - create a new event -
name: new_event - save
○ Go to code - You will get something like a vs code editor
○ Paste python code (a minimal handler sketch follows this list) - you will get output at the bottom
○ Left side - Click Deploy and then Test
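A minimal Python handler you could paste in the editor for the test above (the greeting text is just an example, not the code used in class):

    import json

    def lambda_handler(event, context):
        # 'event' is the test event payload; 'context' carries runtime info from Lambda
        print("Received event:", json.dumps(event))
        return {
            "statusCode": 200,
            "body": json.dumps("Hello from Lambda!")
        }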

TASK
● Create lambda and event (using the above steps, assign roles)
● Sprint - Splitting tasks into many
● CloudWatch/EventBridge - set a trigger to invoke Lambda on Sunday at 12 midnight to delete
resources (python code)
● Paste code in the editor (Lambda) - code to delete all the resources (a sketch is shown after this task list)
● Inside function -> configuration -> timeout: change from 3 sec to 15 min
● Create an empty CLB
● Run the code(It will take some time) Status: succeeded
● Once you get output, Left side - Click Deploy and then Test
● Now when you check CLB, it will be gone
● Now automate the process using cloudwatch
○ EventBridge - rule - name - Schedule - continue in EventBridge Scheduler -
recurring schedule - Cron based schedule (https://crontab.guru/)
■ Setup trigger after a min (for now to check output)
■ Check if you get output
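A rough sketch of the cleanup code for the task above, assuming the goal is only to delete Classic Load Balancers (extend the same idea for other resources). For the Sunday-midnight trigger, an EventBridge cron expression such as cron(0 0 ? * SUN *) (6 fields: minutes hours day-of-month month day-of-week year) fires every Sunday at 00:00 UTC.

    import boto3

    def lambda_handler(event, context):
        # Classic Load Balancers use the 'elb' client (ALB/NLB would need 'elbv2')
        elb = boto3.client("elb")
        names = [lb["LoadBalancerName"]
                 for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]]
        for name in names:
            elb.delete_load_balancer(LoadBalancerName=name)   # deletes each CLB found
        return {"deleted": names}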

Setting up API with Lambda proxy integration

API
● Middleman between user and backend
● Fully managed service - create, manage and secure APIs
● API - front door for application
● Free tier - can make up to 1 million API calls per month
● How does it work?
○ User sends request
○ API gateway receives the request
○ Forward to backend
○ API Processes the response from db
○ User gets data from API
○ Flow: User -> API gateway -> lambda EC2 ->API gateway ->user
● Why use API gateway?
○ API gateway also helps in authentication, monitoring, security etc
○ API is cost effective. It saves a lot of money when you use a serverless
connection (like lambda)
○ Helps with scalability
○ Rate limiting and throttling
● Types of API
○ HTTP API - low latency, cost effective, has built in features like OIDC (Open ID
connect), OAuth2 (Authentication protocol to approve one application to
communicate with another) , Native CORS (Cross Origin Resource Sharing - a
security mechanism that allows web pages to access resources from external
APIs while preventing malicious sites from accessing data without permission.)
○ WebSocket API (Eg:Chat application)
○ REST API (full control)
○ REST API Private
● Steps:
○ Create role -> lambda_role (Add Administrator Role) ->
○ Use the same lambda function (a proxy-style response sketch is shown after the deletion list below).
○ Left side - Click Deploy and then Test
○ API gateway ->REST API -> New API - name:myapi - create API
○ Go inside the API -> create method -> type: GET - Integration type: Lambda
function - select lambda function - create method
○ Deploy this via browser: go inside the api - Deploy API (Top right corner) - Stage:
new stage - stage name: prod - give some description - deploy
○ After deploying, you will get the invoke URL
○ Copy paste in browser to verify the same output in browser
● Deletion
○ Delete stage
○ Delete API
○ Delete lambda function
○ Delete IAM Role
○ Delete cloudwatch loggroup (Cloudwatch -log- loggroups)
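If the GET method is set up with Lambda proxy integration, the function has to return the response in the shape API Gateway expects; a minimal sketch (the message text is arbitrary, not the class's code):

    import json

    def lambda_handler(event, context):
        # With proxy integration, API Gateway passes the full request in 'event'
        # and expects statusCode / headers / body back.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": "Hello from API Gateway + Lambda"})
        }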

Lambda (Project Session) - Event driven messaging service


● Words in the diagram
○ Test file upload
○ S3
○ Trigger
○ Lambda function
○ Role(Permission)
○ SNS
○ Publish
○ End receiver (notification)
● S3
a. Object based storage service
b. Store and retrieve large amount of data
c. Highly scalable
d. Durable

● Lambda Function
a. Serverless computing service
b. Run code without managing server
● Setup
a. Create bucket - name: event-lambda-projects - default - disable encryption -
create
b. SNS - Create topic - standard - name: myeventtopic - create topic
■ Go inside topic - create subscription - protocol: email
■ Confirm email subscription in mail
c. Lambda - create function - name: mylambdafunction -runtime: Python(Latest) -
create function
■ Create event - name: myemailtest - save
■ Paste code - change the ARN to your SNS topic's ARN (a handler sketch appears after this setup)
■ Deploy (left side)
■ Test - Got error (No permission for S3)
■ Add permission(Trigger) - 2 ways to create a trigger
● Go to S3 bucket - Properties - event notification - event name:
myevents3 - Object creation: Put - leave others default -
destination: Lambda function -Choose from lambda function -
create
OR
● Lambda - go inside lambda function - add triggers - select S3 -
choose bucket - All object create events - acknowledge and add -
confirm by refreshing lambda
d. Communication between 2 servers is still pending - create role
● IAM - roles - mylambdafunction-role-zdnj - Add permissions -
SNSfullAccess and S3fullAccess - add
● Note: This role is a default role which was already created by
lambda when we created that function. So no need to manually
add this role to lambda.
e. Upload a file in S3 and check if you are getting notification through SNS
f. Log groups
● Cloudwatch - log events - log groups - check logs
g. Deletion:
● Delete buckets and objects
● SNS
○ Delete topic
○ Delete subscriptions
● Delete lambda function
● Delete IAM role
● Delete cloudwatch event logs
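For the "Paste code - change the ARN of your SNS" step above, a minimal sketch of the handler; the topic ARN below is a placeholder you must replace with your own:

    import json
    import boto3

    sns = boto3.client("sns")
    TOPIC_ARN = "arn:aws:sns:us-east-2:123456789012:myeventtopic"   # placeholder - use your topic ARN

    def lambda_handler(event, context):
        # S3 put events arrive under event["Records"]; publish one notification per uploaded object
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="New object uploaded to S3",
                Message=f"File '{key}' was uploaded to bucket '{bucket}'."
            )
        return {"statusCode": 200, "body": json.dumps("Notification sent")}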

Project 5: Amazon S3 trigger to invoke a lambda function


Steps:
● Create bucket - upload object
● Lambda function - create policy - create role - create function - deploy code
● Trigger - configure trigger
● Test
○ Test with dummy event
○ Test with trigger

Workflow
1. Create amazon S3 bucket
a. Create bucket - name: amazon-s3-bukket-lambda- create
b. Upload a test object
2. Create a policy in IAM
a. IAM - policies - create policies - paste json script - next - name: s3-trigger-
ptutorial - create policy
3. Create role
a. IAM - role - create a role - use case: Lambda - choose policy which we created -
role name: lambda-s3-trigger-role - create role
4. Create lambda function
a. Lambda - create function - author from scratch - name:s3-lambda-trigger-function
-attach role - create function
b. Paste code - deploy (a sketch of the function code is shown after this workflow)
5. Create amazon S3 trigger
a. Lambda - go inside lambda function - add triggers - select S3 - choose bucket -
All object create events - acknowledge and add - confirm by refreshing lambda
6. Test with dummy event
a. Lambda - go inside lambda function - create event - event: json: paste code
(change AWS region, S3 bucket name(line23), object key (go inside bucket -copy
key) line 30) - save - Test (in the test, event page not code)
b. Go to cloudwatch - log groups - check log(log will be created for the S3 file)
c. Every time a file is uploaded in the S3 bucket, a log is created (this is what the
code is for)
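The function code for step 4b is along the lines of the AWS tutorial's sample (a sketch, not the exact tutorial file): it fetches the uploaded object and logs its content type, which is what shows up in the CloudWatch log group.

    import json
    import urllib.parse
    import boto3

    s3 = boto3.client("s3")

    def lambda_handler(event, context):
        # Pull the bucket name and object key out of the S3 event record
        bucket = event["Records"][0]["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"], encoding="utf-8")
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE:", response["ContentType"])   # appears in the CloudWatch log group
        return response["ContentType"]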

Deletion
1. Delete lambda function
2. Delete role
3. Delete bucket
4. Delete cloudwatch logs
5. Delete policy

20-02-25 VPC

IP address
1. Unique identifier of a system/server
2. Internet protocol address helps to communicate via Internet
3. Two versions:
a. IPv4 - 32 bit
b. IPv6 - 128 bits
4. IPv4 address - 2 parts
a. Network ID
b. Host ID
c. Each byte has 8 bits - totally 4 bytes - so 4 x 8 = 32 bits
5. IPv4 - each octet ranges from 0 to 255

CIDR - Classless inter domain routing


1. It is a collection of IP addresses
2. If I have a company and I have 100 machines, I’d want 100 continuous IP addresses
a. Like from 1.1.1.0 - 1.1.1.99
b. This won’t just be available, you have to buy as a CIDR block
c. Link to calculate - https://www.ipaddressguide.com/cidr
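A quick way to sanity-check the CIDR math (block size, first/last address, whether a subnet fits inside the VPC) with Python's standard ipaddress module; the example blocks match the VPC project below:

    import ipaddress

    vpc = ipaddress.ip_network("10.0.0.0/16")
    print(vpc.num_addresses)            # 65536 addresses in the /16 block
    subnet = ipaddress.ip_network("10.0.10.0/24")
    print(subnet.num_addresses)         # 256 addresses (AWS reserves 5 per subnet, so 251 usable)
    print(subnet[0], subnet[-1])        # first and last address: 10.0.10.0 and 10.0.10.255
    print(subnet.subnet_of(vpc))        # True - the subnet lies inside the VPC range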


PROJECT 6 - VPC(Chargeable)
● No network = no cloud
● VPC - Virtual private cloud
● VPC gives extra privacy and extra security
● Dividing this VPC network - subnets (200 subnets)
● Why subnets?
○ According to different modules in the project, we’d need subnets
○ Eg: Homepage as a separate subnet (in public network) and Payment in
separate subnet(in private network)

Architecture:

STEPS:
1. Purchase VPC
a. AWS - VPC - create VPC - name:myvpc - IPv4 CIDR: 10.0.0.0/16 - Tenancy:
Default - create VPC
2. Divide this network into two
a. VPC: 10.0.0.0/16
b. First and last IP - get from the IP address range


d. Public and Private subnet range must be different (Like 10.0.10.0/24 and
10.0.20.0/24)
3. Create subnets
a. Subnets- create subnets - subnet name: pubsub - availability zone: us-east-2a -
IPv4 subnet CIDR block - 10.0.10.0/24
b. Add new subnet - subnet name: privsub - availability zone: us-east-2a - IPv4
subnet CIDR block - 10.0.20.0/24
PUBLIC SUBNET
4. Create internet gateway and attach to VPC
a. VPC - internet gateway
b. Actions - attach to VPC
5. Create 2 route tables (1 for public subnet and 1 for private subnet)
a. VPC - Left side - route table - Name: pub-rt - Select VPC - create route table
b. VPC - Left side - route table - Name: priv-rt - Select VPC - create route table
6. Set up internet gateway(IGW) for the public route table (Connect IGW and public route
table)
a. Go inside the public route table - down click on edit for routes
b. Add route - Destination: 0.0.0.0/0 - Target: the IGW (this is what lets customers reach our
application) - save
7. Subnet Association
a. Go inside the public route table - down click on edit subnet association
b. Click on connect with public subnet
PRIVATE SUBNET
8. Connect private route table through subnet association
a. Go inside the private route table - down click on edit subnet association
b. Click on connect with private subnet
9. NAT - Network Address Translator (To connect private subnet through the public
subnet)
a. VPC - left side - NAT gateways - create NAT gateway - name: mynat - subnet:
select public subnet - Elastic IP address Allocation ID: Allocate - create
b. Check Elastic IP - it will be active
c. Wait for NAT status to be available
10. Connect NAT and private subnet
a. VPC - Go inside the private route table - down click on routes - edit routes
b. Add route - Destination: 0.0.0.0/0 - Target: the NAT gateway (this gives the private subnet
outbound internet access) - save
SECURITY GROUPS
11. Security Groups
a. Create security group - name: pub-sg - Description: public security group - select
myvpc - (Inbound rules) Add rule - ALL TCP - 0.0.0.0/0
b. Create security group - name: priv-sg - Description: private security group -
select myvpc - (Inbound rules) Add rule - ALL TCP - select public security group
CREATE APPLICATION
12. Create Application - 2 instances - Home Page (Public Subnet) and Login Page(Private
Subnet)
a. EC2 - Instances - Launch an instance - name: homepage - Windows 2022 Base -
Network settings: edit - VPC: select myvpc - select public subnet - Auto assign
public IP: Enable - Select existing security group: pub-sg - Launch Instance
b. EC2 - Instances - Launch an instance - name: loginpage - Windows 2022 Base -
Network settings: edit - VPC: select myvpc - select private subnet - Auto assign
public IP: Disable- Select existing security group: priv-sg - Launch Instance
CHECK INTERNET CONNECTION
13. Connect to both the instance and check for the internet connection.
a. >>>ping google.com (check for the internet connection)
b. Create html files for login and home page
CONNECT TO PRIVATE INSTANCE
14. How to connect to the private instance? (No Public IP address)
a. Connect to public instance, in that open RDC and connect to the private instance
(login page) through the private IP address
b. >>>ping google.com (check for the internet connection)
15. For every subnet, 5 IPs are reserved by Amazon (so in a /24, only 251 IPs are available for us)

16. Deletion
a. EC2 instances
b. NAT gateway
c. VPC
d. Elastic IP
21-02-25 CI/CD
● Practice developers use to release software faster, with higher quality and fewer errors
● Automates: coding -> testing -> deployment

Continuous Integration (CI) -> Helps integrate code done by many developers into one shared
codebase

How does CI work?


● Code commit: Developers write the code and push it to a version control
system (Eg: git)
● Trigger pipeline: CI tool detects the code push and starts the pipeline
● Build Stage: Compile the code and change to executable format (eg: jar, war, docker
image)
● Automation testing:
○ Unit Test: Test individual pieces of code
○ Integration testing: Ensures different parts of the app work together
○ Static code analysis: Analyses code quality and identifies security issues
● Feedbacks
○ If a test fails, the pipeline stops and the developer gets notified

Benefits of CI
● Catch bugs early in the development process itself
● Ensure that code is always in working state
● Helps reduce time spent on manual testing

CD (Continuous Delivery)
● Automate the process for preparing the code for deployment

How does CD work?


● After CI: Code passes all the tests. It is in a package format
● Staging environment: (UAT - User acceptance testing) environment. Packaged code is
deployed to the staging/UAT environment for further testing
● Manual approval: Human(QA engineer) reviews the changes and approves them for
production environment.
● Ready for production: Code is in the deployable state, waiting for manual release

Benefits of CD
● Code is ready to be deployed
● Speeds the release with minimal manual work

Continuous Deployment(CD)
How does CD work?
● Post testing: Once code passed all the tests, it will automatically be deployed to the
production environment
● Monitoring: Deployed application is monitored using tools to make sure it is working as
expected
● Rollback if needed: If issues are detected, an automated rollback to a stable version is triggered

Benefits of CD:
● Faster delivery of new features - bug fixes
● Developers get immediate feedback
● Reduce manual work: human error reduced

Continuous delivery - requires manual approval


Continuous deployment - complete automation
When to use what?
● In applications like instagram, 100 changes are made in a day so in that case continuous
delivery won’t work. In that case, we would have to go with continuous deployment
● If only minimal changes are done to your application, you can go with continuous
delivery

CI/CD steps:
● Source stage: Detects code changes in the repository
● Build stage: Compile code and build analysis

Important terminologies in CI/CD


● Pipeline: A sequence of automated steps in CI/CD (build, test, deploy)
● Build: Convert source code into executable software (Eg: Java code into .jar)
● Artifact: It’s the output we get from the build stage, ready for deployment
● Test automation: Automatically run test to check the code quality (Eg: JUnit for Java
code)
● Deployment: Releasing the build code to an environment (staging/production
environment)
● Rollback: Reverting the application to previous version if the new one fails (Eg:
kubernetes)
● Version control: Tracking changes in code
● Blue - green deployment: It is a deployment strategy where 2 identical environments
are used to avoid downtime
● Canary deployment: Releasing the new version to a small subset of users before full
rollout

AWS Developer Tools (CI/CD)


● CodeConnect: Git based source control repository
● CodeBuild: Compiles source code, runs tests, produces build artifacts
● Code Deploy: Automated deployments to servers like EC2, ECS, Lambda,
EKS (Kubernetes)
● Code Pipeline: Orchestrates the entire CI/CD workflow
PROJECT: documentation link: https://aws.amazon.com/getting-started/hands-on/create-continuous-delivery-pipeline/

WORKING:
1. GITHUB
a. Clone the repository and do some changes
>>>git -v
>>>git clone "github-link"
>>>pwd
>>>cd aws-elastic-beanstalk-express-js-example
>>>ls
>>>vi app.js
b. Do some changes in the code
c. Open terminal in VS code
>>>git add .
>>> git commit -m "changes made"
>>>git push

2. CREATE ELASTIC BEANSTALK


a. Create policy: IAM - Policy - Paste Json Code - policy name:
CodeDeployDemo-EC2-Permission - create policy
b. Create role: IAM - role - Use case: EC2 - attach the policy we created from
custom managed - add AmazonSSMManagedInstanceCore policy from all
types) - role name: CodeDeployDemo-EC2-InstanceProfile - create role
c. Create EBS (Elastic Beanstalk): AWS - Elastic Beanstalk - environment name: - add description - platform:
nodejs - sample application - next - create a new service role - Attach key pair -
next - Instance profile: add the role we created - Root volume type: General
purpose 3 SSD - Uncheck IMDSv1 (because we want version 1 and not 2) - skip
to review - create
3. CREATE CODEBUILD - Compiles source code, run tasks, produces build artifacts
a. Create CodeBuild: AWS - CodeBuild - create build project - Project name:
Build-DevopsGettingStarted - Default Project - Source Provider: Github - (An
error will pop up because we did not connect our github account with AWS)
b. Click on Manage account credentials -Github app - create new github connection
- Connection name: gthub-connection - connect to github - Install new app -
Verify in your GitHub, the Repository access - Select repository - save - note the
connection number in AWS - connect
c. Select repository - Service role: create a new role - Build Spec: Switch to editor
and paste code (a sample buildspec sketch is shown after this workflow) - create build project
4. CREATE CODE PIPELINE
a. AWS - CodePipeline - create new pipeline - Category: Build from custom pipeline
- Pipeline name: DevopsGettingStarted Execution mode: queued - Service role:
New service role - next - source provider: github via github app - Connection:
select your connection - Select repo name - Default branch: main - Output artifact
format: CodePipeline default - Enable automatic retry on stage failure
(ROLLBACK) - next - Build provider: Other build providers - select your code
build - Project name: select your codebuild project - Region: provide your region -
next - testing area -Skip test stage - Application name: select EBSname -
Environment name: select env name from EBS- review and create pipeline
b. Wait for pipeline to succeed
5. Create Continuous delivery - so manual intervention between build and deploy
a. Go inside the pipeline - below edit build and above edit deploy click on add stage
- name it Review - click on edit action of review - name: Manual review - action
provider: manual approval - done -done
b. Delete test stage(we are not doing it now)
c. Save
6. Check working of the pipeline
a. Go to vs code
b. Change code
c. Save changes
i. >>>git add .
ii. >>> git commit -m "changes made"
iii. >>>git push
d. Refresh pipeline in AWS
e. See the build process automated
f. Review - click on review - see code changes by clicking on the link in revisions-
approve - preview markdown(comments) approved the changes - submit
g. Deployment will be in progress
h. Deployment succeeded in EBS
i. Paste dns for EBS in browser and check for output
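For the "Build Spec: switch to editor and paste code" step above, the AWS tutorial uses a small buildspec along these lines (treat it as a sketch and compare with the tutorial page):

    version: 0.2
    phases:
      build:
        commands:
          - npm i --save        # install the Node.js app's dependencies
    artifacts:
      files:
        - '**/*'                # package everything as the build artifact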
TASKS
● Add 2 different stages - one between build and source and other between build and
deploy
● Each stage we have to set up a notification except source - SQS and email
● Show manual rejections and approvals
● Rejections -> no deployments
● Detailed documentation

DELETION
● Delete application
● Code pipeline
● S3 bucket
● Code build
● Queue
● Sns topic and subscriber

24-02-24
CD/CP Project
● CD -Continuous Deployment
● CP -Continuous Pipeline
● Services used in this project
○ EC2 (2 machines - one for developer to write code and one for production)
○ S3 (Acts as repository, like github in the previous project)
○ CD - code deploy
○ CP - code pipeline (Automates)
○ SNS - notification
○ CloudWatch - monitoring
IAM - set up roles/permissions

STEPS:
CD STEPS:
1. Create 2 IAM roles
a. EC2 - S3
b. CD -Role
2. IAM user
a. Developer
3. 2 EC2 server
a. Developer machine
b. Production machine
4. Configure developer to the dev machine (So that developer can write code in it)
5. Install CD agent in production machine (Because CodeDeploy needs its agent to be running in the
production server)
6. Sample code in dev machine
7. S3 bucket
8. Code deploy application on developer machine (Dev machine -> S3)
9. Deployment group (prod server) -> Destination location
10. Deployment (Pick code from S3 bucket) -> Pickup location
11. Test my output
9 and 10 steps are given to CD
CP STEPS
1. Create code pipeline
2. Change source code
3. Zip file
4. Cp file to S3
5. Check for output

Workflow:
1. Create 2 IAM roles
a. EC2 - S3
i. AWS - IAM - Roles - Rolename: EC2-S3 - Use case: EC2 - S3FullAccess -
create
b. CD -Role
i. AWS - IAM - Roles - Rolename:cd-role - Use case: Code deploy - default
permission - create
2. IAM user - Developer
a. AWS - IAM - User - User name: joshna-developer - next - attach policies directly
- S3FullAccess and CodeDeployFullAccess
b. Give User CLI access to configure ec2 machine as a developer (>>>aws
configure): Go inside the user - create access key - get credentials
3. 2 EC2 server
a. Developer machine
i. Dev-Machine - Amazon linux - 30GB - launch instance
b. Production machine
i. Name - additional tags - Name: AppName - Value: SampleApp
ii. Amazon linux - t2.micro - select default security group - 30 GB -
Advanced details: attach IAM instance profile: EC2-S3-Role - Launch
instance
4. Configure developer to the dev machine
a. Open putty - Hostname: public ip address of Dev-machine
b. Connection- SSH-Auth-Credentials: upload ppk file - open
c. Accept - ec2-user - Logged in
i. >>> sudo su -
ii. >>> aws configure
iii. Paste Access key, secret key, region, enter
5. Install CD agent in production machine
a. Connect Prod Machine using Putty: Open putty - Hostname: public ip address of
the Prod machine
b. Connection- SSH-Auth-Credentials: upload ppk file - open
c. Search in google: installing CD agent
(https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-cli.html)
i. >>> sudo su -
ii. yum update -y
iii. sudo yum install ruby (Because the CodeDeploy agent is written in Ruby)
iv. sudo yum install wget -y
v. wget https://aws-codedeploy-us-east-2.s3.us-east-2.amazonaws.com/latest/install
vi. ls -> give execute permission to install because we need to execute
vii. chmod +x ./install
viii. sudo ./install auto
ix. systemctl status codedeploy-agent (make sure you get active and
running)
x. If you get error, start the service and check again - systemctl start
codedeploy-agent

6. Sample code in dev machine


a. Go to dev ec2 machine
i. mkdir deploy_dir
ii. cd deploy_dir/
iii. mkdir sampleapp
iv. cd sampleapp/
v. vi index.html -put some code in it
vi. vi appspec.yml - type the appspec code in it (appspec is like a brain for the
deployment; a sample appspec.yml sketch is shown after this workflow)
vii. mkdir scripts
viii. cd scripts/
ix. vi httpd_install.sh
#!/bin/bash
yum install -y httpd

x. vi httpd_start.sh
#!/bin/bash
systemctl start httpd
systemctl enable httpd
xi. vi httpd_stop.sh
#!/bin/bash
systemctl stop httpd
systemctl disable httpd

xii. ll - we notice that these three install, start and stop sh files don’t have
execute permission. So give execute permission to these files
xiii. chmod 755 * -> give execute permission to all files
xiv. ll - check if these files got execute permission
7. S3 bucket
a. Create public S3 bucket: AWS-S3-Create bucket - name: gir-sampleapp-24 -
uncheck block all public access - ACLs enabled - disable encryption - enable
bucket versioning - create bucket
8. Bring Code deploy application on developer machine (Dev machine -> S3)
a. Go to developer machine
b. Go inside sampleapp directory(where source code is available): Create a code
deploy application:
i. >>>aws deploy create-application --application-name sampleapp
ii. You would get the application ID as the output
iii. Check if it is there in Code Deploy AWS (Can directly create application
through GUI or CLI as we did here)
c. How to bring source code from ec2 machine to S3 bucket using the
application
i. >>>aws deploy push --application-name sampleapp --s3-location
s3://gir-sampleapp/sampleapp.zip
ii. sampleapp.zip - refers to the application name we created in the dev
machine(html file)
iii. Now the files in ec2 dev machine will be uploaded in the s3 machine as a
zip file
iv. Go to AWS - buckets: Go inside your bucket and check if you see these
files in zip format
9. Deployment group (prod server) -> Destination location
a. AWS - Code deploy - Go inside your application - create deployment group -
group name: mycdgrp - Attach service role: cd-role - check Amazon EC2 -
Production server - Matching instance: 1 - disable load balancer - create
deployment group
10. Deployment (Pick code from S3 bucket) -> Pickup location
a. Go inside mycdgrp - Create deployment - Amazon S3 - Select bucket location -
select zip file - create deployment
b. Note: First time deployment is a manual process
c. Note: We might get an error: Too many instances are running. Delete instances
and leave console for 5-6 hours
d. If no error: Success
11. Test my output
a. Paste IP of code deploy in browser
b. Check for output
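The appspec.yml content for step 6.a.vi is not captured above; below is a sketch that matches the scripts created in the later sub-steps. The hook names and file layout follow the standard CodeDeploy EC2/on-premises appspec format, but double-check the paths against your own directory:

    version: 0.0
    os: linux
    files:
      - source: /index.html
        destination: /var/www/html/
    hooks:
      ApplicationStop:
        - location: scripts/httpd_stop.sh
          timeout: 300
          runas: root
      BeforeInstall:
        - location: scripts/httpd_install.sh
          timeout: 300
          runas: root
      ApplicationStart:
        - location: scripts/httpd_start.sh
          timeout: 300
          runas: root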

Code Pipeline
● AWS - CodePipeline - create new pipeline - Category: Build from custom pipeline -
Pipeline name: mypipeline Execution mode: queued - Service role: New service role -
next - source provider: s3 bucket - select bucket name and object key (Go inside object
and copy object key) - next - skip build stage - skip test stage - deploy: Code Deploy -
Application name: sampleapp - choose ec2 group name - review and create pipeline
● Wait for pipeline to succeed

Check working
● Change code in index.html
● Come outside sample app
● Zip the sampleapp file
○ >>>zip -r sampleapp.zip . -> '.' means all files inside the directory
● Push the zipped file into s3
○ >>>aws s3 cp sampleapp.zip s3://gir-sampleapp24
● Check pipeline
● Go to S3 - toggle show versions (see both versions)
● Paste IP of code deploy in browser
● Check for output

Deletion
● Code pipeline
● Code deploy - applications - delete
● Delete cloud trails
● Cloud watch - delete logs and rules
● EC2 instances
● Delete user
● Delete roles
● Delete policies
● S3 - empty buckets and delete them
25-02-25 - PROJECT 9

VPC Peering Connection


● 3 tier architecture -must implement VPC peering
● VPC(Virtual private cloud) - it is our own private network so that we can use resources
securely
● VPC peering - A way for a private communication without the use of public networks

Terms:
● Requester VPC: The VPC that initiates the peering request
● Acceptor VPC: VPC that accepts the peering request
● CIDR: Bring IP addresses together and avoid overlapping
● Route tables: Update route tables so the VPCs know how to reach each other
● DNS resolution: Private DNS names to avoid unwanted confusion
● Transitive Peering (Not supported): We need to bring in direct connection. Transitive
communication is not possible.

Advantages of using VPC peering:


● Secure communication: Traffic stays within our VPC and does not pass through the
public internet.
● Cost effective: Cheaper than VPN, Direct Connection. Only pay for the data transfer.
● Low latency and high bandwidth: Since the traffic is not touching the internet and
stays within our AWS network, it is faster and more reliable.
● Cross account and Cross Region Peering
○ Cross account: Peer between 2 different AWS accounts
○ Cross region: Peer between 2 different regions (inter region VPC peering)
● Simplifies architecture: Easy resource sharing
Architecture: for cross account and cross region
Workflow:
ZONE A
1. Create VPC on Ohio and give IP address - 10.100.0.0/16
a. AWS - VPC - VPC only - name: vpc-a - IPv4 CIDR: 10.100.0.0/16 - create VPC
b. Click on VPC - Top, Actions - edit VPC settings - enable DNS hostnames - Save
2. Create 2 subnets - Public and Private
a. Select VPC - create subnets- create subnets - subnet name: pubsub-a -
availability zone: us-east-2a /Ohio- IPv4 subnet CIDR block - 10.100.10.0/24
b. Add new subnet - subnet name: privsub-a - availability zone: us-east-2b/Ohio-
IPv4 subnet CIDR block - 10.100.20.0/24
c. Select priv subnet - Actions - edit subnet setting - enable autoassign public IPv4
address
d. Select pub subnet - Actions - edit subnet setting - enable autoassign public IPv4
address
3. Create IGW and attach to VPC
a. Internet gateways - name: myigw-a - create internet gateway
b. Attach myigw to public subnet
4. Create 2 route tables (1 for public subnet and 1 for private subnet)
a. VPC - Left side - route table - Name: pubrt-a- Select VPC - create route table
b. VPC - Left side - route table - Name: privrt-a - Select VPC - create route table
5. Set up internet gateway(IGW) for the public route table (Connect IGW and public
route table)
a. Go inside the public route table - down click on edit for routes
b. IGW (0.0.0.0/0) and choose IGW (This is for customers to connect to our
application) - save
6. Subnet Association
a. Connect public route table through subnet association
i. Go inside the public route table - down click on edit subnet association
ii. Click on connect with public subnet
b. Connect private route table through subnet association
i. Go inside the private route table - down click on edit subnet association
ii. Click on connect with private subnet
7. NAT - Network Address Translator (To connect private subnet through the public
subnet)
a. VPC - left side - NAT gateways - create NAT gateway - name: mynat-a - subnet:
select public subnet - Elastic IP address Allocation ID: Allocate - create
b. Check Elastic IP - it will be active
c. Wait for NAT status to be available
8. Connect NAT and private subnet
a. VPC - Go inside the private route table - down click on routes - edit routes
b. Add route - Destination: 0.0.0.0/0 - Target: the NAT gateway (this gives the private subnet
outbound internet access) - save changes
9. Security Groups
a. Create security group - name: pubsg-a - Description: public security group of A-
select myvpc - (Inbound rules) Add rule - ALL TCP - 0.0.0.0/0
b. Create security group - name: privsg-a - Description: private security group of A-
select myvpc - (Inbound rules) Add rule - ALL TCP - custom: select public
subnet ID (Copy paste from public EC2 machine)
10. Create Application - 2 EC2 machines
a. EC2 - Instances - Launch an instance - name: pubec2-a- RedHat - Network
settings: edit - VPC: select myvpc - select public subnet - Auto assign public IP:
Enable - key: pem - Select existing security group: pubsg-a - Launch Instance
b. EC2 - Instances - Launch an instance - name: privec2-a -RedHat - Network
settings: edit - VPC: select myvpc - select private subnet - Auto assign public IP:
Disable- key: pem - Select existing security group: privsg-a - Launch Instance
c. PUT CODE IN ADVANCED SETTINGS: (FOR BOTH INSTANCES)
#!/bin/bash
yum install httpd -y
service httpd start
echo "Hello all from $(hostname) $(hostname -i)" > /var/www/html/index.html
11. Confirm if we get internet connect
a. Connect with putty for public instance
b. Open terminal
i. >>>ping google.com
c. Connect to private through the public instance
i. Copy pem file to redhat machine through WinSCP
ZONE B
1. Create VPC in Mumbai and give IP address - 20.200.0.0/16
a. AWS - VPC - VPC only - name: vpc-b - IPv4 CIDR: 20.200.0.0/16 - create VPC
2. Create 2 subnets - Public and Private
a. Select VPC - create subnets- create subnets - subnet name: pubsub-b -
availability zone: us-east-1a / Mumbai- IPv4 subnet CIDR block - 20.200.10.0/24
b. Add new subnet - subnet name: privsub-b - availability zone:us-east-1b /
Mumbai- IPv4 subnet CIDR block - 20.200.20.0/24
3. Create IGW and attach to VPC
a. Internet gateways - name: myigw-b - create internet gateway
b. Attach myigw to public subnet
4. Create 2 route tables (1 for public subnet and 1 for private subnet)
a. VPC - Left side - route table - Name: pubrt-b- Select VPC - create route table
b. VPC - Left side - route table - Name: privrt-b - Select VPC - create route table
5. Set up internet gateway(IGW) for the public route table (Connect IGW and public
route table)
a. Go inside the public route table - down click on edit for routes
b. IGW (0.0.0.0/0) and choose IGW (This is for customers to connect to our
application) - save
6. Subnet Association
a. Connect public route table through subnet association
i. Go inside the public route table - down click on edit subnet association
ii. Click on connect with public subnet
b. Connect private route table through subnet association
i. Go inside the private route table - down click on edit subnet association
ii. Click on connect with private subnet
7. NAT - Network Address Translator (To connect private subnet through the public
subnet)
a. VPC - left side - NAT gateways - create NAT gateway - name: mynat-b - subnet:
select public subnet - Elastic IP address Allocation ID: Allocate - create
b. Check Elastic IP - it will be active
c. Wait for NAT status to be available
8. Connect NAT and private subnet
a. VPC - Go inside the private route table - down click on routes - edit routes
b. Add route - Destination: 0.0.0.0/0 - Target: the NAT gateway (this gives the private subnet
outbound internet access) - save changes
9. Security Groups
a. Create security group - name: pubsg-b - Description: public security group of B-
select myvpc - (Inbound rules) Add rule - ALL TCP - 0.0.0.0/0
b. Create security group - name: privsg-b - Description: private security group of B-
select myvpc - (Inbound rules) Add rule -Type: rdp - Source type:
custom:10.100.20.0/24 (IP of private subnet in ZONE A)
c. Add rule : Type: All ICMP Source: 10.100.20.0/24
10. Create Application - 2 EC2 machines
a. EC2 - Instances - Launch an instance - name: pubec2-b- RedHat - Network
settings: edit - VPC: select myvpc - select public subnet - Auto assign public IP:
Enable - key: pem - Select existing security group: pubsg-b - Launch Instance
b. EC2 - Instances - Launch an instance - name: privec2-b -RedHat - Network
settings: edit - VPC: select myvpc - select private subnet - Auto assign public IP:
Disable- key: pem - Select existing security group: privsg-b - Launch Instance
c. PUT CODE IN ADVANCED SETTINGS:
#!/bin/bash
yum install httpd -y
service httpd start
echo "Hello all from $(hostname) $(hostname -i)" > /var/www/html/index.html

11. Peering setup


a. Create peering connection: A ZONE account -> VPC - left side, peering
connection - name: peera2b - select vpc: vpc-a - Account: same account - VPC
ID (Acceptor): vpc-b(Copy VPC id of vpc-b) - create peering connection
b. Status: Initiated
c. ZONE B has to accept it:
i. ZONE B - Peering connections - you’ll see a VPC with status: Pending
acceptance
ii. Select it - actions - accept request
d. Must modify route table to activate connection
i. ZONE A - route table - privrt-a - go inside it -edit routes- Add route -
Destination: give B zone IP(20.200.0.0/16) - Peering connection - select
your peering connection
ii. ZONE B - route table - privrt-b - go inside it -edit routes- Add route -
Destination: give A zone IP (10.100.0.0/16) - Peering connection - select
your peering connection.
12. Check for output
a. Connect to B zone private machine through A zone private machine
b. Check for connection in terminal
i. >>>ping google.com
DELETION:
1. Delete peering connection in ZONE A
2. Delete ec2 machines
3. Nat gateway
4. VPC
5. Release elastic IP in a and b zone
Tasks
Difference between iaas, paas, saas, daas, baas, faas

● IaaS (Infrastructure as a Service):


Provides basic computing infrastructure like virtual servers, storage, and networking,
where the user manages almost all aspects of the operating system and applications.
● PaaS (Platform as a Service):
Offers a development environment with pre-configured tools and services, allowing
users to focus on application development without managing the underlying
infrastructure.
● SaaS (Software as a Service):
Delivers fully functional applications accessible through a web browser, where users
only need to subscribe and use the software without managing any underlying
infrastructure.
● DaaS (Desktop as a Service):
Provides virtual desktops accessible from any device, allowing users to access their
desktop environment without needing to manage physical hardware.
● BaaS (Backend as a Service):
Offers a pre-built backend infrastructure for mobile applications, including user
authentication, data storage, and push notifications.
● FaaS (Function as a Service):
Enables users to run small, event-driven code snippets without managing servers, ideal
for quick execution of specific tasks.

S3 - private and public bucket, Versioning and reverting back to old version

URL, groups(2 groups ), users(4 users) - single user in 2 groups and shuffle like that, create
user with multi factor , customize policies, attach role, user -> URL and CLI

User-mfa-

Download microsoft authenticator

Purchase a domain
https://www.hostinger.com/

11-02-2025
1. Create budget
2. Create cloud trail check bucket
3. Create queue and test
4. Create SNS (Add 3 subscribers: 3rd one using phone)
5. Cloud watch - Detailed documentation and screenshots
6. Follow documentation and finish project (metric from outside)

Learn yaml and json

Customize the script from github for any 3 services and bring iaas

13-02-25 Create records and take screenshots accordingly


● Learn about record types
● Healthcheck - attach notification with sns topic
● Routing policies: Multivalue answer, ip based, geoproximity (documentation with
screenshots)
● Value: (no ec2 instance id)
○ Turn on alias
○ Create CLB
○ Route traffic to Classic load balancer
○ Connect CLB to instances
● Elastic Beanstalk (EBS) - Sample application and uploading code

Tasks - day 11
1. Create db - Take replica - Run 10 unique queries -Check for output
2. Snapshot - delete db - export to s3 - delete snapshot - restore from s3
3. Project
1. Using AWS documentation link
Detailed documentation - for 2 & 3

TASKS - DAY 12
● Create

22.0.0.0/16

Pub - 22.0.30.0/24

Private - 22.0.40.0/24

Day 16 - CD/CP
● Add test in CP (Manual Approval)
● Enable cloud watch and setup notifications for CD and CP
● Go to Code deploy - selection mysampleapp - top, create notification for all stages
● Go to code pipeline and set notifications
● SS - success and failure notification

VPC Peering
● Pub - Homepage
● Priv a - Login
● Priv b - Dashboard

CAPSTONE PROJECT
● If 3 tier architecture, multi tier and full stack -must implement VPC peering
Mandatory
● VPC is mandatory
● SNS
● Database
● Route53 (domain name)
● Create using Cloudformation (scripts) - attach scripts
● Own architecture diagram
● POC

By today:
● Architecture diagram
● POC

Project 5
● Scalable wordpress deployment with auto healing and load balancer
● EC2, RDS, EFS, ALB, Auto Scaling, R53, cloudformation, cloudwatch - healing and
high availability.
Today
● Put code in github
● Detailed documentation
Team
Joshna
Ragavi - manager
Yogesh
Janardhan

Harini
Shalwin
Joshna
Ajay

Janani
Bhargav
Joshna
Yogesh
Topics
● Reverse proxy
● 3 tier architecture
● CIDR blocks and IP addressing
