CA Notepad

The document provides a comprehensive guide on managing security groups, EC2 instances, IAM roles, EBS, EFS, and S3 in AWS. It covers setting up inbound and outbound rules for security groups, launching instances, creating elastic IPs, and configuring load balancers and auto-scaling groups. Additionally, it explains how to create and manage S3 buckets, including setting up static websites and replication rules.

security groups

>>>>to protect the webpages etc..... these only have allow rules


www ______ inbound and outbound traffic ____ security group ___ EC2 instance
rules are referenced by ip addresses.

security groups act as a firewall on EC2 instances

They regulate:
access to ports
ip ranges, ipv4 and ipv6
control of inbound traffic (from others to the instance)
control of outbound traffic (from the instance to others)

>>>port numbers are reference numbers for services like FTP, HTTP....

EC2 instance --security group rules in 2 directions>>inbound and outbound

>>>>AWS
ec2
security group
scroll right and you will see the default security group >>>>all inbound traffic is
blocked by default

all outbound traffic is allowed

>>>click on create security group


basic details---- name: windows-sg
description ---sg for windows instances
scroll down

>>Inbound rules
add rule
type---search for RDP
source type --anywhere IPv4

if you want to allow only specific ip addresses, give the range in the source field


add rule.

>>Inbound rule 2 --http


>>>Inbound rule 3--https

scroll down

>>>click on create security group


for Linux --- linux-sg
ssh
http
https
create group
now attach these groups to specific instances
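
>>>the same groups can also be created from the AWS CLI. A minimal sketch, assuming the CLI is already configured; group names and CIDRs are placeholders:

# create the Linux security group (in a non-default vpc use --group-id instead of --group-name)
aws ec2 create-security-group --group-name linux-sg --description "sg for linux instances"
# open SSH, HTTP and HTTPS to anywhere IPv4 (0.0.0.0/0)
aws ec2 authorize-security-group-ingress --group-name linux-sg --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name linux-sg --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name linux-sg --protocol tcp --port 443 --cidr 0.0.0.0/0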

>>>Instances
click on launch instances
name---Linux
amazon Linux
scroll down
instance type--free tier t2.micro
click on create key pair---Linux key pair
scroll down
private key file format ----.ppk (PuTTY)

>>firewall--- select existing security groups


common security grp >>linux-sg1
scroll down

>>advanced details
paste the user data script
launch instance
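
>>>a rough CLI equivalent of the launch step (a sketch; the AMI id, key name and security group id are placeholders to replace):

aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro \
  --key-name "Linux key pair" --security-group-ids sg-xxxxxxxx \
  --user-data file://userdata.sh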

>>go to instances--select the instance---copy the public ip address--paste it in a new tab.

>>>go to instances --- scroll right --security--- check whether http and https are enabled
if errors come, check this

>>>>Method 1: access these instances by PuTTY


click on connect
scroll
download the PuTTY app
go to google---search putty
enter
download putty -free ssh--64bit x86--click --downloading
install the application.

>>search puttygen
load
click on all files---go to downloads--Linux key pair---save as a private key---yes---
name--linuxputty--save.

>>search putty
putty
access the instance
saved sessions----linuxserver--save

host name with ip address--go to instances--connect---scroll down--host name---


copy--paste it in the putty application
hostname@public ip address
saved
Linux server
>>>> expand ssh ----auth---credentials----private key auth---browse---downloads----
Linux putty---open--accept

>>>cmds---whoami
ping google.com
pwd
Ctrl+C

instead of passwords, we create key pairs to authenticate to the instance.

>>>>>{search puttygen
click load--all files--the downloaded key pair i.e., Linux key pair-- open---save as a
private key---click---yes--enter a name--save. now the private key (.ppk) is created.

search putty--open putty---saved session: enter the server name as linux--save---select


linux---enter host name---@public ip addr.
connect} same as above

>>>ports to know
22=SSH (secure shell)
21=FTP-----upload files into a file share
22=SFTP (secure file transfer protocol)-----upload files using ssh
80=HTTP--access unsecured websites
443=HTTPS--access secured websites
3389=RDP (remote desktop protocol)

>>>ssh table--

mac---access by ssh to ec2


linux---same
windows <10-----putty to ec2
windows >=10----ssh, putty to ec2
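
>>>on mac, Linux or Windows 10+ the native ssh client works without PuTTY. A minimal sketch (the key file name and public ip are placeholders):

chmod 400 linux-keypair.pem
ssh -i linux-keypair.pem ec2-user@<public-ip>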

>>>>IAM roles---allow instances to make api calls


>>>IAM role in a Linux instance

login to the AWS account


launch a Linux instance
connect to the instance
>>>cmd----aws --version
aws configure

>>>go to the console and search iam


click roles
create role
aws service (trusted entity)
use case---ec2
scroll---next
permissions policy-----search iam read
select IAMReadOnlyAccess
role details----demoroleforec2----create role

>>>>role for ec2


search ec2
instances
attach to the created Linux instance---select that instance--
go to actions---go to security---modify iam role----drop down ---select the created role---
click update----iam role attached.
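
>>>the same flow sketched with the CLI (trust-policy.json is a hypothetical file containing the standard ec2 trust policy):

aws iam create-role --role-name demoroleforec2 --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name demoroleforec2 --policy-arn arn:aws:iam::aws:policy/IAMReadOnlyAccess
# an ec2 instance picks the role up through an instance profile
aws iam create-instance-profile --instance-profile-name demoroleforec2
aws iam add-role-to-instance-profile --instance-profile-name demoroleforec2 --role-name demoroleforec2
aws ec2 associate-iam-instance-profile --instance-id i-xxxxxxxx --iam-instance-profile Name=demoroleforec2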

>>>>same for windows


launch a windows instance----rdp----attach roles----cmd----run the commands

>>>>RDP-----login------aws cli for windows

>>>>open the aws account


go to ec2---on the left side we see different options
spot requests (bid)---don't launch the instance, to avoid bills
create spot fleet request---launch parameters-----manual config---ami----select the
operating system
key pair---additional launch parameters----launch ebs----fix the ebs size---monitoring---check
box---security grps----windows--auto sign in ipv4---attach the iam roles----user
data---as text---batch script----target capacity----increase instances as you want--
check box->set cost price (bidding amount, in dollars) ex: 10-----vcpus---10---
max: 50---ram--100gb---max: 200gb-----additional---cpu manufacturer (enough to buy)---
don't launch this instance

>>>>for public to public communication---no devices required


for public---private---routers are required
10.x, 172.16-31.x, 192.168.x-----private ranges
8.8.8.8, 1.1.1.1----public addresses

>>>>elastic ip----
**attach a static ip addr to an ec2 Linux instance---
launch one ec2 Linux instance---copy the ip addr--paste it in a new tab---see the
addr----now stop the instance------select instance-----instance state---stop
instance----again restart---instance state ---start and refresh---now the ip addr has
changed---to avoid this we attach an elastic ip address.

>>>To create an elastic ip----network and security option----click on elastic ip----


click on allocate elastic ip addr---select amazon pool of ipv4----scroll----
network border grp---default----click allocate--the ip addr shows----now attach this ip
addr to the instance---select the ip addr-----click on actions---associate ip addr-----
select the instance for the ip addr---scroll---associate---that's it.

now check the ip addr of the instance---stop and start the instance----now it will
be a static ip address.----------before terminating the instance we need to
disassociate the address----actions---disassociate the address.----go to the instance
and terminate the instance.----after, in ec2--actions---release the ip addr---
delete
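
>>>CLI sketch of the same flow (the instance, allocation and association ids are placeholders returned by the earlier calls):

aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx
# cleanup before terminating the instance
aws ec2 disassociate-address --association-id eipassoc-xxxxxxxx
aws ec2 release-address --allocation-id eipalloc-xxxxxxxx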

>>.placement groups----control how ec2 instances are placed on the underlying servers.


regions, zones

>>>>login to aws account


ec2--placement groups----create placement group---name as cluster-type---select
strategy as cluster ---create group----again create placement group---name as
spread-type---strategy---spread----spread level---rack (no restriction--for deploy)-----
again create placement grp---name as partition-type----strategy--partition---no. of
partitions is 6. create grp.

how to attach to an instance---go to instances---launch instances-advanced details--


placement group---partition--don't launch or you'll get a bill. then go to placement grp ---
select the group name---actions--select all grps---delete
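
>>>CLI sketch for the three placement group types:

aws ec2 create-placement-group --group-name cluster-type --strategy cluster
aws ec2 create-placement-group --group-name spread-type --strategy spread
aws ec2 create-placement-group --group-name partition-type --strategy partition --partition-count 6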

>>>>create ENI
before that, launch 2 Linux instances---under network and security we have network
interfaces----click it--name as---myENI---subnet is 1b--(ec2--instance--select any
instance---we can see its availability zone)---ipv4 is auto----scroll-----attach a security
grp-----security grp---linux---scroll---create network interface--attach this eni
to a specific instance----select the eni---actions----attach--vpc---select the
instance by checking its id----attach.
to check this, go to the instance, select it and below--networking---you can see the 2 eni.
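
>>>CLI sketch (subnet, security group and instance ids are placeholders):

aws ec2 create-network-interface --subnet-id subnet-xxxxxxxx --description myENI --groups sg-xxxxxxxx
# device-index 1 adds it as a secondary interface next to the primary eni at index 0
aws ec2 attach-network-interface --network-interface-id eni-xxxxxxxx --instance-id i-xxxxxxxx --device-index 1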

>>Task ----create another eni with a different availability zone


>>>>login to aws
open ec2----launch instance---linux---key pair---security grp------attach the security
grp as linux sg-----additional details--scroll---hibernate behaviour ---enable---configure
storage---storage ---add new volume---encrypted----kms key--default---launch
instance---connect---

****cmd---uptime

now hibernate the instance---select the instance---instance state---hibernate----


again restart---instance state---start

***cmd---uptime
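
>>>CLI sketch (hibernation must have been enabled at launch and the root volume encrypted):

aws ec2 stop-instances --instance-ids i-xxxxxxxx --hibernate
aws ec2 start-instances --instance-ids i-xxxxxxxx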

>>>EBS
launch instance---config--
launch---select---go to the storage tab--see the volume size---?????---photo in wa

>>>>launch instance---windows operating system---keypair---configure storage---


30gb---launch instance---connect to this instance---rdp--

search disk management
snapshots*

>>>connect a windows instance to an ec2 volume

select the volume and attach the volume---device name: select the 1st one---attach the
volume---
in hard disk partition.
Initialise the hard disk---right click --initialise. partitions are of 2 types---MBR and GPT.
MBR--divides into up to 4 partitions--
GPT--128 (practically unlimited) partitions.
select gpt ---ok--open file explorer--windows c
unallocated---right click---convert the disk into a dynamic disk---disk 1---ok---to
create the d drive---right click unallocated---simple volume---next--next--finish.
see in disk 0---a basic disk.
new volume d drive---new folder named ABC--now take a snapshot of this volume---
minimise the tab---in volumes --scroll--see snapshot---click---first a Recycle Bin retention
rule---click recycle bin---create retention rule---name demo rule---
resource type: ebs snapshots---retention period--1 day---scroll---create retention rule. now go to ebs
volumes-click on the volume---create snapshot--in volume id---select your
volume---scroll---create snapshot
go to snapshots---create a snapshot---volume-id---description--demo snap---
create. to use a snapshot click on it, go to actions---select the 1c availability zone---encrypt---
default kms key--create volume.
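
>>>CLI sketch of the volume and snapshot steps (all ids are placeholders):

aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf
aws ec2 create-snapshot --volume-id vol-xxxxxxxx --description "demo snap"
# create a new (encrypted) volume from the snapshot in another availability zone
aws ec2 create-volume --snapshot-id snap-xxxxxxxx --availability-zone ap-south-1c --encrypted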

>>>>how to create EBS encryption


create volumes----go to EC2 --in the storage section, volumes---GP3---size--1GB or
10GB----IOPS 3000---throughput 125----avail zone---encryption---attach the default
kms key---scroll down---create volume
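
>>>CLI sketch for an encrypted gp3 volume using the default EBS KMS key:

aws ec2 create-volume --availability-zone ap-south-1a --size 10 --volume-type gp3 \
  --iops 3000 --throughput 125 --encrypted --kms-key-id alias/aws/ebs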

>>create another ebs volume without encryption


create the volume---create snapshots for the 2 volumes
actions--create snapshot--now the other volume---actions--create snapshot
Now go to snapshots---rename those 2---actions---(method 1) copy snapshot----(method 2)
create a volume from the snapshot.
>>>create EFS
search EFS---go to file systems-create file system---create file system----select the
default vpc---create.
go to the customize option---regional---scroll---enable the check box---lifecycle
management--1 day---archive--7 days--transition into standard--encrypt--throughput
mode---enhanced--elastic--next
>>>security group---go to ec2---create a separate security group for NFS---create security
grp---name--nfs-sg---descr--security grp for nfs---no need for an inbound rule---create
security grp.
To attach the security grp for nfs---attach---next--next--create---launch instance--2
instances---amazon linux--attach a key pair---network settings---edit---subnet---avail
zones firewall--select existing grp----attach the linux security grp---1a---scroll---add
shared file system---mount point---launch instance. launch another with a different
subnet. attach the same nfs to the instance------connect to both instances--

cmds---
1.list command--ls /mnt/efs/fs1
2.to get admin access---sudo su
3.create file---echo "hello world1"> /mnt/efs/fs1/hellow.txt
4.cat /mnt/efs/fs1/hellow.txt

>>for 2nd instance


cat /mnt/efs/fs1/hellow.txt
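
>>>CLI sketch for creating the file system and a mount target (subnet/sg ids are placeholders; mounting from the instance assumes the amazon-efs-utils package is installed):

aws efs create-file-system --encrypted --throughput-mode elastic
aws efs create-mount-target --file-system-id fs-xxxxxxxx --subnet-id subnet-xxxxxxxx --security-groups sg-xxxxxxxx
# on the instance
sudo mkdir -p /mnt/efs/fs1
sudo mount -t efs fs-xxxxxxxx:/ /mnt/efs/fs1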

>>>>launch 2 instances---create a load balancer---security grp----target group----in the


load balancer---attributes---off----attach the ssl certificates---select the load
balancer---go to listeners and rules---add listener----change http to https---port
no--443---scroll---default ssl cert----from acm-----request a new certificate-----you need to
pay----attach that cert and add it.
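
>>>CLI sketch for adding the HTTPS listener (the load balancer, target group and ACM certificate ARNs are placeholders):

aws elbv2 create-listener --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/... \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:...:certificate/... \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/...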

>>>>>Auto scaling grp


.copy and paste the load balancer dns name in a new tab
create 2 instances, load balancer, target grp, security grp. copy paste the dns

>>>create auto scaling grp---name ASG---launch template---create---name Linux


webserver----scroll down---quick start--attach an operating sys---select amazon
linux---scroll---t2.micro----scroll--attach key pair----keypair compulsory---
security grp---linux---name---linux-sg---desc--add inbound rules---http and ssh---
create sec grp---attach the Linux security grp----storage---adv details--paste user
data script----create launch template----attach the template---scroll---next-----
avail zones and subnets---select all the subnets--availability zone---balanced best effort---
next-----(click on reset launch template)---need to attach a load balancer-----
attach existing load balancer---click on the created load balancer----scroll---health
check --turn on elastic load balancer---scroll---next---scaling---next----next---
create auto scaling group---click ---go to activity ---ASG---click on edit---capacity
overview: desired size is 2---update
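
>>>CLI sketch (launch template name, target group arn and subnet ids are placeholders):

aws autoscaling create-auto-scaling-group --auto-scaling-group-name ASG \
  --launch-template LaunchTemplateName=Linux-webserver,Version='$Latest' \
  --min-size 1 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa,subnet-bbbb" \
  --target-group-arns arn:aws:elasticloadbalancing:...:targetgroup/... \
  --health-check-type ELB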

>>>>Dynamic scaling
click on the auto scaling group---click on automatic scaling---Recurrence--every day, and set the
time zone-----create predictive scaling ---enable---create dynamic scaling---put
desired as 1---see the action history---go to asg---create dynamic scaling---
click on the scaling policy----name----cpu utilization--50% target
utilization----create-----target value as 4----create----go to edit---set max/
desired capacity to 4---update----go to instances--connect to any 1 instance (now going
to give it stress)
cmds____1.sudo yum install -y stress
2.stress --cpu 4 --timeout 60
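
>>>CLI sketch of a target-tracking scaling policy at 50% average CPU (config.json is a hypothetical file):

# config.json:
# {"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}
aws autoscaling put-scaling-policy --auto-scaling-group-name ASG \
  --policy-name cpu50 --policy-type TargetTrackingScaling \
  --target-tracking-configuration file://config.json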

>>>>>>S3 class

to create a bucket, click on create a bucket---select the Mumbai region---aws region


asia mumbai---general purpose---bucket name as roman (should be unique and must not start
with a capital letter)----block public access--scroll---bucket versioning disabled---encryption---
key is enabled--you can upload data to the bucket by clicking on the bucket----click
on upload---add files---upload--click on close---click on the file----see the s3
url---click open----access to open----the object url will give errors because it is accessed by an
external (anonymous) user.
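
>>>CLI sketch for the bucket and an upload (the bucket name must be globally unique; photo.jpg is a placeholder file):

aws s3api create-bucket --bucket roman --region ap-south-1 \
  --create-bucket-configuration LocationConstraint=ap-south-1
aws s3 cp ./photo.jpg s3://roman/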

>>>>create one json (bucket) policy


go to the bucket---permissions-----edit block public access---edit---uncheck block all public
access---save changes---confirm
>>>Json policy: bucket--permissions---edit bucket policy----click on policy
generator---select s3 bucket policy---effect is allow---principal---give as *---
actions---get object---ARN---bucket arn (in the policy)---copy--paste it with /*----
scroll----add statement---generate policy----copy this json policy---paste it in the
bucket policy and save changes---now refresh that image page and you will be able to
get it.
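
>>>a minimal sketch of the generated public-read policy and how to apply it from the CLI (bucket name roman assumed):

# policy.json:
# {
#   "Version": "2012-10-17",
#   "Statement": [{
#     "Sid": "PublicReadGetObject",
#     "Effect": "Allow",
#     "Principal": "*",
#     "Action": "s3:GetObject",
#     "Resource": "arn:aws:s3:::roman/*"
#   }]
# }
aws s3api put-bucket-policy --bucket roman --policy file://policy.json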

>>>>How to create a static webpage in s3


Using an html webpage on the desktop-go to s3----open the previous bucket---we need to
enable static webpage hosting----properties---scroll---static website hosting----edit----
enable---index document as the document name----save changes----go to objects---upload the
created html page---upload---go to the particular bucket---go to properties---
scroll---copy that link ---paste it in a new tab
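
>>>CLI sketch (index.html is the uploaded page):

aws s3 website s3://roman/ --index-document index.html
aws s3 cp ./index.html s3://roman/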

>>versioning
go to buckets --select the bucket---properties---bucket versioning ----edit---
enable---save changes----go to the html code----edit---click on save---go to objects---
upload---add files---upload----go to the webpage---refresh.
click on buckets--show versions--enable--version id----
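
>>>CLI sketch to enable versioning:

aws s3api put-bucket-versioning --bucket roman --versioning-configuration Status=Enabled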

>>>>>login to the aws account. How to create replication---same region replication and


cross region replication

>>>same region replication

select the Mumbai region---go to s3----create one bucket----name as


name.source123-----enable bucket versioning----create bucket-----create another
bucket----give the name as destination bucket---enable versioning---create bucket----
open the source bucket--to add a rule----go to management---scroll---replication
rules-----click on create replication rule----name as same region replication----
rule scope is apply
to all objects in the bucket----destination------choose the destination path---create
a new role------save---submit----go to buckets ---open the source bucket---try to
add files---they will replicate to the destination---
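
>>>CLI sketch (replication.json is a hypothetical file; both buckets need versioning enabled and the role must allow s3 replication; bucket names are placeholders):

# replication.json (simplified):
# {
#   "Role": "arn:aws:iam::111111111111:role/replication-role",
#   "Rules": [{
#     "Status": "Enabled",
#     "Priority": 1,
#     "DeleteMarkerReplication": {"Status": "Disabled"},
#     "Filter": {},
#     "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"}
#   }]
# }
aws s3api put-bucket-replication --bucket source123 --replication-configuration file://replication.json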

>>>cross region replication

select a different region for the destination bucket


create the source bucket----remove the block public access settings---acknowledge----
enable versioning-----create bucket---for the destination bucket you need your friend's
account in another window---go to the source bucket---management----replication
rule-----create replication rule---give a name---scope as apply to all objects----
destination as specify a bucket in another account----12 digit acc id----copy the destination
bucket name---change object ownership to the destination bucket owner----scroll---create a new
iam role----save---submit---the created iam role is shown---open that role---you can see
some options---click on Json--now we need to attach the bucket policy---to do that
go to chat gpt and ask "bucket policy for destination bucket for cross account
replication"---we will have a policy----copy---destination account---permissions--
edit bucket policy---go to the source side---copy the arn----go to the dest bucket policy,
paste it--remove the placeholder and attach the arn. in the last line remove and paste the destination
name; don't remove the slash and star---copy the link and paste it in a new tab---click on save
changes----add an object in the source bucket region--check in the destination account
whether it is replicating or not.
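
>>>a rough sketch of the destination bucket policy for cross-account replication (account id, role name and bucket names are placeholders; verify against the current S3 documentation):

# destination-bucket-policy.json:
# {
#   "Version": "2012-10-17",
#   "Statement": [{
#     "Sid": "AllowReplicationFromSourceAccount",
#     "Effect": "Allow",
#     "Principal": {"AWS": "arn:aws:iam::111111111111:role/replication-role"},
#     "Action": ["s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags",
#                "s3:ObjectOwnerOverrideToBucketOwner"],
#     "Resource": "arn:aws:s3:::destination-bucket/*"
#   }]
# }
aws s3api put-bucket-policy --bucket destination-bucket --policy file://destination-bucket-policy.json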
