2-Aws Training
--------------
What is cloud computing?
----------------------------
Cloud computing is the delivery of computing services over the internet,
such as storage, databases, and software.
It allows users to access these services on-demand, without having to manage
physical servers.
https://aws.amazon.com/what-is-cloud-computing/
Software as a Service (SaaS):
It is a complete product that usually runs on a browser.
It primarily refers to end-user applications. It is run and managed by the
service provider.
The end-user only has to worry about the application of the software suitable
to its needs.
For example, Salesforce.com, web-based email, Office 365.
AWS serves both businesses and individual users, enabling them to host
applications effectively, store data securely, and use a wide variety of tools
and services that improve management flexibility for IT resources.
AWS Fundamentals
-----------------
AWS region vs availability zone vs data center
Regions:
AWS delivers its services from regions. Regions are divided based on
geographical areas/locations, and AWS establishes data centers in each.
AWS Lambda:
It is a Function as a Service (FaaS) offering in a serverless architecture:
your code runs in response to events, and AWS automatically manages the
underlying server environment.
This lets developers focus entirely on the logic of the code they build.
AWS IAM:
It gives you control over who is authenticated (signed in) and authorized
(has permissions) to access the resources.
AWS Lambda
----------------------------
A serverless, event-driven computing service that lets you run code for
virtually any application or backend service automatically.
You do not need to worry about servers and clusters when working with
solutions using AWS Lambda.
It is also cost-effective: you pay only for the compute you use.
As a user, your responsibility is to just upload the code and Lambda handles
the rest.
Using Lambda, you get precise scaling and high availability: AWS Lambda
reliably handles hundreds to thousands of code execution requests per second.
Amazon SNS:
It is a messaging service for Application-to-Application (A2A) and
Application-to-Person (A2P) communication. It is used for bulk message
delivery and direct messages to customers, system-to-system or app-to-person,
between decoupled microservice apps, and it makes it easy to set up, operate,
and send notifications from the cloud.
Amazon VPC:
Using VPC, you get complete control over the networking environment, such as
choosing IP address ranges, creating subnets, and arranging route tables.
AWS Elastic Beanstalk:
You just need to upload your code and the deployment part is handled by
Elastic Beanstalk (from capacity provisioning, load balancing, and
auto-scaling to application health monitoring).
DynamoDB
-----------
DynamoDB is a serverless key-value and document NoSQL database designed to
run high-performance applications.
It can handle more than 10 trillion requests per day and supports peaks of
more than 20 million requests per second.
AWS Aurora
-------------
AWS Aurora is an RDBMS (Relational Database Management System) built for the
cloud with MySQL and PostgreSQL compatibility.
Amazon S3 Glacier
-------------------
Amazon S3 Glacier is low-cost archive storage.
Amazon Cloudwatch
---------------------
Amazon CloudWatch detects unusual changes in the environment, sets alarms,
troubleshoots issues, and takes automated actions.
With this, you can track the complete stack and use logs, alarms, and event
data to take action, letting you focus on building the application and
growing the business.
With this single platform, you can monitor all AWS resources and applications
quickly.
It monitors application performance and optimizes resources.
EC2 Fundamentals
-------------------
EC2 Instance Basics:
Understanding the concept of virtual servers and instances.
Key components of an EC2 instance: AMI (Amazon Machine Image), instance
types, and instance states.
Differentiating between On-Demand, Reserved, and Spot instances.
- Provide secure compute for your applications. Security is built into the
foundation of
Amazon EC2 with the AWS Nitro System.
- Optimize performance and cost with flexible options like AWS Graviton-based
instances,
Amazon EC2 Spot instances, and AWS Savings Plans.
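To make the On-Demand / Reserved / Spot distinction concrete, here is a small sketch comparing the monthly cost of a steady 24/7 workload. The hourly rates are made-up illustration values, not real AWS prices.

```python
# Hypothetical hourly rates (NOT real AWS prices), used only to show how
# the three purchase options compare for an always-on workload.
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Cost of running one instance for the given number of hours."""
    return round(hourly_rate * hours, 2)

on_demand = monthly_cost(0.10)  # pay-as-you-go, no commitment
reserved = monthly_cost(0.06)   # 1- or 3-year commitment, discounted
spot = monthly_cost(0.03)       # spare capacity, can be interrupted

print(on_demand, reserved, spot)  # 73.0 43.8 21.9
```

Spot is cheapest but interruptible, so it suits fault-tolerant batch work; Reserved suits a steady baseline load.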
EC2 use cases
---------------------
Deliver secure, reliable, high-performance, and cost-effective compute
infrastructure to
meet demanding business needs.
General purpose
General Purpose instances are designed to deliver a balance of compute,
memory, and network resources.
They are suitable for a wide range of applications, including web servers,
small databases, development and test environments, and more.
Compute optimized
Compute Optimized instances provide a higher ratio of compute power to
memory.
They excel in workloads that require high-performance processing such as
batch processing,
scientific modeling, gaming servers, and high-performance web servers.
Memory optimized
Memory Optimized instances are designed to handle memory-intensive workloads.
They are suitable for applications that require large amounts of memory, such
as in-memory databases,
real-time big data analytics, and high-performance computing.
Storage optimized
Storage Optimized instances are optimized for applications that require high,
sequential read
and write access to large datasets.
They are ideal for tasks like data warehousing, log processing, and
distributed file systems.
Accelerated computing
Accelerated Computing Instances typically come with one or more types of
accelerators,
such as Graphics Processing Units (GPUs),
Field Programmable Gate Arrays (FPGAs), or custom Application Specific
Integrated Circuits (ASICs).
These accelerators offload computationally intensive tasks from the main CPU,
enabling faster
and more efficient processing for specific workloads.
Instance families
C – Compute
D – Dense storage
F – FPGA
G – GPU
Hpc – High performance computing
I – I/O
Inf – AWS Inferentia
M – Most scenarios
P – GPU
R – Random access memory
T – Turbo
Trn – AWS Trainium
U – Ultra-high memory
VT – Video transcoding
X – Extra-large memory
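The mnemonic list above can be turned into a tiny lookup helper. The mapping mirrors the list as written in these notes; real instance type names also encode generation numbers and attribute suffixes, so the parsing here is a simplification.

```python
import re

# Family letter(s) -> purpose, mirroring the mnemonic list above.
FAMILY_PURPOSE = {
    "c": "Compute", "d": "Dense storage", "f": "FPGA", "g": "GPU",
    "hpc": "High performance computing", "i": "I/O", "inf": "AWS Inferentia",
    "m": "Most scenarios", "p": "GPU", "r": "Random access memory",
    "t": "Turbo", "trn": "AWS Trainium", "u": "Ultra-high memory",
    "vt": "Video transcoding", "x": "Extra-large memory",
}

def family_purpose(instance_type: str) -> str:
    """Classify an instance type such as 'c5.large' or 'trn1.2xlarge'."""
    family = instance_type.split(".")[0]           # "c5", "trn1", "hpc6a"
    prefix = re.match(r"[a-z]+", family).group(0)  # letters before the generation digit
    return FAMILY_PURPOSE.get(prefix, "Unknown")

print(family_purpose("c5.large"))  # Compute
```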
# Amazon Linux (yum): install Apache, start it on boot, and serve a test page
sudo su
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "Hello World" > /var/www/html/index.html
# Ubuntu (apt): the same idea with apache2
sudo su
apt update
apt install -y apache2
ls /var/www/html
echo "Hello World!"
echo "Hello World!" > /var/www/html/index.html
echo $(hostname)
echo $(hostname -i)
echo "Hello World from $(hostname)"
echo "Hello World from $(hostname) $(hostname -i)"
echo "Hello world from $(hostname) $(hostname -i)" > /var/www/html/index.html
Ex 2: Security Group
-------------------
A virtual firewall that controls incoming and outgoing traffic to/from AWS
resources (EC2 instances, databases, etc.).
Ex 3: EC2 IP Addresses
-------------------------
Public IP addresses are internet addressable.
Ex 4: Elastic IP Addresses
-----------------------------------------
How do you get a constant public IP address for an EC2 instance?
Quick and dirty way is to use an Elastic IP!
Note:
Elastic IP can be switched to another EC2 instance within the same region
Elastic IP remains attached even if you stop the instance. You have to
manually detach it.
Now go to Elastic IP addresses ==> Actions ==> Disassociate Elastic IP
address ==> then release the Elastic IP address.
Using Userdata
----------------
In EC2, we can configure user data to bootstrap an instance:
we can install OS patches or software when the EC2 instance is launched.
Note:
Don't forget to add an inbound rule to allow traffic (e.g., HTTP on port 80).
Launch Templates
-------------------
Why do you need to specify all the EC2 instance details (AMI ID, instance
type, and network settings) every time you launch an instance?
A launch template saves these settings so you can reuse them.
IAM
========
AWS IAM (Identity and Access Management) is a service provided by Amazon Web
Services (AWS)
that helps you manage access to your AWS resources.
It's like a security system for your AWS account.
IAM allows you to create and manage users, groups, and roles.
Users represent individual people or entities who need access to your AWS
resources.
Groups are collections of users with similar access requirements, making it
easier to manage permissions.
Roles are used to grant temporary access to external entities or services.
With IAM, you can control and define permissions through policies.
Policies are written in JSON format and specify what actions are allowed or
denied on specific AWS resources. These policies can be attached to IAM
entities (users, groups, or roles)
to grant or restrict access to AWS services and resources.
IAM follows the principle of least privilege, meaning users and entities are
given only
the necessary permissions required for their tasks, minimizing potential
security risks.
IAM also provides features like multi-factor authentication (MFA) for added
security and an
audit trail to track user activity and changes to permissions.
By using AWS IAM, you can effectively manage and secure access to your AWS
resources,
ensuring that only authorized individuals have appropriate permissions and
actions are
logged for accountability and compliance purposes.
Components of IAM
-------------------
Users:
IAM users represent individual people or entities (such as applications
or services)
that interact with your AWS resources. Each user has a unique name and
security credentials
(password or access keys) used for authentication and access control.
Groups:
IAM groups are collections of users with similar access requirements.
Instead of managing permissions for each user individually, you can
assign permissions to groups,
making it easier to manage access control. Users can be added or
removed from groups as needed.
Roles:
IAM roles are used to grant temporary access to AWS resources.
Roles are typically used by applications or services that need to
access AWS
resources on behalf of users or other services. Roles have associated
policies
that define the permissions and actions allowed for the role.
Policies:
IAM policies are JSON documents that define permissions.
Policies specify the actions that can be performed on AWS resources and
the
resources to which the actions apply. Policies can be attached to
users,
groups, or roles to control access. IAM provides both AWS managed
policies
(predefined policies maintained by AWS) and customer
managed policies (policies created and managed by you).
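As a sketch of what a customer managed policy looks like, here is a Python helper that builds a read-only S3 policy document as a dict. The bucket name and the specific action list are illustrative choices, not taken from these notes.

```python
import json

def read_only_s3_policy(bucket: str) -> dict:
    """Least-privilege example: allow listing a bucket and reading its objects."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetObject"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",    # bucket itself (for ListBucket)
                    f"arn:aws:s3:::{bucket}/*",  # objects inside (for GetObject)
                ],
            }
        ],
    }

print(json.dumps(read_only_s3_policy("example-bucket"), indent=2))
```

Attach such a document to a user, group, or role to grant exactly these actions and nothing more.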
Lab 2: Authorization:
-------------------
Create user groups.
Create users, assign each to a group, and give the groups different permissions.
Building a Simple Spring Boot Java Project on AWS EC2 Using Maven
----------------------------------------------------------------
step 1: create spring boot project and push to github
https://github.com/rgupta00/employeeappaws.git
step 2: create ec2 instance and Connect to EC2 Instance via MobaXterm
apt-get update -y
apt-get upgrade -y
mvn -version
updating system
sudo apt-get update
install docker
sudo apt-get install docker.io -y
To start the Docker service automatically when the instance starts, you can use the
following command:
sudo systemctl enable docker
Add your user to the Docker group to run Docker commands without 'sudo'
sudo usermod -a -G docker $(whoami)
Note that the change to the user’s group membership will not take effect until the
next time the user logs in.
You can log out and log back in to apply the changes, or use the following
command to activate the changes without logging out:
newgrp docker
Don't forget the inbound rule to allow traffic to the app (the container
exposes port 8080).
FROM openjdk:17-alpine
LABEL maintainer="[email protected]"
EXPOSE 8080
ADD target/*.jar empapp.jar
ENTRYPOINT ["java","-jar","empapp.jar"]
docker image ls
install kubectl:
-----------------
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
Download and install:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
If you do not have root access on the target system, you can still install kubectl
to the ~/.local/bin directory:
chmod +x kubectl
mkdir -p ~/.local/bin
mv ./kubectl ~/.local/bin/kubectl
# and then append (or prepend) ~/.local/bin to $PATH
install minikube:
-----------------
https://minikube.sigs.k8s.io/docs/start/
minikube dashboard
AWS S3 Buckets
===============
What is Amazon S3?
Amazon S3 (Simple Storage Service) is a scalable and secure cloud storage
service provided by Amazon Web Services (AWS).
It allows you to store and retrieve any amount of data from anywhere on the
web.
Scalability:
You can store and retrieve any amount of data without worrying about
capacity constraints.
Security:
S3 offers multiple security features such as encryption, access
control, and audit logging.
Performance:
S3 is designed to deliver high performance for data retrieval and
storage operations.
Cost-effective:
S3 offers cost-effective storage options and pricing models based on
your usage patterns.
1. Creating an S3 bucket
2. Choosing a bucket name and region
3. Bucket properties and configurations
4. Configure Bucket-level permissions and policies
5. Uploading and Managing Objects in S3 Buckets
S3 bucket policies
-------------------
* S3 provides bucket policies, access control, and encryption settings.
* Encrypt data at rest using the server-side encryption options provided by
S3. Additionally, enable encryption in transit by using SSL/TLS for data
transfer.
Suppose we have an IAM user who should not be able to access a bucket despite
having S3 full access.
In this case : Create and manage bucket policies to control access to your S3
buckets.
Bucket policies are written in JSON and define permissions for various
actions and resources.
{
    "Version": "2012-10-17",
    "Id": "RestrictBucketToIAMUsersOnly",
    "Statement": [
        {
            "Sid": "AllowOwnerOnlyAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::your-bucket-name/*",
                "arn:aws:s3:::your-bucket-name"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::AWS_ACCOUNT_ID:root"
                }
            }
        }
    ]
}
{
    "Version": "2012-10-17",
    "Id": "RestrictBucketToIAMUsersOnly",
    "Statement": [
        {
            "Sid": "AllowOwnerOnlyAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::app1-shoppingcart-busycoder-app/*",
                "arn:aws:s3:::app1-shoppingcart-busycoder-app"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalArn": "arn:aws:iam::904233120381:root"
                }
            }
        }
    ]
}
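To see what the Deny + StringNotEquals condition above does, here is a toy re-implementation of that single statement's logic in Python. It is not the real IAM evaluation engine, only an illustration of why every principal except the account root is blocked.

```python
ALLOWED_PRINCIPAL = "arn:aws:iam::904233120381:root"

def is_denied(principal_arn: str) -> bool:
    """Mimics the policy: Deny s3:* when aws:PrincipalArn is NOT the root ARN.
    An explicit Deny overrides any Allow (e.g., an attached S3 full-access policy)."""
    condition_matches = principal_arn != ALLOWED_PRINCIPAL  # StringNotEquals
    return condition_matches  # Deny applies exactly when the condition matches

print(is_denied("arn:aws:iam::904233120381:user/dev"))  # True
print(is_denied(ALLOWED_PRINCIPAL))                     # False
```

This is why the user with S3 full access still cannot touch the bucket: the bucket policy's explicit Deny wins.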
AWS CLI
==========
The console UI is not automation friendly.
We just need to install the AWS CLI and use it to interact with AWS.
./aws s3 ___ ___
to check buckets
aws s3 ls
aws s3 help
Amazon SQS
===========
Amazon SQS offers common constructs such as dead-letter queues and cost
allocation tags.
It provides a generic web services API that you can access using any
programming language that the AWS SDK supports.
Durability:
For the safety of your messages, Amazon SQS stores them on multiple
servers.
Standard queues support at-least-once message delivery,
and FIFO queues support exactly-once message processing and
high-throughput mode.
Availability:
Amazon SQS uses redundant infrastructure to provide highly-concurrent
access to
messages and high availability for producing and consuming messages.
Scalability:
Amazon SQS can process each buffered request independently, scaling
transparently to
handle any load increases or spikes without any provisioning
instructions.
Reliability:
Amazon SQS locks your messages during processing, so that multiple
producers
can send and multiple consumers can receive messages at the same time.
Customization:
Your queues don't have to be exactly alike—for example,
you can set a default delay on a queue. You can store the contents of
messages
larger than 256 KB using Amazon Simple Storage Service (Amazon S3) or
Amazon DynamoDB,
with Amazon SQS holding a pointer to the Amazon S3 object,
or you can split a large message into smaller messages.
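The last option above (splitting a large message into smaller messages) can be sketched as a chunking helper. The 256 KB figure is the SQS message size limit mentioned above; how the parts are numbered and reassembled is our own convention, not an SQS feature.

```python
SQS_MAX_BYTES = 256 * 1024  # SQS per-message size limit

def split_message(body: bytes, limit: int = SQS_MAX_BYTES) -> list[bytes]:
    """Split an oversized payload into SQS-sized chunks. The consumer must
    preserve order, e.g. via a FIFO queue or by numbering the parts."""
    return [body[i:i + limit] for i in range(0, len(body), limit)] or [b""]

big = b"x" * (SQS_MAX_BYTES + 100)
parts = split_message(big)
print(len(parts))  # 2
```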
Ex 3: We will make a POST request to send a message, and the message will be
consumed by the consumers.
What is SNS?
-----------
Amazon Simple Notification Service (SNS) is a fully managed messaging service
from AWS that enables developers to send notifications to various subscribers
through channels like email, SMS, and mobile push notifications, using a
publish-subscribe (pub/sub) model
https://docs.aws.amazon.com/sns/latest/dg/welcome.html
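Here is a toy in-memory sketch of the pub/sub model SNS implements: a topic fans each published message out to every subscriber. This illustrates the model only; real code would use the boto3 SNS client with endpoints such as email or SMS.

```python
class Topic:
    """Minimal pub/sub topic: subscribers are callables standing in for
    SNS endpoints (email, SMS, mobile push, SQS queues, Lambda functions)."""

    def __init__(self, name: str):
        self.name = name
        self.subscribers = []

    def subscribe(self, endpoint):
        self.subscribers.append(endpoint)

    def publish(self, message: str):
        # Fan-out: every subscriber receives its own copy of the message.
        for endpoint in self.subscribers:
            endpoint(message)

inbox_a, inbox_b = [], []
orders = Topic("orders")
orders.subscribe(inbox_a.append)
orders.subscribe(inbox_b.append)
orders.publish("order #42 shipped")
print(inbox_a, inbox_b)  # ['order #42 shipped'] ['order #42 shipped']
```

Contrast with SQS: a queue delivers each message to one consumer, while a topic copies it to all subscribers.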
Documentation:
-----------------
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/
welcome.html
Use case:
-----
Allow a user to register for my email subscription. I will send a
confirmation email to the student; once he accepts, he will receive messages
until he unsubscribes.
AWS RDS
========
Amazon Relational Database Service (Amazon RDS) is a web service
that makes it easier to set up, operate, and scale a relational database in
the AWS Cloud.
It provides cost-efficient, resizable capacity for an industry-standard
relational database and manages common database administration tasks.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html
Steps:
1. Go to the RDS service and choose the free-tier MySQL option.
2. Provide a DB instance identifier, for example mydbraj12345id.
3. Provide the DB name (rajdb) and credentials (root / root1234).
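After these steps, the console shows the instance endpoint, and applications connect with a normal MySQL URL. A sketch of building one (the endpoint string below is a made-up placeholder in the shape RDS uses; 3306 is the default MySQL port):

```python
def jdbc_url(endpoint: str, db: str, port: int = 3306) -> str:
    """Build the JDBC-style URL a Java/Spring app would put in its datasource config."""
    return f"jdbc:mysql://{endpoint}:{port}/{db}"

# Placeholder endpoint: <identifier>.<hash>.<region>.rds.amazonaws.com
url = jdbc_url("mydbraj12345id.abcd1234.ap-south-1.rds.amazonaws.com", "rajdb")
print(url)
```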
DynamoDB
========
Amazon DynamoDB is a database service from Amazon Web Services (AWS)
that stores and retrieves data in key-value pairs.
It's a NoSQL database that's cloud-native, meaning it only runs on AWS.
Scalability
DynamoDB can scale automatically to support tables of any size.
It can handle millions of queries per second.
Performance
DynamoDB offers fast, consistent performance at any scale.
It maintains low latency and predictable performance.
Serverless
DynamoDB supports serverless applications.
It has a flexible billing model and serverless-friendly connection
model.
Key-value pairs
DynamoDB's data model consists of key-value pairs in a large,
non-relational table of rows.
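The key-value model above is defined per table by a key schema. Below is a sketch of the request parameters boto3's `create_table` accepts, with a partition key plus sort key; the table and attribute names are made up for illustration.

```python
def employee_table_params() -> dict:
    """Parameters for a CreateTable call: composite primary key + on-demand billing."""
    return {
        "TableName": "Employee",
        "KeySchema": [
            {"AttributeName": "dept", "KeyType": "HASH"},     # partition key
            {"AttributeName": "emp_id", "KeyType": "RANGE"},  # sort key
        ],
        "AttributeDefinitions": [
            {"AttributeName": "dept", "AttributeType": "S"},    # string
            {"AttributeName": "emp_id", "AttributeType": "N"},  # number
        ],
        "BillingMode": "PAY_PER_REQUEST",  # serverless-friendly on-demand billing
    }

# Usage (needs AWS credentials, so not executed here):
#   boto3.resource("dynamodb").create_table(**employee_table_params())
```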
Example : creating dynamodb table and running with spring boot application
AWS Secrets Manager
====================
How does it help?
------------------
You can store database credentials or any other type of secret.
Store a new secret
How it works
------------
Use Secrets Manager to store, rotate, monitor, and control access
to secrets such as database credentials, API keys, and OAuth tokens.
Enable secret rotation using built-in integration for MySQL, PostgreSQL,
and Amazon Aurora on Amazon RDS.
You can also turn on rotation for other secrets using AWS Lambda functions.
To retrieve secrets, you simply replace hardcoded secrets in applications
with a call to the Secrets Manager APIs, eliminating the need to expose
plaintext secrets.
https://ap-south-1.console.aws.amazon.com/secretsmanager/landing?region=ap-south-1
Java program to retrieve a secret
AWS Lambda
==============
https://aws.amazon.com/lambda/
AWS Lambda
lets you run code without thinking about servers.
You pay only for the compute time that you consume —
there is no charge when your code is not running.
With Lambda, you can run code for virtually any type of application or
backend service,
all with zero administration.
Function as a Service (FaaS):
Lambda functions can be triggered by events. For example, when we upload a
document to Amazon S3, a function can run to process that file.
AWS Lambda is a cloud computing service that lets developers run code
without managing compute resources.
It's an example of serverless architecture and function as a service (FaaS).
How it works
---------
Function: The code that performs a task
Configuration: Specifies how the function is executed
Event source: Triggers the function, such as a user action on a website
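The function/configuration/event-source split above maps directly onto Lambda's Python programming model: you supply a handler that receives the event. A minimal sketch; the event shape here is made up, since a real trigger (such as an S3 upload) passes its own event structure.

```python
def lambda_handler(event, context):
    """Entry point Lambda invokes: `event` carries trigger data, `context`
    carries runtime info (request id, remaining time, etc.)."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Local simulation of an invocation (no AWS needed):
print(lambda_handler({"name": "AWS"}, None))  # {'statusCode': 200, 'body': 'Hello, AWS!'}
```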
Use cases
----------
Real-time data processing
File and data transformation
API backend
Chatbots and natural language processing (NLP)
IoT device management
Scheduled tasks and cron jobs
Data validation and enrichment
User authentication and authorization
Amazon EKS
===========
The Kubernetes control plane, comprising the backend persistence layer and
API servers, is provisioned and scaled across multiple AWS Availability
Zones by Amazon EKS, resulting in high availability and the elimination of a
single point of failure.
Unhealthy control plane nodes are identified and replaced, and control plane
patching is performed automatically.
As a result, an AWS-managed Kubernetes cluster that can endure the loss of an
availability zone has been created.
using eksctl
------------
step 1: create ec2 instance and install eksctl
step 2: check the architecture: uname -m (e.g., x86_64)
step 3:
mkdir eks-setup
cd eks-setup
https://github.com/eksctl-io/eksctl/releases
wget https://github.com/eksctl-io/eksctl/releases/download/v0.204.0/eksctl_Linux_amd64.tar.gz
aws configure
aws s3 ls
to check whether the AWS CLI is working correctly
What does it do?
-----------
https://eksctl.io/
cat /root/.kube/config
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.32.0/2024-12-20/bin/linux/amd64/kubectl
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.32.0/2024-12-20/bin/linux/amd64/kubectl.sha256
cp kubectl /usr/local/bin/
Now check the command
vim eks.yaml
eks.yaml
-----------
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: busycoder-cluster
  region: ap-south-1
nodeGroups:
  - name: ng-1
    instanceType: t2.micro
    desiredCapacity: 2
how to run:
------------
eksctl create cluster -f eks.yaml
create a pod :
------------
kubectl run nginxapp2 --image nginx
kubectl get all
to delete cluster:
----------------
eksctl delete cluster --name busycoder-cluster --region ap-south-1
Amazon CloudWatch
====================
Amazon CloudWatch is a monitoring tool that helps you track the
health of your AWS applications and resources.
It can help you monitor and fix operational issues, optimize performance, and
troubleshoot infrastructure.
What it does
---------------------
Monitors applications, infrastructure, network, and services
Collects and stores log files
Sets alarms
Views graphs and statistics
Streams metrics
Provides cross-account observability
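Custom application metrics are pushed to CloudWatch as metric data. A sketch of the payload shape boto3's `put_metric_data` accepts; the namespace and metric name are made-up examples.

```python
def orders_metric(count: int) -> dict:
    """Payload for one custom metric datapoint in the namespace 'MyApp'."""
    return {
        "Namespace": "MyApp",
        "MetricData": [
            {"MetricName": "OrdersProcessed", "Value": float(count), "Unit": "Count"}
        ],
    }

# Usage (needs AWS credentials, so not executed here):
#   boto3.client("cloudwatch").put_metric_data(**orders_metric(17))
```

An alarm can then watch OrdersProcessed and trigger an automated action when it crosses a threshold.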
gcloud init
Other information
----------------------
gcloud compute zones list
gcloud compute regions list
gcloud compute machine-types list
gcloud compute machine-types list --filter zone:asia-southeast2-b
gcloud compute machine-types list --filter "zone:(asia-southeast2-b asia-
southeast2-c)"
gcloud compute zones list --filter=region:us-west2
gcloud compute zones list --sort-by=region
gcloud compute zones list --sort-by=~region
gcloud compute zones list --uri
gcloud compute regions describe us-west4
IMP LINKS
=========