
Cloud-AWS

What is Cloud computing?


Cloud computing is the delivery of computing services (storage, networking, processing power, databases) over the internet. Instead of owning and managing physical hardware, you access these resources on demand from a cloud provider, paying only for what you use.

What is Virtualization?
Every company has data centers with many servers, but these servers are costly, and dedicating a whole server to a single application is wasteful.

Virtualization lets a physical server be configured as multiple virtual servers, each hosting its own application. For example, a server with 100 GB of capacity can be divided into ten virtual servers of 10 GB each, hosting ten applications.

Private Cloud
A private cloud is used exclusively by one organization. It is hosted on-premises or by a third-party provider.

It offers better control, security, and customization, but is more expensive.

Ex: NASA uses Nebula for secure data processing

Public Cloud
Shared among multiple organizations and hosted by providers such as AWS, Azure, or GCP.

Security is managed by the provider, suitable for general-purpose or less-sensitive workloads.

Ex: Netflix uses Amazon Web Services (AWS) to stream content to millions of users worldwide, leveraging its scalability and cost-efficiency.

1. Identity and Access Management (IAM)


In AWS, IAM (Identity and Access Management) is a service that helps you manage access to AWS
resources securely. It allows you to:

1. Create Users: Add individual users for team members.

2. Define Permissions: Control what resources they can access and actions they can perform.

3. Create Groups: Group users with similar permissions for easier management.

4. Use Roles: Assign temporary access to AWS resources for applications or services.

5. Enable Policies: Use JSON-based policies to define precise permissions.
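For illustration, here is a minimal boto3 sketch of creating a user, attaching a managed policy, and placing the user in a group; the user and group names are hypothetical examples.

import boto3

iam = boto3.client('iam')

# Create a user (the name "dev-user" is just an example)
iam.create_user(UserName='dev-user')

# Attach an AWS managed policy granting read-only S3 access
iam.attach_user_policy(
    UserName='dev-user',
    PolicyArn='arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess'
)

# Add the user to a group so permissions can be managed collectively
iam.create_group(GroupName='developers')
iam.add_user_to_group(GroupName='developers', UserName='dev-user')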

1.1 Creation Process


1. Go to IAM

2. Go to Users

3. Attach Policies

4. Modify permissions

5. Create

Similarly, multiple users can be placed in a group and the group's permissions modified, which makes the process easier to manage.

2. EC2 (Elastic Compute Cloud)


An EC2 instance is a virtual server in AWS used to run applications in the cloud. It provides scalable computing capacity, allowing you to launch virtual machines (VMs) and configure them as per your needs.

2.1. Key Features of EC2:

1. Scalability:

Easily scale up or down based on demand.

Ideal for handling traffic spikes or reducing costs during low usage.

2. Variety of Instance Types:

EC2 offers various instance types optimized for different use cases like compute, memory,
storage, or GPU-based workloads.

3. Pay-As-You-Go:

Pay only for the time the instance runs.

4. OS Flexibility:

Choose operating systems like Linux, Windows, or macOS.

5. Storage Options:

Use Elastic Block Store (EBS) for persistent storage or instance store for temporary storage.

6. Networking:

Launch instances in Virtual Private Clouds (VPCs) with security and customization options.

2.2 Steps to Launch an EC2 Instance:

1. Choose an AMI (Amazon Machine Image):

Select the OS and software configuration.

2. Select an Instance Type:

Choose CPU, memory, storage, and networking capacity.

3. Configure Instance Details:

Specify networking, IAM roles, and other settings.

4. Add Storage:

Attach volumes like EBS or instance store.

5. Configure Security Group:

Define inbound and outbound traffic rules.

6. Launch the Instance:

Choose an existing key pair or create a new one for secure access.
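The same launch flow can be done programmatically; here is a hedged boto3 sketch, where the AMI ID, key pair name, and security group ID are placeholders, not real values.

import boto3

ec2 = boto3.client('ec2')

# Launch a single t2.micro instance from a chosen AMI (placeholder values)
response = ec2.run_instances(
    ImageId='ami-0123456789abcdef0',             # placeholder AMI ID
    InstanceType='t2.micro',
    KeyName='my-key-pair',                       # existing key pair for SSH access
    SecurityGroupIds=['sg-0123456789abcdef0'],   # placeholder security group
    MinCount=1,
    MaxCount=1,
)
print(response['Instances'][0]['InstanceId'])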

2.3 Exercise

The public IP address of the instance was noted.

Connect from Git Bash: ssh -i <key_pair_file> ubuntu@<ip>

A terminal session on the instance opens.

Switch to the root user: sudo su -

apt update

apt install openjdk-11-jdk

Go to the Jenkins Ubuntu installation page and copy the commands.

systemctl status jenkins

Jenkins runs on port 8080.

Go to Security in EC2.

Inbound rules → Edit.

Add a rule: Custom TCP, port 8080, Anywhere-IPv4, then save.

Now open http://<ip>:8080 and see the Jenkins deployment on EC2.

In short, Jenkins was deployed on an EC2 instance.

3. Virtual Private Cloud (VPC)


Imagine you want to set up a private, secure, and isolated area in the cloud where you can run your
applications and store your data. This is where a VPC comes into play.

A VPC is a virtual network that you create in the cloud. It allows you to have your own private section of the internet, just like having your own network within a larger network.

Within this VPC, you can create and manage various resources such as servers, databases, and storage.

Think of it as having your own little "internet" within the bigger internet. This virtual network is
completely isolated from other users' networks, so your data and applications are secure and
protected.

Just like a physical network, a VPC has its own set of rules and configurations. You can define the
IP address range for your VPC and create smaller subnetworks within it called subnets.

These subnets help you organize your resources and control how they communicate with each other.

To connect your VPC to the internet or other networks, you can set up gateways or routers. These
act as entry and exit points for traffic going in and out of your VPC.
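To make these pieces concrete, here is a minimal boto3 sketch that creates a VPC, a subnet, and an internet gateway; the CIDR ranges are arbitrary example values.

import boto3

ec2 = boto3.client('ec2')

# Create the VPC with an IP address range of your choosing
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')
vpc_id = vpc['Vpc']['VpcId']

# Carve out a subnet inside the VPC
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock='10.0.1.0/24')

# Attach an internet gateway so traffic can enter and leave the VPC
igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw['InternetGateway']['InternetGatewayId'],
    VpcId=vpc_id,
)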

3.1 VPC components

3.1.1 Virtual Private Cloud:


A VPC is a virtual network that closely resembles a traditional network that you'd operate in your own
data center. After you create a VPC, you can add subnets.

VPC creation (screenshot)

3.1.2 Subnets

A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability
Zone.

Public subnet: for resources that need access to the internet.

Private subnet: no public access.

3.1.3 IP Addressing

You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets.

You can also bring your public IPv4 and IPv6 GUA addresses to AWS and allocate them to
resources in your VPC, such as EC2 instances, NAT gateways, and Network Load Balancers.

3.1.4 NACL

A Network Access Control List is a stateless firewall that controls inbound and outbound traffic at
the subnet level.

It operates at the IP address level and can allow or deny traffic based on rules that you define.

NACLs provide an additional layer of network security for your VPC.

Example: blocking a specific port for an EC2 instance at the NACL level. Even if the port is allowed at the instance level, the NACL will block it, since the NACL controls traffic for the whole subnet.

3.1.5 Security Group

A security group acts as a virtual firewall for instances (EC2 instances or other resources) within a
VPC.

It controls inbound and outbound traffic at the instance level.

Security groups allow you to define rules that permit or restrict traffic based on protocols, ports,
and IP addresses.

Example: allowing port 8000 within EC2 (security group rule).
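The same rule can be added with boto3; in this sketch the security group ID is a placeholder.

import boto3

ec2 = boto3.client('ec2')

# Allow inbound TCP traffic on port 8000 from anywhere (IPv4)
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',  # placeholder security group ID
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 8000,
        'ToPort': 8000,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}],
    }],
)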

3.1.6 Route Tables

Use route tables to determine where network traffic from your subnet or gateway is directed.

3.1.7 Gateways and Endpoints

A gateway connects your VPC to another network. For example, use an internet gateway to
connect your VPC to the internet.

Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway
or NAT device.

3.1.8 NAT (Network Address Translation)

A NAT device in AWS is used to enable resources in a private subnet to access the internet while keeping them secure by preventing direct inbound access.

1. Instances in a private subnet send outbound traffic (e.g., HTTP/HTTPS) through the NAT device.

2. The NAT translates the private IP addresses of the instances into a public IP address for internet
communication.

3. Responses from the internet are routed back to the instances through the NAT.

A web application running in private subnets might need to download security patches or updates
from the internet. The NAT gateway enables this without exposing the application to direct internet
access.

3.2 Exercise
Created a VPC with two availability zones

Created an EC2 instance and chose the VPC created above.

Opened Git Bash and connected: ssh -i <access_key> ubuntu@<ip>

sudo apt update

python3 -m http.server 8000

Opened http://<ip>:8000 in the browser.

The page loads.

Went to security groups in EC2 and edited the inbound rules to remove access to port 8000 (security groups are allow-only, so access is denied by removing the rule).

Went to NACL in VPC and gave access to port 8000

Likewise changed the settings and understood the access options.

3.3 Reference
https://github.com/iam-veeramalla/aws-devops-zero-to-hero/blob/main/day-4/README.md

4. DNS (Domain Name System)
DNS acts as a directory service by mapping domain names to IP addresses, which enables users to
access websites using easy-to-remember names instead of numerical IP addresses.

When a user types a domain name in their browser, DNS resolves it to the corresponding IP
address, directing the traffic to the appropriate server (like a load balancer or an application
server).

4.1 Route 53

Route 53 is Amazon Web Services' (AWS) Domain Name System (DNS) service, providing a
scalable and reliable way to route users to applications hosted on AWS.

It simplifies the process of managing domain names, allowing users to purchase new domains or
integrate existing ones seamlessly.

4.2 Key Features

Domain Registration: Users can register new domain names directly through AWS Route 53,
eliminating the need for third-party domain registrars. Alternatively, existing domain names from
other registrars can be integrated into Route 53.

Hosted Zones: Route 53 uses hosted zones to manage DNS records. Users can create public or
private hosted zones, where DNS mappings between domain names and IP addresses are
maintained.

Health Checks: Route 53 can perform health checks on applications, ensuring that traffic is only
directed to healthy instances. It checks the availability of application servers and can route traffic
accordingly.
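As an illustration, a record in a hosted zone can be created with boto3; this is only a sketch, and the hosted zone ID, domain name, and IP address below are made-up placeholders.

import boto3

route53 = boto3.client('route53')

# Map a domain name to an IP address with an A record (all values hypothetical)
route53.change_resource_record_sets(
    HostedZoneId='Z0123456789ABCDEFGHIJ',  # placeholder hosted zone ID
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': 'example.com',
                'Type': 'A',
                'TTL': 300,
                'ResourceRecords': [{'Value': '203.0.113.10'}],
            },
        }]
    },
)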

5. S3
Simple Storage Service is a scalable and secure cloud storage service provided by Amazon Web
Services (AWS). It allows you to store and retrieve any amount of data from anywhere on the web.

5.1 What are S3 buckets?

S3 buckets are containers for storing objects (files) in Amazon S3. Each bucket has a globally unique name across all of AWS. You can think of an S3 bucket as a top-level folder that holds your data.

5.2 Why use S3 buckets?


S3 buckets provide a reliable and highly scalable storage solution for various use cases. They are commonly used for backup and restore, data archiving, content storage for websites, and as a data source for big data analytics.

5.3 Key benefits of S3 buckets

Durability and availability: S3 provides high durability and availability for your data.

Scalability: You can store and retrieve any amount of data without worrying about capacity constraints.

Security: S3 offers multiple security features such as encryption, access control, and audit logging.

Performance: S3 is designed to deliver high performance for data retrieval and storage operations.

Cost-effective: S3 offers cost-effective storage options and pricing models based on your usage
patterns.

5.4 Creating and Configuring S3 Buckets

5.4.1 Uploading objects to S3 buckets

You can upload objects to an S3 bucket using various methods, including the AWS Management
Console, AWS CLI, SDKs, and direct HTTP uploads.
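As a minimal boto3 sketch (the bucket and file names are examples):

import boto3

s3 = boto3.client('s3')

# Upload a local file as an object, setting explicit content-type metadata
s3.upload_file(
    'index.html', 'my-example-bucket', 'index.html',
    ExtraArgs={'ContentType': 'text/html'},
)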

5.4.2 Object metadata and properties

Object metadata contains additional information about each object in an S3 bucket. It includes attributes like content type, cache control, encryption settings, and custom metadata.

5.4.3 S3 Replication

S3 replication enables automatic and asynchronous replication of objects between S3 buckets in different regions or within the same region.

5.4.4 Versioning

Versioning allows you to keep multiple versions of an object in the bucket. It helps protect against
accidental deletions or overwrites.
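Enabling versioning programmatically can be sketched as follows (the bucket name is an example):

import boto3

s3 = boto3.client('s3')

# Turn on versioning so overwritten or deleted objects keep prior versions
s3.put_bucket_versioning(
    Bucket='my-example-bucket',
    VersioningConfiguration={'Status': 'Enabled'},
)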

5.5 S3 Bucket Management and Administration

5.5.1 S3 bucket policies

Create and manage bucket policies to control access to your S3 buckets. Bucket policies are written in
JSON and define permissions for various actions and resources.
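For illustration, here is a sketch of attaching a minimal public-read policy with boto3; the bucket name is a placeholder, and real policies should be scoped to your actual needs.

import json
import boto3

s3 = boto3.client('s3')

# A minimal policy allowing public read of objects (illustrative only)
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 's3:GetObject',
        'Resource': 'arn:aws:s3:::my-example-bucket/*',
    }],
}

s3.put_bucket_policy(Bucket='my-example-bucket', Policy=json.dumps(policy))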

5.5.2 S3 access control and IAM roles

Use IAM roles and policies to manage access to S3 buckets. IAM roles provide temporary credentials
and fine-grained access control to AWS resources.


6. AWS CLI
The AWS Command Line Interface (CLI) is a tool that enables users to interact with AWS services
through commands in a terminal or command prompt.

It acts as a middleman between the user and AWS APIs, translating CLI commands into API calls
that AWS can understand.

6.1 Basic Command Usage

The AWS CLI simplifies commands for quick tasks, such as listing S3 buckets with aws s3 ls, which provides immediate results.

For more complex operations, users can leverage detailed commands that involve multiple
parameters, such as creating EC2 instances, which require specifying instance types, AMI IDs, and
other configurations.

AWS CLI documentation- https://docs.aws.amazon.com/cli/latest/

6.2 Benefits of Using AWS CLI

The CLI provides a faster alternative for executing tasks compared to the AWS Management
Console, especially for repetitive or bulk actions.

It is particularly useful for DevOps engineers who require quick access to AWS resources and need
to manage them efficiently.

While the CLI is great for simple commands, more complex deployments may benefit from Infrastructure as Code (IaC) tools like Terraform or CloudFormation, which offer structured templates for creating extensive AWS environments.

6.3 Installation

Installing or updating to the latest version of the AWS CLI:
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

The AWS CLI is an open-source tool built using the AWS SDK for Python (Boto) that provides commands for interacting with AWS services. With minimal configuration, you can start using all of the functionality provided by the AWS Management Console from your favorite terminal program.

7. CloudFormation Templates (CFT)

7.1 What is IaC?

Infrastructure as Code (IaC) is a method of managing and provisioning IT infrastructure (like servers, networks, and databases) using code, rather than manual processes.

It allows you to define your infrastructure using configuration files or scripts, which can then be
version-controlled and automated.

For example, you can use Terraform to write a script that provisions an AWS EC2 instance,
configures security groups, and connects it to an RDS database. The same script can be reused
across environments or updated as requirements change.

7.2 CFT

CloudFormation Templates (CFT) are used to create and manage infrastructure on AWS, following
the principles of Infrastructure as Code (IaC).

CFT allows users to define their AWS resources declaratively using YAML or JSON formats.

CFT serves as a middleman between the user and AWS, converting templates into API calls that
AWS understands.

7.3 Differences Between CFT and AWS CLI

AWS CLI is suitable for executing short, quick commands, such as listing resources, while CFT is
utilized for creating complex infrastructure setups.

CFT is preferred for managing multiple resources as a single unit, allowing for easier updates and
version control.

Use CFT for large-scale deployments, while AWS CLI is better for ad-hoc or immediate actions.

7.4 Key Features of CFT

Drift Detection: CFT can detect changes made outside of the template, allowing users to identify
discrepancies between the actual infrastructure and the defined template.

Declarative Syntax: Users can describe what resources are needed without specifying how to
create them, making it easier to manage complex systems.

Version Control: CFT templates can be versioned, enabling users to track changes over time and
revert to previous states when necessary.

7.5 Exercise

1. Create a YAML or JSON file with the template code. You can refer to the AWS CloudFormation documentation.

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html

Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: "democft"

2. You can also generate template code from existing infrastructure.

3. Go to CloudFormation → Stacks → Create stack and upload the file.

4. When the stack is created, the S3 bucket is created and a template file is also stored.

Stack Creation

Bucket Creation

5. After the stack is created, if you delete the bucket manually, you can identify the change from drift detection.

6. It will show what changes have been made.
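The same template can also be deployed programmatically. Here is a hedged boto3 sketch, assuming the template above is saved locally as template.yaml and using a hypothetical stack name.

import boto3

cf = boto3.client('cloudformation')

# Read the local template file
with open('template.yaml') as f:
    template_body = f.read()

# Create the stack from the template
cf.create_stack(StackName='demo-cft-stack', TemplateBody=template_body)

# Block until stack creation finishes
cf.get_waiter('stack_create_complete').wait(StackName='demo-cft-stack')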

8. CloudWatch

8.1 What is CloudWatch?

AWS CloudWatch acts as a monitoring service that tracks AWS cloud resources and applications,
providing insights into system performance and operational health.

It serves as a "gatekeeper" for AWS, enabling users to monitor activities such as EC2 instance
utilization, S3 bucket interactions, and more.

Key functionalities include monitoring, alerting, reporting, and logging, which facilitate proactive
management of cloud resources.

8.2 Core Features of CloudWatch

Monitoring: CloudWatch enables real-time monitoring of AWS resources, allowing users to
visualize system performance through metrics.

Alarms: Users can set up alarms to trigger notifications based on specific metrics, such as CPU
utilization exceeding predefined thresholds.

Logs: The service automatically collects and organizes logs from various AWS resources,
facilitating easier troubleshooting and performance analysis.

8.3 Exercise

1. Create an EC2 instance and connect to it.

2. You can watch the CPU utilization in CloudWatch as well as in the Monitoring tab of the EC2 instance.

3. By default, EC2 publishes metric data every 5 minutes.

4. To change this, enable detailed monitoring.

5. The CPU utilization is spiked using the program below so the graph can be observed.

import time

def simulate_cpu_spike(duration=30, cpu_percent=80):
    print(f"Simulating CPU spike at {cpu_percent}%...")
    start_time = time.time()

    # Calculate the number of iterations needed to approximate the desired CPU utilization
    target_percent = cpu_percent / 100
    total_iterations = int(target_percent * 5_000_000)  # Adjust the number as needed

    # Perform simple arithmetic operations to spike CPU utilization
    for _ in range(total_iterations):
        result = 0
        for i in range(1, 1001):
            result += i

    # Wait for the rest of the time interval
    elapsed_time = time.time() - start_time
    remaining_time = max(0, duration - elapsed_time)
    time.sleep(remaining_time)

    print("CPU spike simulation completed.")

if __name__ == '__main__':
    # Simulate a CPU spike for 30 seconds with 80% CPU utilization
    simulate_cpu_spike(duration=30, cpu_percent=80)

6. You can see the spike both in EC2 monitoring and in CloudWatch.

7. You can adjust the settings to view the data in various formats over different time ranges.

8. You can create alarms and receive an email when the metric exceeds the limit.

9. The email address that will receive alarm notifications can be specified (via an SNS subscription).
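A sketch of creating such an alarm with boto3; the instance ID and SNS topic ARN below are placeholders.

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm when average CPU utilization stays above 50% for one 5-minute period
cloudwatch.put_metric_alarm(
    AlarmName='high-cpu',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],  # placeholder instance
    Statistic='Average',
    Period=300,
    EvaluationPeriods=1,
    Threshold=50.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:my-alarm-topic'],   # placeholder SNS topic
)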

9. AWS Lambda

9.1 What is serverless computing?


It's not about eliminating servers altogether. Instead, serverless computing is a cloud computing
execution model where you, as a developer, don't have to manage servers directly. You focus solely on
writing and deploying your code, while the cloud provider takes care of all the underlying
infrastructure. Serverless architecture eliminates the need to manually manage servers.

9.2 Lambda

AWS Lambda is an important service for DevOps engineers, with multiple use cases including cloud cost optimization.

With AWS Lambda, there is no need to specify CPU or RAM requirements. AWS automatically
scales up or down based on application needs.

Serverless architecture in AWS Lambda allows for automatic creation and teardown of instances,
saving costs and reducing manual effort.

AWS Lambda allows you to write code directly or use provided samples for creating functions.

AWS Lambda functions can be triggered by events for efficient serverless application
development.

Lambda functions can access other services and roles can be used to manage permissions.

The default timeout for a Lambda function is 3 seconds.
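For reference, a Python Lambda function is just a handler that receives the triggering event; a minimal skeleton:

def lambda_handler(event, context):
    # 'event' carries the trigger payload (e.g., S3 upload details);
    # 'context' carries runtime information such as remaining execution time
    print(f"Received event: {event}")
    return {'statusCode': 200, 'body': 'Hello from Lambda'}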

9.3 How Lambda Functions Fit into the Serverless World

1. Event-Driven Execution: Lambda functions are triggered by events. An event could be anything,
like a new file being uploaded to Amazon S3, a request hitting an API, or a specific time on the
clock. When an event occurs, Lambda executes the corresponding function.

2. No Server Management: As a developer, you don't need to worry about managing servers. AWS
handles everything behind the scenes. You just upload your code, configure the trigger, and
Lambda takes care of the rest.

3. Automatic Scaling: Whether you have one user or one million users, Lambda scales automatically.
Each function instance runs independently, ensuring that your application can handle any level of
incoming traffic without manual intervention.

4. Pay-per-Use: One of the most attractive features of serverless computing is cost efficiency. With
Lambda, you pay only for the compute time your code consumes. When your code isn't running,
you're not charged.

5. Supported Languages: Lambda supports multiple programming languages like Node.js, Python,
Java, Go, and more. You can choose the language you are comfortable with or that best fits your
application's needs.

9.4 Serverless Architecture vs. EC2

Unlike EC2, where users must manage instances and resources, Lambda abstracts these details,
allowing users to focus solely on code and logic.

Lambda is event-driven, meaning it only runs when triggered, leading to cost savings as users pay
only for the compute time consumed.

EC2 instances require manual scaling and management, while Lambda automatically scales based
on the volume of requests, removing the overhead of infrastructure management.

9.5 Real-World Use Cases

1. Automated Image Processing: Imagine you have a photo-sharing app, and users upload images
every day. You can use Lambda to automatically resize or compress these images as soon as they
are uploaded to S3.

2. Chatbots and Virtual Assistants: Build interactive chatbots or voice-controlled virtual assistants
using Lambda. These assistants can perform tasks like answering questions, fetching data, or even
controlling smart home devices.

3. Scheduled Data Backups: Use Lambda to create scheduled tasks for backing up data from one
storage location to another, ensuring data resilience and disaster recovery.

4. Real-Time Analytics: Lambda can process streaming data from IoT devices, social media, or other
sources, allowing you to perform real-time analytics and gain insights instantly.

5. API Backends: Develop scalable API backends for web and mobile applications using Lambda. It
automatically handles the incoming API requests and executes the corresponding functions.

9.6 Cost Optimization with Lambda

DevOps engineers can leverage Lambda to automate cost-saving measures, such as identifying
and notifying users about idle resources.

By scheduling Lambda functions to run at specific intervals, teams can regularly assess resource
utilization and take action accordingly.

The serverless model reduces operational overhead and allows organizations to focus on
optimizing cloud spending without manual intervention.

9.7 Security and Compliance Monitoring

Lambda functions can be employed to enforce compliance by monitoring resource configurations and alerting teams to any violations, such as the creation of insecure S3 buckets.

Automated scripts can be scheduled to run daily, checking for compliance issues and sending
notifications to relevant stakeholders.

This proactive approach enhances organizational security while minimizing the risk of human error
in managing cloud resources.

10. Cost Optimization with Lambda


Cloud cost optimization is crucial for organizations seeking to reduce infrastructure overhead and
manage expenses effectively.

A key task is to identify and delete stale resources (unused or forgotten cloud services that continue to incur costs), such as unattached EBS volumes and snapshots.

By utilizing Python and the Boto3 library, DevOps engineers can write scripts that interact with
AWS APIs to identify unused resources based on certain criteria (e.g., no recent activity).

10.1 Exercise

Goal: create a Lambda function that identifies EBS snapshots no longer associated with any active EC2 instance and deletes them to save on storage costs.

Create EC2 instance and create a snapshot with the volume of EC2.

Go to Lambda function and paste the code.

import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2')

    # Get all EBS snapshots owned by this account
    response = ec2.describe_snapshots(OwnerIds=['self'])

    # Get all active EC2 instance IDs
    instances_response = ec2.describe_instances(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
    )
    active_instance_ids = set()

    for reservation in instances_response['Reservations']:
        for instance in reservation['Instances']:
            active_instance_ids.add(instance['InstanceId'])

    # Iterate through each snapshot and delete it if it's not attached to any volume
    for snapshot in response['Snapshots']:
        snapshot_id = snapshot['SnapshotId']
        volume_id = snapshot.get('VolumeId')

        if not volume_id:
            # Delete the snapshot if it's not attached to any volume
            ec2.delete_snapshot(SnapshotId=snapshot_id)
            print(f"Deleted EBS snapshot {snapshot_id} as it was not attached to any volume.")
        else:
            # Check if the volume still exists
            try:
                volume_response = ec2.describe_volumes(VolumeIds=[volume_id])
                if not volume_response['Volumes'][0]['Attachments']:
                    ec2.delete_snapshot(SnapshotId=snapshot_id)
                    print(f"Deleted EBS snapshot {snapshot_id} as it was taken from a volume not attached to any running instance.")
            except ec2.exceptions.ClientError as e:
                if e.response['Error']['Code'] == 'InvalidVolume.NotFound':
                    # The volume associated with the snapshot is not found (it may have been deleted)
                    ec2.delete_snapshot(SnapshotId=snapshot_id)
                    print(f"Deleted EBS snapshot {snapshot_id} as its associated volume was not found.")

The Lambda function fetches all EBS snapshots owned by the account ('self') and also retrieves the IDs of all running EC2 instances. For each snapshot, it checks whether the associated volume (if one exists) is attached to any active instance. If it finds a stale snapshot, it deletes it, effectively optimizing storage costs.

An access-denied error will occur at first, so in the configuration you should create a custom IAM policy allowing the required actions (DescribeSnapshots, DeleteSnapshot, DescribeVolumes, DescribeInstances) and attach it to the Lambda function's execution role.

If you execute the function while the instance is still running, no change will occur, so you have to terminate the instance and run the code again.

First the volume will be deleted (it goes away with the terminated instance).

Upon running the function again, the snapshot will also be deleted.

11. AWS CloudFront

11.1 What is CDN?

A CDN (Content Delivery Network) is designed to improve the delivery of static and dynamic web content. It is a network of servers distributed across various locations that cache content closer to users, reducing latency and enhancing load times.

For example, a user in the US uploads a reel to Instagram, and it is stored on the central (origin) server. If a user in India wants to watch that reel, instead of the request travelling to the distant central server, it goes to the CDN, which directs it to a nearby edge location where a copy of the reel is cached.

There are edge locations all over the world, which is why Instagram and YouTube feel so fast.

11.2 Benefits of Using a CDN

Reduced Latency: CDNs store cached copies of content at edge locations, allowing users to
access data from a server geographically closer to them, resulting in faster load times.

Scalability: CDNs can handle high volumes of traffic, distributing requests across multiple servers
to prevent overload on any single server.

Cost Efficiency: By caching content, CDNs can minimize the load on the origin server, potentially
lowering bandwidth costs for data transfer.

11.3 Exercise

CloudFront can be integrated with Amazon S3 to serve static content, providing a seamless way to
host websites without exposing the S3 bucket to the public.

Using CloudFront, content is cached at edge locations, enabling quicker access for users while
ensuring the S3 bucket remains secure and private

Created a bucket and put an HTML file in it.

Go to CloudFront and choose the S3 bucket as the origin domain.

Choose legacy access identities and create a new OAI (origin access identity).

Choose "Yes, update the bucket policy."

The bucket policy will be updated automatically.

Now the HTML file can be accessed from the CloudFront distribution domain name.

12. ECR
ECR stands for Elastic Container Registry, an AWS service designed for storing and managing
Docker images.

It allows users to push and pull Docker images, functioning as a container registry similar to Docker
Hub.

ECR is designed for scalability and high availability, enabling users to store any number of
container images without restriction.

12.1 Comparison of ECR and Docker Hub

ECR primarily supports private repositories, enhancing security by default, whereas Docker Hub
offers public repositories by default.

Organizations using AWS can easily integrate their IAM (Identity and Access Management) with
ECR, simplifying user management compared to Docker Hub.

Docker Hub is often preferred for public images, while ECR is better suited for private images
within AWS infrastructure.
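As a brief sketch, a repository can be created and a login token fetched with boto3; the repository name here is just an example.

import boto3

ecr = boto3.client('ecr')

# Create a private repository to hold images
ecr.create_repository(repositoryName='simple-python-flask-app')

# Obtain an authorization token used for 'docker login' against ECR
token = ecr.get_authorization_token()
print(token['authorizationData'][0]['proxyEndpoint'])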

13. AWS CodePipeline

AWS CodePipeline is part of a suite of AWS services designed for continuous integration and continuous delivery (CI/CD).

The main services involved are AWS CodeCommit, CodePipeline, CodeBuild, and CodeDeploy.

13.1 AWS Code Services

AWS CodeCommit is a managed version control service that simplifies the management of code repositories without the need for infrastructure maintenance.

AWS CodeBuild automates the build process, allowing users to define build specifications and integrate various stages of the CI pipeline.

AWS CodeDeploy facilitates the deployment process, enabling deployment to various environments like EC2 instances and Kubernetes clusters.

13.2 Benefits of Using AWS CodePipeline

AWS CodePipeline offers a fully managed service, eliminating the overhead of managing infrastructure like Jenkins servers or worker nodes.

Organizations often prefer managed services to reduce the need for dedicated DevOps resources
for scaling, maintenance, and security.

13.3 Challenges and Considerations

While AWS CodePipeline is convenient, it is restricted to AWS infrastructure, which may pose challenges for organizations looking to adopt a multi-cloud strategy.

Jenkins, being open-source and platform-agnostic, allows for greater flexibility and portability
across different cloud providers and on-premises environments.

14. CI Pipeline
Go to CodeBuild.

Choose GitHub as the source and give the repository link.

Set the privileged flag to true (required to build Docker images).

The service role must be created in IAM with a policy granting SSM full access (to read parameters from the Parameter Store).

Buildspec code:

version: 0.2
env:
  parameter-store:
    DOCKER_REGISTRY_USERNAME: /myapp/docker-credentials/username
    DOCKER_REGISTRY_PASSWORD: /myapp/docker-credentials/password
    DOCKER_REGISTRY_URL: /myapp/docker-registry/url
phases:
  install:
    runtime-versions:
      python: 3.11
  pre_build:
    commands:
      - echo "Installing dependencies..."
      - pip install -r day-14/simple-python-app/requirements.txt
  build:
    commands:
      - echo "Running tests..."
      - cd day-14/simple-python-app/
      - echo "Building Docker image..."
      - echo "$DOCKER_REGISTRY_PASSWORD" | docker login -u "$DOCKER_REGISTRY_USERNAME" --password-stdin "$DOCKER_REGISTRY_URL"
      - docker build -t "docker.io/devberg007/simple-python-flask-app:latest" .
      - docker push "docker.io/devberg007/simple-python-flask-app:latest"
  post_build:
    commands:
      - echo "Build completed successfully!"

The parameters should be registered in the Parameter Store inside Systems Manager.

The Docker username and password must be stored in their respective parameters, and for the registry, docker.io must be given.
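Registering such parameters can also be sketched with boto3; the names mirror the buildspec above, and the values are placeholders.

import boto3

ssm = boto3.client('ssm')

# Store the Docker username as a plain string parameter
ssm.put_parameter(Name='/myapp/docker-credentials/username', Value='devberg007', Type='String')

# Store the password encrypted as a SecureString (value here is a placeholder)
ssm.put_parameter(Name='/myapp/docker-credentials/password', Value='changeme', Type='SecureString')

# Store the registry URL
ssm.put_parameter(Name='/myapp/docker-registry/url', Value='docker.io', Type='String')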

The build can be started

The success message can be viewed, and the Docker image will also appear in the Docker Hub account.

For the pipeline, choose GitHub (version 2) as the source provider and add the CodeBuild project.

If the code is changed, the cycle runs automatically, executing all the stages, because of the pipeline we have created.

15. ECS (Elastic Container Service)

15.1 What is ECS?

AWS ECS is a fully managed container orchestration service that allows you to run Docker containers
at scale. It eliminates the need to manage your own container orchestration infrastructure and provides
a highly scalable, reliable, and secure environment for deploying and managing your applications.

15.2 Why Choose ECS Over Other Container Orchestration Tools?

Comparison with Kubernetes:


Kubernetes is undoubtedly a powerful container orchestration tool with a vast ecosystem, but it comes
with a steeper learning curve. ECS, on the other hand, offers a more straightforward setup and is
tightly integrated with other AWS services, making it a preferred choice for AWS-centric environments.

15.3 ECS Fundamentals

Clusters:
A cluster is a logical grouping of EC2 instances or Fargate tasks on which you run your containers. It
acts as the foundation of ECS, where you can deploy your services.

Task Definitions:
Task Definitions define how your containers should run, including the Docker image to use, CPU and
memory requirements, networking, and more. It is like a blueprint for your containers.

Tasks:
A task represents a single running instance of a task definition within a cluster. It could be a single container or multiple related containers that need to work together.

Services:
Services help you maintain a specified number of running tasks simultaneously, ensuring high availability and load balancing for your applications.
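To make this concrete, here is a minimal sketch of registering a task definition with boto3; the family name, image, and sizes are example values.

import boto3

ecs = boto3.client('ecs')

# A blueprint for a single nginx container with 256 MB of memory
ecs.register_task_definition(
    family='web',
    containerDefinitions=[{
        'name': 'web',
        'image': 'nginx:latest',
        'memory': 256,
        'portMappings': [{'containerPort': 80}],
    }],
)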

15.4. Pros of Using AWS ECS

Fully Managed Service: AWS handles the underlying infrastructure, making it easier for you to
focus on deploying and managing applications.

Seamless Integration: ECS seamlessly integrates with other AWS services like IAM, CloudWatch,
Load Balancers, and more.

Scalability: With support for Auto Scaling, ECS can automatically adjust the number of tasks based
on demand.

Cost-Effective: You pay only for the AWS resources you use, and you can take advantage of cost
optimization features.

15.5 Cons of Using AWS ECS

AWS-Centric: If you have a multi-cloud strategy or already invested heavily in another cloud
provider, ECS's tight integration with AWS might be a limitation.

Learning Curve for Advanced Features: While basic usage is easy, utilizing more advanced
features might require a deeper understanding.

Limited Flexibility: Although ECS can run non-Docker workloads with EC2 launch types, it is
primarily optimized for Docker containers.

16. Secret Management in AWS


Secrets management is crucial for protecting sensitive information such as API keys, database
credentials, and passwords, which are frequently used in CI/CD pipelines.

AWS offers several services for secrets management, primarily Systems Manager Parameter Store and AWS Secrets Manager.

Parameter Store is suitable for storing less sensitive information like Docker usernames and
registry URLs, allowing easy integration with other AWS services.

Secrets Manager provides advanced features like automatic secret rotation and is ideal for
managing highly sensitive information, such as database passwords and API tokens.
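Reading a secret at runtime can be sketched with boto3; the secret name below is hypothetical.

import boto3

secrets = boto3.client('secretsmanager')

# Fetch the secret value by name; the string is often a JSON document
response = secrets.get_secret_value(SecretId='myapp/db-password')  # placeholder name
print(response['SecretString'])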

16.1 HashiCorp Vault as an Alternative

HashiCorp Vault is a widely-used open-source tool for secrets management that can be deployed
across different cloud platforms, offering flexibility in multi-cloud environments.

Vault provides advanced security features and community-driven enhancements, making it a robust option for organizations with diverse infrastructure needs.

Choosing Vault allows organizations to avoid vendor lock-in and facilitates easier migration and
integration with different cloud providers.

17. AWS Config


AWS Config is a service that helps manage and monitor the compliance of AWS resources.

It ensures that AWS accounts and resources comply with organizational rules and regulations.

The service allows users to track resource inventory and changes, enabling better governance.

AWS Lambda functions can be integrated with AWS Config to automate compliance checks in real-
time.

Let's say two EC2 instances are created, one with monitoring enabled and one without.

A Config rule can be defined requiring that EC2 instances have monitoring enabled, so the Config section shows whether each instance complies. This is achieved by first creating a Lambda function and then providing its ARN in the rule section of Config.

Similar code can be written to enforce rules for whatever services are needed, as in the sketch below.
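A hedged sketch of such a custom Config rule Lambda, assuming the rule is triggered by configuration changes to EC2 instances and reports compliance back to Config:

import json
import boto3

config = boto3.client('config')
ec2 = boto3.client('ec2')

def lambda_handler(event, context):
    # The invoking event (a JSON string) describes the resource being evaluated
    invoking_event = json.loads(event['invokingEvent'])
    configuration_item = invoking_event['configurationItem']
    instance_id = configuration_item['resourceId']

    # Check whether detailed monitoring is enabled on the instance
    response = ec2.describe_instances(InstanceIds=[instance_id])
    monitoring_state = response['Reservations'][0]['Instances'][0]['Monitoring']['State']
    compliance = 'COMPLIANT' if monitoring_state == 'enabled' else 'NON_COMPLIANT'

    # Report the evaluation result back to AWS Config
    config.put_evaluations(
        Evaluations=[{
            'ComplianceResourceType': configuration_item['resourceType'],
            'ComplianceResourceId': instance_id,
            'ComplianceType': compliance,
            'OrderingTimestamp': configuration_item['configurationItemCaptureTime'],
        }],
        ResultToken=event['resultToken'],
    )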

