
AWS Lab Manual-Core

AWS lab manual for 6th semester students


Experiment-1

Exploring AWS CloudShell and the AWS Cloud9 IDE

This experiment will guide you through AWS CloudShell and AWS Cloud9, two cloud-
based development environments that allow you to interact with AWS resources and develop
applications without setting up a local development environment.

Part 1: Exploring AWS CloudShell


Step 1: Access AWS CloudShell

1. Log in to AWS Console:


o Open AWS Management Console and sign in with your AWS credentials.
2. Open CloudShell:
o In the AWS Console, click on the CloudShell icon (top-right corner).
o Wait for the environment to initialize (it may take a few seconds).

Step 2: Run Basic AWS CLI Commands

1. Check AWS Identity:

sh
=================================================
aws sts get-caller-identity

o This verifies your IAM identity.


2. List S3 Buckets:

sh
=================================================
aws s3 ls

o This displays existing S3 buckets.


3. Check Running EC2 Instances:

sh
=================================================
aws ec2 describe-instances

o Lists EC2 instances (if any exist).

Step 3: Upload and Download Files

1. Create a sample text file:

sh
=================================================
echo "Hello AWS CloudShell!" > sample.txt

2. Upload it to S3:

sh
=================================================
aws s3 cp sample.txt s3://your-bucket-name/

3. Download a file from S3:

sh
=================================================
aws s3 cp s3://your-bucket-name/sample.txt .

Step 4: Exit CloudShell

• Simply close the browser tab or type exit in the terminal.

Part 2: Exploring AWS Cloud9 IDE


Step 1: Create an AWS Cloud9 Environment

1. Open AWS Cloud9:


o Navigate to the AWS Cloud9 service in the AWS Console.
2. Create a new environment:
o Click Create Environment.
o Enter an environment name (e.g., MyCloud9Env).
o Choose EC2 (managed environment) or SSH-based environment.
o Select an instance type (e.g., t2.micro for free-tier users).
o Click Create and wait for setup completion.

Step 2: Explore Cloud9 Interface

• Terminal Panel: Access AWS CLI and system commands.


• Editor Panel: Write and edit code in various languages.
• File Explorer: Manage project files.

Step 3: Run AWS CLI Commands

1. Verify AWS Identity:

sh
=================================================
aws sts get-caller-identity

2. Create a Simple Python Script:

sh
=================================================
echo "print('Hello from AWS Cloud9!')" > hello.py

3. Run the Script:

sh
=================================================
python3 hello.py

Step 4: Install Dependencies

1. Install Boto3 (AWS SDK for Python):

sh
=================================================
pip install boto3

2. Test Boto3 by Listing S3 Buckets:

python
=================================================
import boto3

s3 = boto3.client('s3')
response = s3.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])

Step 5: Shut Down Cloud9

• Click "Actions > Delete Environment" to remove your Cloud9 instance and avoid
unnecessary charges.

Experiment-2
Working with Amazon S3 and Orchestrating Serverless Functions with AWS Step Functions

This experiment involves storing and retrieving files using Amazon S3 and automating
workflows using AWS Step Functions to orchestrate serverless AWS Lambda functions.

Part 1: Working with Amazon S3


Step 1: Create an S3 Bucket

1. Log in to AWS Console:


o Go to AWS Management Console.
2. Navigate to S3:
o Open the S3 service from the AWS console.
3. Create a New Bucket:
o Click "Create bucket".
o Enter a unique bucket name (e.g., my-serverless-bucket).
o Choose a region (e.g., us-east-1).
o Disable Block all public access (for testing only, keep it enabled for
production).
o Click "Create bucket".

Step 2: Upload a File to S3

1. Click on the newly created bucket.


2. Click "Upload", then "Add files" and select a file from your local system.
3. Click "Upload" to store the file in S3.

Step 3: Access the File Using AWS CLI

1. Open AWS CloudShell or any terminal with AWS CLI installed.


2. Run the following command to list files in the bucket:

sh
=================================================
aws s3 ls s3://my-serverless-bucket/

3. Download the file:

sh
=================================================
aws s3 cp s3://my-serverless-bucket/your-file.txt .

Step 4: Enable Event Notifications for Serverless Automation

1. In the S3 bucket settings, go to "Properties" → "Event notifications".
2. Click "Create event notification", name it LambdaTrigger.
3. Under Event types, choose "PUT" (triggers when a file is uploaded).
4. Under Destination, select AWS Lambda (we will create this function in the next
section).
5. Click Save changes.
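
The same notification can also be configured programmatically. Below is a minimal boto3 sketch; it assumes the Lambda function from Part 2 already exists, that S3 has been granted permission to invoke it, and that the ARN shown is a placeholder you replace with your own:

python
=================================================
import boto3

s3 = boto3.client('s3')

# Placeholder ARN: use the actual ARN of the S3FileProcessor function
lambda_arn = "arn:aws:lambda:us-east-1:ACCOUNT_ID:function:S3FileProcessor"

# Invoke the Lambda function whenever an object is uploaded via PUT
s3.put_bucket_notification_configuration(
    Bucket="my-serverless-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": lambda_arn,
                "Events": ["s3:ObjectCreated:Put"]
            }
        ]
    }
)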

Part 2: Orchestrating Serverless Functions with AWS Step Functions

Step 1: Create a Lambda Function

1. Open the AWS Lambda service from the AWS Console.


2. Click "Create function" → Select "Author from scratch".
3. Enter function name (e.g., S3FileProcessor).
4. Choose runtime Python 3.9 (or any supported version).
5. Click "Create function".

Step 2: Write the Lambda Function Code

1. Scroll to the Code section and edit lambda_function.py:

python
=================================================
import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')

    # Extract the bucket name and file name from the S3 event record
    record = event['Records'][0]
    bucket_name = record['s3']['bucket']['name']
    file_name = record['s3']['object']['key']

    # Process the file (e.g., log the file name)
    print(f"File {file_name} uploaded to {bucket_name}")

    return {
        'statusCode': 200,
        'body': json.dumps(f"Processed file {file_name}")
    }

2. Click "Deploy".

Step 3: Create an AWS Step Function

1. Open the AWS Step Functions service.


2. Click "Create state machine".
3. Select "Author with code" and choose "Standard".

4. Use the following JSON definition to integrate Lambda:

json
=================================================
{
  "Comment": "Step Function to Process S3 Files",
  "StartAt": "InvokeLambda",
  "States": {
    "InvokeLambda": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:S3FileProcessor",
      "End": true
    }
  }
}

o Replace REGION and ACCOUNT_ID with your actual AWS values.


5. Click "Create state machine".

Step 4: Test the Step Function Workflow

1. Click "Start execution".


2. Provide a test input, e.g.:

json
=================================================
{
  "Records": [
    {
      "s3": {
        "bucket": {"name": "my-serverless-bucket"},
        "object": {"key": "your-file.txt"}
      }
    }
  ]
}

3. Click "Start execution".


4. Monitor the execution and check logs in AWS CloudWatch.
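
The execution can also be started from code. A minimal boto3 sketch, where the state machine ARN is a placeholder copied from the Step Functions console:

python
=================================================
import json
import boto3

sfn = boto3.client('stepfunctions')

# Placeholder ARN: copy the real one from the Step Functions console
state_machine_arn = "arn:aws:states:REGION:ACCOUNT_ID:stateMachine:MyStateMachine"

response = sfn.start_execution(
    stateMachineArn=state_machine_arn,
    input=json.dumps({
        "Records": [
            {
                "s3": {
                    "bucket": {"name": "my-serverless-bucket"},
                    "object": {"key": "your-file.txt"}
                }
            }
        ]
    })
)
print("Execution ARN:", response['executionArn'])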

Experiment-3
Working with Amazon DynamoDB

This experiment guides you through creating, inserting, querying, and deleting data in
Amazon DynamoDB, a fully managed NoSQL database service.

Step 1: Create a DynamoDB Table


1. Log in to AWS Console:
o Go to AWS Management Console.
2. Navigate to DynamoDB:
o Search for DynamoDB in the AWS Console and open the service.
3. Create a New Table:
o Click "Create table".
o Enter Table name (e.g., Customers).
o Set the Partition key: CustomerID (Type: String).
o Click Create table (leave other options as default).
4. Wait for the table status to change to ACTIVE.
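
If you prefer to create the table from code rather than the console, here is a minimal boto3 sketch (on-demand billing is assumed, so no capacity units are specified):

python
=================================================
import boto3

dynamodb = boto3.client('dynamodb', region_name='us-east-1')

dynamodb.create_table(
    TableName='Customers',
    KeySchema=[{'AttributeName': 'CustomerID', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'CustomerID', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST'  # on-demand capacity
)

# Block until the table status becomes ACTIVE
dynamodb.get_waiter('table_exists').wait(TableName='Customers')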

Step 2: Insert Data into DynamoDB Table


Method 1: Using AWS Console

1. Click on the Customers table.


2. Go to the Items tab → Click "Create item".
3. Add the following key-value pairs:

json
=================================================
{
  "CustomerID": "C001",
  "Name": "John Doe",
  "Email": "[email protected]",
  "Phone": "+1234567890"
}

4. Click Save.

Method 2: Using AWS CLI

1. Open AWS CloudShell or your terminal.


2. Run the following command:

sh
=================================================

aws dynamodb put-item --table-name Customers --item '{
"CustomerID": {"S": "C002"},
"Name": {"S": "Jane Smith"},
"Email": {"S": "[email protected]"},
"Phone": {"S": "+9876543210"}
}' --region us-east-1

Step 3: Query Data from DynamoDB


Method 1: Using AWS Console

1. Open the Customers table.


2. Go to the Items tab → Click "Scan" to view all records.

Method 2: Using AWS CLI

1. Fetch all records:

sh
=================================================
aws dynamodb scan --table-name Customers --region us-east-1

2. Query a specific record:

sh
=================================================
aws dynamodb get-item --table-name Customers --key '{
"CustomerID": {"S": "C001"}
}' --region us-east-1
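
The same reads can be done with the boto3 table resource, which handles DynamoDB's type annotations for you. A minimal sketch:

python
=================================================
import boto3

table = boto3.resource('dynamodb', region_name='us-east-1').Table('Customers')

# Fetch a single item by its partition key
item = table.get_item(Key={'CustomerID': 'C001'}).get('Item')
print(item)

# Scan all items (fine for small test tables; avoid on large ones)
for record in table.scan()['Items']:
    print(record)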

Step 4: Update an Item


Method 1: Using AWS Console

1. Select an item from the Items tab.


2. Click "Edit", modify values, and click "Save".

Method 2: Using AWS CLI

1. Update the phone number for CustomerID C001:

sh
=================================================
aws dynamodb update-item --table-name Customers --key '{
    "CustomerID": {"S": "C001"}
}' --update-expression "SET Phone = :p" --expression-attribute-values '{
    ":p": {"S": "+1122334455"}
}' --region us-east-1

Step 5: Delete Data from DynamoDB
Method 1: Using AWS Console

1. Open DynamoDB, go to Items tab.


2. Select an item → Click "Delete".

Method 2: Using AWS CLI

1. Delete CustomerID C002:

sh
=================================================
aws dynamodb delete-item --table-name Customers --key '{
"CustomerID": {"S": "C002"}
}' --region us-east-1

Step 6: Delete the DynamoDB Table


Method 1: Using AWS Console

1. Open the DynamoDB service.


2. Select the Customers table → Click "Delete table".

Method 2: Using AWS CLI

1. Run the following command:

sh
=================================================
aws dynamodb delete-table --table-name Customers --region us-east-1

Experiment-4
Developing REST APIs with Amazon API Gateway

This experiment guides you through creating, deploying, and testing a REST API using
Amazon API Gateway and AWS Lambda to handle HTTP requests.

Step 1: Create an AWS Lambda Function


Before setting up the API, we need a backend function to process API requests.

1. Open AWS Lambda Console:


o Go to AWS Lambda.
2. Create a New Lambda Function:
o Click "Create function" → Select "Author from scratch".
o Enter Function name (e.g., MyAPIFunction).
o Choose Runtime: Python 3.9 (or another preferred language).
o Click "Create function".
3. Modify the Lambda Code:
o Scroll to the Code section and edit lambda_function.py:

python
=================================================
import json

def lambda_handler(event, context):
    response = {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from API Gateway!"})
    }
    return response

o Click "Deploy".

Step 2: Create an API in Amazon API Gateway


1. Open API Gateway Console:
o Navigate to API Gateway.
2. Create a New REST API:
o Click "Create API" → Select "REST API" (Private or Regional) → Click
Build.
o Enter API Name (e.g., MyAPI).
o Select Regional as the endpoint type.
o Click Create API.

Step 3: Create a Resource and Method
1. Create a New Resource:
o In the left panel, click Actions → Create Resource.
o Enter Resource Name (e.g., hello).
o Click Create Resource.
2. Create a GET Method:
o Select the /hello resource.
o Click Actions → Create Method → Choose GET.
o Click ✓ (Check Mark).
3. Integrate with Lambda Function:
o In the Integration type, select Lambda Function.
o Enter the Lambda function name (MyAPIFunction).
o Click Save and confirm the permissions.

Step 4: Deploy the API


1. Create a New Deployment Stage:
o Click Actions → Deploy API.
o Choose New Stage, enter Stage Name (e.g., dev).
o Click Deploy.
2. Copy the Invoke URL:
o The Invoke URL is displayed (e.g., https://xyz.execute-api.us-east-1.amazonaws.com/dev).
o To test, append /hello:

sh
=================================================
curl -X GET https://xyz.execute-api.us-east-1.amazonaws.com/dev/hello
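
The same check can be done from a short Python script; the URL below is the placeholder from above and must be replaced with your actual invoke URL:

python
=================================================
import urllib.request

# Placeholder invoke URL: replace with the one shown after deployment
url = "https://xyz.execute-api.us-east-1.amazonaws.com/dev/hello"

with urllib.request.urlopen(url) as resp:
    print(resp.status)           # Expect 200
    print(resp.read().decode())  # Expect the JSON message from the Lambda function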

Step 5: Enable CORS (Optional)


1. Select the /hello resource.
2. Click Actions → Enable CORS.
3. Click Enable CORS and replace existing CORS headers.

Step 6: Secure the API (Optional)


• API Key Authentication:
o Under Usage Plans, create an API key and require it for requests.

• IAM Authentication:
o Use IAM policies to restrict access.

Step 7: Monitor API Usage


1. Open CloudWatch Logs to view API requests.
2. Use AWS X-Ray for tracing API latency.

Experiment-5
Creating Lambda Functions Using the AWS SDK for Python

This experiment will guide you through creating, deploying, and invoking an AWS
Lambda function programmatically using Boto3, the AWS SDK for Python.

Step 1: Install and Configure AWS CLI and Boto3


1. Install AWS CLI (if not already installed):
o Download and install AWS CLI from AWS CLI Official Page.
2. Configure AWS CLI:
o Open a terminal and run:

sh
=================================================
aws configure

o Enter AWS Access Key, Secret Key, Region, and Output format (json).
3. Install Boto3 (if not installed):
o Run the following command:

sh
=================================================
pip install boto3

Step 2: Create an IAM Role for Lambda Execution


1. Go to AWS IAM Console:
o Open IAM.
2. Create a New Role:
o Click Roles → Create Role.
o Choose AWS Service → Select Lambda.
o Click Next.
3. Attach Permissions:
o Attach the policy AWSLambdaBasicExecutionRole.
o Click Next → Create Role.
4. Copy the Role ARN:
o Go to the newly created role → Copy the Role ARN.
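
The same role can also be created programmatically. A minimal boto3 sketch, assuming your credentials can create IAM roles; the role name here is only an example:

python
=================================================
import json
import boto3

iam = boto3.client('iam')

# Trust policy that lets the Lambda service assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}

role = iam.create_role(
    RoleName="MyLambdaExecutionRole",  # example name
    AssumeRolePolicyDocument=json.dumps(trust_policy)
)

iam.attach_role_policy(
    RoleName="MyLambdaExecutionRole",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
)

print("Role ARN:", role['Role']['Arn'])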

Step 3: Write a Python Script to Create a Lambda Function

1. Create a ZIP File Containing Lambda Code

• Create a file lambda_function.py:

python
=================================================
import json

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from AWS Lambda!')
    }

• Zip the file to prepare for deployment:

sh
=================================================
zip lambda_function.zip lambda_function.py

2. Python Script to Create the Lambda Function

• Create a new Python script create_lambda.py:

python
=================================================
import boto3

lambda_client = boto3.client('lambda', region_name='us-east-1')

function_name = "MyBoto3LambdaFunction"
role_arn = "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_LAMBDA_ROLE"

# Read the deployment package created earlier
with open("lambda_function.zip", "rb") as f:
    zipped_code = f.read()

response = lambda_client.create_function(
    FunctionName=function_name,
    Runtime="python3.9",
    Role=role_arn,
    Handler="lambda_function.lambda_handler",
    Code={"ZipFile": zipped_code},
    Timeout=10,
    MemorySize=128,
)

print("Lambda Function Created:", response)

3. Run the Script

• Execute the script:

sh
=================================================
python create_lambda.py

• This will create the Lambda function and display the response.

Step 4: Invoke the Lambda Function Using Boto3


• Create a new script invoke_lambda.py:

python
=================================================
import boto3
import json

lambda_client = boto3.client('lambda', region_name='us-east-1')

response = lambda_client.invoke(
FunctionName="MyBoto3LambdaFunction",
InvocationType="RequestResponse",
Payload=json.dumps({})
)

response_payload = json.loads(response['Payload'].read())
print("Lambda Response:", response_payload)

• Run the script:

sh
=================================================
python invoke_lambda.py

Step 5: Update the Lambda Function (Optional)


If you need to update the function's code:

1. Modify lambda_function.py.
2. Re-zip and update the Lambda function:

sh
=================================================
zip lambda_function.zip lambda_function.py

3. Use the following script update_lambda.py:

python
=================================================
import boto3

lambda_client = boto3.client('lambda', region_name='us-east-1')

with open("lambda_function.zip", "rb") as f:
    zipped_code = f.read()

response = lambda_client.update_function_code(
    FunctionName="MyBoto3LambdaFunction",
    ZipFile=zipped_code
)

print("Lambda Function Updated:", response)

4. Run the update script:

sh
=================================================
python update_lambda.py

Step 6: Delete the Lambda Function (Optional)


To delete the function:

1. Use the following script delete_lambda.py:

python
=================================================
import boto3

lambda_client = boto3.client('lambda', region_name='us-east-1')

response = lambda_client.delete_function(
FunctionName="MyBoto3LambdaFunction"
)

print("Lambda Function Deleted:", response)

2. Run the script:

sh
=================================================
python delete_lambda.py

Experiment-6
Migrating a Web Application to Docker Containers

Migrating a web application to Docker containers in AWS involves several steps, from
containerizing the application to deploying it on AWS infrastructure. Here’s a detailed step-
by-step procedure:

Step 1: Prepare the Web Application for Containerization


Before migrating, ensure your application is structured properly for containerization.

1. Identify Application Dependencies


o List all dependencies (e.g., libraries, databases, external services).
o Ensure the application runs properly on a local machine.
2. Refactor for Containerization (if necessary)
o Convert hardcoded configurations to environment variables.
o Ensure database connections are configurable via environment variables.

Step 2: Install Docker and Docker Compose


On your local machine:

1. Install Docker
o Download and install Docker from Docker’s official site.
o Verify installation:

sh
===========================================================
docker --version

2. Install Docker Compose (if using multi-container setup)


o Install using:

sh
===========================================================
sudo apt install docker-compose

o Verify:

sh
===========================================================
docker-compose --version

Step 3: Create a Dockerfile
Create a Dockerfile in the root of your project. Example for a Node.js app:

Dockerfile
===========================================================
# Use official Node.js runtime as base image
FROM node:18

# Set the working directory
WORKDIR /app

# Copy dependency manifests and install dependencies
COPY package*.json ./
RUN npm install

# Copy application files
COPY . .

# Expose application port
EXPOSE 3000

# Start the application
CMD ["npm", "start"]

Step 4: Build and Test the Docker Image Locally


1. Build the Docker Image

sh
===========================================================
docker build -t my-web-app .

2. Run the Container Locally

sh
===========================================================
docker run -p 3000:3000 my-web-app

o Verify the application runs at http://localhost:3000.

Step 5: Push the Docker Image to Amazon Elastic Container Registry (ECR)

5.1 Create an ECR Repository

1. Open AWS Management Console.


2. Navigate to Amazon ECR → Create Repository.
3. Set the repository name (e.g., my-web-app).
4. Click Create repository.
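
The repository can also be created from code. A minimal boto3 sketch (the region is an example):

python
===========================================================
import boto3

ecr = boto3.client('ecr', region_name='us-east-1')  # example region

response = ecr.create_repository(repositoryName='my-web-app')
print("Repository URI:", response['repository']['repositoryUri'])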

5.2 Authenticate Docker with AWS ECR

Run the following AWS CLI command:

sh
===========================================================
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com

5.3 Tag and Push Docker Image to ECR


sh
===========================================================
docker tag my-web-app <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-web-app
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-web-app

Step 6: Deploy the Docker Container to AWS


You can deploy the container using Amazon ECS (Elastic Container Service) with
Fargate, Amazon EC2, or AWS Elastic Beanstalk.

Option 1: Deploy on Amazon ECS (Using Fargate)

1. Create an ECS Cluster

sh
===========================================================
aws ecs create-cluster --cluster-name my-cluster

2. Create a Task Definition


o In AWS Console, navigate to ECS → Task Definitions → Create New Task
Definition.
o Choose Fargate as launch type.
o Define the container:
▪ Image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-web-app
▪ Port: 3000
o Save and create the task definition.
3. Create and Run ECS Service
o Go to ECS Clusters → Create Service.
o Choose Fargate.
o Select the created Task Definition.
o Define a Load Balancer (if needed) and set auto-scaling options.
o Deploy the service.
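
For reference, the task definition can also be registered from code. A minimal boto3 sketch, assuming the Fargate launch type and an existing task execution role; the ARNs and account ID below are placeholders:

python
===========================================================
import boto3

ecs = boto3.client('ecs', region_name='us-east-1')  # example region

response = ecs.register_task_definition(
    family='my-web-app',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    executionRoleArn='arn:aws:iam::ACCOUNT_ID:role/ecsTaskExecutionRole',  # placeholder
    containerDefinitions=[{
        'name': 'my-web-app',
        'image': 'ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/my-web-app',
        'portMappings': [{'containerPort': 3000, 'protocol': 'tcp'}],
        'essential': True
    }]
)
print("Task definition ARN:", response['taskDefinition']['taskDefinitionArn'])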

Option 2: Deploy on AWS Elastic Beanstalk

1. Create an Elastic Beanstalk Application:

sh
===========================================================
eb init -p docker my-web-app

2. Deploy the application:

sh
===========================================================
eb create my-web-app-env

Step 7: Configure Networking and Security


1. Set Up Security Groups
o Allow inbound traffic on port 3000 (or other ports used by the app).
o Restrict unnecessary ports.
2. Configure IAM Roles
o Attach IAM roles to ECS/Fargate allowing access to ECR.
3. Set Up a Load Balancer (Optional)
o Use an Application Load Balancer (ALB) for traffic routing.

Step 8: Monitor and Scale the Deployment


1. Use Amazon CloudWatch to monitor container logs:

sh
===========================================================
aws logs describe-log-groups

2. Use AWS Auto Scaling to manage load dynamically.

Step 9: Test the Deployment


1. Get the public IP or domain of the service.
2. Open in a browser and verify the application is running.

Step 10: Automate the Deployment (Optional)


• Use AWS CodePipeline for CI/CD automation.
• Configure a GitHub Actions or AWS CodeBuild pipeline.

Final Notes

• Use Amazon RDS for databases (if required).


• Secure the container with AWS Secrets Manager.
• Use Amazon Route 53 for domain management.

Experiment-7
Caching Application Data with ElastiCache, Caching with Amazon CloudFront, and Caching Strategies

Caching can significantly improve application performance by reducing latency and offloading backend workloads. In this experiment, we will cover three main aspects of caching in AWS:

1. Caching Application Data with Amazon ElastiCache


2. Caching with Amazon CloudFront
3. Implementing Caching Strategies

Step 1: Caching Application Data with Amazon ElastiCache (Redis/Memcached)

Amazon ElastiCache is a managed service for caching using Redis or Memcached.

1.1 Choose Between Redis and Memcached

Feature                  Redis                              Memcached
Data structure support   Yes (lists, sets, hashes, etc.)    No (key-value only)
Persistence              Optional (RDB, AOF)                No persistence
Multi-AZ                 Yes                                No
Scaling                  Read replicas & clustering         Sharding

For this experiment, we will use Redis.

1.2 Create an ElastiCache Cluster

1. Open AWS Console → ElastiCache → Click Create Cluster.


2. Choose Redis as the cache engine.
3. Set:
o Cluster name: my-cache
o Node type: cache.t2.micro (for testing)
o Number of nodes: 1
o Security group: Ensure it allows connections from your application.
4. Click Create and wait for it to be provisioned.
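
If you prefer the SDK route, here is a minimal boto3 sketch that provisions a comparable single-node Redis cluster; security group and subnet group settings are omitted and would normally be required for your VPC:

python
===========================================================
import boto3

elasticache = boto3.client('elasticache', region_name='us-east-1')  # example region

response = elasticache.create_cache_cluster(
    CacheClusterId='my-cache',
    Engine='redis',
    CacheNodeType='cache.t2.micro',
    NumCacheNodes=1  # Redis clusters created this way have exactly one node
)
print(response['CacheCluster']['CacheClusterStatus'])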

1.3 Connect Application to ElastiCache


1. Retrieve the Endpoint from the AWS console.
2. Install redis library in Python:

sh
===========================================================
pip install redis

3. Modify your application to use Redis:

python
===========================================================
import redis

# Connect to Redis using the primary endpoint from the ElastiCache console
redis_client = redis.StrictRedis(
    host='my-cache.abc123.use1.cache.amazonaws.com',
    port=6379,
    decode_responses=True
)

# Store a value
redis_client.set('user:1001', 'John Doe')

# Retrieve a value
user = redis_client.get('user:1001')
print(user)  # Output: John Doe

4. Verify that data is cached and retrieved efficiently.

1.4 Performance Optimization

• Enable TTL (Time-To-Live) to automatically expire old cache:

python
===========================================================
redis_client.setex('user:1002', 3600, 'Jane Doe')  # Expires in 1 hour

• Use Read Replicas for scaling.


• Enable Clustering for horizontal scaling.

Step 2: Caching with Amazon CloudFront


Amazon CloudFront is a CDN (Content Delivery Network) that caches content closer to
users.

2.1 Create an S3 Bucket (for Static Files)

1. Go to Amazon S3 → Create bucket (e.g., my-static-content).


2. Upload some static files (e.g., index.html, logo.png).
3. Make the files public or configure an S3 bucket policy:

json
===========================================================
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-static-content/*"
        }
    ]
}

2.2 Create a CloudFront Distribution

1. Open AWS Console → CloudFront → Click Create Distribution.


2. Choose Origin Domain → Select your S3 bucket.
3. Set:
o Origin Access Control (OAC) to secure S3 content.
o Cache Policy: Use CachingOptimized.
4. Click Create Distribution and wait for deployment.
5. Copy the CloudFront domain name (e.g., d123.cloudfront.net).

2.3 Test CloudFront Caching

1. Access your file via CloudFront:

sh
===========================================================
curl -I https://d123.cloudfront.net/index.html

2. Look for:

http
===========================================================
X-Cache: Hit from cloudfront

o Hit from cloudfront: Cached


o Miss from cloudfront: Not cached yet

2.4 Cache Invalidation

To remove cached objects:

sh
===========================================================
aws cloudfront create-invalidation --distribution-id ABCD1234 --paths "/*"
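
The same invalidation can be issued from code. A minimal boto3 sketch, where the distribution ID is a placeholder:

python
===========================================================
import time
import boto3

cloudfront = boto3.client('cloudfront')

response = cloudfront.create_invalidation(
    DistributionId='ABCD1234',  # placeholder: use your distribution ID
    InvalidationBatch={
        'Paths': {'Quantity': 1, 'Items': ['/*']},
        'CallerReference': str(time.time())  # must be unique per request
    }
)
print(response['Invalidation']['Status'])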

Step 3: Implementing Caching Strategies
Caching strategies define how and when data is cached.

3.1 Write-Through Caching

• Data is written to cache and database at the same time.


• Ensures freshness but increases write latency.

python
===========================================================
def set_user_data(user_id, data):
    # Save to the Redis cache
    redis_client.set(f'user:{user_id}', data)

    # Save to the database (simulated here with a dict)
    db[user_id] = data

3.2 Lazy Loading (Cache-aside)

• Data is only cached when requested.


• If data is not found, fetch from DB and store in cache.

python
===========================================================
def get_user_data(user_id):
    data = redis_client.get(f'user:{user_id}')
    if not data:
        # Simulate a database fetch on a cache miss
        data = db.get(user_id, 'Default User')
        redis_client.setex(f'user:{user_id}', 3600, data)  # Cache with TTL
    return data

3.3 Least Recently Used (LRU) Eviction

Redis automatically evicts the least recently used keys when the cache is full, according to its maxmemory-policy.

Enable LRU eviction with redis-cli (on ElastiCache the CONFIG command is restricted, so set maxmemory-policy through the cluster's parameter group instead):

sh
===========================================================
redis-cli CONFIG SET maxmemory-policy allkeys-lru

3.4 Content Delivery Strategy

• Use CloudFront for static content.


• Use ElastiCache for frequently accessed database queries.

• Configure TTL settings to balance freshness and performance.

Step 4: Test Performance Gains


1. Measure API response times before and after caching:

python
===========================================================
import time

start = time.time()
get_user_data(1001)
print(f"Time taken: {time.time() - start:.3f} sec")

2. Monitor Cache Hits & Misses in AWS Console.


3. Use CloudWatch Metrics to analyze performance.

Final Notes
• ElastiCache (Redis) speeds up database-heavy applications.
• CloudFront improves static content delivery.
• Choosing the right caching strategy (Write-through, Lazy Loading, etc.) is critical.

Experiment-8

Implementing CloudFront for Caching and Application Security

Amazon CloudFront is a content delivery network (CDN) that caches content closer to
users, reducing latency and enhancing application security. This experiment covers:

1. Setting Up CloudFront for Caching (Static & Dynamic Content)


2. Enhancing Security with CloudFront Features
3. Testing and Validating Performance & Security

Step 1: Setting Up CloudFront for Caching


1.1 Create an S3 Bucket (for Static Content Caching)

CloudFront can cache content from Amazon S3 or an existing web application (e.g., EC2,
API Gateway).

1. Open AWS Console → S3 → Create bucket.


2. Set a unique bucket name (e.g., my-cloudfront-cache).
3. Disable Block all public access and enable static website hosting.
4. Upload static files (index.html, logo.png, etc.).

Optional: Configure an S3 Bucket Policy to allow CloudFront access:

json
===========================================================
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-cloudfront-cache/*"
        }
    ]
}

1.2 Create a CloudFront Distribution (Pointing to S3 or an Application)

1. Go to AWS Console → CloudFront → Create Distribution.


2. Origin Settings:
o Origin domain: Select your S3 bucket or enter a web app URL (e.g., my-api.example.com).
o Origin Access Control (OAC): Enable (to prevent direct S3 access).

o Restrict Bucket Access: Yes, and update policy automatically.
3. Default Cache Behavior:
o Viewer Protocol Policy: Redirect HTTP to HTTPS
o Allowed HTTP Methods: GET, HEAD (for static files)
o Cache Policy: Use CachingOptimized for static content.
4. Distribution Settings:
o Enable WAF Protection (if security is required).
o Click Create Distribution.

Note: Copy the CloudFront Distribution Domain Name (e.g., d123.cloudfront.net), which will be used to access cached content.

1.3 Test Caching with CloudFront

1. Access your static file:

sh
===========================================================
curl -I https://d123.cloudfront.net/index.html

Look for:

http
===========================================================
X-Cache: Hit from cloudfront

o Hit from cloudfront: Cached


o Miss from cloudfront: Not cached yet
2. Force Cache Invalidation (If Needed):

sh
===========================================================
aws cloudfront create-invalidation --distribution-id ABCD1234 --paths "/*"

Step 2: Enhancing Security with CloudFront


2.1 Restrict Direct Access to the Origin (S3 or EC2)

• S3: Use Origin Access Control (OAC) to block public access.


• EC2 or Web App: Set up Security Groups to accept traffic only from CloudFront.

Example EC2 Security Group rule:

• Allow HTTP/HTTPS from CloudFront IP ranges.


• Block direct access from the internet.

2.2 Enable Web Application Firewall (AWS WAF)

1. Go to AWS WAF → Create WebACL.


2. Associate the WebACL with your CloudFront Distribution.
3. Add rules for protection:
o SQL Injection Protection
o Cross-Site Scripting (XSS) Protection
o Rate Limiting (to prevent DDoS attacks).

2.3 Enable HTTPS & Enforce Security Headers

• Enable HTTPS: CloudFront provides free SSL/TLS certificates via AWS Certificate
Manager (ACM).
• Security Headers: Use Lambda@Edge to add security headers:

js
===========================================================
'use strict';
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    headers['strict-transport-security'] = [{ key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubdomains; preload' }];
    headers['x-content-type-options'] = [{ key: 'X-Content-Type-Options', value: 'nosniff' }];
    headers['x-frame-options'] = [{ key: 'X-Frame-Options', value: 'DENY' }];
    headers['x-xss-protection'] = [{ key: 'X-XSS-Protection', value: '1; mode=block' }];

    callback(null, response);
};

Deploy this using AWS Lambda@Edge on CloudFront.

2.4 Restrict Access Using Signed URLs or Signed Cookies

• Signed URLs: Allow access to CloudFront content only for authorized users.
• Signed Cookies: Control user access based on authentication.

Example: Generate a signed URL with the AWS SDK for Python. A minimal sketch using botocore's CloudFrontSigner; the key pair ID and private key file are placeholders for your CloudFront key pair:

python
===========================================================
import datetime

import rsa  # third-party package: pip install rsa
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Sign the CloudFront policy with the key pair's private key
    with open("private-key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("APKAIKEYPAIRID", rsa_signer)

url = signer.generate_presigned_url(
    "https://d123.cloudfront.net/protected-file.jpg",
    date_less_than=datetime.datetime.fromtimestamp(1700000000)
)
print(url)

This ensures that only authorized users can access sensitive content.

Step 3: Validate Performance & Security


3.1 Measure Performance Improvement

Run tests before and after CloudFront:

1. Measure API response time before caching:

sh
===========================================================
time curl -o /dev/null -s https://my-api.example.com/data

2. Measure response time via CloudFront:

sh
===========================================================
time curl -o /dev/null -s https://d123.cloudfront.net/data

3. Check latency improvement.

3.2 Check Security Logs

• Go to AWS CloudWatch → Logs → CloudFront Access Logs.


• Review blocked requests (e.g., SQL injection attempts).
• Set up alerts for unusual access patterns.

Step 4: Automate with AWS CI/CD (Optional)


• Use AWS CodePipeline to deploy CloudFront updates.
• Terraform/CloudFormation for infrastructure-as-code (IAC).

Example Terraform for CloudFront:

hcl
===========================================================
resource "aws_cloudfront_distribution" "cdn" {

30 | P a g e
origin {
domain_name = "my-api.example.com"
origin_id = "my-api"
}

enabled = true
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "my-api"
}

viewer_certificate {
cloudfront_default_certificate = true
}
}

Final Results

• CloudFront Caching: Faster content delivery
• Security Measures: Protection against attacks
• Performance Gains: Reduced latency

Experiment-9
Orchestrating Serverless Functions with AWS Step Functions

AWS Step Functions is a serverless workflow service that helps coordinate multiple AWS
Lambda functions and other AWS services. This experiment involves:

1. Creating a Step Function Workflow


2. Connecting AWS Lambda Functions
3. Adding Error Handling & Parallel Execution
4. Deploying and Testing the Workflow

Step 1: Set Up AWS Lambda Functions


AWS Step Functions use Lambda functions to process data. We'll create three Lambda
functions:

• Step 1: Receive Input (start_function)


• Step 2: Process Data (process_function)
• Step 3: Store Results (store_function)

1.1 Create Lambda Function 1 (Start Function)

1. Open AWS Lambda Console → Click Create function.


2. Choose Author from Scratch.
3. Enter function name: start_function.
4. Runtime: Python 3.8+.
5. Click Create Function.
6. Add the following code:

python
===========================================================
import json

def lambda_handler(event, context):
    name = event.get("name", "Guest")
    return {"message": f"Hello {name}, starting the workflow!"}

7. Click Deploy.

1.2 Create Lambda Function 2 (Processing Function)

1. Repeat steps 1-6, but name it process_function.


2. Use this code:

python
===========================================================
import json

def lambda_handler(event, context):
    message = event.get("message", "No message received")
    processed_message = message.upper()  # Simulate processing
    return {"processed_message": processed_message}

3. Click Deploy.

1.3 Create Lambda Function 3 (Store Results)

1. Repeat steps 1-6, but name it store_function.


2. Use this code:

python
===========================================================
import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client("s3")
    bucket_name = "my-step-functions-bucket"
    s3.put_object(
        Bucket=bucket_name,
        Key="workflow_result.json",
        Body=json.dumps(event)
    )
    return {"status": "Data stored in S3"}

3. Click Deploy.
4. Create an S3 bucket named my-step-functions-bucket.

Step 2: Create an AWS Step Function Workflow


2.1 Open AWS Step Functions Console

1. Go to AWS Console → Step Functions.


2. Click Create State Machine.
3. Choose Author with Code → Standard Workflow.

2.2 Define the Workflow in ASL (Amazon States Language)

1. Use this JSON definition for the workflow:

json

===========================================================
{
  "Comment": "Serverless Orchestration Example",
  "StartAt": "StartFunction",
  "States": {
    "StartFunction": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:start_function",
      "Next": "ProcessFunction"
    },
    "ProcessFunction": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:process_function",
      "Next": "StoreFunction"
    },
    "StoreFunction": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:store_function",
      "End": true
    }
  }
}

2. Replace REGION and ACCOUNT_ID with your AWS values.


3. Click Next → Name the workflow StepFunctionWorkflow.
4. Click Create State Machine.

Step 3: Add Error Handling & Parallel Execution


3.1 Implement Error Handling in Workflow

Modify the ProcessFunction to retry on failure:

json
===========================================================
"ProcessFunction": {
"Type": "Task",
"Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:process_function",
"Retry": [
{
"ErrorEquals": ["Lambda.ServiceException"],
"IntervalSeconds": 2,
"MaxAttempts": 3,
"BackoffRate": 2.0
}
],
"Next": "StoreFunction"
}

3.2 Implement Parallel Execution

Modify the workflow to process two Lambda functions in parallel:

json
===========================================================
"ParallelProcessing": {
"Type": "Parallel",
"Branches": [
{
"StartAt": "ProcessFunction",
"States": {
"ProcessFunction": {
"Type": "Task",
"Resource":
"arn:aws:lambda:REGION:ACCOUNT_ID:function:process_function",
"End": true
}
}
},
{
"StartAt": "AdditionalProcessing",
"States": {
"AdditionalProcessing": {
"Type": "Task",
"Resource":
"arn:aws:lambda:REGION:ACCOUNT_ID:function:additional_function",
"End": true
}
}
}
],
"Next": "StoreFunction"
}

This runs two functions simultaneously before storing results.

Step 4: Test & Deploy the Workflow


4.1 Run a Test Execution

1. Open Step Functions Console.


2. Select StepFunctionWorkflow.
3. Click Start Execution.
4. Enter test input:

json
===========================================================
{ "name": "Alice" }

5. Click Start Execution and monitor the workflow.

4.2 Validate the Output

1. Open S3 → my-step-functions-bucket.
2. Download workflow_result.json.
3. Ensure the stored data matches the expected result.
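
The stored result can also be fetched and inspected from code. A minimal boto3 sketch:

python
===========================================================
import json
import boto3

s3 = boto3.client('s3')

obj = s3.get_object(Bucket='my-step-functions-bucket', Key='workflow_result.json')
result = json.loads(obj['Body'].read())
print(json.dumps(result, indent=2))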

Step 5: Automate with EventBridge (Optional)


To trigger this workflow automatically, use Amazon EventBridge:

1. Open EventBridge Console → Rules → Click Create Rule.


2. Set Name: StepFunctionTrigger.
3. Choose Event Source (e.g., S3 file upload).
4. Set Target: Your Step Functions Workflow.
5. Click Create Rule.

Now, whenever a file is uploaded to S3, the Step Function executes!
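
For reference, the same rule can be created programmatically. A minimal boto3 sketch, assuming EventBridge notifications are enabled on the bucket and that an IAM role already exists allowing EventBridge to start the state machine; both ARNs below are placeholders:

python
===========================================================
import json
import boto3

events = boto3.client('events')

# Match "Object Created" events from the workflow bucket
events.put_rule(
    Name='StepFunctionTrigger',
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["my-step-functions-bucket"]}}
    }),
    State='ENABLED'
)

events.put_targets(
    Rule='StepFunctionTrigger',
    Targets=[{
        'Id': 'StepFunctionTarget',
        'Arn': 'arn:aws:states:REGION:ACCOUNT_ID:stateMachine:StepFunctionWorkflow',  # placeholder
        'RoleArn': 'arn:aws:iam::ACCOUNT_ID:role/EventBridgeInvokeStepFunctions'      # placeholder
    }]
)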

Final Results

• Serverless orchestration of Lambda functions
• Error handling & parallel execution
• Automated triggers for seamless execution

Experiment-10
Automating Application Deployment Using a CI/CD Pipeline

This experiment covers Continuous Integration and Continuous Deployment (CI/CD) using AWS CodePipeline, CodeCommit, CodeBuild, and CodeDeploy to automate the deployment of a web application.

Key Steps

1. Set Up the Source Code Repository (AWS CodeCommit)


2. Create a Build Process (AWS CodeBuild)
3. Deploy the Application (AWS CodeDeploy)
4. Orchestrate the Pipeline (AWS CodePipeline)
5. Test and Validate the Deployment

Step 1: Set Up AWS CodeCommit (Source Repository)


AWS CodeCommit is a Git-based repository for storing application code.

1.1 Create a CodeCommit Repository

1. Open AWS Console → CodeCommit.


2. Click Create repository.
3. Enter repository name: MyAppRepo.
4. Click Create.

1.2 Clone the Repository Locally

1. Get the repository clone URL from CodeCommit.


2. Clone the repo to your local machine:

sh
===========================================================
git clone https://git-codecommit.REGION.amazonaws.com/v1/repos/MyAppRepo
cd MyAppRepo

1.3 Add Application Code

1. Create a sample Node.js web app:

sh

===========================================================
mkdir app && cd app
touch index.js

2. Add the following index.js code:

javascript
===========================================================
const http = require("http");

const server = http.createServer((req, res) => {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("Hello from AWS CI/CD Pipeline!");
});

server.listen(3000, () => console.log("Server running on port 3000"));

3. Initialize Git, commit, and push the code:

sh
===========================================================
git init
git add .
git commit -m "Initial commit"
git push origin main

Step 2: Configure AWS CodeBuild (Build & Test Process)


AWS CodeBuild compiles the source code and runs tests before deployment.

2.1 Create a buildspec.yml File

1. In the repo root, create buildspec.yml:

yaml
===========================================================
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 16
    commands:
      - echo "Installing dependencies..."
      - npm install
  build:
    commands:
      - echo "Building the application..."
      - npm run build
  post_build:
    commands:
      - echo "Build complete, preparing for deployment..."
      - zip -r app.zip .
      - aws s3 cp app.zip s3://myapp-deployment-bucket/
artifacts:
  files:
    - app.zip

2.2 Create an S3 Bucket for Build Artifacts

1. Go to AWS Console → S3.


2. Click Create bucket, name it myapp-deployment-bucket.
3. Enable Block all public access and click Create.

2.3 Set Up CodeBuild Project

1. Open AWS Console → CodeBuild.


2. Click Create build project.
3. Enter project name: MyAppBuild.
4. In Source, choose AWS CodeCommit and select MyAppRepo.
5. In Environment, choose:
o Managed image: Amazon Linux 2
o Runtime: Node.js
6. Under Buildspec, select Use buildspec.yml.
7. In Artifacts, select S3 and enter myapp-deployment-bucket.
8. Click Create build project.

Step 3: Set Up AWS CodeDeploy (Deployment Process)


AWS CodeDeploy automates application deployment to EC2 instances.

3.1 Launch an EC2 Instance for Deployment

1. Go to AWS Console → EC2 → Launch Instance.


2. Select Amazon Linux 2 AMI.
3. Choose t2.micro (Free Tier).
4. Enable Auto-assign Public IP.
5. Under Security Group, allow HTTP (port 80).
6. Click Launch and connect via SSH:

sh
===========================================================
ssh -i my-key.pem ec2-user@EC2-PUBLIC-IP

3.2 Install CodeDeploy Agent on EC2

Run these commands on EC2:

sh
===========================================================

sudo yum update -y
sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-REGION.s3.REGION.amazonaws.com/latest/install
chmod +x install
sudo ./install auto
sudo systemctl start codedeploy-agent
sudo systemctl enable codedeploy-agent

3.3 Create an AppSpec File (appspec.yml)

In the repo, add appspec.yml:

yaml
===========================================================
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ec2-user/myapp
hooks:
  ApplicationStart:
    - location: scripts/start.sh
      timeout: 300
      runas: ec2-user

3.4 Create a Deployment Script (scripts/start.sh)

In the repo, add scripts/start.sh:

sh
===========================================================
#!/bin/bash
cd /home/ec2-user/myapp
node index.js > app.log 2>&1 &

Run:

sh
===========================================================
chmod +x scripts/start.sh

3.5 Create CodeDeploy Application & Deployment Group

1. Open AWS Console → CodeDeploy.


2. Click Create Application, name it MyAppDeploy.
3. Choose Compute Platform: EC2/On-Premises.
4. Click Create Deployment Group.
5. Choose Service Role (create an IAM role for CodeDeploy).
6. Select EC2 Instances for deployment.
7. Choose Amazon S3 as the source and enter myapp-deployment-bucket/app.zip.
Step 4: Automate with AWS CodePipeline
AWS CodePipeline orchestrates CodeCommit, CodeBuild, and CodeDeploy.

4.1 Create CodePipeline

1. Open AWS Console → CodePipeline.


2. Click Create Pipeline → Enter name: MyAppPipeline.
3. In Source Stage, select:
o Source provider: AWS CodeCommit
o Repository: MyAppRepo
4. In Build Stage, select AWS CodeBuild → MyAppBuild.
5. In Deploy Stage, select AWS CodeDeploy → MyAppDeploy.
6. Click Create Pipeline.

Step 5: Test the CI/CD Pipeline


5.1 Trigger a Deployment

1. Make changes to the app and commit them:

sh
===========================================================
echo "console.log('New Version Deployed');" >> index.js
git add .
git commit -m "Updated app"
git push origin main

2. Go to AWS Console → CodePipeline.


3. Click Release Change.
4. Monitor the pipeline execution.
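
The pipeline can also be released and monitored from code. A minimal boto3 sketch:

python
===========================================================
import boto3

codepipeline = boto3.client('codepipeline', region_name='us-east-1')  # example region

# Manually trigger a run (equivalent to clicking "Release change")
codepipeline.start_pipeline_execution(name='MyAppPipeline')

# Check the status of each stage
state = codepipeline.get_pipeline_state(name='MyAppPipeline')
for stage in state['stageStates']:
    print(stage['stageName'], stage.get('latestExecution', {}).get('status'))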

5.2 Validate the Deployment

1. Connect to EC2:

sh
===========================================================
ssh -i my-key.pem ec2-user@EC2-PUBLIC-IP

2. Check if the app is running:

sh
===========================================================

curl http://localhost:3000

3. Output should be:

text
===========================================================
Hello from AWS CI/CD Pipeline!

Final Results

• Automated deployment with AWS CI/CD
• Code changes trigger an end-to-end pipeline
• Fully functional, auto-deploying web app

