AWS Lab Manual-Core
Experiment-1
Working with AWS CloudShell and AWS Cloud9
This experiment will guide you through AWS CloudShell and AWS Cloud9, two cloud-based
development environments that allow you to interact with AWS resources and develop
applications without setting up a local development environment.
sh
=================================================
aws sts get-caller-identity
sh
=================================================
aws s3 ls
sh
=================================================
aws ec2 describe-instances
1. Create a sample file:
sh
=================================================
echo "Hello AWS CloudShell!" > sample.txt
2. Upload it to S3:
sh
=================================================
aws s3 cp sample.txt s3://your-bucket-name/
3. Download it back from S3:
sh
=================================================
aws s3 cp s3://your-bucket-name/sample.txt .
In AWS Cloud9:
1. Verify AWS Credentials:
sh
=================================================
aws sts get-caller-identity
2. Create a Python Script:
sh
=================================================
echo "print('Hello from AWS Cloud9!')" > hello.py
3. Run the Script:
sh
=================================================
python3 hello.py
sh
=================================================
pip install boto3
python
=================================================
import boto3

# List all S3 buckets in the account
s3 = boto3.client('s3')
response = s3.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])
• Click "Actions > Delete Environment" to remove your Cloud9 instance and avoid
unnecessary charges.
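If you prefer the command line, a Cloud9 environment can also be removed with the AWS CLI; a minimal sketch, assuming you know the environment ID:
sh
=================================================
aws cloud9 delete-environment --environment-id <environment-id>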
Experiment-2
Working with Amazon S3 and Orchestrating Serverless Functions with AWS Step
Functions
This experiment involves storing and retrieving files using Amazon S3 and automating
workflows using AWS Step Functions to orchestrate serverless AWS Lambda functions.
sh
=================================================
aws s3 ls s3://my-serverless-bucket/
sh
=================================================
aws s3 cp s3://my-serverless-bucket/your-file.txt .
1. In the S3 bucket settings, go to "Properties" → "Event notifications".
2. Click "Create event notification", name it LambdaTrigger.
3. Under Event types, choose "PUT" (triggers when a file is uploaded).
4. Under Destination, select AWS Lambda (we will create this function in the next
section).
5. Click Save changes.
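Once the Lambda function exists, the same notification can also be created from the CLI; a minimal sketch (the function ARN placeholders must be filled in):
sh
=================================================
aws s3api put-bucket-notification-configuration --bucket my-serverless-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "Id": "LambdaTrigger",
      "LambdaFunctionArn": "arn:aws:lambda:REGION:ACCOUNT_ID:function:S3FileProcessor",
      "Events": ["s3:ObjectCreated:Put"]
    }]
  }'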
python
=================================================
import json
import boto3

def lambda_handler(event, context):
    # Extract the uploaded object's key from the S3 event record
    record = event['Records'][0]['s3']
    file_name = record['object']['key']
    return {
        'statusCode': 200,
        'body': json.dumps(f"Processed file {file_name}")
    }
2. Click "Deploy".
4. Use the following JSON definition to integrate Lambda:
json
=================================================
{
"Comment": "Step Function to Process S3 Files",
"StartAt": "InvokeLambda",
"States": {
"InvokeLambda": {
"Type": "Task",
"Resource":
"arn:aws:lambda:REGION:ACCOUNT_ID:function:S3FileProcessor",
"End": true
}
}
}
json
=================================================
{
"Records": [
{
"s3": {
"bucket": {"name": "my-serverless-bucket"},
"object": {"key": "your-file.txt"}
}
}
]
}
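One way to exercise the Lambda function with this event from the CLI; a minimal sketch, assuming the JSON above is saved as test-event.json:
sh
=================================================
aws lambda invoke \
  --function-name S3FileProcessor \
  --cli-binary-format raw-in-base64-out \
  --payload file://test-event.json \
  response.json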
Experiment-3
Working with Amazon DynamoDB
This experiment guides you through creating, inserting, querying, and deleting data in
Amazon DynamoDB, a fully managed NoSQL database service.
json
=================================================
{
"CustomerID": "C001",
"Name": "John Doe",
"Email": "[email protected]",
"Phone": "+1234567890"
}
4. Click Save.
sh
=================================================
aws dynamodb put-item --table-name Customers --item '{
"CustomerID": {"S": "C002"},
"Name": {"S": "Jane Smith"},
"Email": {"S": "[email protected]"},
"Phone": {"S": "+9876543210"}
}' --region us-east-1
sh
=================================================
aws dynamodb scan --table-name Customers --region us-east-1
sh
=================================================
aws dynamodb get-item --table-name Customers --key '{
"CustomerID": {"S": "C001"}
}' --region us-east-1
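The same lookup can be scripted with boto3; a minimal sketch using the table created above:
python
=================================================
import boto3

# Query the Customers table by its partition key
dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamodb.Table('Customers')

item = table.get_item(Key={'CustomerID': 'C001'}).get('Item')
print(item)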
sh
=================================================
aws dynamodb update-item --table-name Customers --key '{
  "CustomerID": {"S": "C001"}
}' --update-expression "SET Phone = :p" --expression-attribute-values '{
  ":p": {"S": "+1122334455"}
}' --region us-east-1
Step 5: Delete Data from DynamoDB
Using the AWS CLI:
sh
=================================================
aws dynamodb delete-item --table-name Customers --key '{
"CustomerID": {"S": "C002"}
}' --region us-east-1
sh
=================================================
aws dynamodb delete-table --table-name Customers --region us-east-1
Experiment-4
Developing REST APIs with Amazon API Gateway
This experiment guides you through creating, deploying, and testing a REST API using
Amazon API Gateway and AWS Lambda to handle HTTP requests.
python
=================================================
import json
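Only the import is shown above; a minimal handler sketch for MyAPIFunction (the response text is an assumption):
python
=================================================
import json

def lambda_handler(event, context):
    # Return a simple JSON response for the GET /hello method
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Hello from MyAPIFunction!'})
    }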
o Click "Deploy".
Step 3: Create a Resource and Method
1. Create a New Resource:
o In the left panel, click Actions → Create Resource.
o Enter Resource Name (e.g., hello).
o Click Create Resource.
2. Create a GET Method:
o Select the /hello resource.
o Click Actions → Create Method → Choose GET.
o Click ✓ (Check Mark).
3. Integrate with Lambda Function:
o In the Integration type, select Lambda Function.
o Enter the Lambda function name (MyAPIFunction).
o Click Save and confirm the permissions.
After deploying the API to a stage (e.g., dev), test the endpoint:
sh
=================================================
curl -X GET https://xyz.execute-api.us-east-1.amazonaws.com/dev/hello
• IAM Authentication:
o Use IAM policies to restrict access.
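For example, an API Gateway resource policy of the following shape can limit invocation to a specific IAM principal; a hedged sketch (the role name, account ID, and API ID are placeholders):
json
=================================================
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::ACCOUNT_ID:role/ApiConsumerRole"},
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:REGION:ACCOUNT_ID:API_ID/dev/GET/hello"
    }
  ]
}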
Experiment-5
Creating Lambda Functions Using the AWS SDK for Python
This experiment will guide you through creating, deploying, and invoking an AWS
Lambda function programmatically using Boto3, the AWS SDK for Python.
sh
=================================================
aws configure
o Enter AWS Access Key, Secret Key, Region, and Output format (json).
3. Install Boto3 (if not installed):
o Run the following command:
sh
=================================================
pip install boto3
python
=================================================
import json
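The manual shows only the import; a minimal lambda_function.py sketch (the response text is an assumption):
python
=================================================
import json

def lambda_handler(event, context):
    # Simple handler used to verify the Boto3-created function
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from MyBoto3LambdaFunction!')
    }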
sh
=================================================
zip lambda_function.zip lambda_function.py
python
=================================================
import boto3

lambda_client = boto3.client("lambda")

function_name = "MyBoto3LambdaFunction"
role_arn = "arn:aws:iam::YOUR_ACCOUNT_ID:role/YOUR_LAMBDA_ROLE"

# Read the deployment package created in the previous step
with open("lambda_function.zip", "rb") as f:
    zipped_code = f.read()

response = lambda_client.create_function(
    FunctionName=function_name,
    Runtime="python3.9",
    Role=role_arn,
    Handler="lambda_function.lambda_handler",
    Code={"ZipFile": zipped_code},
    Timeout=10,
    MemorySize=128,
)
print(response)
sh
=================================================
python create_lambda.py
• This will create the Lambda function and display the response.
python
=================================================
import boto3
import json

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="MyBoto3LambdaFunction",
    InvocationType="RequestResponse",
    Payload=json.dumps({})
)
response_payload = json.loads(response['Payload'].read())
print("Lambda Response:", response_payload)
sh
=================================================
python invoke_lambda.py
1. Modify lambda_function.py.
2. Re-zip and update the Lambda function:
sh
=================================================
zip lambda_function.zip lambda_function.py
python
=================================================
import boto3

lambda_client = boto3.client("lambda")

# Read the updated deployment package
with open("lambda_function.zip", "rb") as f:
    zipped_code = f.read()

response = lambda_client.update_function_code(
    FunctionName="MyBoto3LambdaFunction",
    ZipFile=zipped_code
)
sh
=================================================
python update_lambda.py
python
=================================================
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.delete_function(
    FunctionName="MyBoto3LambdaFunction"
)
sh
=================================================
python delete_lambda.py
Experiment-6
Migrating a Web Application to Docker Containers
Migrating a web application to Docker containers in AWS involves several steps, from
containerizing the application to deploying it on AWS infrastructure. Here's a detailed
step-by-step procedure:
1. Install Docker
o Download and install Docker from Docker’s official site.
o Verify installation:
sh
===========================================================
docker --version
sh
===========================================================
sudo apt install docker-compose
o Verify:
sh
===========================================================
docker-compose --version
Step 3: Create a Dockerfile
Create a Dockerfile in the root of your project. Example for a Node.js app:
Dockerfile
===========================================================
# Use official Node.js runtime as base image
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
# Entry point assumes the app starts from index.js
CMD ["node", "index.js"]
sh
===========================================================
docker build -t my-web-app .
sh
===========================================================
docker run -p 3000:3000 my-web-app
5.2 Authenticate Docker with AWS ECR
sh
===========================================================
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com
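Before deploying to ECS, the image has to exist in ECR; a sketch of the usual repository, tag, and push steps (the repository name mirrors the local image name, an assumption):
sh
===========================================================
aws ecr create-repository --repository-name my-web-app
docker tag my-web-app:latest <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-web-app:latest
docker push <aws_account_id>.dkr.ecr.<region>.amazonaws.com/my-web-app:latest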
sh
===========================================================
aws ecs create-cluster --cluster-name my-cluster
sh
===========================================================
eb init -p docker my-web-app
sh
===========================================================
eb create my-web-app-env
sh
===========================================================
aws logs describe-log-groups
Final Notes
• The web application is now containerized with Docker and can be deployed to AWS via ECS or Elastic Beanstalk.
• Use CloudWatch Logs to monitor the running containers.
Experiment-7
Caching Application Data with ElastiCache, Caching with Amazon
CloudFront, and Caching Strategies
sh
===========================================================
pip install redis
python
===========================================================
import redis

# Connect to the ElastiCache Redis endpoint
redis_client = redis.StrictRedis(
    host='my-cache.abc123.use1.cache.amazonaws.com',
    port=6379,
    decode_responses=True
)

# Store a value
redis_client.set('user:1001', 'John Doe')

# Retrieve a value
user = redis_client.get('user:1001')
print(user)  # Output: John Doe
python
===========================================================
redis_client.setex('user:1002', 3600, 'Jane Doe')  # Expires in 1 hour
json
===========================================================
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-static-content/*"
}
]
}
sh
===========================================================
curl -I https://d123.cloudfront.net/index.html
2. Look for:
text
===========================================================
X-Cache: Hit from cloudfront
sh
===========================================================
aws cloudfront create-invalidation --distribution-id ABCD1234 --paths "/*"
Step 3: Implementing Caching Strategies
Caching strategies define how and when data is cached.
python
===========================================================
def set_user_data(user_id, data):
    # Write-through: update the cache and the primary datastore together
    redis_client.set(f'user:{user_id}', data)
    db[user_id] = data  # 'db' stands in for the primary datastore
python
===========================================================
def get_user_data(user_id):
    # Lazy loading: check the cache first, fall back to the database
    data = redis_client.get(f'user:{user_id}')
    if not data:
        # Simulate DB fetch
        data = db.get(user_id, 'Default User')
        redis_client.setex(f'user:{user_id}', 3600, data)  # Cache with TTL
    return data
Redis automatically evicts the least recently used items when the cache is full.
Enable the LRU eviction policy (run in redis-cli):
sh
===========================================================
CONFIG SET maxmemory-policy allkeys-lru
• Configure TTL settings to balance freshness and performance.
python
===========================================================
import time
start = time.time()
get_user_data(1001)
print(f"Time taken: {time.time() - start:.3f} sec")
Final Notes
• ElastiCache (Redis) speeds up database-heavy applications.
• CloudFront improves static content delivery.
• Choosing the right caching strategy (Write-through, Lazy Loading, etc.) is critical.
Experiment-8
Caching and Securing Content with Amazon CloudFront
Amazon CloudFront is a content delivery network (CDN) that caches content closer to
users, reducing latency and enhancing application security. This experiment covers
configuring CloudFront caching, applying security measures, and measuring performance gains.
CloudFront can cache content from Amazon S3 or an existing web application (e.g., EC2,
API Gateway).
json
===========================================================
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-cloudfront-cache/*"
}
]
}
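The same policy can be attached from the CLI, assuming it is saved locally as policy.json:
sh
===========================================================
aws s3api put-bucket-policy --bucket my-cloudfront-cache --policy file://policy.json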
o Restrict Bucket Access: Yes, and update policy automatically.
3. Default Cache Behavior:
o Viewer Protocol Policy: Redirect HTTP to HTTPS
o Allowed HTTP Methods: GET, HEAD (for static files)
o Cache Policy: Use CachingOptimized for static content.
4. Distribution Settings:
o Enable WAF Protection (if security is required).
o Click Create Distribution.
sh
===========================================================
curl -I https://d123.cloudfront.net/index.html
Look for:
text
===========================================================
X-Cache: Hit from cloudfront
sh
===========================================================
aws cloudfront create-invalidation --distribution-id ABCD1234 --paths "/*"
2.2 Enable Web Application Firewall (AWS WAF)
• Enable HTTPS: CloudFront provides free SSL/TLS certificates via AWS Certificate
Manager (ACM).
• Security Headers: Use Lambda@Edge to add security headers:
js
===========================================================
'use strict';
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;
    // Add common security headers to every response
    headers['strict-transport-security'] = [{ key: 'Strict-Transport-Security', value: 'max-age=63072000; includeSubDomains; preload' }];
    headers['x-content-type-options'] = [{ key: 'X-Content-Type-Options', value: 'nosniff' }];
    callback(null, response);
};
• Signed URLs: Allow access to CloudFront content only for authorized users.
• Signed Cookies: Control user access based on authentication.
python
===========================================================
import rsa
from datetime import datetime, timezone
from botocore.signers import CloudFrontSigner

# Signed URLs are generated with botocore's CloudFrontSigner (requires the 'rsa' package)
def rsa_signer(message):
    with open("private-key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, 'SHA-1')

signer = CloudFrontSigner("APKAIKEYPAIRID", rsa_signer)
url = signer.generate_presigned_url(
    "https://d123.cloudfront.net/protected-file.jpg",
    date_less_than=datetime.fromtimestamp(1700000000, tz=timezone.utc)
)
print(url)
This ensures that only authorized users can access sensitive content.
Measure the response time when calling the origin directly:
sh
===========================================================
time curl -o /dev/null -s https://my-api.example.com/data
Then measure it again through the CloudFront distribution:
sh
===========================================================
time curl -o /dev/null -s https://d123.cloudfront.net/data
hcl
===========================================================
resource "aws_cloudfront_distribution" "cdn" {
30 | P a g e
origin {
domain_name = "my-api.example.com"
origin_id = "my-api"
}
enabled = true
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = "my-api"
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
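To provision the distribution from this definition, the standard Terraform workflow applies:
sh
===========================================================
terraform init
terraform plan
terraform apply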
Final Results
CloudFront Caching: Faster content delivery
Security Measures: Protection against attacks
Performance Gains: Reduced latency
Experiment-9
Orchestrating Serverless Functions with AWS Step Functions
AWS Step Functions is a serverless workflow service that helps coordinate multiple AWS
Lambda functions and other AWS services. This experiment involves:
python
===========================================================
import json
7. Click Deploy.
python
===========================================================
import json
3. Click Deploy.
python
===========================================================
import json
import boto3
3. Click Deploy.
4. Create an S3 bucket named my-step-functions-bucket.
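The handler bodies for the three functions are elided above; minimal sketches consistent with the workflow and the test input used later (the payload fields are assumptions):
python
===========================================================
# start_function — receives the initial input and passes it along
def lambda_handler(event, context):
    return {"name": event.get("name", "unknown")}

# process_function — transforms the payload
def lambda_handler(event, context):
    return {"message": f"Hello, {event['name']}!"}

# store_function — writes its input to the bucket created in step 4
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    s3.put_object(
        Bucket="my-step-functions-bucket",
        Key="workflow_result.json",
        Body=json.dumps(event),
    )
    return {"statusCode": 200}
The state machine definition below wires these three functions together.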
json
===========================================================
{
"Comment": "Serverless Orchestration Example",
"StartAt": "StartFunction",
"States": {
"StartFunction": {
"Type": "Task",
"Resource":
"arn:aws:lambda:REGION:ACCOUNT_ID:function:start_function",
"Next": "ProcessFunction"
},
"ProcessFunction": {
"Type": "Task",
"Resource":
"arn:aws:lambda:REGION:ACCOUNT_ID:function:process_function",
"Next": "StoreFunction"
},
"StoreFunction": {
"Type": "Task",
"Resource":
"arn:aws:lambda:REGION:ACCOUNT_ID:function:store_function",
"End": true
}
}
}
json
===========================================================
"ProcessFunction": {
"Type": "Task",
"Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:process_function",
"Retry": [
{
"ErrorEquals": ["Lambda.ServiceException"],
"IntervalSeconds": 2,
"MaxAttempts": 3,
"BackoffRate": 2.0
}
],
"Next": "StoreFunction"
}
Modify the workflow to process two Lambda functions in parallel:
json
===========================================================
"ParallelProcessing": {
"Type": "Parallel",
"Branches": [
{
"StartAt": "ProcessFunction",
"States": {
"ProcessFunction": {
"Type": "Task",
"Resource":
"arn:aws:lambda:REGION:ACCOUNT_ID:function:process_function",
"End": true
}
}
},
{
"StartAt": "AdditionalProcessing",
"States": {
"AdditionalProcessing": {
"Type": "Task",
"Resource":
"arn:aws:lambda:REGION:ACCOUNT_ID:function:additional_function",
"End": true
}
}
}
],
"Next": "StoreFunction"
}
json
===========================================================
{ "name": "Alice" }
1. Open S3 → my-step-functions-bucket.
2. Download workflow_result.json.
3. Ensure the stored data matches the expected result.
Final Results
Serverless orchestration of Lambda functions
Error handling & parallel execution
Automated triggers for seamless execution
Experiment-10
Automating Application Deployment Using a CI/CD Pipeline
Key Steps
sh
===========================================================
git clone https://git-codecommit.REGION.amazonaws.com/v1/repos/MyAppRepo
cd MyAppRepo
sh
===========================================================
mkdir app && cd app
touch index.js
javascript
===========================================================
const http = require("http");
sh
===========================================================
git init
git add .
git commit -m "Initial commit"
git push origin main
yaml
===========================================================
version: 0.2
phases:
install:
runtime-versions:
nodejs: 16
commands:
- echo "Installing dependencies..."
- npm install
build:
commands:
- echo "Building the application..."
- npm run build
post_build:
commands:
- echo "Build complete, preparing for deployment..."
- zip -r app.zip .
- aws s3 cp app.zip s3://myapp-deployment-bucket/
artifacts:
files:
- app.zip
sh
===========================================================
ssh -i my-key.pem ec2-user@EC2-PUBLIC-IP
sh
===========================================================
sudo yum update -y
sudo yum install ruby
sudo yum install wget
cd /home/ec2-user
wget https://aws-codedeploy-REGION.s3.REGION.amazonaws.com/latest/install
chmod +x install
sudo ./install auto
sudo systemctl start codedeploy-agent
sudo systemctl enable codedeploy-agent
yaml
===========================================================
version: 0.0
os: linux
files:
- source: /
destination: /home/ec2-user/myapp
hooks:
ApplicationStart:
- location: scripts/start.sh
timeout: 300
runas: ec2-user
sh
===========================================================
#!/bin/bash
cd /home/ec2-user/myapp
node index.js > app.log 2>&1 &
Run:
sh
===========================================================
chmod +x scripts/start.sh
sh
===========================================================
echo "console.log('New Version Deployed');" >> index.js
git add .
git commit -m "Updated app"
git push origin main
1. Connect to EC2:
sh
===========================================================
ssh -i my-key.pem ec2-user@EC2-PUBLIC-IP
sh
===========================================================
curl http://localhost:3000
text
===========================================================
Hello from AWS CI/CD Pipeline!
Final Results
Automated Deployment with AWS CI/CD
Code changes trigger an end-to-end pipeline
Fully functional, auto-deploying web app