Lab Assignment 2-CSET 463
Course Code: CSET-463    Course Name: AWS Cloud Support Associate (Lab 2)
Year: 2024    Semester: Even, VI Semester
Date: 19/01/2023    Batch: 2021-2024
CO-Mapping
CO1 CO2 CO3
Q1 √
Objective: The purpose of this lab assignment is to provide hands-on experience with Amazon S3 (Simple
Storage Service), covering various aspects such as bucket creation, object management, permissions,
versioning, lifecycle policies, CORS configuration, transfer acceleration, static website hosting, IAM roles,
and multipart uploads using the AWS CLI.
• Upload a sample file or object to the created S3 bucket using the AWS Management Console or
AWS CLI.
• Enable versioning for the created S3 bucket using the AWS Management Console or AWS CLI.
• Define a lifecycle policy for the S3 bucket to automatically transition objects to different storage
classes or delete them based on defined rules.
• Use AWS CLI or SDK to compare data transfer speeds with and without acceleration.
• Create an IAM role with the necessary permissions to access the S3 bucket.
• Use the AWS CLI to perform a multipart upload for a large object to the S3 bucket.
• Confirm that the multipart upload was successful and the object is intact.
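Many of these steps can be done from the AWS CLI as well as the console. As a sketch of the versioning step, assuming a placeholder bucket name yourBucketName and already-configured AWS credentials:

```shell
# Placeholder: replace with your globally unique bucket name.
BUCKET=yourBucketName

# Enable versioning on the bucket (requires configured AWS credentials).
aws s3api put-bucket-versioning \
  --bucket "$BUCKET" \
  --versioning-configuration Status=Enabled

# Confirm: the output should include "Status": "Enabled".
aws s3api get-bucket-versioning --bucket "$BUCKET"
```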
Compulsory Tasks
Task 1:
Task Details
Architecture Diagram
Creating an S3 Lifecycle Policy: This walks you through the steps on how to create a Lifecycle Rule for an
object in an S3 Bucket.
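The same kind of lifecycle rule can be defined from the CLI. A minimal sketch, assuming a placeholder bucket name; the rule values (30-day transition to STANDARD_IA, 365-day expiration) are illustrative, not part of the original lab:

```shell
# Write an example lifecycle configuration (values are illustrative).
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "example-rule",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
      "Expiration": {"Days": 365}
    }
  ]
}
EOF

# Apply it to the bucket (yourBucketName is a placeholder).
aws s3api put-bucket-lifecycle-configuration \
  --bucket yourBucketName \
  --lifecycle-configuration file://lifecycle.json
```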
Lab Tasks
Architecture Diagram
Enable CORS in Amazon S3: This lab walks you through the steps to Enable Cross-Origin Resource
Sharing (CORS) in Amazon S3.
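CORS can also be configured from the CLI. A sketch with an illustrative rule that allows GET requests from any origin (tighten AllowedOrigins for real use; the bucket name is a placeholder):

```shell
# Example CORS configuration (illustrative values).
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF

# Apply the configuration to the bucket.
aws s3api put-bucket-cors \
  --bucket yourBucketName \
  --cors-configuration file://cors.json
```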
Lab Tasks
Architecture Diagram
Comparing Data Transfer Speeds with S3 Transfer Acceleration: This lab walks you through the steps to
create an S3 Bucket to compare the speeds of Direct Upload and Transfer Accelerated Upload of a file.
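A sketch of the speed comparison from the CLI, using a small generated test file; the bucket name is a placeholder, and a meaningful comparison needs a larger file and geographic distance from the bucket's region:

```shell
# Create a small test file (1 MB of zeros) just for illustration.
dd if=/dev/zero of=sample.bin bs=1M count=1 2>/dev/null

# Enable Transfer Acceleration on the bucket.
aws s3api put-bucket-accelerate-configuration \
  --bucket yourBucketName \
  --accelerate-configuration Status=Enabled

# Time a direct upload versus an upload through the accelerated endpoint.
time aws s3 cp sample.bin s3://yourBucketName/direct.bin
time aws s3 cp sample.bin s3://yourBucketName/accelerated.bin \
  --endpoint-url https://s3-accelerate.amazonaws.com
```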
Lab Tasks
Architecture Diagram
How to Create a static website using Amazon S3: This walks you through how to create a static HTML
website using AWS S3 and also how to make it accessible from the internet.
Task Details
Accessing S3 with AWS IAM Roles: This walks you through the steps to create an AWS S3 bucket and
demonstrates how to access the bucket using AWS CLI commands from EC2 instance and IAM roles.
Introduction
IAM Policy
1. An IAM (Identity and Access Management) policy is an entity in AWS that enables you to manage
access to AWS services and resources in a secure fashion.
2. Policies are stored on AWS in JSON format and are attached to resources as identity-based policies.
3. You can attach an IAM policy to different entities such as an IAM group, user, or role.
4. IAM policies give us the power to restrict users or groups to only the specific services that
they need.
Policy Types
There are two important types of policies:
• Identity-Based-Policies
• Resource-Based-Policies
Identity-Based-Policy
1. Identity-based policies are policies that you can attach to an AWS identity (such as a user, group
of users, or role).
2. These policies control what actions an entity can perform, which resources they can use, and the
conditions in which they can use said resources.
3. Identity-based policies are further classified as:
o AWS Managed Policies
o Customer Managed Policies
AWS Managed Policies
1. AWS managed policies are policies that are created and managed by AWS itself.
2. If you are new to IAM policies, you can start with AWS managed policies before managing your
own.
Customer Managed Policies
1. Customer managed policies are policies that are created and managed by you in your AWS account.
2. Customer managed policies provide us with more precise control than AWS managed policies.
3. You can create and edit an IAM policy in the visual editor or by creating the JSON policy document
directly.
4. You can generate your own IAM policy using the following
link: https://awspolicygen.s3.amazonaws.com/policygen.html
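A sketch of creating a customer managed policy from the CLI; the policy name and the read-only S3 actions below are illustrative, not part of the lab:

```shell
# Example customer managed policy: read-only access to one bucket.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::yourBucketName",
        "arn:aws:s3:::yourBucketName/*"
      ]
    }
  ]
}
EOF

# Register it in your account (the policy name is an assumption).
aws iam create-policy \
  --policy-name S3ReadOnlyExample \
  --policy-document file://policy.json
```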
Resource-Based-Policy
1. Resource-based policies are policies that we attach to a resource such as an Amazon S3 bucket.
2. Resource-based policies grant the specified permission to perform specific actions on particular
resources and define under what conditions these policies apply to them.
3. Resource-based policies are inline policies, not managed policies.
4. There are currently no AWS-managed resource-based policies.
5. Within IAM itself, the only resource-based policy is the role trust policy, which is attached to an IAM
role.
6. An IAM role is both an identity and a resource that supports resource-based policies.
IAM Role
1. An IAM role is an AWS IAM identity (that we can create in our AWS account) that has specific
permissions.
2. It is similar to an IAM user in that it determines what the identity can and cannot do in AWS.
3. Instead of being tied to one particular user or group, a role can be assumed by anyone who needs it.
4. The advantage of having a role is that we do not have standard long-term credentials such as a
password or access keys associated with it.
5. When a resource assumes a role, it receives temporary security credentials for the role session.
6. We can use roles to grant access to users, applications, or services that don’t otherwise have access
to our AWS resources.
7. We can attach one or more policies to a role, depending on our requirements.
8. For example, we can create a role with S3 full access and attach it to an EC2 instance to access S3
buckets.
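The role described in this example can also be created from the CLI. A sketch using the role name ec2S3role from this walkthrough and the AWS managed policy AmazonS3FullAccess:

```shell
# Trust policy that lets EC2 instances assume the role.
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach S3 full access.
aws iam create-role --role-name ec2S3role \
  --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name ec2S3role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
```

Note that, unlike the console, the CLI does not create an instance profile automatically; you would also need aws iam create-instance-profile and aws iam add-role-to-instance-profile before EC2 can use the role.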
Lab Tasks
Architecture Diagram
IAM Configuration
Select Roles in the left pane and click on Create Role to create a new IAM role.
In the Create Role section, choose AWS Service and then select EC2 service for the role. Click on Next:
Permissions
• Key: Name
• Value: ec2S3role
You have successfully created the IAM role to access the S3 bucket.
See the highlighted role.
EC2 Configuration
Under the left sub-menu, click on Instances and then click on Launch Instance
Choose an Amazon Machine Image (AMI): Search for Amazon Linux 2 AMI in the search box and click
on the Select button.
Choose an Instance Type: Select t2.micro and then click on Next: Configure Instance Details
• Scroll down to the IAM role and then select the role that we have created in the above step.
• Leave other fields as default.
Click on Next: Add Storage
Add Storage: No need to change anything in this step. Click on Next: Add Tags
Add Tags: Click on Add Tag
• Key: Name
• Value: S3EC2server
Configure Security Group:
• Name: S3server-SG
To add SSH:
You can tell that the instance is running by checking the instance status (example below).
S3 Configuration
Services -> S3
Create a bucket with all default settings. Give it a bucket name yourBucketName.
Note: the bucket name must be globally unique.
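If you prefer the CLI, the bucket can also be created with aws s3 mb (the name below is a placeholder and must be globally unique):

```shell
BUCKET=yourBucketName   # placeholder: replace with your own unique name
aws s3 mb "s3://$BUCKET"
```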
To SSH into the server, please follow the steps in SSH into EC2 Instance.
Move file from current EC2 instance to S3 bucket
Shell
sudo su
Run the below command to find your S3 bucket via CLI.
You can see yourBucketName below.
AWS CLI
aws s3 ls
You will see output similar to the image above, which shows that we are able to access the S3 bucket with
the help of role attached to the EC2 instance.
Create a new text file and upload it to the bucket via AWS CLI (using the following set of commands):
Shell
touch test.txt
echo "Hello World" >> test.txt
cat test.txt
AWS CLI
aws s3 mv test.txt s3://yourBucketName
Services -> S3
Click on yourBucketName
Repeat the above steps to create more files such as new.txt and smile.txt, and upload them to the S3
bucket using the commands below:
Shell
touch new.txt smile.txt
AWS CLI
aws s3 mv new.txt s3://yourBucketName
aws s3 mv smile.txt s3://yourBucketName
You can confirm the files uploaded to S3 bucket by navigating to the bucket in the AWS console.
Services -> S3
You can also list the files uploaded to S3 bucket via CLI from the EC2 instance with the following
command:
AWS CLI
aws s3 ls s3://yourBucketName
To move a file back from the S3 bucket to the EC2 instance:
AWS CLI
aws s3 mv s3://yourBucketName/test.txt .
AWS S3 Multipart Upload using AWS CLI: This Lab walks you through the steps on how to upload a file
to an S3 bucket using multipart uploading.
Tasks
IAM Configuration
Select Roles in the left pane and click on Create Role to create a new IAM role.
In the Create Role section, choose AWS Service and then select EC2 service for the role. Click on Next:
Permissions as shown below in the screenshot:
• Key: Name
• Value: EC2-S3-fullAccess
S3 Configuration
Services -> S3
Create a bucket with all default settings. Give it a bucket name yourBucketName.
Click on Create
EC2 Configuration
Under the left sub-menu, click on Instances and then click on Launch Instance
Choose an Amazon Machine Image (AMI): Search for Amazon Linux 2 AMI in the search box and click
on the Select button.
Choose an Instance Type: Select t2.micro and then click on Next: Configure Instance Details
• Scroll down to the IAM role and then select the role that we have created in the above step.
• Leave other fields as default.
Shell
#!/bin/bash
sudo su
yum update -y
mkdir /home/ec2-user/tmp/
Click on Next: Add Storage
Add Storage: No need to change anything in this step. Click on Next: Add Tags
• Key: Name
• Value: Multipart_Server
Configure Security Group:
• Name: Multipart_Server-SG
• Description: Multi-part Server SSH Security Group
To add SSH:
You can tell that the instance is running by checking the instance status (example below).
To SSH into the server, please follow the steps in SSH into EC2 Instance.
Upload a short video to EC2
AWS CLI
sudo su
ls -l
mv yourVideo.mp4 tmp
Change directory to tmp
AWS CLI
cd tmp
ll
Notice this is a 56.4MB video
• The split command will split a large file into many pieces (chunks) based on the option.
• split [options] [filename]
Here we are dividing the 56.4 MB file into 10 MB chunks (the -b option sets the maximum size, in bytes, of each chunk).
AWS CLI
split -b 10M yourVideo.mp4
View the chunked files
AWS CLI
ls -lh
Info: Here "xaa" through "xaf" are the chunked files, named alphabetically. Each file is 10MB
in size except the last one. The number of chunks depends on the size of your original file and the chunk
size used to split it.
AWS CLI
aws s3 ls
We are initiating the multipart upload using an AWS CLI command, which will generate an UploadId that
will be used later.
• Syntax: aws s3api create-multipart-upload --bucket [bucket name] --key [original file name]
Note: Replace the example bucket name below with your bucket name.
Note: Replace the example file name below with your file name.
AWS CLI
aws s3api create-multipart-upload --bucket yourBucketName --key yourVideo.mp4
Note: Please copy the UploadId and save it for later use.
• My UploadId: _igJ9Rd_RMsonoL0FgYs1zZ6pqrzJyUudKYMDYNS_Cf2k4ktHbNaYeQgJtVjpJwHAmAPIjeYZTvLUjsAndKqOToyOzRSqeZONdhLy7uPD1a2qX9kdo8_NYkxpmj_xe2xx250xRZjDAm32N6X8bCYlA--
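Rather than copying the long UploadId by hand, you can capture it in a shell variable with the CLI's --query option (bucket and file names are the same placeholders used above):

```shell
# Start the multipart upload and keep only the UploadId string.
UPLOAD_ID=$(aws s3api create-multipart-upload \
  --bucket yourBucketName --key yourVideo.mp4 \
  --query UploadId --output text)
echo "UploadId: $UPLOAD_ID"
```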
Next, we need to upload each file chunk one by one, using the part number. The part number is assigned
based on the alphabetic order of the file.
xaa 1
xab 2
xac 3
xad 4
xae 5
xaf 6
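The per-chunk uploads can also be scripted in one loop. A sketch, assuming the chunks xaa through xaf produced by split are in the current directory and yourUploadId is your saved UploadId (both placeholders):

```shell
# Upload each chunk in alphabetical order; part numbers start at 1.
PART=1
for CHUNK in xa?; do
  aws s3api upload-part \
    --bucket yourBucketName --key yourVideo.mp4 \
    --part-number "$PART" --body "$CHUNK" \
    --upload-id yourUploadId
  PART=$((PART + 1))
done
```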
Syntax:
AWS CLI
aws s3api upload-part --bucket [bucket name] --key [file name] --part-number [number] --body [chunk file name] --upload-id [id]
Example:
Note: Replace the example bucket name below with your bucket name.
Note: Replace the example file name below with your file name.
Note: Replace the example UploadId below with your UploadId.
AWS CLI
aws s3api upload-part --bucket yourBucketName --key yourVideo.mp4 --part-number 1 --body xaa --upload-id yourUploadId
Note: Copy the ETag id and Part number for later use.
Repeat the above CLI command for each file chunk [Replace --part-number & --body values with the above
table values]
Press the UP Arrow Key on your computer to get back to the previous command. No need to enter the
Upload ID again, just change the Part Number and Body Value.
Each time you upload a chunk, don’t forget to save the ETag value.
My ETags:
• "ETag": "48dc187fd8b0e41deec06891ed7e0c02"
• "ETag": "07d54f5c82a1c59f5df23d453e8775d7"
• "ETag": "fa79d59ab3e32c6dd8b9c53fa9747f54"
• "ETag": "3898e88d3d5f5ba93a9b7175e859b42d"
• "ETag": "7dcd2ad6dd3f28f146cdf87775754dcc"
• "ETag": "fc511daca99afd0be81a6d606ee95c2d"
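Instead of copying each ETag by hand, you can also ask S3 for the parts it has received so far (bucket, key, and UploadId are placeholders):

```shell
KEY=yourVideo.mp4   # placeholder: your original file name

# List uploaded part numbers with their ETags; useful for building list.json.
aws s3api list-parts \
  --bucket yourBucketName --key "$KEY" \
  --upload-id yourUploadId \
  --query 'Parts[*].[PartNumber,ETag]' --output text
```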
EC2 Shell
vim list.json
Copy the below JSON Script and paste it in the list.json file.
Note: Replace the ETag ID according to the part number, which you received after uploading each chunk.
{
  "Parts": [
    {
      "PartNumber": 1,
      "ETag": "\"48dc187fd8b0e41deec06891ed7e0c02\""
    },
    {
      "PartNumber": 2,
      "ETag": "\"07d54f5c82a1c59f5df23d453e8775d7\""
    },
    {
      "PartNumber": 3,
      "ETag": "\"fa79d59ab3e32c6dd8b9c53fa9747f54\""
    },
    {
      "PartNumber": 4,
      "ETag": "\"3898e88d3d5f5ba93a9b7175e859b42d\""
    },
    {
      "PartNumber": 5,
      "ETag": "\"7dcd2ad6dd3f28f146cdf87775754dcc\""
    },
    {
      "PartNumber": 6,
      "ETag": "\"fc511daca99afd0be81a6d606ee95c2d\""
    }
  ]
}
Now we are going to join all the chunks together with the help of the JSON file we created in the above
step.
Syntax:
AWS CLI
aws s3api complete-multipart-upload --multipart-upload file://[json file name] --bucket [bucket name] --key [original file name] --upload-id [upload id]
Example:
Note: Replace the example bucket name below with your bucket name.
Note: Replace the example file name below with your file name.
Note: Replace the example UploadId below with your UploadId.
Note: Replace the example list.json with your json name.
AWS CLI
aws s3api complete-multipart-upload --multipart-upload file://yourJsonName --bucket yourBucketName --key yourVideo.mp4 --upload-id yourUploadId
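To confirm the multipart upload completed and the object is intact (the last objective of this lab), a sketch with placeholder names:

```shell
BUCKET=yourBucketName   # placeholder
KEY=yourVideo.mp4       # placeholder

# Check that the completed object now exists and note its size.
aws s3api head-object --bucket "$BUCKET" --key "$KEY"

# Download a copy and compare it byte-for-byte with the original.
# Note: the ETag of a multipart object is NOT a plain MD5 of the file,
# so compare contents rather than checksums.
aws s3 cp "s3://$BUCKET/$KEY" downloaded.mp4
cmp "$KEY" downloaded.mp4 && echo "Object is intact"
```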
Note: