
SKILL ENHANCEMENT COURSE–I

AWS CLOUD COMPUTING


Submitted by

MUDDAPU CHARAN 23A51A05A8


MUNGI PAVANI 23A51A05A9
K VENKATA SRILAKSHMI 23A51A05B0
NIMMADA RAVI TEJA 23A51A05B1
NUNNA VIJAY KRISHNA 23A51A05B2
PALLI JITENDRA 23A51A05B3

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


ADITYA INSTITUTE OF TECHNOLOGY AND MANAGEMENT
(Approved by AICTE, New Delhi, Affiliated to JNTUGV, Accredited by NBA & NAAC)
(AUTONOMOUS)
(K. KOTTURU, TEKKALI - 532201)
2024-2025

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
ADITYA INSTITUTE OF TECHNOLOGY AND MANAGEMENT

This is to certify that the Skill Enhancement Course – I entitled “AWS Cloud Computing” is being submitted by MUDDAPU CHARAN (23A51A05A8), MUNGI PAVANI (23A51A05A9), KINTALI VENKATA SRI LAKSHMI (23A51A05B0), NIMMADA RAVI TEJA (23A51A05B1), NUNNA VIJAY KRISHNA (23A51A05B2), and PALLI JITENDRA (23A51A05B3) in partial fulfilment of the requirements for the degree of Bachelor of Technology in Computer Science and Engineering at Aditya Institute of Technology and Management, Tekkali, and is a record of genuine work carried out by them under my guidance and supervision during the academic year 2024-2025.

Signature of the Co-ordinator Signature of Head of the Department


Sri G. S. Pavan Kumar, M.Tech                              Dr. Y. Ramesh, M.Tech., Ph.D

Assistant Professor Head of the Department


Department of CSE Department of CSE

CONTENTS

Introduction to AWS
Storage in AWS
AWS EBS and Volumes
Console Demonstration - EBS
AWS S3
Console Demonstration - S3
AWS EFS
Console Demonstration - S3 & EFS
AWS S3 Glacier
Console Demonstration - Glacier

Introduction to AWS Cloud Computing
In today’s digital era, cloud computing has become a foundation for modern business operations,
providing flexible, scalable, and cost-effective IT solutions that facilitate innovation and productivity.
Amazon Web Services (AWS) is one of the most widely adopted cloud platforms, known for its vast
array of services and global infrastructure that enable companies to meet their unique business needs.
This introduction explores AWS cloud computing, covering its fundamental concepts, features, and
benefits for organizations of all sizes.

What is Cloud Computing?


Cloud computing is the delivery of computing services, including storage, processing, and networking,
over the internet, or “cloud.” Rather than owning and maintaining physical servers, organizations can
leverage cloud providers to access on-demand resources that are managed remotely. This shift
transforms the traditional model of IT, allowing businesses to scale operations quickly, reduce capital
expenses, and improve efficiency by focusing on core activities rather than infrastructure management.

What is AWS?
Amazon Web Services (AWS) is a leading cloud provider offering a comprehensive suite of over 200
cloud-based services. AWS was launched by Amazon in 2006 and has since grown to serve millions of
customers globally, from startups to large enterprises and government agencies. AWS operates through
a network of data centers spread across multiple geographic regions worldwide, ensuring high
availability, security, and resilience.
AWS provides services under three core models:

1. Infrastructure as a Service (IaaS) – Offers fundamental cloud building blocks such as compute, storage, and networking.
2. Platform as a Service (PaaS) – Provides a platform for developers to build, run, and manage
applications without managing the underlying infrastructure.
3. Software as a Service (SaaS) – Delivers fully managed software applications hosted on the
cloud.
These service models give organizations the flexibility to choose solutions that fit their needs, from
managing infrastructure to leveraging ready-made applications.

Key Components of AWS Cloud Computing

AWS offers various services organized into categories, each addressing specific business requirements:

1. Compute Services
AWS’s compute services provide scalable processing power, allowing organizations to run
applications, host websites, and analyze data.
 Amazon EC2 (Elastic Compute Cloud): This service offers virtual servers to run
applications. Users can choose the instance type, operating system, and storage capacity to suit
their needs. EC2 instances are scalable, meaning resources can be adjusted as demand
fluctuates.
 AWS Lambda: A serverless compute service that runs code without provisioning or managing
servers. It is ideal for tasks like data processing and backend services, as users are charged only
for the compute time they consume.
2. Storage Services
Storage is essential for any organization’s IT infrastructure, and AWS provides several solutions to
securely store and retrieve data.
 Amazon S3 (Simple Storage Service): S3 is an object storage service that stores any amount
of data and makes it accessible over the internet. It is ideal for storing backups, archives, and
multimedia files, with multiple storage classes to optimize costs.
 Amazon EBS (Elastic Block Store): Provides persistent block storage for EC2 instances,
suitable for applications requiring low-latency storage.

3. Networking and Content Delivery


Networking is crucial for secure data transfer and efficient delivery of content.

 Amazon VPC (Virtual Private Cloud): Allows users to create a logically isolated network
within the AWS cloud. VPCs enable control over network configuration, including IP address
range and subnet creation.
 Amazon CloudFront: A content delivery network (CDN) service that distributes content
globally with low latency, ideal for streaming media or delivering applications to users
worldwide.

4. Database Services
AWS offers various database services to handle structured, semi-structured, and unstructured data.

 Amazon RDS (Relational Database Service): Provides managed relational databases (e.g.,
MySQL, PostgreSQL, Oracle, and SQL Server). It simplifies database setup, scaling, and
maintenance.
 Amazon DynamoDB: A fully managed NoSQL database service suitable for applications requiring high performance and scalability, such as e-commerce sites and mobile apps.
5. Security and Identity
AWS prioritizes security and provides numerous services to protect user data.

 AWS IAM (Identity and Access Management): Enables administrators to control access to
AWS resources through role-based permissions.
 AWS Key Management Service (KMS): Provides encryption keys to secure data at rest and
in transit.
6. Machine Learning and Artificial Intelligence
AWS’s AI and ML services allow organizations to build intelligent applications.

 Amazon SageMaker: A fully managed service that enables data scientists to build, train, and
deploy machine learning models at scale.
 Amazon Rekognition: An image and video analysis service that can detect objects, people,
and activities, useful in security and media applications.
Advantages of AWS Cloud Computing
AWS has gained popularity due to the following key advantages:

1. Scalability and Flexibility


AWS offers both vertical and horizontal scaling options, allowing businesses to adjust resources based
on demand. Organizations can run small applications on a single instance or scale to massive
infrastructure supporting millions of users without over-provisioning resources.
2. Cost-Effectiveness
AWS operates on a pay-as-you-go model, where users are only billed for the resources they use. This
eliminates the need for large upfront investments in hardware and reduces ongoing maintenance costs.
3. Reliability
With data centers across multiple regions, AWS provides high levels of availability and redundancy.
Services such as Amazon EC2 and Amazon S3 are designed for 99.99% availability, making AWS a
trusted choice for mission-critical applications.
4. Global Reach
AWS’s extensive network of data centers and edge locations enables fast, low-latency access
worldwide. Organizations can deploy applications in multiple regions, enhancing performance for
international users and improving disaster recovery.

5. Security and Compliance


AWS adheres to numerous compliance standards, including HIPAA, GDPR, and SOC, which makes it
suitable for organizations in regulated industries. Security features like IAM, encryption, and audit
trails provide enhanced protection for sensitive data.

AWS Pricing Model
AWS pricing is designed to be flexible and transparent. The following are some commonly used
models:

 On-Demand Pricing: Pay for compute or storage without long-term commitments.


 Reserved Instances: Available at a discounted rate for users willing to commit to a one- or three-year term.
 Spot Instances: AWS sells spare computing capacity at discounted rates through spot
instances, suitable for flexible workloads.
These options allow organizations to optimize costs based on their workload patterns and needs.
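As an illustration only, here is a minimal AWS CLI sketch of launching Spot capacity; the AMI ID and instance type below are placeholders, not values from this material:

# Request Spot capacity through the regular run-instances call (hypothetical AMI ID)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --instance-market-options 'MarketType=spot'

If Spot capacity is unavailable, the request fails rather than silently falling back to On-Demand pricing, which is why Spot suits interruption-tolerant workloads.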
Use Cases of AWS Cloud Computing
AWS supports diverse use cases across various industries:
 Web Hosting: AWS can host websites, web applications, and microservices, offering
reliability and scalability.
 Data Analytics: Services like Amazon Redshift and AWS Glue enable users to analyze large
data sets, providing insights to improve decision-making.
 Machine Learning: With SageMaker and AI-powered services, AWS supports machine
learning applications across industries, from customer service chatbots to predictive analytics.
 Gaming: AWS provides infrastructure to host and scale gaming applications, handling
thousands of simultaneous connections.
Getting Started with AWS
For new users, AWS offers a free tier, which provides limited access to services for one year. This
enables organizations and individuals to experiment with the cloud at no cost. Additionally, AWS’s
vast resources, including documentation, tutorials, and support, help users navigate the platform and
build cloud-based solutions.
Conclusion
AWS has revolutionized the cloud computing landscape, empowering organizations to achieve digital
transformation with a robust and versatile cloud platform. With a broad range of services, scalability,
security, and cost-effective pricing, AWS allows businesses to innovate and expand in ways previously
unattainable with traditional IT infrastructure. As cloud technology continues to evolve, AWS is likely
to remain at the forefront, shaping the future of how organizations manage and deploy IT resources.

AWS Storage Solutions: An In-depth Guide
In the cloud computing ecosystem, storage plays a critical role, enabling organizations to store,
manage, and retrieve data efficiently and securely. Amazon Web Services (AWS) offers a
comprehensive suite of storage solutions tailored to diverse use cases, from simple backups to highly
sophisticated data lakes for big data analytics. This guide explores AWS’s storage offerings, covering
key services, features, benefits, and use cases.

Why Cloud Storage Matters


Traditional on-premises storage solutions require substantial upfront investments in hardware, and
ongoing maintenance is resource-intensive. AWS cloud storage shifts this model to a more flexible,
on-demand solution that allows organizations to store and access data without the limitations of
physical infrastructure. By utilizing AWS storage, businesses can scale capacity up or down, pay only
for what they use, and ensure data is available, durable, and secure.
AWS Storage Solutions Overview
AWS’s storage offerings fall into several main categories:

1. Object Storage – Amazon S3


2. Block Storage – Amazon EBS

3. File Storage – Amazon EFS and Amazon FSx

4. Archival Storage – Amazon S3 Glacier

5. Data Transfer and Migration Tools

Let’s examine each of these storage services in detail.

1. Amazon S3 (Simple Storage Service)


Amazon S3 is AWS’s premier object storage service, designed for scalability, high durability, and ease
of access. S3 enables users to store virtually unlimited amounts of data in the form of objects, which
are stored in "buckets." S3 is optimized for storing unstructured data, such as documents, images,
videos, and backups.
Key Features of Amazon S3
 Durability and Availability: S3 offers 99.999999999% (11 9s) durability by automatically
replicating data across multiple facilities within an AWS region. Users can also enable cross-region replication to ensure resilience in case of regional failures.
 Storage Classes: S3 offers multiple storage classes to accommodate different data access
patterns:
o S3 Standard: For frequently accessed data, with low latency.
o S3 Intelligent-Tiering: Automatically moves data between two access tiers based on
usage patterns.

o S3 Standard-IA (Infrequent Access): For infrequently accessed data with rapid retrieval when needed.

o S3 One Zone-IA: A lower-cost option for data that can be recreated if lost, as it stores
data in a single Availability Zone.
 Versioning and Lifecycle Policies: Users can maintain multiple versions of an object, while
lifecycle policies enable automatic transitions of objects to different storage classes over time,
optimizing cost management.
 Security: S3 provides data encryption both at rest and in transit, using AWS Key Management
Service (KMS) integration and Secure Socket Layer (SSL).

Use Cases for Amazon S3


 Data Lakes: Organizations use S3 as the foundation for data lakes, storing large volumes of
structured and unstructured data for analytics.
 Website Hosting: S3 can host static websites, providing scalability and reliability for high-traffic sites.
 Content Distribution: Media companies store and distribute digital content, such as videos
and images, leveraging S3's global availability.

2. Amazon EBS (Elastic Block Store)


Amazon EBS provides block storage that can be attached to Amazon EC2 instances. It’s suitable for
applications that require low-latency access to frequently updated data, such as databases and
transactional applications.
Key Features of Amazon EBS
 High Performance: EBS offers multiple volume types, including General Purpose SSD (gp3),
Provisioned IOPS SSD (io2), and Throughput Optimized HDD, which are optimized for
varying performance requirements.
 Snapshots: EBS allows users to take point-in-time snapshots, which can serve as backups or
enable volume cloning for testing and development.
 Availability and Durability: Data stored in EBS is replicated within the same Availability
Zone, ensuring durability and availability.
 Encryption: EBS volumes can be encrypted at creation to secure data at rest, with automatic
encryption using AWS KMS.
Use Cases for Amazon EBS
 Relational Databases: EBS provides the persistent storage needed for databases like MySQL,
PostgreSQL, and Oracle.
 Big Data and Analytics: High IOPS and throughput volumes support data processing tasks
that require substantial read/write speeds.
 Backup and Recovery: EBS snapshots allow organizations to create backups and restore data,
ensuring continuity during failures.
3. Amazon EFS (Elastic File System) and Amazon FSx
Amazon EFS and FSx are AWS’s file storage solutions, providing shared file storage for applications
that require hierarchical storage systems.

Amazon EFS
Amazon EFS is a scalable file storage system that allows multiple EC2 instances to access a shared file
system. EFS is suitable for applications that require standard file-based access patterns, such as web
content management systems, development environments, and home directories.
 Elastic Scaling: EFS automatically adjusts storage capacity based on usage.

 Performance Modes: EFS offers different performance modes (General Purpose and Max I/O)
and throughput modes (Bursting and Provisioned), allowing users to optimize cost and
performance.
 High Availability: Data is stored across multiple Availability Zones, providing resilience.

Amazon FSx
Amazon FSx offers fully managed file storage solutions for specific applications and environments:

 FSx for Windows File Server: Provides Windows-native shared storage for applications that
require SMB protocol support.
 FSx for Lustre: High-performance file storage optimized for compute-intensive workloads,
such as machine learning and high-performance computing (HPC).
Use Cases for EFS and FSx
 Shared Storage for Applications: EFS and FSx allow multiple EC2 instances to access
shared data, enabling collaboration and high availability.
 Media Processing: FSx for Lustre is ideal for media rendering and other high-performance
applications.
 Machine Learning: High-speed storage like FSx for Lustre can accelerate model training and
data processing.

4. Amazon S3 Glacier and S3 Glacier Deep Archive


Amazon S3 Glacier and Glacier Deep Archive are AWS’s archival storage solutions, optimized for
long-term data storage at a low cost. They are ideal for storing data that is infrequently accessed and
may need retrieval only under specific conditions.

Key Features of Amazon S3 Glacier


 Low Cost: Glacier offers an economical way to store data long-term, with storage costs lower
than S3’s other storage classes.
 Retrieval Options: Glacier provides multiple retrieval options based on retrieval time:
o Expedited: Retrieval within minutes.
o Standard: Retrieval within hours.
o Bulk: Retrieval within days, offering the lowest cost.
 Data Lifecycle Policies: Data can be automatically moved to Glacier using lifecycle policies
in Amazon S3, allowing businesses to manage archival data effectively.

Use Cases for Amazon S3 Glacier
 Compliance and Regulatory Storage: Glacier is ideal for storing data that needs to be
retained for regulatory purposes but is rarely accessed.
 Long-Term Backups: Organizations store infrequently accessed backups and historical data in
Glacier to reduce costs.
 Digital Preservation: Institutions and companies with large archives of digital media use
Glacier for affordable, long-term storage.
5. Data Transfer and Migration Tools
AWS provides tools to facilitate data transfer and migration, enabling organizations to move data from
on-premises storage to AWS or between AWS storage services.
 AWS Snow Family: AWS Snowcone, Snowball, and Snowmobile are physical devices used to
transfer large volumes of data to AWS when network bandwidth is limited or data transfer over
the internet is impractical.
 AWS DataSync: A managed service for automating and accelerating data transfers between
on-premises storage and AWS, or between AWS services.
 AWS Storage Gateway: A hybrid cloud storage service that provides seamless integration
between on-premises storage and AWS, enabling data transfer without significant
infrastructure changes.

Benefits of AWS Storage Solutions


AWS storage solutions provide several key benefits:

 Scalability: AWS’s storage services scale up and down as required, ensuring that resources
match demand without over-provisioning.
 Security and Compliance: AWS storage solutions comply with regulatory standards (e.g.,
GDPR, HIPAA) and offer robust security features, including encryption and access
management.
 Cost-Effectiveness: By leveraging different storage classes, organizations can optimize costs
based on data access patterns, using cheaper options like Glacier for long-term storage.
 Flexibility and Integration: AWS storage integrates with other AWS services, allowing users
to build complex workflows for data processing, machine learning, and analytics.
Conclusion
AWS offers a robust and flexible set of storage solutions tailored to diverse data storage needs, from
frequently accessed data to long-term archives. With its global infrastructure, comprehensive security
features, and scalable options, AWS enables organizations to store and manage data efficiently,
ensuring business continuity and agility in a rapidly evolving digital landscape. As cloud storage
becomes increasingly vital for modern businesses, AWS’s storage services provide a foundation for
efficient, secure, and cost-effective data management.

Amazon EBS (Elastic Block Store) & Volumes
Amazon Elastic Block Store (EBS) is a scalable, high-performance block storage service designed to
work with Amazon Elastic Compute Cloud (EC2) instances. EBS provides reliable and persistent
storage that is particularly suited for workloads requiring high performance, low latency, and frequent
read/write operations. This makes it ideal for applications like databases, big data analytics, and
enterprise applications. This guide explores Amazon EBS, its features, volume types, and use cases, as
well as how it integrates with AWS’s ecosystem.

Overview of Amazon EBS


EBS offers block-level storage that can be attached to EC2 instances, acting as a virtual hard drive for
applications. Unlike local storage, which is physically tied to the EC2 instance and is ephemeral (data
is lost if the instance stops or terminates), EBS volumes are independent of instances and provide
persistent storage. This persistence ensures data remains intact across instance restarts or terminations,
making EBS a reliable solution for applications requiring long-term data retention.

Key Features of Amazon EBS


1. Durability and Availability: EBS volumes are designed to be highly durable and available.
Data stored in EBS is automatically replicated within the same Availability Zone, providing
fault tolerance against hardware failure. EBS is designed for 99.999% availability, making it a dependable storage choice for mission-critical applications.

2. Scalability: EBS volumes can be resized to meet changing demands without interrupting
applications. This flexibility allows businesses to scale storage capacity up or down as their
data storage requirements change.

3. Data Encryption: EBS supports encryption at rest and in transit. Encryption can be enabled at
volume creation, and EBS integrates with AWS Key Management Service (KMS) to generate
and manage encryption keys.
4. Snapshots: EBS snapshots enable users to create point-in-time backups of volumes. These
snapshots are stored in Amazon S3 and can be used to create new volumes, restore data, or
transfer data across regions. Snapshots can also be automated to simplify backup and recovery.
5. Flexible Performance Options: EBS offers different volume types optimized for performance,
cost, or both, allowing users to select the right storage for their application requirements.

Amazon EBS Volume Types


AWS offers several types of EBS volumes to cater to varying workload needs. These types are
categorized into two main classes: SSD-backed volumes, which are optimized for transactional
workloads requiring fast random access, and HDD-backed volumes, designed for large sequential
read and write operations.
1. General Purpose SSD (gp3 and gp2)
 gp3: The default EBS volume type, gp3 offers a baseline performance of 3,000 IOPS and 125
MB/s throughput, which can be independently scaled up to 16,000 IOPS and 1,000 MB/s. This
flexibility allows users to fine-tune performance without increasing storage size, making gp3
suitable for boot volumes, small databases, and development environments.
 gp2: Previously the default, gp2 scales automatically with volume size to provide higher
baseline performance. However, it is less cost-effective than gp3 for workloads that require
customized IOPS and throughput settings.
Use Cases for General Purpose SSD: gp3 and gp2 volumes are suitable for everyday applications
like boot volumes, virtual desktops, and small-to-medium databases.
2. Provisioned IOPS SSD (io1 and io2)
Provisioned IOPS SSD volumes (io1 and io2) are designed for applications requiring consistently high
I/O performance and low latency. These volumes allow users to specify the exact IOPS requirements,
making them ideal for performance-intensive applications.
 io2: An upgrade from io1, io2 offers greater durability (99.999%) and is intended for
workloads where data integrity is crucial. It provides IOPS ranging from 100 to 64,000,
depending on the volume size.
 io1: Similar to io2, with lower durability, it is gradually being replaced by io2 due to the
latter’s enhanced resilience and efficiency.
Use Cases for Provisioned IOPS SSD: io1 and io2 are optimal for large databases, transactional
workloads, and applications that require minimal latency, such as Oracle, SAP, and high-performance
relational databases.

3. Throughput Optimized HDD (st1)
Throughput Optimized HDD (st1) volumes are designed for big data and log processing, where large
sequential read and write throughput is prioritized over IOPS. st1 volumes offer consistent throughput
at a lower cost than SSD volumes, making them a good choice for workloads with heavy read/write
sequences.
Use Cases for Throughput Optimized HDD: st1 is ideal for applications that need high throughput,
such as data warehouses, log processing systems, and big data analytics workloads.

4. Cold HDD (sc1)


Cold HDD (sc1) volumes are the most cost-effective EBS option for infrequently accessed data. sc1
offers lower throughput than st1 but at a fraction of the cost. This volume type is suitable for scenarios
where data is rarely accessed and performance is not a priority.
Use Cases for Cold HDD: sc1 is suitable for cold data storage, data archiving, and scenarios where
storage cost reduction is essential, such as backup repositories.

EBS Snapshots
EBS snapshots are incremental backups stored in Amazon S3, capturing only the changed blocks since
the last snapshot. They provide an efficient method for creating backups, restoring data, and replicating
volumes across AWS regions. Snapshots can be automated using AWS Backup or scheduled via
CloudWatch Events to ensure regular backups without manual intervention. Snapshots also support
data encryption, ensuring compliance with data security standards.
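As a sketch of the same workflow from the AWS CLI (the volume and snapshot IDs below are hypothetical):

# Create a point-in-time snapshot of an EBS volume
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "nightly backup"

# Copy the snapshot to another region for disaster recovery
aws ec2 copy-snapshot \
    --region us-west-2 \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0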
Use Cases for EBS Snapshots:

 Backup and Disaster Recovery: EBS snapshots provide a reliable backup strategy, enabling
quick restoration in case of failure.
 Data Migration: Snapshots can be copied across regions, facilitating data transfer and
enabling multi-region replication.
 Testing and Development: Snapshots enable quick provisioning of identical environments by
restoring snapshots to create new EBS volumes.
Benefits of Amazon EBS
 High Performance: EBS volumes provide low-latency performance for read/write operations,
ideal for databases and transactional systems.
 Reliability: With multiple replication copies within an Availability Zone, EBS minimizes data
loss from hardware failures.
 Security: EBS offers encryption options at both volume and snapshot levels, supporting
industry compliance standards.
 Flexible Cost Management: With various volume types, users can optimize cost by selecting
storage that best fits workload demands.

Integration with AWS Ecosystem
EBS seamlessly integrates with other AWS services, enhancing its functionality:

 EC2: EBS volumes are most commonly used with EC2 instances, providing reliable storage
that can be easily detached and re-attached to other instances as needed.
 CloudWatch: Monitoring and alerting capabilities in CloudWatch allow users to track EBS
volume performance, ensuring optimal usage.
 Auto Scaling: Auto Scaling groups can use EBS volumes to launch instances dynamically
based on traffic and performance needs, ensuring efficient resource management.
 AWS Backup: AWS Backup simplifies the process of managing backups for EBS volumes,
ensuring compliance and business continuity.

Conclusion
Amazon EBS is a versatile and robust storage solution that supports a range of applications, from
simple boot volumes to high-performance databases. With its flexible volume types, seamless
integration with EC2, and security and backup features, EBS helps organizations build scalable,
secure, and efficient infrastructure on AWS. By selecting the right volume type and leveraging
snapshots and automated backups, businesses can optimize costs while ensuring data integrity and
availability, making EBS a vital component of AWS’s storage portfolio.

Demonstrating Amazon EBS on the AWS Management Console


Amazon Elastic Block Store (EBS) provides persistent block storage that can be easily attached to
Amazon EC2 instances. This guide covers how to create, configure, and attach EBS volumes to EC2
instances using the AWS Management Console, which simplifies EBS management with a user-friendly interface. This walkthrough includes steps for creating an EBS volume, attaching it to an EC2
instance, formatting it for use, and creating a snapshot.

Step 1: Creating an EBS Volume

1. Access the AWS Management Console: Log in to the AWS Management Console and go to the
EBS Volumes section by navigating to EC2 > Elastic Block Store > Volumes.

2. Create Volume:
o Click Create Volume to open the volume configuration screen.
o Choose the Volume Type based on your workload requirements. For general-purpose
use, select General Purpose SSD (gp3). For higher performance needs, consider
Provisioned IOPS SSD (io2).
o Specify the Size in GiB according to your data storage needs. Keep in mind that certain
volume types have size limitations.

o Choose the Availability Zone that matches the zone of your target EC2 instance. EBS
volumes must be in the same zone as the EC2 instance they are attached to.

o Optional: Configure Encryption if needed. AWS Key Management Service (KMS) integration allows you to manage encryption keys.

3. Create Volume: After configuring settings, click Create Volume. AWS will provision the
volume, and it will appear in the EBS volumes list with a status of "available."
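The same volume can also be created from the AWS CLI; a minimal sketch, assuming a 20 GiB encrypted gp3 volume in us-east-1a:

aws ec2 create-volume \
    --volume-type gp3 \
    --size 20 \
    --availability-zone us-east-1a \
    --encrypted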

Step 2: Attaching the EBS Volume to an EC2 Instance


1. Locate the EC2 Instance: Go to EC2 > Instances and find the instance to which you want to
attach the volume. Ensure it is in the same Availability Zone as the newly created EBS volume.
2. Attach Volume:
o Return to the Volumes page, select the EBS volume you created, and click Actions >
Attach Volume.
o In the Attach Volume dialog, select the Instance ID of your EC2 instance from the
dropdown.

o Choose a Device Name (e.g., /dev/sdf). This name represents how the instance will
identify the volume.
3. Attach: Click Attach Volume. The volume’s status will update to "in-use" once it’s
successfully attached.
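The CLI equivalent of this step is a single call; the volume and instance IDs below are placeholders:

aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf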

Step 3: Formatting and Mounting the EBS Volume on the EC2 Instance

1. Connect to the EC2 Instance: Use SSH to log in to the instance where the EBS volume is
attached. You can connect directly from the EC2 console by clicking Connect and following
the SSH instructions.

2. Verify the Attached Volume:
o Run the command to list available disks:

lsblk

o You should see the new volume (e.g., /dev/sdf) listed as an unformatted disk. On current-generation (Nitro) instances the device may instead appear under an NVMe name such as /dev/nvme1n1.
3. Format the Volume:
o If the volume is new and unformatted, format it with a file system (e.g., ext4):

sudo mkfs -t ext4 /dev/sdf

4. Mount the Volume:

o Create a directory to mount the volume:

sudo mkdir /mnt/ebs-volume

o Mount the volume:

sudo mount /dev/sdf /mnt/ebs-volume

o Verify it is mounted by running df -h. You should see the EBS volume listed with the specified mount point.

5. Persist the Mounting (Optional): To ensure the volume automatically remounts after a reboot, add an entry in the /etc/fstab file. Use the following format:

/dev/sdf /mnt/ebs-volume ext4 defaults,nofail 0 0

Because device names can change between reboots, a more robust fstab entry references the file system UUID reported by sudo blkid instead of the device name.

Step 4: Creating an EBS Snapshot for Backup


1. Create Snapshot:

o Go to the Volumes section, select the volume, click Actions > Create Snapshot.
o Provide a description for the snapshot and click Create Snapshot.
2. Automate Backups (Optional): AWS Backup and Lifecycle Manager can automate snapshot
creation for regular backups, ensuring data durability without manual intervention.

Step 5: Detaching and Deleting the EBS Volume

1. Detach Volume:

o Before detaching, unmount the volume from the EC2 instance:

sudo umount /mnt/ebs-volume

o Return to the Volumes section in the AWS Console, select the volume, and click Actions
> Detach Volume. Confirm the detachment.

2. Delete Volume:
o Once detached, the volume can be deleted to free up storage and avoid additional costs. In
the Volumes section, select the volume, click Actions > Delete Volume, and confirm
deletion.
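A brief CLI sketch of the same cleanup (hypothetical volume ID):

# Detach the volume after unmounting it inside the instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# Delete the detached volume to stop incurring storage charges
aws ec2 delete-volume --volume-id vol-0123456789abcdef0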

Conclusion
The AWS Management Console simplifies the process of managing Amazon EBS volumes, allowing
you to create, attach, format, snapshot, and delete volumes easily. EBS volumes add flexibility and
persistence to your EC2 instances, providing scalable storage for databases, big data, and enterprise
applications. Following these steps ensures your application data remains secure, available, and backed
up, demonstrating the advantages of AWS EBS for robust, persistent storage.

Amazon S3 (Simple Storage Service)
Amazon Simple Storage Service, known as Amazon S3, is a highly scalable, secure, and cost-effective
object storage service provided by Amazon Web Services (AWS). S3 enables users to store and
retrieve any amount of data from anywhere on the web, making it an essential tool for handling large
volumes of unstructured data, such as media files, backups, and application data. S3 is popular for its
durability, easy integration, and security features, making it a reliable storage solution for a wide range
of applications and industries.

Core Concepts of Amazon S3


Amazon S3 is built on a foundation of several core concepts, including buckets, objects, and object
keys:
1. Buckets: In S3, data is stored in containers known as buckets. Each bucket can store an
unlimited number of objects, and every object within S3 is uniquely identified by a bucket
name and an object key (or filename). Bucket names must be unique across all of AWS, as they
form part of the URL used to access objects.
2. Objects: Objects are the fundamental storage units in S3, consisting of the data itself and
metadata. The data can be anything from an image file to a video, document, or backup file,
while metadata includes information like file size, creation date, and custom tags. Each object
is identified by a unique key, which serves as its name in the bucket.
3. Object Keys: An object key is the unique identifier for each object within a bucket. It functions
similarly to a file path in traditional storage, and is used to access, modify, and manage
individual objects.
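These concepts map directly onto the AWS CLI; a minimal sketch, using a hypothetical bucket name and object key:

# Create a bucket (the name must be globally unique)
aws s3 mb s3://my-example-bucket-2024

# Upload a file; "backups/2024/db.dump" becomes the object key
aws s3 cp db.dump s3://my-example-bucket-2024/backups/2024/db.dump

# List objects under a key prefix, much as you would list a folder
aws s3 ls s3://my-example-bucket-2024/backups/2024/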

Key Features of Amazon S3
1. Durability and Availability: Amazon S3 is designed to provide 99.999999999% (11 nines)
durability, ensuring that data is highly resistant to loss. This is achieved by automatically
replicating data across multiple physical facilities within an AWS region. Additionally, S3
offers high availability, meaning stored data is always accessible.
2. Storage Classes: Amazon S3 provides various storage classes optimized for different data
access patterns, including:
o S3 Standard: Designed for frequently accessed data, offering high durability,
availability, and low latency.

o S3 Intelligent-Tiering: Automatically moves objects between two access tiers (frequent and infrequent) based on usage patterns, helping optimize costs.
o S3 Standard-IA (Infrequent Access): Suitable for data that is accessed less frequently
but requires quick retrieval when needed.
o S3 Glacier and Glacier Deep Archive: Ideal for long-term archival storage where data
retrieval times are less critical. S3 Glacier provides retrieval within minutes to hours,
while Glacier Deep Archive is suitable for data that can be restored in 12 hours or more.
3. Security and Access Control: S3 offers a comprehensive suite of security features, including:

o Server-Side Encryption: Data can be encrypted at rest with Amazon-managed keys (SSE-S3) or AWS Key Management Service (KMS) keys (SSE-KMS).

o Access Control: S3 integrates with AWS Identity and Access Management (IAM) to
define granular access permissions. Bucket policies, ACLs (Access Control Lists), and
IAM roles control who can access or modify data.

o Versioning: S3 versioning tracks changes by retaining multiple versions of an object, making it possible to recover from accidental deletions or overwrites.
4. Data Lifecycle Management: S3 allows users to define lifecycle rules to automatically
transition objects between storage classes based on age or usage. For example, frequently
accessed objects can be moved to infrequent-access classes, or archived to Glacier for long-term storage. Lifecycle rules can also automatically delete outdated objects, helping to manage
costs efficiently.
5. Event Notifications: S3 can trigger notifications to other AWS services when certain events,
like object uploads or deletions, occur in a bucket. These notifications can be sent to AWS
Lambda for processing, Amazon Simple Notification Service (SNS) for alerts, or Amazon
Simple Queue Service (SQS) for integration with other applications.
6. Data Transfer Options: S3 integrates with AWS DataSync, AWS Snowball, and AWS Direct
Connect to facilitate large-scale data transfers. This allows users to upload and retrieve data
efficiently, even with massive datasets or in cases of limited network bandwidth.

Use Cases for Amazon S3
Amazon S3’s versatility makes it suitable for a wide range of applications and industries. Here are
some of the primary use cases:
1. Data Backup and Disaster Recovery: S3 provides reliable and durable storage for backups
and disaster recovery plans. Its high durability, combined with features like cross-region
replication, ensures that critical data is safe and accessible, even during unexpected failures.
2. Content Distribution and Media Storage: S3 can host and serve static assets (such as images,
videos, and documents) used by web applications. When combined with Amazon CloudFront,
AWS’s content delivery network (CDN), S3 is ideal for delivering content globally with low
latency.
3. Big Data Analytics: S3 serves as a scalable storage solution for big data and analytics, allowing
businesses to store large datasets for processing by AWS analytics services like Amazon EMR,
AWS Glue, and Amazon Redshift.
4. Data Archiving and Compliance: S3 Glacier and Glacier Deep Archive allow for cost-effective, long-term archival storage, suitable for industries that require data retention for
compliance, such as healthcare and finance.
5. Application Hosting: S3 can host static websites, serving HTML, CSS, JavaScript, and media
files directly from an S3 bucket. This approach is cost-effective for simple websites or single-page applications.

Advantages of Amazon S3
1. Scalability: S3’s scalability means it can store an unlimited amount of data, and users only pay
for what they use, making it ideal for unpredictable or rapidly growing storage needs.
2. Reliability and Durability: With 11 nines of durability, S3 ensures data protection through
multiple redundancies and geographic distribution within AWS regions, safeguarding against
data loss.
3. Cost-Effectiveness: With its variety of storage classes, S3 enables users to optimize costs
based on data access frequency and retrieval needs, offering low-cost storage options for
infrequent access or archival data.
4. Security: S3 provides robust security features, including encryption, access management, and
logging capabilities that comply with data protection regulations.
5. Easy Integration: S3 integrates with a wide range of AWS services, such as Lambda, IAM,
Athena, and CloudFront, providing a seamless experience for building applications and
managing data.

Conclusion
Amazon S3’s flexible storage, high durability, and advanced features make it a cornerstone of AWS
cloud storage solutions. Whether for data backup, archiving, big data analytics, or application hosting,
S3 provides scalable, secure, and cost-effective storage. Its vast set of features and integrations enable
businesses to innovate without worrying about storage limitations, while its reliable security and
compliance tools make it a trusted choice across industries.

Amazon S3 Console Demonstration
Using the AWS Management Console, you can create and manage Amazon S3 buckets and objects
with ease. Here’s a step-by-step guide to creating a bucket, uploading files, configuring permissions,
and enabling lifecycle policies through the AWS Console.

Step 1: Create an S3 Bucket


1. Access the S3 Console: Log in to the AWS Management Console, then navigate to Services >
S3.
2. Create Bucket:
o Click Create Bucket to start the process.
o Enter a unique Bucket Name (globally unique and DNS-compliant, such as “my-unique-bucket-name”).
o Choose an AWS Region closest to your user base or where the application will access
the data.

3. Configure Settings:

o You can enable Bucket Versioning if you want to retain previous versions of objects,
useful for recovery.
o Set Block Public Access options based on your needs. AWS recommends blocking
public access for private data, but if the bucket will store publicly accessible files (e.g.,
a website), you may adjust this setting.
o Click Create Bucket. Your bucket should now appear in the bucket list.

Step 2: Upload Files to the S3 Bucket
1. Open the Bucket:
o Click on the bucket name to view its contents. By default, it will be empty.
2. Upload Files:

o Click Upload and then Add Files to choose the files you want to upload. You can
select multiple files if needed.
o You may configure specific settings for each file, such as storage class (e.g., Standard
or Standard-IA) and encryption (using AWS-managed or KMS keys).

o Click Upload to begin transferring the files. Once complete, you’ll see your files listed
in the bucket.

Step 3: Configure Access Permissions


1. Set Object-Level Permissions:
o To make an object public, select the object from the list, click Actions > Make public,
and confirm the change.

o Alternatively, click on the object, navigate to the Permissions tab, and adjust the
Access Control List (ACL) settings to allow public or private access as required.

2. Bucket Policies:
o For more complex access rules, go to the Permissions tab at the bucket level and select
Bucket Policy.

o You can add JSON policies to define permissions, such as allowing access from
specific IP addresses or making all objects public.
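As an illustration, a minimal sketch of such a policy granting public read access to every object in a hypothetical bucket (apply this only to buckets that are meant to be public):

# Write the policy to a file, then attach it to the bucket
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-example-bucket-2024/*"
  }]
}
EOF
aws s3api put-bucket-policy --bucket my-example-bucket-2024 --policy file://policy.json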

Step 4: Enable Lifecycle Rules


Lifecycle rules can help manage costs by transitioning objects to different storage classes or deleting
them based on age.
1. Create Lifecycle Policy:
o In the bucket, go to the Management tab and click Create lifecycle rule.
o Name your rule and choose the scope (apply to all objects or specific prefixes).
2. Configure Transitions:
o You can set rules to move objects to cheaper storage classes like S3 Standard-IA or
Glacier based on how long the object has been stored.
3. Set Expiration:

o Choose a timeframe for deleting objects, such as after 365 days. This is useful for non-critical data that doesn’t need long-term storage.

4. Save the Rule: Once configured, save the rule. The policy will now automatically manage
objects according to your settings.
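The rule configured above can also be expressed as a lifecycle configuration from the CLI; a sketch with hypothetical values (a 30-day transition to Glacier and a 365-day expiration, applied to the logs/ prefix):

# Define the lifecycle rule, then apply it to the bucket
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "archive-then-expire",
    "Status": "Enabled",
    "Filter": { "Prefix": "logs/" },
    "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }],
    "Expiration": { "Days": 365 }
  }]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-example-bucket-2024 \
    --lifecycle-configuration file://lifecycle.json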
Step 5: Enable Versioning (Optional)
Enabling versioning allows you to keep multiple versions of an object, which can be useful for
recovery from accidental deletions or overwrites.
1. Go to the Bucket Settings:

o In the Properties tab, locate the Versioning section.


2. Enable Versioning: Select Enable and save. Now, each time you upload a file with the same
name, S3 will keep the previous versions, which can be accessed or restored if needed.
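A minimal CLI sketch of the same setting, plus a call that lists the stored versions of an object (bucket name and key are placeholders):

# Turn on versioning for the bucket
aws s3api put-bucket-versioning \
    --bucket my-example-bucket-2024 \
    --versioning-configuration Status=Enabled

# List every stored version of a particular object
aws s3api list-object-versions \
    --bucket my-example-bucket-2024 \
    --prefix report.docx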

Conclusion
Using the AWS Management Console, you can easily set up and manage Amazon S3 buckets for data
storage. This process includes creating buckets, uploading and managing files, configuring access, and
automating transitions and deletions with lifecycle rules. Amazon S3’s flexibility and automation
options make it a robust choice for securely storing and managing large amounts of data.

Amazon EFS (Elastic File System)


Amazon Elastic File System (EFS) is a fully managed, scalable file storage service offered by AWS.
Designed to work with AWS compute services like Amazon EC2, Lambda, and containerized services,
EFS provides a shared file system that can be accessed by multiple instances simultaneously. This
makes it ideal for workloads that require shared access to data, such as content management systems,
web serving, data analytics, and home directories.

Key Features of Amazon EFS
1. Elasticity and Scalability: EFS automatically scales as data is added or removed, without any
need for manual provisioning or capacity planning. It can handle petabytes of data, making it a
highly elastic solution for applications with unpredictable or growing storage demands. This
elasticity makes EFS cost-effective since you only pay for the storage you actually use.
2. Multi-AZ Availability and Durability: EFS stores data across multiple Availability Zones
(AZs) within a region, ensuring high availability and durability. Data redundancy across AZs
makes EFS resilient to hardware failures, ensuring consistent performance and availability.
3. POSIX Compliance: Amazon EFS is POSIX-compliant, meaning it supports standard file
system operations and permissions. This allows applications that require file-based access to
work seamlessly with EFS without modifications. EFS supports common protocols, enabling it
to be mounted and used like a traditional file system.
4. High Throughput and Low Latency: EFS offers high throughput and low latency, making it
suitable for workloads that need quick access to data, like web serving and big data processing.
EFS offers two performance modes: General Purpose for latency-sensitive tasks, and Max
I/O for high-throughput applications.
5. Access Control and Security: EFS integrates with AWS Identity and Access Management
(IAM) and AWS Key Management Service (KMS) for secure access and encryption. File
systems can be encrypted both at rest and in transit, ensuring data security. Additionally, EFS
supports VPC access, allowing you to control which resources can access the file system within
your private network.

Amazon EFS Storage Classes
Amazon EFS offers two storage classes to help manage costs based on data access patterns:

1. Standard: The EFS Standard storage class is optimized for files that are frequently accessed. It
provides low-latency access, ideal for active workloads.

2. Infrequent Access (IA): The EFS IA storage class is intended for files that are accessed less
frequently but still need to be available quickly when required. By automatically moving
infrequently accessed files to EFS IA, you can reduce storage costs.

Use Cases for Amazon EFS


1. Web Serving and Content Management: EFS is commonly used for hosting shared content
across multiple instances, making it ideal for web servers, media streaming applications, and
content management systems.
2. Big Data Analytics: With its scalable and high-throughput architecture, EFS can store large
datasets for analytics workloads, supporting tools like Apache Spark and Hadoop that need
parallel data processing.
3. DevOps and Application Development: EFS provides shared storage that multiple instances
can access, making it an ideal solution for managing shared codebases, build artifacts, or
CI/CD resources in development and testing environments.
4. Machine Learning and Scientific Computing: EFS supports data-driven applications that
require large amounts of storage, such as machine learning and scientific computing, where
massive datasets need to be processed or analyzed.

Pricing and Cost Management


Amazon EFS pricing is based on the storage used, making it flexible and cost-effective. EFS also has a
cost-saving feature that automatically moves infrequently accessed data to the Infrequent Access (IA)
storage class. This allows users to optimize costs without compromising data availability, as IA data
can still be accessed within milliseconds.

AWS Console Demonstration: Amazon S3 and Amazon EFS


This guide provides a step-by-step demonstration of how to set up and manage Amazon S3 (for object
storage) and Amazon EFS (for file storage) using the AWS Management Console. Both services offer
unique capabilities: S3 is ideal for scalable, high-durability object storage, while EFS provides shared
file storage that multiple EC2 instances can access simultaneously.

Part 1: Amazon S3 Console Demonstration
Step 1: Create an S3 Bucket
1. Log in to the AWS Management Console and go to S3.

2. Click Create Bucket.


o Enter a unique Bucket Name (e.g., "my-example-bucket").
o Select the Region where the bucket should reside.

3. Configure additional settings if needed, such as Block Public Access (to secure the bucket).
4. Click Create Bucket to finish.

Step 2: Upload Files


1. Click on your new bucket to open it.

2. Click Upload and then Add Files to select files from your computer.
3. Choose Upload. Your files will now appear in the bucket, accessible via S3’s console or API.

Step 3: Configure Permissions and Access


1. To make an object public, select it, go to Actions > Make public, and confirm.
2. Alternatively, set up a Bucket Policy for more advanced access control.
o Go to Permissions > Bucket Policy and add a JSON policy (e.g., granting read-only
access to everyone).
3. Save the policy to apply permissions.

Step 4: Enable Lifecycle Rules
1. In the Management tab, click Create lifecycle rule.
o Name your rule (e.g., "Move to Glacier after 30 days").
o Define transition settings to move objects to cost-effective storage classes like S3
Glacier after a set time.

2. Save the rule. Your objects will now transition automatically based on this policy.

Part 2: Amazon EFS Console Demonstration


Step 1: Create an EFS File System
1. In the AWS Management Console, go to EFS.

2. Click Create file system.


o Select a VPC (Virtual Private Cloud) where the file system will reside.
3. Choose settings for Throughput Mode (default is "Bursting") and Performance Mode.

4. Click Create to launch the file system.
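A hedged CLI sketch of the same setup, including a mount target so instances in a subnet can reach the file system (the resource IDs below are placeholders):

# Create an encrypted file system with General Purpose performance and Bursting throughput
aws efs create-file-system \
    --performance-mode generalPurpose \
    --throughput-mode bursting \
    --encrypted \
    --tags Key=Name,Value=shared-data

# Expose it in one subnet of the VPC (repeat for each Availability Zone you use)
aws efs create-mount-target \
    --file-system-id fs-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0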

Step 2: Configure Access Points


1. After the file system is created, click on it and go to the Access Points tab.

2. Click Create Access Point to simplify managing permissions for applications.


o Specify a path (e.g., /shared-data) and configure permissions.
3. Save the access point, which applications can now use to securely access the EFS file system.

Step 3: Mount EFS on an EC2 Instance
1. Launch an EC2 instance within the same VPC as your EFS file system.

2. Connect to the EC2 instance via SSH.

3. Install the nfs-utils package if it’s not already installed, then mount the file system; a sketch of these commands follows below.
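A minimal sketch, assuming an Amazon Linux instance and a hypothetical file system ID in us-east-1:

# Install the NFS client
sudo yum install -y nfs-utils

# Create a mount point and mount the file system over NFSv4.1
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs

# Confirm the mount
df -h /mnt/efs

Alternatively, installing the amazon-efs-utils package lets you mount with -t efs and adds support for encrypting data in transit.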

Conclusion
Through the AWS Management Console, you can set up and manage Amazon S3 and Amazon EFS
with ease. S3’s object storage is excellent for general-purpose storage needs with flexible access
control, while EFS provides a scalable, shared file system that can be mounted on multiple EC2
instances. These storage solutions together offer powerful options for a wide range of cloud
applications.

Amazon S3 Glacier: Cost-Effective Archival Storage


Amazon S3 Glacier is a low-cost cloud storage service from AWS designed for long-term data
archiving and backup. It provides secure, durable, and cost-effective storage for infrequently accessed
data, ideal for organizations that need to retain data for regulatory compliance, historical records, or
archival purposes. Glacier’s pricing model makes it significantly cheaper than traditional S3 storage,
especially when combined with the S3 Glacier Deep Archive class.

Key Features of Amazon S3 Glacier


1. Low-Cost Storage: Glacier is one of AWS’s most affordable storage options, with costs
optimized for long-term storage rather than frequent access. The service is priced per GB of
data stored, making it cost-effective for large datasets that don’t require immediate retrieval.
2. Data Retrieval Options: Glacier offers several retrieval options to balance cost and retrieval
speed. These include:
o Expedited Retrieval: For urgent access, data can be retrieved within 1–5 minutes, ideal
for scenarios where archived data is occasionally needed quickly.
o Standard Retrieval: The standard retrieval option takes 3–5 hours, suitable for less time-sensitive access to archived data.

o Bulk Retrieval: The most economical option; bulk retrievals take 5–12 hours and are ideal for accessing large volumes of data when speed is not a priority.
3. Scalability and Elasticity: Glacier automatically scales with your data needs, allowing storage
of virtually unlimited amounts of data. As with other S3 storage classes, you only pay for what
you store and retrieve.
4. Durability and Security: S3 Glacier is designed for 99.999999999% (11 nines) durability,
achieved by replicating data across multiple Availability Zones. Security is also prioritized,
with options for encryption at rest and in transit, as well as integration with AWS Identity and
Access Management (IAM) for access control.

5. Lifecycle Management and Integration with S3: Glacier integrates seamlessly with Amazon
S3, allowing users to set up lifecycle policies that automatically transition objects from S3
Standard or S3 Standard-IA (Infrequent Access) to Glacier. This is particularly useful for data
that starts off in frequent use and becomes infrequent over time.
6. S3 Glacier Deep Archive: An even more cost-effective option within Glacier, the Deep
Archive class is designed for data that is accessed less than once per year. It provides long-term
storage with retrieval times of 12–48 hours, making it suitable for the most infrequently
accessed data, such as historical records and compliance archives.

Use Cases for Amazon S3 Glacier


1. Compliance and Regulatory Data: Many organizations are required to retain data for
extended periods to meet regulatory compliance requirements. S3 Glacier is ideal for storing
compliance records, audit logs, and other long-term archives affordably.
2. Digital Media Archives: Glacier provides a reliable solution for storing large volumes of
digital media, such as video, audio, and images, which may be accessed occasionally but need
secure, durable storage.
3. Backups and Disaster Recovery: Glacier can store backup copies of databases, applications,
and other critical systems for disaster recovery purposes. In the event of data loss, backups can
be retrieved when needed, balancing costs and retrieval times.
4. Scientific and Research Data: Research institutions and government agencies often generate
massive datasets that need to be preserved for long periods. Glacier provides a secure, scalable,
and compliant storage solution for these large-scale data archives.

Pricing and Cost Management


Amazon S3 Glacier offers an economical model that charges for the amount of data stored, as well as
for retrieval requests. While storing data is affordable, retrieval costs can vary depending on the chosen
retrieval speed. To manage costs effectively, businesses often select a retrieval plan that balances cost
with retrieval urgency, choosing bulk retrieval for larger, less urgent data requests.

Amazon S3 Glacier Console Demonstration
Amazon S3 Glacier is designed for affordable, secure, and long-term data storage. The AWS
Management Console allows users to easily move data to Glacier and manage archival storage. Here’s
a step-by-step guide to using Glacier through the console, including setting up storage, moving objects,
and retrieving archived data.

Step 1: Set Up a Bucket and Enable Lifecycle Policies


Since Amazon S3 Glacier is a storage class within Amazon S3, you first need an S3 bucket to move data
to Glacier.
1. Create an S3 Bucket:
o In the AWS Management Console, go to Services > S3.

o Click Create Bucket, specify a unique bucket name, and choose a region.
o Set additional configuration options as desired, then click Create.
2. Upload Objects to S3:

o Open the new bucket and click Upload to add files.


o Select files, choose upload options, and click Upload. These objects will initially reside
in the S3 Standard storage class.

3. Set Up Lifecycle Rules to Transition Data to Glacier:


o In the bucket, go to the Management tab.
o Click Create lifecycle rule and name the rule (e.g., “Move to Glacier after 30 days”).
o Define which objects the rule should apply to, or apply it to all objects in the bucket.
o Under Transitions, set the rule to automatically move objects to the Glacier storage
class after a certain number of days (e.g., 30 days after creation).
o Configure expiration if needed (e.g., delete objects after 365 days) to optimize storage
costs further.

o Save the rule. The system will now automatically transition eligible objects to Glacier
based on the lifecycle policy.

Step 2: Verify Objects Transitioned to Glacier


Lifecycle transitions can take a day or more to apply, but you can check the status over time.
1. Check Object Storage Class:

o After the defined transition period, go back to the bucket and view the objects.
o Each object’s Storage Class will now indicate “Glacier” once the transition has
occurred.

Step 3: Retrieve Objects from Glacier
Retrieving data from Glacier involves selecting a retrieval option (Expedited, Standard, or Bulk) based
on how quickly you need the data.
1. Select Objects for Retrieval:

o In the S3 bucket, select the Glacier-stored objects you want to retrieve.


o Click Actions > Initiate Restore.
o Choose the Retrieval Option:
 Expedited (1–5 minutes) for urgent access.

 Standard (3–5 hours) for general access needs.

 Bulk (5–12 hours) for large or non-urgent retrievals.


o Set the Number of Days for which the retrieved object should be temporarily
accessible in S3 (e.g., 7 days).
2. Monitor the Retrieval Process:

o Once the retrieval request is submitted, you can track progress within the Restorations
section of the S3 bucket.

o When the object is available, the Storage Class will temporarily update to “Glacier
(Restored)” for the specified duration.
3. Download the Restored Object:
o After the retrieval completes, you can download the restored object by clicking on it
and selecting Download.

o Once the temporary retrieval period ends, the object will return to its Glacier state.
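The same restore can be initiated from the CLI; a sketch with a hypothetical bucket and key, requesting the low-cost Bulk tier for a 7-day restore window:

aws s3api restore-object \
    --bucket my-example-bucket-2024 \
    --key archives/2020-records.zip \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'

# Check progress; the Restore field in the response shows whether the request is still ongoing
aws s3api head-object \
    --bucket my-example-bucket-2024 \
    --key archives/2020-records.zip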

Step 4: Manage Retrieval Costs and Policies
To keep costs under control:
 Use Bulk retrieval for large archives that don’t require immediate access.

 Set lifecycle rules to delete unneeded objects after a defined retention period.

Conclusion
Using the AWS Console, you can easily manage data archiving and retrieval in Amazon S3 Glacier.
By setting up lifecycle rules, you can automate data transitions to Glacier, lowering storage costs for
infrequently accessed data. When data retrieval is necessary, the console provides options to manage
access speed and cost, making Glacier an efficient solution for long-term data storage.
