Skill Enhancement Course-I: AWS Cloud Computing
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
ADITYA INSTITUTE OF TECHNOLOGY AND MANAGEMENT
This is to certify that the Skill Enhancement Course – I entitled “AWS Cloud
Computing” is being submitted by MUDDAPU CHARAN (23A51A05A8), MUNGI
PAVANI (23A51A05A9), KINTALI VENKATA SRI LAKSHMI (23A51A05B0),
NIMMADA RAVI TEJA (23A51A05B1), NUNNA VIJAY KRISHNA (23A51A05B2),
PALLI JITENDRA (23A51A05B3) in partial fulfilment of the requirements of the degree of
Bachelor of Technology in Computer Science and Engineering, Aditya Institute of
Technology and Management, Tekkali, is a record of genuine work carried out by them
under my guidance and supervision during the academic year 2024-2025.
CONTENTS
Introduction to AWS
Storage in AWS
AWS EBS and Volumes
Console Demonstration - EBS
AWS S3
Console Demonstration - S3
AWS EFS
Console Demonstration - S3 & EFS
AWS S3 Glacier
Console Demonstration - Glacier
Introduction to AWS Cloud Computing
In today’s digital era, cloud computing has become a foundation for modern business operations,
providing flexible, scalable, and cost-effective IT solutions that facilitate innovation and productivity.
Amazon Web Services (AWS) is one of the most widely adopted cloud platforms, known for its vast
array of services and global infrastructure that enable companies to meet their unique business needs.
This introduction explores AWS cloud computing, covering its fundamental concepts, features, and
benefits for organizations of all sizes.
What is AWS?
Amazon Web Services (AWS) is a leading cloud provider offering a comprehensive suite of over 200
cloud-based services. AWS was launched by Amazon in 2006 and has since grown to serve millions of
customers globally, from startups to large enterprises and government agencies. AWS operates through
a network of data centers spread across multiple geographic regions worldwide, ensuring high
availability, security, and resilience.
AWS provides services under three core models:
Infrastructure as a Service (IaaS): Basic building blocks such as virtual servers, storage, and networking, giving customers the greatest control over their infrastructure.
Platform as a Service (PaaS): Managed environments for deploying and running applications without managing the underlying infrastructure.
Software as a Service (SaaS): Complete applications delivered and managed over the internet.
Key Components of AWS Cloud Computing
AWS offers various services organized into categories, each addressing specific business requirements:
1. Compute Services
AWS’s compute services provide scalable processing power, allowing organizations to run
applications, host websites, and analyze data.
Amazon EC2 (Elastic Compute Cloud): This service offers virtual servers to run
applications. Users can choose the instance type, operating system, and storage capacity to suit
their needs. EC2 instances are scalable, meaning resources can be adjusted as demand
fluctuates.
AWS Lambda: A serverless compute service that runs code without provisioning or managing
servers. It is ideal for tasks like data processing and backend services, as users are charged only
for the compute time they consume.
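As a brief illustration, an existing function can be invoked from the AWS CLI; the function name my-hello-function below is a hypothetical placeholder:
# Invoke a Lambda function and write its response to output.json
# (my-hello-function is a placeholder; substitute your own function's name)
aws lambda invoke \
    --function-name my-hello-function \
    --payload '{"name": "AWS"}' \
    --cli-binary-format raw-in-base64-out \
    output.json
The --cli-binary-format flag applies to AWS CLI v2, which otherwise expects a base64-encoded payload.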
2. Storage Services
Storage is essential for any organization’s IT infrastructure, and AWS provides several solutions to
securely store and retrieve data.
Amazon S3 (Simple Storage Service): S3 is an object storage service that stores any amount
of data and makes it accessible over the internet. It is ideal for storing backups, archives, and
multimedia files, with multiple storage classes to optimize costs.
Amazon EBS (Elastic Block Store): Provides persistent block storage for EC2 instances,
suitable for applications requiring low-latency storage.
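As a first taste of S3, described above, in practice, the following AWS CLI sketch creates a bucket and uploads a file; the bucket name is a hypothetical placeholder and must be globally unique:
# Create a bucket in a chosen region
aws s3 mb s3://my-example-bucket-2025 --region us-east-1
# Upload a local file into the bucket
aws s3 cp backup.tar.gz s3://my-example-bucket-2025/backups/
# List the uploaded objects
aws s3 ls s3://my-example-bucket-2025/backups/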
3. Networking and Content Delivery
AWS's networking services control how resources connect to one another and to the internet.
Amazon VPC (Virtual Private Cloud): Allows users to create a logically isolated network
within the AWS cloud. VPCs enable control over network configuration, including IP address
ranges and subnet creation.
Amazon CloudFront: A content delivery network (CDN) service that distributes content
globally with low latency, ideal for streaming media or delivering applications to users
worldwide.
4. Database Services
AWS offers various database services to handle structured, semi-structured, and unstructured data.
Amazon RDS (Relational Database Service): Provides managed relational databases (e.g.,
MySQL, PostgreSQL, Oracle, and SQL Server). It simplifies database setup, scaling, and
maintenance.
Amazon DynamoDB: A fully managed NoSQL database service suitable for applications
requiring high performance and scalability, such as e-commerce sites and mobile apps.
5. Security and Identity
AWS prioritizes security and provides numerous services to protect user data.
AWS IAM (Identity and Access Management): Enables administrators to control access to
AWS resources through role-based permissions.
AWS Key Management Service (KMS): Provides encryption keys to secure data at rest and
in transit.
6. Machine Learning and Artificial Intelligence
AWS’s AI and ML services allow organizations to build intelligent applications.
Amazon SageMaker: A fully managed service that enables data scientists to build, train, and
deploy machine learning models at scale.
Amazon Rekognition: An image and video analysis service that can detect objects, people,
and activities, useful in security and media applications.
Advantages of AWS Cloud Computing
AWS has gained popularity due to the following key advantages:
Pay-as-you-go pricing: Organizations pay only for the resources they consume, avoiding large upfront investments.
Scalability and elasticity: Resources can be scaled up or down in minutes to match demand.
Global reach: A worldwide network of regions and Availability Zones supports low-latency access and disaster recovery.
Security and compliance: AWS provides extensive security controls and certifications for regulated industries.
Breadth of services: Over 200 services cover compute, storage, databases, analytics, machine learning, and more.
AWS Pricing Model
AWS pricing is designed to be flexible and transparent. The following are some commonly used
models:
On-Demand: Pay for compute or storage capacity by the hour or second with no long-term commitment.
Reserved Instances and Savings Plans: Commit to one or three years of usage in exchange for significant discounts.
Spot Instances: Use spare EC2 capacity at steep discounts, suitable for fault-tolerant workloads.
Free Tier: New customers can explore many services free of charge within monthly usage limits.
AWS Storage Solutions: An In-depth Guide
In the cloud computing ecosystem, storage plays a critical role, enabling organizations to store,
manage, and retrieve data efficiently and securely. Amazon Web Services (AWS) offers a
comprehensive suite of storage solutions tailored to diverse use cases, from simple backups to highly
sophisticated data lakes for big data analytics. This guide explores AWS’s storage offerings, covering
key services, features, benefits, and use cases.
AWS's core storage services include:
Amazon S3 (Simple Storage Service): Scalable object storage for any amount of data.
Amazon EBS (Elastic Block Store): Persistent block storage for EC2 instances.
Amazon EFS (Elastic File System) and Amazon FSx: Managed, shared file systems.
Amazon S3 Glacier: Low-cost archival storage for long-term retention.
Data transfer and migration tools: The Snow Family, DataSync, and Storage Gateway.
Let’s examine each of these storage services in detail.
Amazon S3
Amazon S3 is an object storage service offering several storage classes optimized for different access patterns:
o S3 Standard: For frequently accessed data requiring low latency and high availability.
o S3 Standard-IA: Lower-cost storage for data accessed less frequently.
o S3 One Zone-IA: A lower-cost option for data that can be recreated if lost, as it stores
data in a single Availability Zone.
Versioning and Lifecycle Policies: Users can maintain multiple versions of an object, while
lifecycle policies enable automatic transitions of objects to different storage classes over time,
optimizing cost management.
Security: S3 provides data encryption both at rest and in transit, using AWS Key Management
Service (KMS) integration and Secure Sockets Layer/Transport Layer Security (SSL/TLS).
Amazon EFS
Amazon EFS is a scalable file storage system that allows multiple EC2 instances to access a shared file
system. EFS is suitable for applications that require standard file-based access patterns, such as web
content management systems, development environments, and home directories.
Elastic Scaling: EFS automatically adjusts storage capacity based on usage.
Performance Modes: EFS offers different performance modes (General Purpose and Max I/O)
and throughput modes (Bursting and Provisioned), allowing users to optimize cost and
performance.
High Availability: Data is stored across multiple Availability Zones, providing resilience.
Amazon FSx
Amazon FSx offers fully managed file storage solutions for specific applications and environments:
FSx for Windows File Server: Provides Windows-native shared storage for applications that
require SMB protocol support.
FSx for Lustre: High-performance file storage optimized for compute-intensive workloads,
such as machine learning and high-performance computing (HPC).
Use Cases for EFS and FSx
Shared Storage for Applications: EFS and FSx allow multiple EC2 instances to access
shared data, enabling collaboration and high availability.
Media Processing: FSx for Lustre is ideal for media rendering and other high-performance
applications.
Machine Learning: High-speed storage like FSx for Lustre can accelerate model training and
data processing.
Use Cases for Amazon S3 Glacier
Compliance and Regulatory Storage: Glacier is ideal for storing data that needs to be
retained for regulatory purposes but is rarely accessed.
Long-Term Backups: Organizations store infrequently accessed backups and historical data in
Glacier to reduce costs.
Digital Preservation: Institutions and companies with large archives of digital media use
Glacier for affordable, long-term storage.
5. Data Transfer and Migration Tools
AWS provides tools to facilitate data transfer and migration, enabling organizations to move data from
on-premises storage to AWS or between AWS storage services.
AWS Snow Family: AWS Snowcone, Snowball, and Snowmobile are physical devices used to
transfer large volumes of data to AWS when network bandwidth is limited or data transfer over
the internet is impractical.
AWS DataSync: A managed service for automating and accelerating data transfers between
on-premises storage and AWS, or between AWS services.
AWS Storage Gateway: A hybrid cloud storage service that provides seamless integration
between on-premises storage and AWS, enabling data transfer without significant
infrastructure changes.
Benefits of AWS Storage Solutions
Scalability: AWS's storage services scale up and down as required, ensuring that resources
match demand without over-provisioning.
Security and Compliance: AWS storage solutions comply with regulatory standards (e.g.,
GDPR, HIPAA) and offer robust security features, including encryption and access
management.
Cost-Effectiveness: By leveraging different storage classes, organizations can optimize costs
based on data access patterns, using cheaper options like Glacier for long-term storage.
Flexibility and Integration: AWS storage integrates with other AWS services, allowing users
to build complex workflows for data processing, machine learning, and analytics.
Conclusion
AWS offers a robust and flexible set of storage solutions tailored to diverse data storage needs, from
frequently accessed data to long-term archives. With its global infrastructure, comprehensive security
features, and scalable options, AWS enables organizations to store and manage data efficiently,
ensuring business continuity and agility in a rapidly evolving digital landscape. As cloud storage
becomes increasingly vital for modern businesses, AWS’s storage services provide a foundation for
efficient, secure, and cost-effective data management.
Amazon EBS (Elastic Block Store) & Volumes
Amazon Elastic Block Store (EBS) is a scalable, high-performance block storage service designed to
work with Amazon Elastic Compute Cloud (EC2) instances. EBS provides reliable and persistent
storage that is particularly suited for workloads requiring high performance, low latency, and frequent
read/write operations. This makes it ideal for applications like databases, big data analytics, and
enterprise applications. This guide explores Amazon EBS, its features, volume types, and use cases, as
well as how it integrates with AWS’s ecosystem.
Key Features of Amazon EBS
1. Persistence and Durability: EBS volumes persist independently of the lifecycle of the EC2
instances they attach to, and data is automatically replicated within its Availability Zone to
protect against hardware failure.
2. Scalability: EBS volumes can be resized to meet changing demands without interrupting
applications. This flexibility allows businesses to scale storage capacity up or down as their
data storage requirements change.
3. Data Encryption: EBS supports encryption at rest and in transit. Encryption can be enabled at
volume creation, and EBS integrates with AWS Key Management Service (KMS) to generate
and manage encryption keys.
4. Snapshots: EBS snapshots enable users to create point-in-time backups of volumes. These
snapshots are stored in Amazon S3 and can be used to create new volumes, restore data, or
transfer data across regions. Snapshots can also be automated to simplify backup and recovery.
5. Flexible Performance Options: EBS offers different volume types optimized for performance,
cost, or both, allowing users to select the right storage for their application requirements.
EBS Volume Types
1. General Purpose SSD (gp2/gp3)
General Purpose SSD volumes balance price and performance, making them the default choice for most
workloads such as boot volumes, development environments, and small to medium databases.
2. Provisioned IOPS SSD (io1/io2)
Provisioned IOPS SSD volumes deliver consistently high IOPS and low latency for I/O-intensive
workloads such as large relational and NoSQL databases.
3. Throughput Optimized HDD (st1)
Throughput Optimized HDD (st1) volumes are designed for big data and log processing, where large
sequential read and write throughput is prioritized over IOPS. st1 volumes offer consistent throughput
at a lower cost than SSD volumes, making them a good choice for workloads with heavy read/write
sequences.
Use Cases for Throughput Optimized HDD: st1 is ideal for applications that need high throughput,
such as data warehouses, log processing systems, and big data analytics workloads.
EBS Snapshots
EBS snapshots are incremental backups stored in Amazon S3, capturing only the changed blocks since
the last snapshot. They provide an efficient method for creating backups, restoring data, and replicating
volumes across AWS regions. Snapshots can be automated using AWS Backup or scheduled via
CloudWatch Events to ensure regular backups without manual intervention. Snapshots also support
data encryption, ensuring compliance with data security standards.
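A sketch of these snapshot operations with the AWS CLI; the volume and snapshot IDs shown are hypothetical placeholders:
# Create a point-in-time snapshot of an EBS volume
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Nightly backup of data volume"
# Copy the snapshot to another region for disaster recovery
aws ec2 copy-snapshot \
    --region us-west-2 \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --description "Cross-region DR copy"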
Use Cases for EBS Snapshots:
Backup and Disaster Recovery: EBS snapshots provide a reliable backup strategy, enabling
quick restoration in case of failure.
Data Migration: Snapshots can be copied across regions, facilitating data transfer and
enabling multi-region replication.
Testing and Development: Snapshots enable quick provisioning of identical environments by
restoring snapshots to create new EBS volumes.
Benefits of Amazon EBS
High Performance: EBS volumes provide low-latency performance for read/write operations,
ideal for databases and transactional systems.
Reliability: EBS automatically maintains replicated copies of data within an Availability Zone,
minimizing data loss from hardware failures.
Security: EBS offers encryption options at both volume and snapshot levels, supporting
industry compliance standards.
Flexible Cost Management: With various volume types, users can optimize cost by selecting
storage that best fits workload demands.
Integration with AWS Ecosystem
EBS seamlessly integrates with other AWS services, enhancing its functionality:
EC2: EBS volumes are most commonly used with EC2 instances, providing reliable storage
that can be easily detached and re-attached to other instances as needed.
CloudWatch: Monitoring and alerting capabilities in CloudWatch allow users to track EBS
volume performance, ensuring optimal usage.
Auto Scaling: Auto Scaling groups can use EBS volumes to launch instances dynamically
based on traffic and performance needs, ensuring efficient resource management.
AWS Backup: AWS Backup simplifies the process of managing backups for EBS volumes,
ensuring compliance and business continuity.
Conclusion
Amazon EBS is a versatile and robust storage solution that supports a range of applications, from
simple boot volumes to high-performance databases. With its flexible volume types, seamless
integration with EC2, and security and backup features, EBS helps organizations build scalable,
secure, and efficient infrastructure on AWS. By selecting the right volume type and leveraging
snapshots and automated backups, businesses can optimize costs while ensuring data integrity and
availability, making EBS a vital component of AWS’s storage portfolio.
Amazon EBS Console Demonstration
Step 1: Creating an EBS Volume
1. Access the AWS Management Console: Log in to the AWS Management Console and go to the
EBS Volumes section by navigating to EC2 > Elastic Block Store > Volumes.
2. Create Volume:
o Click Create Volume to open the volume configuration screen.
o Choose the Volume Type based on your workload requirements. For general-purpose
use, select General Purpose SSD (gp3). For higher performance needs, consider
Provisioned IOPS SSD (io2).
o Specify the Size in GiB according to your data storage needs. Keep in mind that certain
volume types have size limitations.
o Choose the Availability Zone that matches the zone of your target EC2 instance. EBS
volumes must be in the same zone as the EC2 instance they are attached to.
3. Create Volume: After configuring settings, click Create Volume. AWS will provision the
volume, and it will appear in the EBS volumes list with a status of "available."
Step 2: Attaching the Volume to an EC2 Instance
1. Select the Volume: In the Volumes list, select the newly created volume and click Actions >
Attach Volume.
2. Choose the Instance: Select the target EC2 instance (only instances in the same Availability
Zone are listed).
o Choose a Device Name (e.g., /dev/sdf). This name represents how the instance will
identify the volume.
3. Attach: Click Attach Volume. The volume’s status will update to "in-use" once it’s
successfully attached.
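The same create-and-attach workflow can be scripted with the AWS CLI, as in this sketch; the volume and instance IDs are hypothetical placeholders:
# Create a 20 GiB gp3 volume in the target Availability Zone
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 20 \
    --volume-type gp3
# Attach the volume to an instance under the device name /dev/sdf
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf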
Step 3: Formatting and Mounting the EBS Volume on the EC2 Instance
1. Connect to the EC2 Instance: Use SSH to log in to the instance where the EBS volume is
attached. You can connect directly from the EC2 console by clicking Connect and following
the SSH instructions.
2. Verify the Attached Volume:
o Run the command lsblk to list the available disks.
o You should see the new volume (e.g., /dev/sdf) listed as an unformatted disk.
3. Format the Volume:
o If the volume is new and unformatted, format it with a file system (e.g., ext4):
sudo mkfs -t ext4 /dev/sdf
4. Mount the Volume:
o Create a mount point and mount the volume:
sudo mkdir /mnt/ebs-volume
sudo mount /dev/sdf /mnt/ebs-volume
o Verify it is mounted by running df -h. You should see the EBS volume listed with the
specified mount point.
5. Persist the Mounting (Optional): To ensure the volume automatically remounts after a
reboot, add an entry to the /etc/fstab file in the following format:
/dev/sdf /mnt/ebs-volume ext4 defaults,nofail 0 0
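Because device names can occasionally change between reboots, a more robust variant of this entry references the file system's UUID instead; a sketch (the UUID shown is a placeholder):
# Look up the UUID of the file system on the new volume
sudo blkid /dev/sdf
# Then use that UUID in /etc/fstab instead of the device name, e.g.:
# UUID=aebf131c-6957-451e-8d34-ec978d9581ae /mnt/ebs-volume ext4 defaults,nofail 0 0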
Step 4: Creating a Snapshot
1. Create Snapshot:
o Go to the Volumes section, select the volume, and click Actions > Create Snapshot.
o Provide a description for the snapshot and click Create Snapshot.
2. Automate Backups (Optional): AWS Backup and Lifecycle Manager can automate snapshot
creation for regular backups, ensuring data durability without manual intervention.
Step 5: Detaching and Deleting the EBS Volume
1. Detach Volume:
o Return to the Volumes section in the AWS Console, select the volume, and click Actions
> Detach Volume. Confirm the detachment.
2. Delete Volume:
o Once detached, the volume can be deleted to free up storage and avoid additional costs. In
the Volumes section, select the volume, click Actions > Delete Volume, and confirm
deletion.
Conclusion
The AWS Management Console simplifies the process of managing Amazon EBS volumes, allowing
you to create, attach, format, snapshot, and delete volumes easily. EBS volumes add flexibility and
persistence to your EC2 instances, providing scalable storage for databases, big data, and enterprise
applications. Following these steps ensures your application data remains secure, available, and backed
up, demonstrating the advantages of AWS EBS for robust, persistent storage.
Amazon S3 (Simple Storage Service)
Amazon Simple Storage Service, known as Amazon S3, is a highly scalable, secure, and cost-effective
object storage service provided by Amazon Web Services (AWS). S3 enables users to store and
retrieve any amount of data from anywhere on the web, making it an essential tool for handling large
volumes of unstructured data, such as media files, backups, and application data. S3 is popular for its
durability, easy integration, and security features, making it a reliable storage solution for a wide range
of applications and industries.
Key Features of Amazon S3
1. Durability and Availability: Amazon S3 is designed to provide 99.999999999% (11 nines)
durability, ensuring that data is highly resistant to loss. This is achieved by automatically
replicating data across multiple physical facilities within an AWS region. Additionally, S3
offers high availability, meaning stored data remains readily accessible when needed.
2. Storage Classes: Amazon S3 provides various storage classes optimized for different data
access patterns, including:
o S3 Standard: Designed for frequently accessed data, offering high durability,
availability, and low latency.
o S3 Standard-IA: Lower-cost storage for data that is accessed less frequently but needs
rapid access when requested.
o S3 Glacier and Glacier Deep Archive: Low-cost archival classes for long-term retention.
3. Security and Access Control: S3 protects data with encryption and fine-grained permissions.
o Access Control: S3 integrates with AWS Identity and Access Management (IAM) to
define granular access permissions. Bucket policies, ACLs (Access Control Lists), and
IAM roles control who can access or modify data.
Use Cases for Amazon S3
Amazon S3’s versatility makes it suitable for a wide range of applications and industries. Here are
some of the primary use cases:
1. Data Backup and Disaster Recovery: S3 provides reliable and durable storage for backups
and disaster recovery plans. Its high durability, combined with features like cross-region
replication, ensures that critical data is safe and accessible, even during unexpected failures.
2. Content Distribution and Media Storage: S3 can host and serve static assets (such as images,
videos, and documents) used by web applications. When combined with Amazon CloudFront,
AWS’s content delivery network (CDN), S3 is ideal for delivering content globally with low
latency.
3. Big Data Analytics: S3 serves as a scalable storage solution for big data and analytics, allowing
businesses to store large datasets for processing by AWS analytics services like Amazon EMR,
AWS Glue, and Amazon Redshift.
4. Data Archiving and Compliance: S3 Glacier and Glacier Deep Archive allow for cost-
effective, long-term archival storage, suitable for industries that require data retention for
compliance, such as healthcare and finance.
5. Application Hosting: S3 can host static websites, serving HTML, CSS, JavaScript, and media
files directly from an S3 bucket. This approach is cost-effective for simple websites or single-
page applications.
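As a sketch, static website hosting can be configured from the AWS CLI; the bucket name is a hypothetical placeholder, and the bucket's public-access settings must allow public reads for a public site:
# Upload the site's files to the bucket
aws s3 sync ./my-site s3://my-website-bucket
# Enable static website hosting with index and error pages
aws s3 website s3://my-website-bucket \
    --index-document index.html \
    --error-document error.html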
Advantages of Amazon S3
1. Scalability: S3’s scalability means it can store an unlimited amount of data, and users only pay
for what they use, making it ideal for unpredictable or rapidly growing storage needs.
2. Reliability and Durability: With 11 nines of durability, S3 ensures data protection through
multiple redundancies and geographic distribution within AWS regions, safeguarding against
data loss.
3. Cost-Effectiveness: With its variety of storage classes, S3 enables users to optimize costs
based on data access frequency and retrieval needs, offering low-cost storage options for
infrequent access or archival data.
4. Security: S3 provides robust security features, including encryption, access management, and
logging capabilities that comply with data protection regulations.
5. Easy Integration: S3 integrates with a wide range of AWS services, such as Lambda, IAM,
Athena, and CloudFront, providing a seamless experience for building applications and
managing data.
Conclusion
Amazon S3’s flexible storage, high durability, and advanced features make it a cornerstone of AWS
cloud storage solutions. Whether for data backup, archiving, big data analytics, or application hosting,
S3 provides scalable, secure, and cost-effective storage. Its vast set of features and integrations enable
businesses to innovate without worrying about storage limitations, while its reliable security and
compliance tools make it a trusted choice across industries.
Amazon S3 Console Demonstration
Using the AWS Management Console, you can create and manage Amazon S3 buckets and objects
with ease. Here’s a step-by-step guide to creating a bucket, uploading files, configuring permissions,
and enabling lifecycle policies through the AWS Console.
Step 1: Create an S3 Bucket
1. Navigate to S3: From the AWS Management Console, open the S3 service and click Create
Bucket.
2. Name the Bucket: Enter a globally unique bucket name and choose the AWS Region closest
to your users.
3. Configure Settings:
o You can enable Bucket Versioning if you want to retain previous versions of objects,
useful for recovery.
o Set Block Public Access options based on your needs. AWS recommends blocking
public access for private data, but if the bucket will store publicly accessible files (e.g.,
a website), you may adjust this setting.
o Click Create Bucket. Your bucket should now appear in the bucket list.
Step 2: Upload Files to the S3 Bucket
1. Open the Bucket:
o Click on the bucket name to view its contents. By default, it will be empty.
2. Upload Files:
o Click Upload and then Add Files to choose the files you want to upload. You can
select multiple files if needed.
o You may configure specific settings for each file, such as storage class (e.g., Standard
or Standard-IA) and encryption (using AWS-managed or KMS keys).
o Click Upload to begin transferring the files. Once complete, you’ll see your files listed
in the bucket.
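The same upload, including the storage class and encryption choices just mentioned, can be performed with the AWS CLI; the bucket and file names are hypothetical placeholders:
# Upload a file to the Standard-IA storage class with KMS-managed encryption
aws s3 cp report.pdf s3://my-demo-bucket/docs/ \
    --storage-class STANDARD_IA \
    --sse aws:kms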
Step 3: Configure Permissions
1. Object Permissions:
o Select an object and use the available actions to make it public, if required (this works
only when the bucket's Block Public Access settings allow it).
o Alternatively, click on the object, navigate to the Permissions tab, and adjust the
Access Control List (ACL) settings to allow public or private access as required.
2. Bucket Policies:
o For more complex access rules, go to the Permissions tab at the bucket level and select
Bucket Policy.
o You can add JSON policies to define permissions, such as allowing access from
specific IP addresses or making all objects public.
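For example, a minimal policy granting public read access to every object might look like the following sketch (the bucket name my-demo-bucket is a placeholder); it can be applied with the AWS CLI:
# policy.json - allow anyone to read objects in the bucket
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-demo-bucket/*"
  }]
}
# Apply the policy to the bucket
aws s3api put-bucket-policy --bucket my-demo-bucket --policy file://policy.json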
Step 4: Create Lifecycle Rules
1. Create the Rule: In the bucket's Management tab, click Create lifecycle rule and give the
rule a name.
2. Transitions: Define when objects should move to cheaper storage classes (e.g., to S3
Standard-IA after 30 days or S3 Glacier after 90 days).
3. Expiration:
o Choose a timeframe for deleting objects, such as after 365 days. This is useful for non-
critical data that doesn't need long-term storage.
4. Save the Rule: Once configured, save the rule. The policy will now automatically manage
objects according to your settings.
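An equivalent lifecycle configuration can be applied through the AWS CLI, as in this sketch; the bucket name and day counts are placeholders:
# lifecycle.json - move objects to Standard-IA after 30 days, delete after 365
{
  "Rules": [{
    "ID": "transition-then-expire",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
    "Expiration": {"Days": 365}
  }]
}
# Apply the configuration to the bucket
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-demo-bucket \
    --lifecycle-configuration file://lifecycle.json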
Step 5: Enable Versioning (Optional)
Enabling versioning allows you to keep multiple versions of an object, which can be useful for
recovery from accidental deletions or overwrites.
1. Go to the Bucket Settings: Open the bucket's Properties tab, locate Bucket Versioning,
click Edit, select Enable, and save the change.
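Versioning can likewise be switched on from the AWS CLI (the bucket name is a placeholder):
# Enable versioning on an existing bucket
aws s3api put-bucket-versioning \
    --bucket my-demo-bucket \
    --versioning-configuration Status=Enabled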
Conclusion
Using the AWS Management Console, you can easily set up and manage Amazon S3 buckets for data
storage. This process includes creating buckets, uploading and managing files, configuring access, and
automating transitions and deletions with lifecycle rules. Amazon S3’s flexibility and automation
options make it a robust choice for securely storing and managing large amounts of data.
Amazon EFS (Elastic File System)
Amazon EFS is a fully managed, elastic file system that can be shared across many EC2 instances and
other AWS compute services.
Key Features of Amazon EFS
1. Elasticity and Scalability: EFS automatically scales as data is added or removed, without any
need for manual provisioning or capacity planning. It can handle petabytes of data, making it a
highly elastic solution for applications with unpredictable or growing storage demands. This
elasticity makes EFS cost-effective since you only pay for the storage you actually use.
2. Multi-AZ Availability and Durability: EFS stores data across multiple Availability Zones
(AZs) within a region, ensuring high availability and durability. Data redundancy across AZs
makes EFS resilient to hardware failures, ensuring consistent performance and availability.
3. POSIX Compliance: Amazon EFS is POSIX-compliant, meaning it supports standard file
system operations and permissions. This allows applications that require file-based access to
work seamlessly with EFS without modifications. EFS supports the NFSv4 protocol, enabling it
to be mounted and used like a traditional file system.
4. High Throughput and Low Latency: EFS offers high throughput and low latency, making it
suitable for workloads that need quick access to data, like web serving and big data processing.
EFS offers two performance modes: General Purpose for latency-sensitive tasks, and Max
I/O for high-throughput applications.
5. Access Control and Security: EFS integrates with AWS Identity and Access Management
(IAM) and AWS Key Management Service (KMS) for secure access and encryption. File
systems can be encrypted both at rest and in transit, ensuring data security. Additionally, EFS
supports VPC access, allowing you to control which resources can access the file system within
your private network.
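As a sketch of how a file system is typically mounted with the EFS mount helper (assuming the amazon-efs-utils package is available and fs-12345678 is a placeholder file system ID):
# Install the EFS mount helper (Amazon Linux)
sudo yum install -y amazon-efs-utils
# Mount the file system with encryption of data in transit enabled
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs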
Amazon EFS Storage Classes
Amazon EFS offers two storage classes to help manage costs based on data access patterns:
1. Standard: The EFS Standard storage class is optimized for files that are frequently accessed. It
provides low-latency access, ideal for active workloads.
2. Infrequent Access (IA): The EFS IA storage class is intended for files that are accessed less
frequently but still need to be available quickly when required. By automatically moving
infrequently accessed files to EFS IA, you can reduce storage costs.
Part 1: Amazon S3 Console Demonstration
Step 1: Create an S3 Bucket
1. Log in to the AWS Management Console and go to S3.
2. Click Create Bucket, enter a globally unique bucket name, and choose a Region.
3. Configure additional settings if needed, such as Block Public Access (to secure the bucket).
4. Click Create Bucket to finish.
Step 2: Upload Files to the Bucket
1. Open the bucket by clicking its name in the bucket list.
2. Click Upload and then Add Files to select files from your computer.
3. Choose Upload. Your files will now appear in the bucket, accessible via S3's console or API.
Step 4: Enable Lifecycle Rules
1. In the Management tab, click Create lifecycle rule.
o Name your rule (e.g., "Move to Glacier after 30 days").
o Define transition settings to move objects to cost-effective storage classes like S3
Glacier after a set time.
2. Save the rule. Your objects will now transition automatically based on this policy.
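A sketch of the same rule expressed as a lifecycle configuration for the AWS CLI; the bucket name is a placeholder:
# glacier-rule.json - transition objects to Glacier after 30 days
{
  "Rules": [{
    "ID": "Move to Glacier after 30 days",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
  }]
}
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-demo-bucket \
    --lifecycle-configuration file://glacier-rule.json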
Part 2: Amazon EFS Console Demonstration
Step 1: Create an EFS File System
1. Navigate to EFS in the AWS Console, click Create file system, and select the same VPC as the
EC2 instances that will use it.
Step 2: Configure Network Access
1. Ensure the file system's security group allows inbound NFS traffic (port 2049) from your EC2
instances.
Step 3: Mount EFS on an EC2 Instance
1. Launch an EC2 instance within the same VPC as your EFS file system.
2. Connect to the instance over SSH.
3. Install the nfs-utils package if it's not already installed (Amazon Linux example):
sudo yum install -y nfs-utils
4. Create a mount point and mount the file system by its DNS name (replace fs-12345678 and the
Region with your own values):
sudo mkdir /mnt/efs
sudo mount -t nfs4 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
Conclusion
Through the AWS Management Console, you can set up and manage Amazon S3 and Amazon EFS
with ease. S3’s object storage is excellent for general-purpose storage needs with flexible access
control, while EFS provides a scalable, shared file system that can be mounted on multiple EC2
instances. These storage solutions together offer powerful options for a wide range of cloud
applications.
Amazon S3 Glacier
Amazon S3 Glacier is a secure, low-cost storage class designed for data archiving and long-term backup.
Key Features of Amazon S3 Glacier
1. Low-Cost Archival Storage: Glacier stores data at a fraction of the cost of S3 Standard, in
exchange for longer retrieval times.
2. Flexible Retrieval Options: Glacier offers several retrieval tiers to balance cost and access speed:
o Expedited Retrieval: Returns data within 1-5 minutes for urgent access needs.
o Standard Retrieval: Completes within 3-5 hours and suits most archival use cases.
o Bulk Retrieval: The most economical option, bulk retrievals, take 5-12 hours and are
ideal for accessing large volumes of data when speed is not a priority.
3. Scalability and Elasticity: Glacier automatically scales with your data needs, allowing storage
of virtually unlimited amounts of data. As with other S3 storage classes, you only pay for what
you store and retrieve.
4. Durability and Security: S3 Glacier is designed for 99.999999999% (11 nines) durability,
achieved by replicating data across multiple Availability Zones. Security is also prioritized,
with options for encryption at rest and in transit, as well as integration with AWS Identity and
Access Management (IAM) for access control.
5. Lifecycle Management and Integration with S3: Glacier integrates seamlessly with Amazon
S3, allowing users to set up lifecycle policies that automatically transition objects from S3
Standard or S3 Standard-IA (Infrequent Access) to Glacier. This is particularly useful for data
that starts off in frequent use and becomes infrequent over time.
6. S3 Glacier Deep Archive: An even more cost-effective option within Glacier, the Deep
Archive class is designed for data that is accessed less than once per year. It provides long-term
storage with retrieval times of 12–48 hours, making it suitable for the most infrequently
accessed data, such as historical records and compliance archives.
Amazon S3 Glacier Console Demonstration
Amazon S3 Glacier is designed for affordable, secure, and long-term data storage. The AWS
Management Console allows users to easily move data to Glacier and manage archival storage. Here’s
a step-by-step guide to using Glacier through the console, including setting up storage, moving objects,
and retrieving archived data.
Step 1: Create an S3 Bucket and Upload Data
1. Create a Bucket:
o Navigate to S3 in the AWS Console.
o Click Create Bucket, specify a unique bucket name, and choose a region.
o Set additional configuration options as desired, then click Create.
2. Upload Objects to S3:
o Open the bucket, click Upload, and add the files you want to archive.
Step 2: Set Up a Lifecycle Rule to Transition Objects to Glacier
1. Create the Rule:
o In the bucket's Management tab, click Create lifecycle rule, give the rule a name, and
define a transition to the Glacier storage class after a chosen number of days.
o Save the rule. The system will now automatically transition eligible objects to Glacier
based on the lifecycle policy.
2. Verify the Transition:
o After the defined transition period, go back to the bucket and view the objects.
o Each object’s Storage Class will now indicate “Glacier” once the transition has
occurred.
Step 3: Retrieve Objects from Glacier
Retrieving data from Glacier involves selecting a retrieval option (Expedited, Standard, or Bulk) based
on how quickly you need the data.
1. Select Objects for Retrieval:
o Open the bucket, select the archived object, and choose to initiate a restore.
o Specify the number of days the restored copy should remain available and select a
retrieval tier (Expedited, Standard, or Bulk).
2. Monitor the Restoration:
o Once the retrieval request is submitted, you can track progress within the Restorations
section of the S3 bucket.
o When the object is available, the Storage Class will temporarily update to “Glacier
(Restored)” for the specified duration.
3. Download the Restored Object:
o After the retrieval completes, you can download the restored object by clicking on it
and selecting Download.
o Once the temporary retrieval period ends, the object will return to its Glacier state.
4. Manage Retention (Optional): Set lifecycle rules to delete unneeded objects after a defined
retention period.
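The restore workflow can also be scripted with the AWS CLI, as in this sketch; the bucket and object key are hypothetical placeholders:
# Request a Bulk restore that keeps the restored copy available for 7 days
aws s3api restore-object \
    --bucket my-archive-bucket \
    --key reports/2020-summary.pdf \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'
# Check progress; the Restore field in the output shows the request status
aws s3api head-object \
    --bucket my-archive-bucket \
    --key reports/2020-summary.pdf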
Conclusion
Using the AWS Console, you can easily manage data archiving and retrieval in Amazon S3 Glacier.
By setting up lifecycle rules, you can automate data transitions to Glacier, lowering storage costs for
infrequently accessed data. When data retrieval is necessary, the console provides options to manage
access speed and cost, making Glacier an efficient solution for long-term data storage.