Storage in AWS
Bhupinder Rajput
Type of Storage
Simple Storage Service (S3): Object-level storage; objects can be accessed from anywhere.
Elastic File System (EFS): File storage, for Linux only.
Elastic Block Store (EBS): Block-level storage that can only be accessed through the EC2 instance to which it is attached. Default block size is 4 KiB and the maximum block size is 64 KiB.
Glacier: Now called S3 Glacier; mostly used to store data that is not critical right now but that we want to keep for future use.
Snowball: Used for data migrations involving huge amounts of data; it is a portable device shipped to the data center of the company that wants to migrate to the cloud.
Type of Storage (Contd...)
✓ AWS offers a complete range of storage services to support both application and archival compliance requirements. Select from object, file and block storage services, as well as cloud data migration options, to start designing the foundation of our cloud IT environment.

Block Storage
• It is suitable for transactional databases, random read/write loads and structured database storage.
• It is SAN storage.
• Block storage divides the data to be stored into evenly sized blocks (data chunks); for instance, a file can be split into evenly sized blocks before it is stored.
• Data blocks stored in block storage do not contain metadata (date created, data size, date modified, content type etc.).
• Block storage only keeps the address (index) where the data blocks are stored; it does not care what is in a block, just how to retrieve it when required.
• EBS Volume: Can only be accessed through EC2.
Type of Storage (Contd...)
Object Storage
• It stores files as a whole and does not divide them.
• Here, an object is the file/data itself, its metadata and an object global unique ID.
• The object global unique ID is a unique identifier for the object (it can be the object name itself) and it must be unique so that the object can be retrieved regardless of where its physical storage location is.
• Object storage cannot be mounted as a drive.
• It is accessed over the Internet.
• Ex: AWS S3, Dropbox etc.
Amazon S3
Simple Storage Service (S3)
• S3 is storage for the internet. It has a simple web services interface for storing and retrieving any amount of data, anytime, from anywhere on the internet.
• S3 is object-based storage.
• We cannot install an OS on S3.
• S3 has a distributed data-store architecture where objects are redundantly stored in multiple locations (minimum 3 locations in the same region).
• Data is stored in buckets.
• A bucket is a flat container of objects.
• Max size of an object in a bucket is 5 TB.
• We can create folders in our bucket (available through the Console).
• We cannot create nested buckets.
• Bucket ownership is non-transferable.
• An S3 bucket is region specific.
• We can have up to 100 buckets per account (may be expanded on request).
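A minimal boto3 sketch of these basics, assuming AWS credentials are already configured; the bucket name and region below are made-up placeholders:

import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

# Bucket names are globally unique; this one is a hypothetical placeholder.
bucket = "my-demo-bucket-590417"

# Outside us-east-1 a LocationConstraint must be supplied.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)

# Store an object (a "folder" is just a key prefix such as notes/).
s3.put_object(Bucket=bucket, Key="notes/hello.txt", Body=b"hello from S3")

# Read it back from anywhere on the internet (given permissions).
body = s3.get_object(Bucket=bucket, Key="notes/hello.txt")["Body"].read()
print(body)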
Amazon S3
S3 Bucket Naming Rules
• S3 bucket names (keys) are globally unique across all AWS regions.
• Bucket names cannot be changed after they are created.
• If a bucket is deleted, its name becomes available again to us or to other accounts to use.
• Bucket names must be at least 3 and no more than 63 characters long.
• Bucket names are part of the URL used to access a bucket.
• A bucket name must be a series of one or more labels (xyz.bucket).
• Bucket names can contain lowercase letters, numbers and hyphens (-). We cannot use uppercase letters.
• A bucket name should not be an IP address.
• Each label must start and end with a lowercase letter or a number.
• By default, buckets and their objects are private.
• By default, only the owner can access the bucket.
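The rules above can be checked locally before calling the API. The sketch below is a simplified validator (it covers length, allowed characters, start/end characters and the no-IP-address rule, not every edge case AWS enforces):

import re

def is_valid_bucket_name(name: str) -> bool:
    # 3-63 chars, lowercase letters, numbers, hyphens and dots only.
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name):
        return False
    # Must not look like an IP address (e.g. 192.168.1.1).
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    # Each dot-separated label must start and end with a letter or digit.
    return all(re.fullmatch(r"[a-z0-9]([a-z0-9-]*[a-z0-9])?", label)
               for label in name.split("."))

print(is_valid_bucket_name("xyz.bucket"))   # True
print(is_valid_bucket_name("MyBucket"))     # False (uppercase)
print(is_valid_bucket_name("192.168.1.1"))  # False (IP address)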
Amazon S3
S3 Bucket Sub-resources
• These include:
• 1. Lifecycle - to define an object's lifecycle management.
• 2. Website - to hold configuration related to a static website hosted on the S3 bucket.
• 3. Versioning - to keep object versions as objects change (get updated). Versioning can be enabled or suspended, but cannot be disabled after enabling.
• 4. Access Control List and Bucket Policies.
• The bucket URL is simply two parts - the bucket region's endpoint and the bucket name.
• Eg: an S3 bucket named mybucket in the EU West (Ireland) region is https://s3-eu-west-1.amazonaws.com/mybucket
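For example, the website sub-resource can be set with a single boto3 call. This is a minimal sketch; the bucket and document names are placeholders, and the bucket would also need to allow public reads for the site to be reachable:

import boto3

s3 = boto3.client("s3")

# Turn the bucket into a static website host (the "website" sub-resource).
s3.put_bucket_website(
    Bucket="my-demo-bucket-590417",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# The lifecycle, versioning and ACL/policy sub-resources have analogous calls:
# put_bucket_lifecycle_configuration, put_bucket_versioning,
# put_bucket_acl / put_bucket_policy.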
Amazon S3
S3 Objects
• The size of an object stored in an S3 bucket can be 0 bytes to 5 TB.
• Each object is stored and retrieved by a unique key (ID or name).
• An object in AWS S3 is uniquely identified and addressed through the service endpoint, bucket name, object key (name) and, optionally, the object version.
• Objects stored in an S3 bucket in a region will never leave that region unless we specifically move them to another region or use CRR.
• A bucket owner can grant cross-account permissions to another AWS account (or users in another account) to upload objects.
• We can grant S3 bucket/object permissions to individual users, an AWS account, all authenticated users, or make the resource public.
Amazon S3
S3 Bucket Versioning
• Bucket versioning is an S3 bucket sub-resource used to protect against accidental object/data deletion or overwriting.
• Versioning can also be used for data retention or archiving.
• Once we enable versioning on a bucket, it cannot be disabled; however, it can be suspended.
• When enabled, bucket versioning protects existing and new objects and maintains their versions as they are updated.
• Updating objects refers to PUT, POST, COPY and DELETE actions on objects.
• When versioning is enabled and we try to delete an object, a delete marker is placed on the object.
• We can still view the object and the delete marker.
• If we reconsider deleting the object, we can delete the "delete marker" and the object will be available again (see the sketch below).
• We will be charged the S3 storage cost for all object versions stored.
• We can use versioning with S3 lifecycle policies to delete older versions, or we can move them to a cheaper S3 storage class (or Glacier).
• Bucket versioning states: Enabled, Suspended, Un-versioned.
• Versioning applies to all objects in a bucket; it cannot be applied partially.
• Objects existing before versioning is enabled will have a version ID of 'null'.
• If we have a bucket that is already versioned, then when we suspend versioning, existing objects and their versions remain as they are.
• However, they will not be updated/versioned further with future updates while bucket versioning is suspended.
• New objects (uploaded after suspension) will have a version ID of 'null'.
• If the same key (name) is used to store another object, it will override the existing one.
• An object deletion in a suspended-versioning bucket will only delete the objects with version ID 'null'.
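A minimal boto3 sketch of the delete-marker behaviour described above; the bucket and key names are placeholders:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-demo-bucket-590417", "notes/hello.txt"

# Enable versioning (it can later be suspended, never disabled).
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# A plain delete now only places a delete marker on top of the object.
s3.delete_object(Bucket=bucket, Key=key)

# "Undelete" by removing the delete marker: the latest real version returns.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for marker in versions.get("DeleteMarkers", []):
    if marker["IsLatest"]:
        s3.delete_object(Bucket=bucket, Key=key, VersionId=marker["VersionId"])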
Amazon S3
S3 Bucket Versioning - MFA Delete
• MFA Delete is a versioning capability that adds another layer of security in case our account is compromised.
• It adds another layer of security for changing our bucket's versioning state and for permanently deleting an object version.
• MFA Delete requires our security credentials and the code displayed on an approved physical or software-based authentication device.

S3 Multipart Upload
• It is used to upload an object in parts.
• Parts are uploaded independently and in parallel, in any order.
• It is recommended for objects of 100 MB or more; the minimum part size for a multipart upload is 5 MB.
• We must use it for objects larger than 5 GB.
• This is done through the S3 multipart upload API (see the sketch below).
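A sketch of the low-level multipart upload API calls (in practice boto3's upload_file handles this automatically above a size threshold); the file and bucket names are placeholders:

import boto3

s3 = boto3.client("s3")
bucket, key = "my-demo-bucket-590417", "backups/big-file.bin"

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
part_size = 8 * 1024 * 1024  # every part except the last must be >= 5 MB

with open("big-file.bin", "rb") as f:
    for number, chunk in enumerate(iter(lambda: f.read(part_size), b""), start=1):
        resp = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
            PartNumber=number, Body=chunk,
        )
        parts.append({"PartNumber": number, "ETag": resp["ETag"]})

s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
    MultipartUpload={"Parts": parts},
)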
Amazon S3
Copying S3 Objects
• The copy operation creates a copy of an object that is already stored in Amazon S3.
• We can create a copy of an object up to 5 GB in size in a single atomic operation.
• However, to copy an object greater than 5 GB, we must use the multipart upload API.
• We incur charges if we copy to another region.
• Use the copy operation to generate additional copies of an object, rename an object (copy to a new name), change the copy's storage class or encrypt it at rest.
• Move objects across AWS locations/regions.
• Change object metadata (see the sketch below).
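A minimal boto3 sketch of a copy that renames the object, replaces its metadata and changes its storage class in one call; names are placeholders, and objects over 5 GB would need the multipart upload_part_copy API instead:

import boto3

s3 = boto3.client("s3")

s3.copy_object(
    CopySource={"Bucket": "my-demo-bucket-590417", "Key": "notes/hello.txt"},
    Bucket="my-demo-bucket-590417",        # destination bucket (can differ)
    Key="notes/hello-renamed.txt",         # copy to a new name
    StorageClass="STANDARD_IA",            # change the copy's storage class
    MetadataDirective="REPLACE",           # replace rather than copy metadata
    Metadata={"project": "demo"},
    ServerSideEncryption="AES256",         # encrypt the copy at rest
)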
gm
ai
l.c
om
,8
Storage Classes of Amazon S3
46
93
•1.S3-Standard (for normal and very frequent access).
16
67
•2.S3 Glacier Deep Archive (Cheapest).
6)
•3.Amazon Glacier (Long Term Storage).
•4.S3 Standard Infrequent Access (Less cost but we pay to access it more frequently),-Standard IA.
•5.S3 One-Zone-IA (Only stores1 copy with less cost).
•6.S3 Intelligent Tiering (Automatically shifts data between standard, standard IA, glacier etc
based on usage).
•7.S3 Reduced Redundancy Storage (was removed). 11
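The storage class is simply a parameter on upload. A minimal sketch (bucket and keys are placeholders; valid StorageClass values include STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER and DEEP_ARCHIVE):

import boto3

s3 = boto3.client("s3")
bucket = "my-demo-bucket-590417"

# Frequently accessed data: default STANDARD class.
s3.put_object(Bucket=bucket, Key="hot/report.csv", Body=b"...")

# Infrequently accessed but quickly needed data.
s3.put_object(Bucket=bucket, Key="warm/report.csv", Body=b"...",
              StorageClass="STANDARD_IA")

# Long-term archive written straight to Glacier Deep Archive.
s3.put_object(Bucket=bucket, Key="cold/report.csv", Body=b"...",
              StorageClass="DEEP_ARCHIVE")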
Amazon S3 Classes
1. S3 Standard
2. Reduced Redundancy Storage (frequently accessed)
3. S3 Express One Zone
4. Glacier Deep Archive
5. Glacier Flexible Retrieval (formerly Glacier)
6. Standard-IA
7. Intelligent-Tiering (no retrieval fee)
8. One Zone-IA
9. Glacier Instant Retrieval
Amazon S3 Classes
1. Amazon S3 Standard
• It offers high-durability, high-availability and high-performance object storage for frequently accessed data.
• Durability is 99.999999999% (11 nines).
• Resilient against events that impact an entire AZ.
• Designed for 99.99% availability over a given year.
• Supports SSL for data in transit and encryption of data at rest.
• The storage cost per object is fairly high, but the charge for accessing objects is very low.
• The largest object that can be uploaded in a single PUT is 5 GB.
• Backed with the Amazon S3 Service Level Agreement for availability.
Amazon S3 Classes
2. Amazon S3-IA
• S3 Standard-IA is for data that is accessed less frequently but requires rapid access when needed.
• The storage cost is much cheaper than S3 Standard, almost half the price, but we are charged more heavily for accessing our objects.
• Durability is 99.999999999%.
• Resilient against events that impact an entire AZ.
• Availability is 99.9% over a given year.
• Supports SSL for data in transit and encryption of data at rest.
• Data that is deleted from S3 Standard-IA within 30 days will be charged for a full 30 days.
• Backed with the Amazon S3 Service Level Agreement for availability.
Amazon S3 Classes
3. Amazon S3 Intelligent-Tiering
• This storage class is designed to optimize cost by automatically moving data to the most cost-effective access tier.
• It works by storing objects across access tiers, i.e. S3 Standard, S3 Standard-IA and Glacier Deep Archive.
• If an object in the infrequent access tier is accessed, it is automatically moved back to the frequent access tier.
• There are no retrieval fees when using the S3 Intelligent-Tiering storage class and no additional tiering fee when objects are moved between access tiers.
• Resilient against events that impact an entire AZ.
• Same low latency and high performance as S3 Standard.
• Objects smaller than 128 KB cannot be moved to the IA tier.
• Durability is 99.999999999%.
• Availability is 99.9%.
• Backed with the Amazon S3 Service Level Agreement for availability.
Amazon S3 Classes
4. Amazon S3 One Zone-IA
• It is for data that is accessed less frequently but requires rapid access when needed.
• Data is stored in a single AZ.
• Ideal for those who want a lower-cost option for IA data.
• It is a good choice for storing secondary backup copies of on-premises data or easily re-creatable data.
• We can use S3 lifecycle policies.
• Durability is 99.999999999%.
• Availability is 99.5%.
• Because S3 One Zone-IA stores data in a single AZ, data stored in this storage class will be lost in the event of AZ destruction.
• Backed with the Amazon S3 Service Level Agreement for availability.
• The S3 Standard-IA and S3 One Zone-IA storage classes are suitable for objects larger than 128 KB that you plan to store for at least 30 days. If an object is less than 128 KB, Amazon S3 charges you for 128 KB. If you delete an object before the end of the 30-day minimum storage duration period, you are charged for 30 days.
Amazon S3 Classes
5. Amazon S3 Glacier
• S3 Glacier is a secure, durable, low-cost storage class for data archiving.
• To keep costs low yet suit varying needs, S3 Glacier provides three retrieval options that range from a few minutes to hours.
• We can upload objects directly to Glacier or use lifecycle policies (see the sketch below).
• Durability is 99.999999999%.
• Data is resilient in the event of one entire AZ destruction.
• Supports SSL for data in transit and encryption of data at rest.
• We can retrieve 10 GB of our Amazon S3 Glacier data per month for free with a free-tier account.
• Backed with the Amazon S3 Service Level Agreement for availability.
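A minimal lifecycle-policy sketch that moves objects under a prefix into Glacier after 90 days and expires old versions; all names and day counts are illustrative:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-demo-bucket-590417",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                # Transition current versions to Glacier after 90 days.
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                # Permanently remove noncurrent versions a year later.
                "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
            }
        ]
    },
)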
Amazon S3 Classes
6. Amazon S3 Glacier Deep Archive
• It is Amazon S3's cheapest storage class.
• Designed to retain data for long periods, e.g. 10 years.
• All objects stored in S3 Glacier Deep Archive are replicated and stored across at least three geographically dispersed AZs.
• Durability is 99.999999999%.
• Ideal alternative to magnetic tape libraries.
• Retrieval time is within 12 hours.
• Storage cost is up to 75% less than for the existing S3 Glacier storage class.
• Availability is 99.9%.
• Backed with the Amazon S3 Service Level Agreement for availability.
Reduced Redundancy Storage
▪ The Reduced Redundancy Storage (RRS) storage class is designed for noncritical, reproducible data that can be stored with less redundancy than the S3 Standard storage class.
▪ For durability, RRS objects have an average annual expected loss of 0.01 percent of objects. If an RRS object is lost, Amazon S3 returns a 405 error when requests are made to that object.
Availability and Durability Difference
▪ Availability is typically measured as a percentage. For example, the service level agreement for S3 is that it will be available 99.99% of the time. Durability is used to measure the likelihood of data loss. For example, assume you have an important document in your safe at home: the safe may be slow to open (availability), but the document inside is very unlikely to be destroyed (durability).
S3 Glacier Instant Retrieval
▪ Use for archiving data that is rarely accessed and requires milliseconds retrieval. Data stored in the S3 Glacier Instant Retrieval storage class offers a cost savings compared to the S3 Standard-IA storage class, with the same latency and throughput performance as the S3 Standard-IA storage class. S3 Glacier Instant Retrieval has higher data access costs than S3 Standard-IA.
Cross Region Replication
❑ CRR enables automatic, asynchronous copying of objects across buckets in different AWS regions. Buckets configured for cross-region replication can be owned by the same AWS account or by different accounts.
❑ CRR is enabled with a bucket-level configuration. We add the replication configuration to our source bucket.
❑ In the minimum configuration, we provide the destination bucket where we want Amazon S3 to replicate objects and an AWS IAM role that Amazon S3 can assume to replicate objects on our behalf (see the sketch below).
When to use CRR:
• Comply with compliance requirements.
• Minimize latency.
• Increase operational efficiency.
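A minimal CRR sketch; versioning must already be enabled on both buckets, and the role and bucket ARNs below are placeholders:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket-590417",          # source bucket (versioned)
    ReplicationConfiguration={
        # IAM role that S3 assumes to replicate objects on our behalf.
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},                # empty filter = whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    # Destination bucket in another region (also versioned).
                    "Bucket": "arn:aws:s3:::dest-bucket-590417"
                },
            }
        ],
    },
)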
S3 Transfer Acceleration
▪ Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket.
▪ Transfer Acceleration takes advantage of the globally distributed edge locations in Amazon CloudFront.
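A minimal sketch of enabling Transfer Acceleration and then uploading through the accelerated endpoint; bucket and file names are placeholders:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable the accelerate configuration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-demo-bucket-590417",
    AccelerateConfiguration={"Status": "Enabled"},
)

# A client configured to use the *.s3-accelerate.amazonaws.com endpoint.
s3_accel = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accel.upload_file("big-file.bin", "my-demo-bucket-590417", "big-file.bin")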
Why use Transfer Acceleration?
You might want to use Transfer Acceleration on a bucket for various reasons:
• Your customers upload to a centralized bucket from all over the world.
• You transfer gigabytes to terabytes of data on a regular basis across continents.
• You can't use all of your available bandwidth over the internet when uploading to Amazon S3.
Elastic File System (EFS)
➢ It is a fully managed service that makes it easy to set up, scale and cost-optimize file storage in the Amazon Cloud.
➢ With a few clicks in the AWS management console, we can create file systems that are accessible to Amazon EC2 instances via a file system interface (using standard OS file I/O APIs) and support full file system access semantics (such as strong consistency and file locking).
➢ These file systems can automatically scale from gigabytes to petabytes of data without needing to provision storage.
➢ Tens, hundreds or even thousands of Amazon EC2 instances can access an Amazon EFS file system at the same time, and Amazon EFS provides consistent performance to each Amazon EC2 instance.
➢ Amazon EFS is designed to be highly durable and highly available.
➢ There are no minimum fees or setup costs; we pay only for what we use.
➢ This is only for Linux machines.
➢ Designed to provide performance for a broad spectrum of workloads and applications, including big data and analytics, media processing workflows, content management, web serving and home directories.
➢ This is not available in all regions.
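A minimal sketch of creating a file system and a mount target with boto3 (the subnet and security-group IDs are placeholders); an instance then mounts it like any NFS share:

import boto3

efs = boto3.client("efs", region_name="ap-south-1")

fs = efs.create_file_system(
    CreationToken="demo-efs-1",            # idempotency token
    PerformanceMode="generalPurpose",
)

# One mount target per AZ lets EC2 instances in that subnet reach the share.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0123456789abcdef0",
    SecurityGroups=["sg-0123456789abcdef0"],
)

# On a Linux instance the share is then mounted over NFS, e.g.:
#   sudo mount -t nfs4 <fs-id>.efs.ap-south-1.amazonaws.com:/ /mnt/efs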
Types of Block Store Device
✓ Two types of block store devices are available for EC2:

Elastic Block Store
• Persistent, network-attached drive.
• EBS volumes behave like raw, unformatted external block storage devices that we can attach to our EC2 instances.
• EBS volumes are block storage devices suitable for database-style data that requires frequent reads and writes.
• EBS volumes are attached to our EC2 instances through the AWS network, like virtual hard drives.
• An EBS volume can be attached to only a single EC2 instance at a time.
• Both the EBS volume and the EC2 instance must be in the same AZ.
• EBS volume data is replicated by AWS across multiple servers in the same AZ to prevent data loss resulting from any single AWS component failure.

Instance Store Backed EC2
• Basically the virtual hard drive on the host allocated to this EC2 instance. It is limited to 10 GB per device, is ephemeral (non-persistent) storage, and the EC2 instance can't be stopped (it can only be rebooted or terminated); terminating it will delete the data.
Elastic Block Storage (EBS) (Contd...)
✓ EBS Volume Types:
1. SSD Backed Volume: General Purpose SSD (GP2), which is the default volume type, and Provisioned IOPS SSD (IO1). These are bootable.
2. HDD Backed Volume: Throughput Optimized HDD (ST1) and Cold HDD (SC1). These are non-bootable.
3. Magnetic Standard: This is bootable.
Elastic Block Storage (EBS)
✓ EBS Volume Types:
SSD Backed Volumes

General Purpose SSD (GP2):
• GP2 is the default EBS volume type for Amazon EC2 instances and is backed by SSDs.
• GP2 balances both price and performance.
• Ratio of 3 IOPS/GB, with up to 10,000 IOPS (input/output operations per second).
• Boot volume with low latency.
• Volume size: 1 GB to 16 TB.
• Price: $0.10 per GB/month.

Provisioned IOPS SSD (IO1):
• These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require extremely low latency, or for mission-critical applications.
• Designed for I/O-intensive applications such as large relational or NoSQL databases.
• Use if we need more than 10,000 IOPS.
• Can provision up to 32,000 IOPS per volume, and 64,000 IOPS for Nitro-based instances.
• Volume size: 4 GB to 16 TB.
• Price: $0.125 per GB/month.
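A minimal boto3 sketch of creating and attaching these volume types; the AZ, instance ID and sizes are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Default general-purpose volume.
gp2 = ec2.create_volume(
    AvailabilityZone="ap-south-1a", Size=100, VolumeType="gp2"
)

# Provisioned IOPS volume for an I/O-intensive database.
io1 = ec2.create_volume(
    AvailabilityZone="ap-south-1a", Size=500, VolumeType="io1", Iops=20000
)

# The volume and the instance must be in the same AZ to attach.
ec2.attach_volume(
    VolumeId=gp2["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)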
Elastic Block Storage (EBS)
✓ EBS Volume Types:
HDD Backed Volumes

Throughput Optimized HDD (ST1):
• ST1 is backed by hard disk drives and is ideal for frequently accessed, throughput-intensive workloads with large datasets.
• ST1 volumes deliver performance in terms of throughput, measured in MB/s.
• Big data, data warehouses, log processing.
• It cannot be a boot volume.
• Can provision up to 500 IOPS per volume.
• Volume size: 500 GB to 16 TB.
• Price: $0.045 per GB/month.

Cold HDD (SC1):
• SC1 is also backed by HDD and provides the lowest cost per GB of all EBS volume types.
• Lowest-cost storage for infrequently accessed workloads.
• Used in file servers.
• Cannot be a boot volume.
• Can provision up to 250 IOPS per volume.
• Volume size: 500 GB to 16 TB.
• Price: $0.025 per GB/month.
Elastic Block Storage (EBS)
✓ EBS Volume Types:
Magnetic Standard
• Lowest cost per GB of all EBS volume types that is bootable.
• Magnetic volumes are ideal for workloads where data is accessed infrequently and for applications where the lowest storage cost is important.
• Price: $0.05 per GB/month.
• Volume size: 1 GB to 1 TB.
• Max IOPS/volume: 40 to 200.
EBS Snapshots
EBS snapshots are point-in-time images/copies of our EBS volumes.

Any data written to the volume after the snapshot process is initiated will not be included in the resulting snapshot (but will be included in future, incremental snapshots).

Per AWS account, up to 5,000 EBS volumes can be created.

Per account, up to 10,000 EBS snapshots can be created.

EBS snapshots are stored in S3; however, we cannot access them directly, we can only access them through the EC2 APIs.

While EBS volumes are AZ specific, snapshots are region specific.

Any AZ in a region can use a snapshot to create an EBS volume.

To migrate an EBS volume from one AZ to another, create a snapshot (region specific) and create an EBS volume from the snapshot in the intended AZ (see the sketch below).

From a snapshot, we can create an EBS volume of the same or larger size than the original volume from which the snapshot was initially created.
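A minimal sketch of the AZ-migration pattern described above; the volume ID and AZ names are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Snapshot the source volume (snapshots are regional, not AZ-bound).
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="migrate to another AZ",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Create a new volume from the snapshot in the target AZ;
# it may be the same size as the original or larger.
ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="ap-south-1b",
    Size=200,
)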
EBS Snapshots (Contd...)
We can take a snapshot of a non-root EBS volume while the volume is in use on a running EC2 instance.

This means we can still access it while the snapshot is being processed.

However, the snapshot will only include data that is already written to our volume.

The snapshot is created immediately, but it may stay in pending status until the full snapshot is completed. This may take a few hours to complete, especially for the first-time snapshot of a volume.

During the period when the snapshot status is pending, we can still access the volume (non-root), but I/O might be slower because of the snapshot activity.

While in the pending state, an in-progress snapshot will not include data from ongoing reads and writes to the volume.

To take a complete snapshot of our non-root EBS volume: stop or unmount the volume.

To create a snapshot of a root EBS volume, we must stop the instance first and then take the snapshot.
EBS Encryption
EBS encryption is supported on all EBS volume types and all EC2 instance families.

Snapshots of encrypted volumes are also encrypted.

Creating an EBS volume from an encrypted snapshot will result in an encrypted volume.

Data encryption at rest means encrypting data while it is stored on the data storage device.
EBS Encryption (Contd...)
✓ There are many ways we can encrypt data on an EBS volume at rest, while the volume is attached to an EC2 instance:
1. Using a 3rd-party encryption technique on the EBS volume.
2. Encryption tools.
3. Using encrypted EBS volumes.
4. Using encryption at the OS level.
5. Encrypting data at the application level before storing it on the volume.
EBS Encryption (Contd...)
Encrypted volumes are accessed exactly like unencrypted ones; encryption is handled transparently.

We can attach encrypted and unencrypted volumes to the same EC2 instance.

Remember that EBS volumes are not physically attached to the EC2 instance; rather, they are virtually attached through the EBS infrastructure.

This means that when we encrypt data on an EBS volume, the data is actually encrypted on the EC2 instance and then transferred, encrypted, to be stored on the EBS volume.

This means data in transit between EC2 and an encrypted EBS volume is also encrypted.
✓ To change the encryption state (indirectly), we need to follow either of the following two approaches (see the sketch below):

Option 1: Attach a new, encrypted EBS volume to the EC2 instance that has the data to be encrypted, then:
• Mount the new volume to the EC2 instance.
• Copy the data from the unencrypted volume to the new volume.
• Both volumes must be on the same EC2 instance.

Option 2: Create a snapshot of the unencrypted volume:
• Copy the snapshot and choose encryption for the copy; this will create an encrypted copy of the snapshot.
• Use this new copy to create an EBS volume, which will be encrypted too.
• Attach the new encrypted EBS volume to the EC2 instance.
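A minimal boto3 sketch of the second option; the snapshot ID, region and KMS key are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Copy the snapshot of the unencrypted volume and request encryption.
copy = ec2.copy_snapshot(
    SourceRegion="ap-south-1",
    SourceSnapshotId="snap-0123456789abcdef0",
    Encrypted=True,
    KmsKeyId="alias/my-custom-cmk",       # omit to use the default CMK
    Description="encrypted copy",
)

# A volume built from the encrypted copy is itself encrypted.
ec2.create_volume(
    SnapshotId=copy["SnapshotId"],
    AvailabilityZone="ap-south-1a",
)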
EBS Encryption (Contd...)
Root EBS Volume Encryption
• There is no direct way to change the encryption state of a volume.
• There is an indirect workaround to this (see the sketch below):
• Launch the instance with the EBS volumes required.
• Do whatever patching or application installation is needed.
• Create an AMI from the EC2 instance.
• Copy the AMI and choose encryption while copying.
• This results in an encrypted AMI that is private.
• Use the encrypted AMI to launch new EC2 instances, which will have their EBS root volumes encrypted.
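A minimal sketch of the AMI-copy step with encryption turned on; the AMI ID and region are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Copy the AMI and ask for the copy's EBS snapshots to be encrypted.
copy = ec2.copy_image(
    Name="my-app-ami-encrypted",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="ap-south-1",
    Encrypted=True,                        # root volume snapshots get encrypted
)

# Instances launched from copy["ImageId"] will have encrypted root volumes.
print(copy["ImageId"])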
EBS Encryption (Contd...)
EBS Encryption Key
• To encrypt a volume or a snapshot we need an encryption key; these keys are called Customer Master Keys (CMK) and are managed by the AWS Key Management Service (KMS).
• When encrypting the first EBS volume, AWS KMS creates a default CMK key.
• This key is used for our first volume encryption, for encryption of snapshots created from this volume, and for subsequent volumes created from these snapshots.
• After that, each newly encrypted volume is encrypted with a unique/separate AES-256-bit encryption key.
• This key is used to encrypt the volume, its snapshots and any volumes created from those snapshots.

Changing Encryption Key
• We cannot change the encryption (CMK) key used to encrypt an existing encrypted snapshot or encrypted EBS volume.
• If we want to change the key, create a copy of the snapshot and specify, during the copy process, that we want to re-encrypt the copy with a different key.
• This comes in handy when we have a snapshot that was encrypted using our default CMK key and we want to change the key in order to be able to share the snapshot with other accounts.
Sharing EBS Snapshot
By default, only the account owner can create volumes from the account's snapshots.

We can share our unencrypted snapshots with the AWS community by making them public.

We can also share our unencrypted snapshots with selected AWS accounts by keeping them private and then selecting the AWS accounts to share with (see the sketch below).

We cannot make our encrypted snapshots public.

We cannot make a snapshot of an encrypted EBS volume public on AWS.

We can share our encrypted snapshot with specific AWS accounts as follows:
• Make sure that we use a non-default/custom CMK key to encrypt the snapshot, not the default CMK key (AWS will not allow the sharing if the default CMK key is used).
• Configure cross-account permissions in order to give the accounts with which we want to share the snapshot access to the custom CMK key used to encrypt the snapshot.
• Without this, the other accounts will not be able to copy the snapshot, nor will they be able to create volumes from it.
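A minimal sketch of sharing a snapshot with one specific account (the IDs are placeholders); for an encrypted snapshot the target account must also be granted access to the custom CMK in KMS:

import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Grant account 111122223333 permission to create volumes from the snapshot.
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["111122223333"],
)

# Making an *unencrypted* snapshot public instead would add the "all" group:
#   ec2.modify_snapshot_attribute(SnapshotId="snap-0123456789abcdef0",
#                                 Attribute="createVolumePermission",
#                                 OperationType="add", GroupNames=["all"])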
Sharing EBS Snapshot (Contd...)
AWS will not allow us to share snapshots encrypted using our default CMK key.

For the AWS accounts with which an encrypted snapshot is shared:
• They must first create their own copies of the snapshot.
• Then they use that copy to restore/create an EBS volume.

We can only make a copy of a snapshot when it has been fully saved to S3 (its status shows as complete) and not while the snapshot is in pending status (when data blocks are being moved to S3).

Amazon S3 Server-Side Encryption (SSE) protects the snapshot data in transit while copying.

We can have up to 5 snapshot copy requests running to a single destination per account.
Thanks!
Any questions?
[email protected]
@Technical Guftgu