s3 User Guide PDF
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Amazon Simple Storage Service Console User Guide
Table of Contents
Welcome to the Amazon S3 Console User Guide .................................................................................... 1
Changing the Console Language .......................................................................................................... 2
Creating and Configuring a Bucket ....................................................................................................... 3
Creating a Bucket ....................................................................................................................... 3
More Info .......................................................................................................................... 8
Deleting a Bucket ....................................................................................................................... 8
More Info ........................................................................................................................ 10
Emptying a Bucket ................................................................................................................... 10
Viewing Bucket Properties ......................................................................................................... 11
Enabling or Disabling Versioning ................................................................................................ 12
Enabling Default Encryption ...................................................................................................... 13
More Info ........................................................................................................................ 16
Enabling Server Access Logging ................................................................................................. 16
Enabling Object-Level Logging ................................................................................................... 19
More Info ........................................................................................................................ 21
Configuring Static Website Hosting ............................................................................................ 21
Redirecting Website Requests .................................................................................................... 25
Advanced Settings .................................................................................................................... 26
Setting Up a Destination for Event Notifications ................................................................... 27
Enabling and Configuring Event Notifications ...................................................................... 28
Enabling Transfer Acceleration ........................................................................................... 33
Access Points ................................................................................................................................... 35
Creating an Amazon S3 Access Point .......................................................................................... 35
Managing and Using Amazon S3 Access Points ............................................................................ 36
Uploading, Downloading, and Managing Objects .................................................................................. 38
Uploading S3 Objects ............................................................................................................... 38
Uploading Files and Folders by Using Drag and Drop ............................................................ 39
Uploading Files by Pointing and Clicking ............................................................................. 44
More Info ........................................................................................................................ 46
Downloading S3 Objects ........................................................................................................... 47
Related Topics ................................................................................................................. 50
Deleting Objects ...................................................................................................................... 50
More Info ........................................................................................................................ 50
Undeleting Objects ................................................................................................................... 51
More Info ........................................................................................................................ 51
Restoring Archived S3 Objects ................................................................................................... 51
Archive Retrieval Options .................................................................................................. 52
Restoring an Archived S3 Object ........................................................................................ 52
Upgrade an In-Progress Restore ......................................................................................... 55
Checking Archive Restore Status and Expiration Date ............................................................ 56
Locking Amazon S3 Objects ...................................................................................................... 57
More Info ........................................................................................................................ 59
Viewing an Overview of an Object ............................................................................................. 59
More Info ........................................................................................................................ 61
Viewing Object Versions ............................................................................................................ 62
More Info ........................................................................................................................ 63
Viewing Object Properties ......................................................................................................... 63
Adding Encryption to an Object ................................................................................................. 65
More Info ........................................................................................................................ 67
Adding Metadata to an Object ................................................................................................... 67
Adding System-Defined Metadata ...................................................................................... 68
Adding User-Defined Metadata .......................................................................................... 70
Adding Tags to an Object .......................................................................................................... 72
More Info ........................................................................................................................ 75
Amazon S3 provides virtually limitless storage on the internet. This guide explains how you can manage
buckets, objects, and folders in Amazon S3 by using the AWS Management Console, a browser-based
graphical user interface for interacting with AWS services.
For detailed conceptual information about how Amazon S3 works, see What Is Amazon S3? in the
Amazon Simple Storage Service Developer Guide. The developer guide also has detailed information about
Amazon S3 features and code examples to support those features.
Changing the Console Language
To change the console language
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Scroll down until you see the bar at the bottom of the window, and then choose the language on the
left side of the bar.
Creating a Bucket
Every object you store in Amazon S3 resides in a bucket. You can use buckets to group related objects in
the same way that you use a directory to group files in a file system.
Amazon S3 creates buckets in the AWS Region that you specify. You can choose any AWS Region that is
geographically close to you to optimize latency, minimize costs, or address regulatory requirements. For
example, if you reside in Europe, you might find it advantageous to create buckets in the EU (Ireland) or
EU (Frankfurt) regions. For a list of Amazon S3 AWS Regions, see Regions and Endpoints in the Amazon
Web Services General Reference.
You are not charged for creating a bucket. You are only charged for storing objects in the bucket and for
transferring objects out of the bucket. For more information about pricing, see Amazon Simple Storage
Service (S3) FAQs.
Amazon S3 bucket names are globally unique, regardless of the AWS Region in which you create the
bucket. You specify the name at the time you create the bucket. For bucket naming guidelines, see
Bucket Restrictions and Limitations in the Amazon Simple Storage Service Developer Guide.
The following topics explain how to use the Amazon S3 console to create, delete, and manage buckets.
Topics
• How Do I Create an S3 Bucket? (p. 3)
• How Do I Delete an S3 Bucket? (p. 8)
• How Do I Empty an S3 Bucket? (p. 10)
• How Do I View the Properties for an S3 Bucket? (p. 11)
• How Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12)
• How Do I Enable Default Encryption for an Amazon S3 Bucket? (p. 13)
• How Do I Enable Server Access Logging for an S3 Bucket? (p. 16)
• How Do I Enable Object-Level Logging for an S3 Bucket with AWS CloudTrail Data Events? (p. 19)
• How Do I Configure an S3 Bucket for Static Website Hosting? (p. 21)
• How Do I Redirect Requests to an S3 Bucket Hosted Website to Another Host? (p. 25)
• Advanced Settings for S3 Bucket Properties (p. 26)
A bucket is owned by the AWS account that created it. By default, you can create up to 100 buckets in
each of your AWS accounts. If you need additional buckets, you can increase your account bucket limit
to a maximum of 1,000 buckets by submitting a service limit increase. For information about how to
increase your bucket limit, see AWS Service Limits in the AWS General Reference.
Buckets have configuration properties, including their AWS Region, access settings for the objects they contain, and other metadata.
To create an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. Choose Create bucket.
3. On the Name and region page, enter a name for your bucket and choose the AWS Region where you
want the bucket to reside. Complete the fields on this page as follows:
a. For Bucket name, enter a unique DNS-compliant name for your new bucket. Follow these
naming guidelines:
• The name must be unique across all existing bucket names in Amazon S3.
• The name must not contain uppercase characters.
• The name must start with a lowercase letter or number.
• The name must be between 3 and 63 characters long.
• After you create the bucket, you cannot change the name, so choose wisely.
• Choose a bucket name that reflects the objects in the bucket, because the bucket name is visible in the URL that points to those objects.
For information about naming buckets, see Rules for Bucket Naming in the Amazon Simple
Storage Service Developer Guide.
b. For Region, choose the AWS Region where you want the bucket to reside. Choose a Region close
to you to minimize latency and costs, or to address regulatory requirements. Objects stored in
a Region never leave that Region unless you explicitly transfer them to another Region. For a
list of Amazon S3 AWS Regions, see Regions and Endpoints in the Amazon Web Services General
Reference.
c. (Optional) If you have already set up a bucket that has the same settings that you want to use
for the new bucket that you want to create, you can set it up quickly by choosing Copy settings
from an existing bucket, and then choosing the bucket whose settings you want to copy.
The settings for the following bucket properties are copied: versioning, tags, and logging.
d. Do one of the following:
• If you copied settings from another bucket, choose Create. You're done, so skip the following
steps.
• If not, choose Next.
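As a quick check outside the console, the naming rules in step 3a can be sketched as a small validator. This is an illustrative sketch of only the rules listed above, not an official AWS tool; the full rule set in Bucket Restrictions and Limitations has additional constraints (for example, names formatted as IP addresses are not allowed).

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Rough check of the console's bucket naming rules (illustrative only).

    - The name must be between 3 and 63 characters long.
    - The name must not contain uppercase characters.
    - The name must start with a lowercase letter or number.
    """
    if not 3 <= len(name) <= 63:
        return False
    # Lowercase letters, numbers, hyphens, and periods; must start with a
    # letter or number. This is a simplification of the full AWS rules.
    return re.fullmatch(r"[a-z0-9][a-z0-9.\-]*", name) is not None

print(is_valid_bucket_name("my-example-bucket"))  # True
print(is_valid_bucket_name("My_Bucket"))          # False (uppercase character)
```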
4. On the Configure options page, you can configure the following properties and Amazon
CloudWatch metrics for the bucket. Or, you can configure these properties and CloudWatch metrics
later, after you create the bucket.
a. Versioning
To enable object versioning for the bucket, select Keep all versions of an object in the same
bucket.
For more information on enabling versioning, see How Do I Enable or Suspend Versioning for an
S3 Bucket? (p. 12).
b. Server access logging
To enable server access logging on the bucket, select Log requests for access to your bucket.
Server access logging provides detailed records for the requests that are made to your bucket.
For more information about enabling server access logging, see How Do I Enable Server Access
Logging for an S3 Bucket? (p. 16).
c. Tags
To add a cost allocation bucket tag, enter a Key and a Value. Choose Add another to add
another tag.
You can use cost allocation bucket tags to annotate billing for your use of a bucket. Each tag is
a key-value pair that represents a label that you assign to a bucket. For more information about
cost allocation tags, see Using Cost Allocation S3 Bucket Tags in the Amazon Simple Storage
Service Developer Guide.
d. Object-level logging
To enable object-level logging with CloudTrail, select Record object-level API activity by
using CloudTrail for an additional cost. For more information about enabling object-level
logging, see How Do I Enable Object-Level Logging for an S3 Bucket with AWS CloudTrail Data
Events? (p. 19).
e. Default encryption
To enable default encryption for the bucket, select Automatically encrypt objects when they
are stored in S3.
You can enable default encryption for a bucket so that all objects are encrypted when they are
stored in the bucket. For more information about enabling default encryption, see How Do I
Enable Default Encryption for an Amazon S3 Bucket? (p. 13).
f. Object lock
If you want to be able to lock objects in the bucket, select Permanently allow objects in this
bucket to be locked.
Object lock requires that you enable versioning on the bucket. For more information about
object locking, see Introduction to Amazon S3 Object Lock in the Amazon Simple Storage Service
Developer Guide.
g. CloudWatch request metrics
To configure CloudWatch request metrics for the bucket, select Monitor requests in your
bucket for an additional cost.
For more information about CloudWatch request metrics, see How Do I Configure Request
Metrics for an S3 Bucket? (p. 109).
5. Choose Next.
6. On the Set permissions page, you can manage the permissions for the bucket that you are creating.
Under Block public access (bucket settings), we recommend that you do not change the default
settings that are listed under Block all public access. You can change the permissions after you
create the bucket. For more information about setting bucket permissions, see How Do I Set ACL
Bucket Permissions? (p. 124). If you intend to use the bucket to host a static website, you can edit
the block public access settings after you create it. For more information, see How Do I Configure an
S3 Bucket for Static Website Hosting? (p. 21)
Warning
We highly recommend that you keep the default access settings for blocking public access
to the bucket that you are creating. Public access means that anyone in the world can access
the objects in the bucket.
If you intend to use the bucket to store Amazon S3 server access logs, in the Manage system
permissions list, choose Grant Amazon S3 Log Delivery group write access to this bucket. For
more information about server access logs, see How Do I Enable Server Access Logging for an S3
Bucket? (p. 16).
More Info
• How Do I Delete an S3 Bucket? (p. 8)
• How Do I Set ACL Bucket Permissions? (p. 124)
Deleting a Bucket
When you delete a bucket that has versioning enabled, all versions of all the objects in the bucket are permanently deleted. For more information about versioning, see Managing Objects in a Versioning-Enabled Bucket in the Amazon Simple Storage Service Developer Guide.
Before you delete a bucket, consider the following:
• Bucket names are unique. If you delete a bucket, another AWS user can use the name.
• When you delete a bucket that contains objects, all the objects in the bucket are permanently deleted,
including objects that transitioned to the Amazon S3 Glacier storage class.
• If the bucket hosts a static website, and you created and configured an Amazon Route 53 hosted zone
as described in Create and Configure Amazon Route 53 Hosted Zone: You must clean up the Route 53
hosted zone settings that are related to the bucket as described in Delete the Route 53 Hosted Zone.
• If the bucket receives log data from Elastic Load Balancing (ELB): We recommend that you stop the
delivery of ELB logs to the bucket before deleting it. After you delete the bucket, if another user
creates a bucket using the same name, your log data could potentially be delivered to that bucket. For
information about ELB access logs, see Access Logs in the User Guide for Classic Load Balancers and
Access Logs in the User Guide for Application Load Balancers.
Important
If you want to continue to use the same bucket name, don't delete the bucket. We recommend
that you empty the bucket and keep it. After a bucket is deleted, the name becomes available
for reuse, but you might not be able to reclaim it. For example, it might take some time before
the name becomes available again, and another account could create a bucket with that name
before you do.
To delete an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the bucket icon next to the name of the bucket that you want to
delete and then choose Delete bucket.
3. In the Delete bucket dialog box, type the name of the bucket that you want to delete for
confirmation, and then choose Confirm.
Note
The text in the dialog box changes depending on whether the bucket is empty, is used for a
static website, or is used for ELB access logs.
More Info
• How Do I Empty an S3 Bucket? (p. 10)
• How Do I Delete Objects from an S3 Bucket? (p. 50)
To empty an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to empty and then choose
Empty.
3. In the Empty bucket dialog box, type the name of the bucket you want to empty for confirmation
and then choose Confirm.
Viewing Bucket Properties
To view the properties for an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to view the properties for.
3. Choose Properties.
4. On the Properties page, you can configure the following properties for the bucket.
a. Versioning – Versioning enables you to keep multiple versions of an object in one bucket. By
default, versioning is disabled for a new bucket. For information about enabling versioning, see
How Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12).
b. Server access logging – Server access logging provides detailed records for the requests
that are made to your bucket. By default, Amazon S3 does not collect server access logs. For
information about enabling server access logging, see How Do I Enable Server Access Logging
for an S3 Bucket? (p. 16).
c. Static website hosting – You can host a static website on Amazon S3. To enable static website
hosting, choose Static website hosting and then specify the settings you want to use. For more
information, see How Do I Configure an S3 Bucket for Static Website Hosting? (p. 21).
d. Object-level logging – Object-level logging records object-level API activity by using CloudTrail
data events. For information about enabling object-level logging, see How Do I Enable Object-
Level Logging for an S3 Bucket with AWS CloudTrail Data Events? (p. 19).
e. Tags – With AWS cost allocation, you can use bucket tags to annotate billing for your use of a
bucket. A tag is a key-value pair that represents a label that you assign to a bucket. To add tags,
choose Tags, and then choose Add tag. For more information, see Using Cost Allocation Tags
for S3 Buckets in the Amazon Simple Storage Service Developer Guide.
f. Transfer acceleration – Amazon S3 Transfer Acceleration enables fast, easy, and secure
transfers of files over long distances between your client and an S3 bucket. For information
about enabling transfer acceleration, see How Do I Enable Transfer Acceleration for an S3
Bucket? (p. 33).
g. Events – You can enable certain Amazon S3 bucket events to send a notification message to
a destination whenever the events occur. To enable events, choose Events and then specify
the settings you want to use. For more information, see How Do I Enable and Configure Event
Notifications for an S3 Bucket? (p. 28).
h. Requester Pays – You can enable Requester Pays so that the requester (instead of the bucket
owner) pays for requests and data transfers. For more information, see Requester Pays Buckets
in the Amazon Simple Storage Service Developer Guide.
Enabling or Disabling Versioning
To enable or suspend versioning for an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to enable versioning for.
3. Choose Properties.
4. Choose Versioning.
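Outside the console, this setting maps to the PutBucketVersioning API. The following is a minimal sketch of the request payload, shown as a plain Python dict; actually applying it (for example, with boto3's put_bucket_versioning) would require AWS credentials, so this only builds the structure.

```python
def versioning_configuration(enabled: bool) -> dict:
    """Build a PutBucketVersioning payload.

    Versioning status is either "Enabled" or "Suspended"; once versioning
    is enabled on a bucket, it can be suspended but never fully removed.
    """
    return {"Status": "Enabled" if enabled else "Suspended"}

# With boto3 this would be sent as, for example:
#   s3.put_bucket_versioning(Bucket="my-bucket",
#                            VersioningConfiguration=versioning_configuration(True))
print(versioning_configuration(True))  # {'Status': 'Enabled'}
```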
Enabling Default Encryption
When you use server-side encryption, Amazon S3 encrypts an object before saving it to disk in its data centers and decrypts it when you download the object. For more information about protecting data using server-side encryption and encryption key management, see Protecting Data Using Server-Side Encryption in the Amazon Simple Storage Service Developer Guide.
Default encryption works with all existing and new Amazon S3 buckets. Without default encryption, to
encrypt all objects stored in a bucket, you must include encryption information with every object storage
request. You must also set up an Amazon S3 bucket policy to reject storage requests that don't include
encryption information.
There are no new charges for using default encryption for S3 buckets. Requests to configure the default
encryption feature incur standard Amazon S3 request charges. For information about pricing, see
Amazon S3 Pricing. For SSE-KMS CMK storage, AWS KMS charges apply and are listed at AWS KMS
Pricing.
To enable default encryption for an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want.
3. Choose Properties.
5. If you want to use keys that are managed by Amazon S3 for default encryption, choose AES-256,
and choose Save.
For more information about using Amazon S3 server-side encryption to encrypt your data, see
Protecting Data with Amazon S3-Managed Encryption Keys in the Amazon Simple Storage Service
Developer Guide.
Important
You might need to update your bucket policy when enabling default encryption. For more
information, see Moving to Default Encryption from Using Bucket Policies for Encryption
Enforcement in the Amazon Simple Storage Service Developer Guide.
6. If you want to use CMKs that are stored in AWS KMS for default encryption, follow these steps:
a. Choose AWS-KMS.
b. To choose a customer-managed AWS KMS CMK that you have created, use one of these
methods:
Important
When you use an AWS KMS CMK for server-side encryption in Amazon S3, you
must choose a symmetric CMK. Amazon S3 only supports symmetric CMKs and not
asymmetric CMKs. For more information, see Using Symmetric and Asymmetric Keys in
the AWS Key Management Service Developer Guide.
Important
If you use the AWS KMS option for your default encryption configuration, you are subject
to the RPS (requests per second) limits of AWS KMS. For more information about AWS KMS
limits and how to request a limit increase, see AWS KMS limits.
For more information about creating an AWS KMS CMK, see Creating Keys in the AWS Key
Management Service Developer Guide. For more information about using AWS KMS with Amazon S3,
see Protecting Data with Keys Stored in AWS KMS in the Amazon Simple Storage Service Developer
Guide.
7. Choose Save.
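The console steps above correspond to the PutBucketEncryption API. The following sketch builds the two payload shapes as plain dicts (the KMS key ID is a placeholder); actually applying either one, for example with boto3's put_bucket_encryption, would require AWS credentials.

```python
from typing import Optional

def default_encryption_config(kms_key_id: Optional[str] = None) -> dict:
    """Build a PutBucketEncryption payload.

    With no key ID, objects use Amazon S3-managed keys (SSE-S3, AES-256);
    with a key ID, they use the given AWS KMS CMK (SSE-KMS).
    """
    if kms_key_id is None:
        rule = {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
    else:
        rule = {"ApplyServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms",
            "KMSMasterKeyID": kms_key_id,  # placeholder key ARN or key ID
        }}
    return {"Rules": [rule]}

print(default_encryption_config()["Rules"][0]
      ["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])  # AES256
```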
More Info
• Amazon S3 Default Encryption for S3 Buckets in the Amazon Simple Storage Service Developer Guide
• How Do I Add Encryption to an S3 Object? (p. 65)
Enabling Server Access Logging
By default, Amazon Simple Storage Service (Amazon S3) doesn't collect server access logs. When you enable logging, Amazon S3 delivers access logs for a source bucket to a target bucket that you choose. The target bucket must be in the same AWS Region as the source bucket and must not have a default retention period configuration.
Server access logging provides detailed records for the requests that are made to an S3 bucket. Server
access logs are useful for many applications. For example, access log information can be useful in
security and access audits. It can also help you learn about your customer base and understand your
Amazon S3 bill.
An access log record contains details about the requests that are made to a bucket. This information
can include the request type, the resources that are specified in the request, and the time and date that
the request was processed. For more information, see Server Access Log Format in the Amazon Simple
Storage Service Developer Guide.
Important
There is no extra charge for enabling server access logging on an Amazon S3 bucket. However,
any log files that the system delivers to you will accrue the usual charges for storage. (You can
delete the log files at any time.) We do not assess data transfer charges for log file delivery, but
we do charge the normal data transfer rate for accessing the log files.
To enable server access logging for an S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to enable server access
logging for.
3. Choose Properties.
5. Choose Enable Logging. For Target, choose the name of the bucket that you want to receive the log records. The target bucket must be in the same Region as the source bucket and must not have a default retention period configuration.
6. (Optional) For Target prefix, type a key name prefix for log objects, so that all of the log object
names begin with the same string.
7. Choose Save.
You can view the logs in the target bucket. If you specified a prefix, the prefix shows as a folder in
the target bucket in the console. After you enable server access logging, it might take a few hours
before the logs are delivered to the target bucket. For more information about how and when logs
are delivered, see Server Access Logging in the Amazon Simple Storage Service Developer Guide.
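For reference, the same configuration maps to the PutBucketLogging API. A sketch of the payload as a plain dict (bucket name and prefix are placeholders; sending it with boto3's put_bucket_logging would require credentials and the log-delivery permissions described above):

```python
def logging_configuration(target_bucket: str, target_prefix: str = "") -> dict:
    """Build a PutBucketLogging payload (BucketLoggingStatus).

    The target bucket must be in the same Region as the source bucket.
    """
    return {"LoggingEnabled": {
        "TargetBucket": target_bucket,
        "TargetPrefix": target_prefix,  # e.g. "logs/" groups logs under one prefix
    }}

print(logging_configuration("my-log-bucket", "logs/"))
```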
Enabling Object-Level Logging
To configure a trail to log data events for an S3 bucket, you can use either the AWS CloudTrail console or the Amazon S3 console. If you are configuring a trail to log data events for all the Amazon S3 buckets in your AWS account, it's easier to use the CloudTrail console. For information about using the CloudTrail console to configure a trail to log S3 data events, see Data Events in the AWS CloudTrail User Guide.
The following procedure shows how to use the Amazon S3 console to enable a CloudTrail trail to log
data events for an S3 bucket.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket.
3. Choose Properties.
The trail you select must be in the same AWS Region as your bucket, so the drop-down list contains
only trails that are in the same Region as the bucket or trails that were created for all Regions.
If you need to create a trail, choose the CloudTrail console link to go to the CloudTrail console.
For information about how to create trails in the CloudTrail console, see Creating a Trail with the
Console in the AWS CloudTrail User Guide.
Choose which types of events to log:
• Read to specify that you want CloudTrail to log Amazon S3 read APIs such as GetObject.
• Write to log Amazon S3 write APIs such as PutObject.
• Read and Write to log both read and write object APIs.
For a list of supported data events that CloudTrail logs for Amazon S3 objects, see Amazon S3
Object-Level Actions Tracked by CloudTrail Logging in the Amazon Simple Storage Service Developer
Guide.
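Behind the console, this maps to CloudTrail's PutEventSelectors API. The following sketch builds an event selector that logs S3 object-level data events for one bucket (names are placeholders; applying it with a real trail would use CloudTrail's put_event_selectors and requires credentials):

```python
def s3_data_event_selector(bucket_name: str, read_write_type: str = "All") -> dict:
    """Build a CloudTrail event selector for S3 object-level data events.

    read_write_type is "ReadOnly", "WriteOnly", or "All", matching the
    console's Read / Write / Read and Write choices.
    """
    return {
        "ReadWriteType": read_write_type,
        "IncludeManagementEvents": False,
        "DataResources": [{
            "Type": "AWS::S3::Object",
            # Trailing slash selects all objects in the bucket.
            "Values": [f"arn:aws:s3:::{bucket_name}/"],
        }],
    }

print(s3_data_event_selector("my-bucket")["DataResources"][0]["Values"])
```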
To disable object-level logging for the bucket, you must go to the CloudTrail console and remove the
bucket name from the trail's Data events.
Note
If you use the CloudTrail console or the Amazon S3 console to configure a trail to log data
events for an S3 bucket, the Amazon S3 console shows that object-level logging is enabled
for the bucket.
For information about enabling object-level logging when you create an S3 bucket, see How Do I Create
an S3 Bucket? (p. 3).
More Info
• How Do I View the Properties for an S3 Bucket? (p. 11)
• Logging Amazon S3 API Calls By Using AWS CloudTrail in the Amazon Simple Storage Service Developer
Guide
• Working with CloudTrail Log Files in the AWS CloudTrail User Guide
Configuring Static Website Hosting
The following is a quick procedure to configure an Amazon S3 bucket for static website hosting in the Amazon S3 console. If you're looking for more in-depth information, or for walkthroughs on using a custom domain name for your static website or speeding up your website, see Hosting a Static Website on Amazon S3 in the Amazon Simple Storage Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to enable static website
hosting for.
3. Choose Properties.
After you enable your bucket for static website hosting, web browsers can access all of your content
through the Amazon S3 website endpoint for your bucket.
a. In Index document, enter the name of the index document, which is typically index.html.
When you configure a bucket for website hosting, you must specify an index document. Amazon
S3 returns this index document when requests are made to the root domain or any of the
subfolders. For more information, see Configuring a Bucket for Website Hosting in the Amazon
Simple Storage Service Developer Guide.
b. (Optional) For 4XX class errors, you can provide your own custom error document that gives additional guidance to your users.
For Error Document, enter the name of the file that contains the custom error document. If an
error occurs, Amazon S3 returns an HTML error document. For more information, see Custom
Error Document Support in the Amazon Simple Storage Service Developer Guide.
c. (Optional) If you want to specify advanced redirection rules, in the Edit redirection rules text
area, use XML to describe the rules.
For example, you can conditionally route requests according to specific object key names or
prefixes in the request. For more information, see Configuring a Bucket for Website Hosting in
the Amazon Simple Storage Service Developer Guide.
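Taken together, steps a through c define the bucket's website configuration. A sketch of the equivalent payload, in the shape accepted by the S3 PutBucketWebsite API (the error document name and the routing rule are hypothetical examples):

```python
# The website configuration for a bucket: a required index document,
# an optional error document, and optional redirection rules.
website_configuration = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},  # hypothetical custom 4XX page
    "RoutingRules": [
        {
            # Hypothetical rule: requests for docs/ are served from documents/
            "Condition": {"KeyPrefixEquals": "docs/"},
            "Redirect": {"ReplaceKeyPrefixWith": "documents/"},
        }
    ],
}
```

With boto3, this would be applied with something like `s3.put_bucket_website(Bucket="example-bucket", WebsiteConfiguration=website_configuration)`; the console's Edit redirection rules text area expresses the same routing rules in XML.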
6. Choose Save.
7. To disable block public access for the bucket, follow these steps:
8. To grant public read access for your website, add a bucket policy to the website bucket that grants
public read access to the bucket.
a. Copy the following bucket policy, and paste it in the Bucket policy editor.
b. Update the Resource to include your bucket name.
In the following example, example-bucket is the bucket name. To use this bucket policy with your own bucket, you must update this name to match your bucket name.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
c. Choose Save.
For information about adding a bucket policy, see How Do I Add an S3 Bucket Policy? (p. 127). For
more information about website permissions, see Permissions Required for Website Access in the
Amazon Simple Storage Service Developer Guide.
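Rather than hand-editing the Resource element, the same policy can be generated for any bucket name. A minimal sketch, using a placeholder bucket name:

```python
import json

def public_read_policy(bucket_name):
    """Return a bucket policy, as a JSON string, that grants everyone
    read access to the objects in bucket_name."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": ["s3:GetObject"],
                # The /* suffix grants access to the objects, not the bucket itself.
                "Resource": [f"arn:aws:s3:::{bucket_name}/*"],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(public_read_policy("example-bucket"))
```

The resulting string can be pasted into the Bucket policy editor, or applied with boto3's `s3.put_bucket_policy(Bucket=..., Policy=...)`.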
Note
If you choose Disable website hosting, Amazon S3 removes the website configuration from
the bucket, so that the bucket is no longer accessible from the website endpoint. However, the
bucket is still available at the REST endpoint. For a list of Amazon S3 endpoints, see Amazon S3
Regions and Endpoints in the Amazon Web Services General Reference.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to redirect all requests from.
3. Choose Properties.
a. For Target bucket or domain, type the name of the bucket or the domain name where you want requests to be redirected. To redirect requests to another bucket, type the name of the target bucket. To redirect requests to a root domain address, type the domain name, for example, www.example.com. For more information, see Configure a Bucket for Website Hosting in the Amazon Simple Storage Service Developer Guide.
b. For Protocol, type the protocol (http, https) for the redirected requests. If no protocol is
specified, the protocol of the original request is used. If you redirect all requests, any request
made to the bucket's website endpoint will be redirected to the specified host name.
6. Choose Save.
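The Target bucket or domain and Protocol fields correspond to the RedirectAllRequestsTo element of the bucket's website configuration (the shape accepted by the S3 PutBucketWebsite API). A sketch of that payload, with a placeholder hostname:

```python
# Website configuration that redirects every request for the bucket's
# website endpoint to another host.
redirect_all_configuration = {
    "RedirectAllRequestsTo": {
        "HostName": "www.example.com",  # target bucket name or domain
        "Protocol": "https",  # optional; omit to keep the original request's protocol
    }
}
```

Note that RedirectAllRequestsTo replaces the rest of the website configuration; a bucket either redirects everything or serves an index document, not both.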
Topics
• How Do I Set Up a Destination to Receive Event Notifications? (p. 27)
• How Do I Enable and Configure Event Notifications for an S3 Bucket? (p. 28)
• How Do I Enable Transfer Acceleration for an S3 Bucket? (p. 33)
Setting Up a Destination for Event Notifications
Amazon Simple Notification Service (Amazon SNS) is a web service that coordinates and manages
the delivery or sending of messages to subscribing endpoints or clients. You can use the Amazon
SNS console to create an Amazon SNS topic that your notifications can be sent to. The Amazon
SNS topic must be in the same region as your Amazon S3 bucket. For information about creating an
Amazon SNS topic, see Getting Started in the Amazon Simple Notification Service Developer Guide.
Before you can use the Amazon SNS topic that you create as an event notification destination, you
need the following:
• The Amazon Resource Name (ARN) for the Amazon SNS topic
• A valid Amazon SNS topic subscription (the topic subscribers are notified when a message is
published to your Amazon SNS topic)
• A permissions policy that you set up in the Amazon SNS console (as shown in the following
example)
{
  "Version": "2012-10-17",
  "Id": "__example_policy_ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SNS:Publish",
      "Resource": "arn:aws:sns:region:account-number:topic-name",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:s3:::bucket-name"
        }
      }
    }
  ]
}
You can use the Amazon SQS console to create an Amazon SQS queue that your notifications can
be sent to. The Amazon SQS queue must be in the same region as your Amazon S3 bucket. For
information about creating an Amazon SQS queue, see Getting Started with Amazon SQS in the
Amazon Simple Queue Service Developer Guide.
Before you can use the Amazon SQS queue as an event notification destination, you need the
following:
• The Amazon Resource Name (ARN) for the Amazon SQS queue
• A permissions policy that you set up in the Amazon SQS console (as shown in the following
example)
{
  "Version": "2012-10-17",
  "Id": "__example_policy_ID",
  "Statement": [
    {
      "Sid": "example-statement-ID",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:region:account-number:queue-name",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:s3:::bucket-name"
        }
      }
    }
  ]
}
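The SNS and SQS policies shown above differ only in the action and the resource ARN, so one helper can produce either. A sketch with placeholder ARNs (note that the SQS example's SQS:* action is broader than what Amazon S3 strictly requires):

```python
def s3_destination_policy(action, resource_arn, bucket_arn):
    """Build a permissions policy that lets Amazon S3 (acting for the
    bucket identified by bucket_arn) publish to the SNS topic or send
    to the SQS queue identified by resource_arn."""
    return {
        "Version": "2012-10-17",
        "Id": "__example_policy_ID",
        "Statement": [
            {
                "Sid": "example-statement-ID",
                "Effect": "Allow",
                "Principal": "*",
                "Action": action,
                "Resource": resource_arn,
                # Restrict the grant to requests originating from this bucket.
                "Condition": {"ArnEquals": {"aws:SourceArn": bucket_arn}},
            }
        ],
    }

sns_policy = s3_destination_policy(
    "SNS:Publish",
    "arn:aws:sns:us-east-1:123456789012:topic-name",  # placeholder ARN
    "arn:aws:s3:::bucket-name",
)
```

The returned dictionary would be serialized to JSON and attached to the topic or queue in the respective console.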
A Lambda function
You can use the AWS Lambda console to create a Lambda function. The Lambda function must be in
the same region as your S3 bucket. For information about creating a Lambda function, see the AWS
Lambda Developer Guide.
Before you can use a Lambda function as an event notification destination, you must have the name or the ARN of the Lambda function.
For information about using Lambda with Amazon S3, see Using AWS Lambda with Amazon S3 in
the AWS Lambda Developer Guide.
Topics
• Amazon S3 Event Notification Types and Destinations (p. 28)
• Enabling and Configuring Event Notifications (p. 29)
• More Info (p. 33)
• An object created event – You choose ObjectCreated (All) when configuring your events in the
console to enable notifications for anytime an object is created in your bucket. Or, you can select one
or more of the specific object-creation actions to trigger event notifications. These actions are Put,
Post, Copy, and CompleteMultiPartUpload.
• Object delete events – You select ObjectDelete (All) when configuring your events in the console to enable notifications for anytime an object is deleted. Or, you can select Delete to trigger event notifications when an object is permanently deleted.
• Restore object events – You can configure events in the console to enable notifications for the restoration of objects stored in the GLACIER storage class. Select Restore from Glacier initiated to be notified when a restore is initiated. Select Restore from Glacier completed to be notified when restoration of an object is complete.
• Reduced Redundancy Storage (RRS) object lost events – You select RRSObjectLost to be notified
when Amazon S3 detects that an object of the RRS storage class has been lost.
• Replication events – You can choose to receive replication event notifications if you have replication
with S3 Replication Time Control (S3 RTC) enabled. For more information, see all the replication events
and their descriptions in the Supported Event Types section in the Amazon Simple Storage Service
Developer Guide.
• An Amazon Simple Notification Service (Amazon SNS) topic – A web service that coordinates and
manages the delivery or sending of messages to subscribing endpoints or clients.
• An Amazon Simple Queue Service (Amazon SQS) queue – Offers reliable and scalable hosted queues
for storing messages as they travel between computers.
• A Lambda function – AWS Lambda is a compute service where you can upload your code and the
service can run the code on your behalf using the AWS infrastructure. You package up and upload your
custom code to AWS Lambda when you create a Lambda function.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to enable events for.
3. Choose Properties.
6. In Name, enter a descriptive name for your event configuration. If you don't enter a name, a GUID is
generated and used for the name.
7. Under Events, select one or more of the types of event occurrences that you want to receive
notifications for. When the event occurs, a notification is sent to a destination that you choose
in Step 9. For a description of the event types, see Amazon S3 Event Notification Types and
Destinations (p. 28).
For information about deleting versioned objects, see Deleting Object Versions. For information
about object versioning, see Object Versioning and Using Versioning.
Note
When you delete the last object from a folder, Amazon S3 can generate an object creation
event. The Amazon S3 console displays a folder under the following circumstances:
• When a zero-byte object has a trailing slash (/) in its name (in this case there is an actual
Amazon S3 object of 0 bytes that represents a folder).
• If the object has a slash (/) within its name (in this case there isn't an actual object
representing the folder).
When there are multiple objects with the same prefix with a trailing slash (/) as part of their
names, those objects are shown as being part of a folder. The name of the folder is formed
from the characters preceding the trailing slash (/). When you delete all the objects listed
under that folder, no actual object is available to represent the empty folder. Under such
circumstances, the Amazon S3 console creates a zero-byte object to represent that folder.
If you enabled event notification for the creation of objects, the zero-byte object creation
action that is taken by the console triggers an object creation event.
8. Enter an object name Prefix or a Suffix to filter the event notifications by the prefix or suffix. For
example, you can set up a filter so that you are sent a notification only when files are added to
an image folder (for example, objects with the name prefix images/). For more information, see
Configuring Notifications with Object Key Name Filtering.
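The event types (step 7) and the prefix or suffix filter (step 8) together form one entry in the bucket's notification configuration. A sketch of that payload, in the shape used by the S3 PutBucketNotificationConfiguration API, with a placeholder topic ARN and the images/ prefix from the example:

```python
# One SNS-destination entry in a bucket's notification configuration:
# notify on any object-creation event, but only for keys under images/.
notification_configuration = {
    "TopicConfigurations": [
        {
            "Id": "image-created-events",  # optional descriptive name
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:topic-name",  # placeholder
            "Events": ["s3:ObjectCreated:*"],  # ObjectCreated (All) in the console
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "images/"},
                    ]
                }
            },
        }
    ]
}
```

SQS destinations use a QueueConfigurations list and Lambda destinations a LambdaFunctionConfigurations list with the same Events and Filter structure.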
9. Choose the type of destination to have the event notifications sent to. For a description of the
destinations, see Amazon S3 Event Notification Types and Destinations (p. 28).
i. In the SNS topic box, enter the name of (or choose from the menu) the Amazon SNS topic
that will receive notifications from Amazon S3. For information about the Amazon SNS
topic format, see SNS FAQ.
ii. (Optional) You can also select Add SNS topic ARN from the menu and enter the ARN of the
SNS topic in SNS topic ARN.
i. In SQS queue, enter the name of (or choose from the menu) the Amazon SQS queue that you want to receive notifications from Amazon S3. For information about Amazon SQS, see What is Amazon Simple Queue Service? in the Amazon Simple Queue Service Developer Guide.
ii. (Optional) You can also choose Add SQS queue ARN from the menu and enter the ARN of the SQS queue in SQS queue ARN.
c. If you choose the Lambda Function destination type, do the following:
i. In Lambda Function, enter or choose the name of the Lambda function that you want to
receive notifications from Amazon S3.
ii. If you don't have any Lambda functions in the Region that contains your bucket, you are
prompted to enter a Lambda function ARN. In Lambda Function ARN, enter the ARN of the
Lambda function that you want to receive notifications from Amazon S3.
iii. (Optional) You can also choose Add Lambda function ARN from the menu and enter the
ARN of the Lambda function in Lambda function ARN.
For information about using Lambda with Amazon S3, see Using AWS Lambda with Amazon S3
in the AWS Lambda Developer Guide.
10. Choose Save. Amazon S3 sends a test message to the event notification destination.
More Info
• How Do I Restore an S3 Object That Has Been Archived? (p. 51)
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to enable transfer
acceleration for.
3. Choose Properties.
Endpoint displays the endpoint domain name that you use to access accelerated data transfers to
and from the bucket that is enabled for transfer acceleration. If you suspend transfer acceleration,
the accelerate endpoint no longer works.
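The accelerate endpoint follows a fixed naming pattern, which can be sketched as:

```python
def accelerate_endpoint(bucket_name, dualstack=False):
    """Return the transfer acceleration endpoint for a bucket.
    The dual-stack form also supports IPv6."""
    suffix = ("s3-accelerate.dualstack.amazonaws.com" if dualstack
              else "s3-accelerate.amazonaws.com")
    return f"{bucket_name}.{suffix}"

assert accelerate_endpoint("example-bucket") == "example-bucket.s3-accelerate.amazonaws.com"
```

Requests sent to this endpoint are routed over optimized network paths; requests to the bucket's regular endpoints continue to work unchanged.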
6. (Optional) If you want to run the Amazon S3 Transfer Acceleration Speed Comparison tool, which
compares accelerated and non-accelerated upload speeds starting with the Region in which the
transfer acceleration bucket is enabled, choose the Want to compare your data transfer speed
by region? option. The Speed Comparison tool uses multipart uploads to transfer a file from your
browser to various AWS Regions with and without using Amazon S3 transfer acceleration.
More Info
Creating an Amazon S3 Access Point
For more information about Amazon S3 Access Points, see Managing Data Access with Amazon S3 Access
Points in the Amazon Simple Storage Service Developer Guide.
The following topics explain how to use the S3 Management Console to create, manage, and use
Amazon S3 Access Points.
Topics
• Creating an Amazon S3 Access Point (p. 35)
• Managing and Using Amazon S3 Access Points (p. 36)
An access point is associated with exactly one Amazon S3 bucket. Before you begin, make sure that you
have created a bucket that you want to use with this access point. For more information about creating
buckets, see Creating and Configuring an S3 Bucket (p. 3).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the S3 buckets section, select the bucket that you want to attach this access point to.
3. On the bucket detail page, choose the Access points tab.
4. Choose Create access point.
5. Enter your desired name for the access point in the Access point name field.
6. Choose a Network access type. If you choose Virtual private cloud (VPC), enter the VPC ID that you
want to use with the access point.
For more information about network access type for access points, see Creating Access Points
Restricted to a Virtual Private Cloud in the Amazon Simple Storage Service Developer Guide.
7. Select the block public access settings that you want to apply to the access point. All block public
access settings are enabled by default for new access points, and we recommend that you leave
all settings enabled unless you know you have a specific need to disable any of them. Amazon S3
currently doesn't support changing an access point's block public access settings after the access
point has been created.
For more information about using Amazon S3 Block Public Access with access points, see Managing
Public Access to Access Points in the Amazon Simple Storage Service Developer Guide.
8. (Optional) Specify the access point policy. The console automatically displays the Amazon Resource
Name (ARN) for the access point, which you can use in the policy.
9. Choose Create access point.
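The ARN that the console displays in step 8 follows a fixed format, which can be sketched as (the Region, account ID, and access point name below are placeholders):

```python
def access_point_arn(region, account_id, name):
    """Return the ARN for an S3 access point in the given Region and account."""
    return f"arn:aws:s3:{region}:{account_id}:accesspoint/{name}"

arn = access_point_arn("us-west-2", "123456789012", "my-access-point")
```

This ARN is what you reference in the access point policy's Resource element.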
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the S3 buckets section, select the bucket whose access points you want to manage.
3. On the bucket detail page, choose the Access points tab.
From the Access points tab, you can view an access point's configuration details, edit an access point's
policy, use an access point to access your bucket, or delete an access point. The following procedures
explain how to perform each of these tasks.
Note
The S3 Management Console doesn't support using virtual private cloud (VPC) access points to
access bucket resources. To access bucket resources from a VPC access point, use the AWS CLI,
AWS SDKs, or Amazon S3 REST APIs.
Uploading S3 Objects
The data that you store in Amazon S3 consists of objects. Every object that you store in Amazon S3 resides in a bucket that you create in a specific AWS Region.
Objects stored in an AWS Region never leave that Region unless you explicitly transfer them to another Region. For example, objects stored in the EU (Ireland) Region never leave it. Amazon S3 does not keep copies of objects in, or move them to, any other Region. However, you can access the objects from anywhere, as long as you have the necessary permissions to do so.
Before you can upload an object into Amazon S3, you must have write permissions to a bucket.
Objects can be any file type: images, backups, data, movies, etc. You can have an unlimited number of
objects in a bucket. The maximum size of file you can upload by using the Amazon S3 console is 160
GB. To upload a file larger than 160 GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API. For more
information, see Uploading Objects in the Amazon Simple Storage Service Developer Guide.
The following topics explain how to use the Amazon S3 console to upload, delete, and manage objects.
Topics
• How Do I Upload Files and Folders to an S3 Bucket? (p. 38)
• How Do I Download an Object from an S3 Bucket? (p. 47)
• How Do I Delete Objects from an S3 Bucket? (p. 50)
• How Do I Undelete a Deleted S3 Object? (p. 51)
• How Do I Restore an S3 Object That Has Been Archived? (p. 51)
• How Do I Lock an Amazon S3 Object? (p. 57)
• How Do I See an Overview of an Object? (p. 59)
• How Do I See the Versions of an S3 Object? (p. 62)
• How Do I View the Properties of an Object? (p. 63)
• How Do I Add Encryption to an S3 Object? (p. 65)
• How Do I Add Metadata to an S3 Object? (p. 67)
• How Do I Add Tags to an S3 Object? (p. 72)
• How Do I Use Folders in an S3 Bucket? (p. 75)
When you upload a file to Amazon S3, it is stored as an S3 object. Objects consist of the file data and
metadata that describes the object. You can have an unlimited number of objects in a bucket.
You can upload any file type—images, backups, data, movies, etc.—into an S3 bucket. The maximum size
of a file that you can upload by using the Amazon S3 console is 160 GB. To upload a file larger than 160
GB, use the AWS CLI, AWS SDK, or Amazon S3 REST API. For more information, see Uploading Objects in
the Amazon Simple Storage Service Developer Guide.
You can upload files by dragging and dropping or by pointing and clicking. To upload folders, you must
drag and drop them. Drag and drop functionality is supported only for the Chrome and Firefox browsers.
For information about which Chrome and Firefox browser versions are supported, see Which Browsers
are Supported for Use with the AWS Management Console?.
When you upload a folder, Amazon S3 uploads all of the files and subfolders from the specified folder to
your bucket. It then assigns an object key name that is a combination of the uploaded file name and the
folder name. For example, if you upload a folder called /images that contains two files, sample1.jpg
and sample2.jpg, Amazon S3 uploads the files and then assigns the corresponding key names,
images/sample1.jpg and images/sample2.jpg. The key names include the folder name as a prefix.
The Amazon S3 console displays only the part of the key name that follows the last "/". For example, within an images folder, the images/sample1.jpg and images/sample2.jpg objects are displayed as sample1.jpg and sample2.jpg.
If you upload individual files and you have a folder open in the Amazon S3 console, when Amazon S3
uploads the files, it includes the name of the open folder as the prefix of the key names. For example,
if you have a folder named backup open in the Amazon S3 console and you upload a file named
sample1.jpg, the key name is backup/sample1.jpg. However, the object is displayed in the console
as sample1.jpg in the backup folder.
If you upload individual files and you do not have a folder open in the Amazon S3 console, when Amazon
S3 uploads the files, it assigns only the file name as the key name. For example, if you upload a file
named sample1.jpg, the key name is sample1.jpg. For more information on key names, see Object
Key and Metadata in the Amazon Simple Storage Service Developer Guide.
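The key-naming rules above can be summarized in a small helper (a simplification of the console's behavior):

```python
def object_key(file_name, open_folder=""):
    """Compute the S3 key name the console assigns to an uploaded file.
    open_folder is the folder currently open in the console, for example
    "backup" or "images/2020"; an empty string means the bucket root."""
    prefix = open_folder.strip("/")
    return f"{prefix}/{file_name}" if prefix else file_name

assert object_key("sample1.jpg") == "sample1.jpg"
assert object_key("sample1.jpg", "backup") == "backup/sample1.jpg"
```

The "folder" is purely a key-name prefix; S3 itself stores a flat namespace of keys.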
If you upload an object with a key name that already exists in a versioning-enabled bucket, Amazon
S3 creates another version of the object instead of replacing the existing object. For more information
about versioning, see How Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12).
Topics
• Uploading Files and Folders by Using Drag and Drop (p. 39)
• Uploading Files by Pointing and Clicking (p. 44)
• More Info (p. 46)
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to upload your folders or files
to.
3. In a window other than the console window, select the files and folders that you want to upload.
Then drag and drop your selections into the console window that lists the objects in the destination
bucket.
The files you chose are listed in the Upload dialog box.
4. In the Upload dialog box, do one of the following:
a. Drag and drop more files and folders to the console window that displays the Upload dialog
box. To add more files, you can also choose Add more files. This option works only for files, not
folders.
b. To immediately upload the listed files and folders without granting or removing permissions
for specific users or setting public permissions for all of the files that you're uploading, choose
Upload. For information about object access permissions, see How Do I Set Permissions on an
Object? (p. 121).
c. To set permissions or properties for the files that you are uploading, choose Next.
5. On the Set Permissions page, under Manage users you can change the permissions for the AWS
account owner. The owner refers to the AWS account root user, and not an AWS Identity and Access
Management (IAM) user. For more information about the root user, see The AWS Account Root User.
Choose Add account to grant access to another AWS account. For more information about granting
permissions to another AWS account, see How Do I Set ACL Bucket Permissions? (p. 124).
Under Manage public permissions you can grant read access to your objects to the general
public (everyone in the world), for all of the files that you're uploading. Granting public read
access is applicable to a small subset of use cases such as when buckets are used for websites. We
recommend that you do not change the default setting of Do not grant public read access to this
object(s). You can always make changes to object permissions after you upload the object. For
information about object access permissions, see How Do I Set Permissions on an Object? (p. 121).
6. On the Set Properties page, choose the storage class and encryption method to use for the files
that you are uploading. You can also add or modify metadata.
a. Choose a storage class for the files you're uploading. For more information about storage
classes, see Storage Classes in the Amazon Simple Storage Service Developer Guide.
b. Choose the type of encryption for the files that you're uploading. If you don't want to encrypt
them, choose None.
i. To encrypt the uploaded files using keys that are managed by Amazon S3, choose Amazon S3 master-key. For more information, see Protecting Data with Amazon S3-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.
ii. To encrypt the uploaded files using the AWS Key Management Service (AWS KMS), choose
AWS KMS master-key. Then choose a customer master key (CMK) from the list of AWS KMS
CMKs.
Note
To encrypt objects in a bucket, you can use only CMKs that are available in the
same AWS Region as the bucket.
You can give an external account the ability to use an object that is protected by an AWS
KMS CMK. To do this, select Custom KMS ARN from the list and enter the Amazon Resource
Name (ARN) for the external account. Administrators of an external account that have
usage permissions to an object protected by your AWS KMS CMK can further restrict access
by creating a resource-level IAM policy.
For more information about creating an AWS KMS CMK, see Creating Keys in the AWS Key
Management Service Developer Guide. For more information about protecting data with
AWS KMS, see Protecting Data Using Keys Stored in AWS KMS (SSE-KMS) in the Amazon
Simple Storage Service Developer Guide.
c. Metadata for Amazon S3 objects is represented by a name-value (key-value) pair. There are two
kinds of metadata: system-defined metadata and user-defined metadata.
If you want to add Amazon S3 system-defined metadata to all of the objects you are uploading,
for Header, select a header. You can select common HTTP headers, such as Content-Type and
Content-Disposition. Type a value for the header, and then choose Save. For a list of system-
defined metadata and information about whether you can add the value, see System-Defined
Metadata in the Amazon Simple Storage Service Developer Guide.
d. Any metadata starting with prefix x-amz-meta- is treated as user-defined metadata. User-
defined metadata is stored with the object, and is returned when you download the object.
To add user-defined metadata to all of the objects that you are uploading, type x-amz-meta-
plus a custom metadata name in the Header field. Type a value for the header, and then
choose Save. Both the keys and their values must conform to US-ASCII standards. User-defined
metadata can be as large as 2 KB. For more information about user-defined metadata, see User-
Defined Metadata in the Amazon Simple Storage Service Developer Guide.
e. Object tagging gives you a way to categorize storage. Each tag is a key-value pair. Tag keys and values are case sensitive. You can have up to 10 tags per object.
To add tags to all of the objects that you are uploading, type a tag name in the Key field. Type
a value for the tag, and then choose Save. A tag key can be up to 128 Unicode characters in
length and tag values can be up to 255 Unicode characters in length. For more information
about object tags, see Object Tagging in the Amazon Simple Storage Service Developer Guide.
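In API terms, the metadata and tags set on this page travel with the upload request: boto3's put_object, for example, accepts user-defined metadata via Metadata (the x-amz-meta- prefix is added on the wire) and tags via a URL-encoded Tagging string. A sketch with placeholder names and values:

```python
from urllib.parse import urlencode

def put_object_args(metadata, tags):
    """Build extra keyword arguments for an S3 PutObject call.
    metadata: dict of user-defined metadata names (without the
    x-amz-meta- prefix) to values. tags: dict of up to 10 tag
    key-value pairs."""
    if len(tags) > 10:
        raise ValueError("an object can have at most 10 tags")
    return {
        "Metadata": metadata,        # sent as x-amz-meta-<name> headers
        "Tagging": urlencode(tags),  # e.g. "project=blue"
    }

# Hypothetical metadata name and tag for illustration.
args = put_object_args({"reviewed-by": "jane"}, {"project": "blue"})
```

These arguments would be merged into the upload call, for example `s3.put_object(Bucket=..., Key=..., Body=..., **args)`.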
7. Choose Next.
8. On the Upload review page, verify that your settings are correct, and then choose Upload. To make
changes, choose Previous.
9. To see the progress of the upload, choose In progress at the bottom of the browser window.
Uploading Files by Pointing and Clicking
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to upload your files to.
3. Choose Upload.
6. After you see the files that you chose listed in the Upload dialog box, do one of the following:
7. To set permissions and properties, start with Step 5 of Uploading Files and Folders by Using Drag
and Drop (p. 39).
More Info
• How Do I Set Permissions on an Object? (p. 121).
• How Do I Download an Object from an S3 Bucket? (p. 47)
Downloading S3 Objects
Data transfer fees apply when you download objects. For information about Amazon S3 features and pricing, see Amazon S3.
Important
If an object key name consists of a single period (.), or two periods (..), you can’t download the
object using the Amazon S3 console. To download an object with a key name of “.” or “..”, you
must use the AWS CLI, AWS SDKs, or REST API. For more information about naming objects, see
Object Key Naming Guidelines in the Amazon Simple Storage Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to download an object from.
3. You can download an object from an S3 bucket in any of the following ways:
• In the Name list, select the check box next to the object you want to download, and then choose
Download on the object description page that appears.
• Choose the name of the object that you want to download and then choose Download as on the
Overview page.
• Choose the name of the object that you want to download. Choose Latest version and then
choose the download icon.
Related Topics
• How Do I Upload Files and Folders to an S3 Bucket? (p. 38)
For information about Amazon S3 features and pricing, see Amazon S3.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to delete an object from.
3. You can delete objects from an S3 bucket in any of the following ways:
• In the Name list, select the check box next to the objects and folders that you want to delete,
choose Actions, and then choose Delete from the drop-down menu.
In the Delete objects dialog box, verify that the names of the objects and folders that you selected for deletion are listed, and then choose Delete.
• Or, choose the name of the object that you want to delete, choose Latest version, and then
choose the trash can icon.
More Info
• How Do I Undelete a Deleted S3 Object? (p. 51)
• How Do I Create a Lifecycle Policy for an S3 Bucket? (p. 82)
Undeleting Objects
To be able to undelete a deleted object, you must have had versioning enabled on the bucket that
contains the object before the object was deleted. For information about enabling versioning, see How
Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12).
When you delete an object in a versioning-enabled bucket, all versions remain in the bucket and Amazon
S3 creates a delete marker for the object. To undelete the object, you must delete this delete marker.
For more information about versioning and delete markers, see Object Versioning in the Amazon Simple
Storage Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want.
3. To see a list of the versions of the objects in the bucket, select Show. You'll be able to see the delete
markers for deleted objects.
4. To undelete an object, you must delete the delete marker. Select the check box next to the delete
marker of the object to recover, and then choose Delete from the Actions menu.
5. Choose Hide. The undeleted object is now listed in the bucket.
More Info
• How Do I See the Versions of an S3 Object? (p. 62)
• How Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12)
• Using Versioning in the Amazon Simple Storage Service Developer Guide
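The undelete steps above can also be sketched against the S3 API with boto3. This is a hedged sketch, not the console's implementation: the bucket and key names are placeholders, and the live calls assume configured AWS credentials. The helper that locates the delete marker works on plain dictionaries, so it is shown with an abbreviated sample response.

```python
def latest_delete_marker(response, key):
    # Find the delete marker that is the current version of the object; deleting
    # it makes the most recent noncurrent version current again (the "undelete").
    for marker in response.get("DeleteMarkers", []):
        if marker["Key"] == key and marker.get("IsLatest"):
            return marker["VersionId"]
    return None

def undelete_object(bucket, key):
    import boto3  # assumes AWS credentials are configured
    s3 = boto3.client("s3")
    response = s3.list_object_versions(Bucket=bucket, Prefix=key)
    version_id = latest_delete_marker(response, key)
    if version_id is not None:
        s3.delete_object(Bucket=bucket, Key=key, VersionId=version_id)

# Shape of a list_object_versions response, trimmed to the relevant fields:
sample = {"DeleteMarkers": [
    {"Key": "notes.txt", "VersionId": "v3", "IsLatest": True},
    {"Key": "notes.txt", "VersionId": "v1", "IsLatest": False},
]}
print(latest_delete_marker(sample, "notes.txt"))  # v3
```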
When you restore an archive, you pay for both the archive and the restored copy. Because there is a
storage cost for the copy, restore objects only for the duration you need them. If you want a permanent
copy of the object, create a copy of it in your S3 bucket. For information about Amazon S3 features and
pricing, see Amazon S3.
After restoring an object, you can download it from the Overview page. For more information, see How
Do I See an Overview of an Object? (p. 59).
Topics
Archive Retrieval Options
• Expedited - Expedited retrievals allow you to quickly access your data stored in the GLACIER storage
class when occasional urgent requests for a subset of archives are required. For all but the largest
archived objects (250 MB+), data accessed using Expedited retrievals is typically made available
within 1–5 minutes. Provisioned capacity ensures that retrieval capacity for Expedited retrievals is
available when you need it. For more information, see Provisioned Capacity. Expedited retrievals and
provisioned capacity are not available for objects stored in the DEEP_ARCHIVE storage class.
• Standard - Standard retrievals allow you to access any of your archived objects within several hours.
This is the default option for the GLACIER and DEEP_ARCHIVE retrieval requests that do not specify
the retrieval option. Standard retrievals typically finish within 3–5 hours for objects stored in the
GLACIER storage class. They typically finish within 12 hours for objects stored in the DEEP_ARCHIVE
storage class.
• Bulk - Bulk retrievals are the lowest-cost retrieval option in Amazon S3 Glacier, enabling you to
retrieve large amounts, even petabytes, of data inexpensively. Bulk retrievals typically finish within
5–12 hours for objects stored in the GLACIER storage class. They typically finish within 48 hours for
objects stored in the DEEP_ARCHIVE storage class.
For more information about retrieval options, see Restoring Archived Objects in the Amazon Simple
Storage Service Developer Guide.
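The same retrieval options can be selected outside the console. The sketch below shows the request shape that boto3's restore_object accepts; the tier names are the ones described above, while the bucket and key in the comment are placeholders.

```python
RETRIEVAL_TIERS = ("Expedited", "Standard", "Bulk")

def restore_request(days, tier="Standard"):
    # Days: how long the temporary restored copy remains available.
    # Tier: one of the retrieval options described above.
    if tier not in RETRIEVAL_TIERS:
        raise ValueError("tier must be one of %s" % (RETRIEVAL_TIERS,))
    return {"Days": days, "GlacierJobParameters": {"Tier": tier}}

# e.g. boto3.client("s3").restore_object(
#          Bucket="my-bucket", Key="archive.zip",
#          RestoreRequest=restore_request(7, "Bulk"))
print(restore_request(7, "Bulk"))
```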
Restoring an Archived S3 Object
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the objects that you want to
restore.
3. In the Name list, select the object or objects that you want to restore, choose Actions, and then
choose Restore.
4. In the Initiate restore dialog box, type the number of days that you want your archived data to be
accessible.
5. Choose one of the following retrieval options from the Retrieval options menu.
6. Provisioned capacity is available only for the Glacier storage class. If you have provisioned
capacity, all of your Expedited retrievals are served by it; choose Restore to start a provisioned
retrieval. For more information about provisioned capacity, see Provisioned Capacity.
• If you don't have provisioned capacity and you don't want to buy it, choose Restore.
• If you don't have provisioned capacity, but you want to buy it, choose Add capacity unit, and then
choose Buy. When you get the Purchase succeeded message, choose Restore to start provisioned
retrieval.
Upgrade an In-Progress Restore
1. In the Name list, select one or more of the objects that you are restoring, choose Actions, and then
choose Restore from Glacier. For information about checking the restoration status of an object, see
Checking Archive Restore Status and Expiration Date (p. 56).
2. Choose the tier that you want to upgrade to and then choose Restore. For more information about
upgrading to a faster restore tier, see Restoring Archived Objects in the Amazon Simple Storage
Service Developer Guide.
Checking Archive Restore Status and Expiration Date
When the temporary copy of the object is available, the object's Overview section shows the Restoration
expiry date. This is when Amazon S3 will remove the restored copy of your archive.
Restored objects are stored only for the number of days that you specify. If you want a permanent copy
of the object, create a copy of it in your Amazon S3 bucket.
Amazon S3 calculates the expiry date by adding the number of days that you specify to the time you
request to restore the object, and then rounding to the next day at midnight UTC. This calculation
applies to the initial restoration of the object and to any extensions to availability that you request.
For example, if an object was restored on 10/15/2012 10:30 AM UTC and the number of days that you
specified is 3, then the object is available until 10/19/2012 00:00 UTC. If, on 10/16/2012 11:00 AM
UTC you change the number of days that you want it to be accessible to 1, then Amazon S3 makes the
restored object available until 10/18/2012 00:00 UTC.
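This rounding rule can be expressed directly. The sketch below is a small illustration, not an AWS implementation; it reproduces both example dates from the paragraph above.

```python
from datetime import datetime, timedelta, timezone

def restoration_expiry(request_time, days):
    # Add the requested number of days, then round up to the next midnight UTC.
    end = request_time + timedelta(days=days)
    midnight = end.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight + timedelta(days=1) if end > midnight else midnight

restored = datetime(2012, 10, 15, 10, 30, tzinfo=timezone.utc)
print(restoration_expiry(restored, 3))  # 2012-10-19 00:00:00+00:00

changed = datetime(2012, 10, 16, 11, 0, tzinfo=timezone.utc)
print(restoration_expiry(changed, 1))   # 2012-10-18 00:00:00+00:00
```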
After restoring an object, you can download it from the Overview page. For more information, see How
Do I See an Overview of an Object? (p. 59).
More Info
• How Do I Create a Lifecycle Policy for an S3 Bucket? (p. 82)
• How Do I Undelete a Deleted S3 Object? (p. 51)
Locking Amazon S3 Objects
Before you lock any objects, you have to enable a bucket to use Amazon S3 object lock. You enable
object lock when you create a bucket. After you enable Amazon S3 object lock on a bucket, you can lock
objects in that bucket. When you create a bucket with object lock enabled, you can't disable object lock
or suspend versioning for that bucket.
For information about creating a bucket with Amazon S3 object lock enabled, see How Do I Create an S3
Bucket? (p. 3).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want.
3. In the Name list, choose the name of the object that you want to lock.
4. Choose Properties.
6. Choose a retention mode. You can change the Retain until date. You can also choose to enable
a legal hold. For more information, see Amazon S3 Object Lock Overview in the Amazon Simple
Storage Service Developer Guide.
7. Choose Save.
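The retention settings chosen in the steps above map onto the S3 API. The sketch below shows the parameter shapes used by boto3's put_object_retention and put_object_legal_hold; bucket, key, and dates are placeholders, and the live calls assume configured credentials.

```python
from datetime import datetime, timezone

def retention_settings(mode, retain_until):
    # Mode is GOVERNANCE or COMPLIANCE; retain_until is the "Retain until date".
    if mode not in ("GOVERNANCE", "COMPLIANCE"):
        raise ValueError("mode must be GOVERNANCE or COMPLIANCE")
    return {"Mode": mode, "RetainUntilDate": retain_until}

# e.g. s3.put_object_retention(Bucket="my-bucket", Key="contract.pdf",
#          Retention=retention_settings("GOVERNANCE",
#                                       datetime(2026, 1, 1, tzinfo=timezone.utc)))
# A legal hold is a separate call:
# s3.put_object_legal_hold(Bucket="my-bucket", Key="contract.pdf",
#                          LegalHold={"Status": "ON"})
print(retention_settings("COMPLIANCE", datetime(2026, 1, 1, tzinfo=timezone.utc))["Mode"])
```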
More Info
• Setting Bucket and Object Access Permissions (p. 116)
Viewing an Overview of an Object
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. In the Name list, select the check box next to the name of the object for which you want an
overview.
4. To download the object, choose Download in the object overview panel. To copy the path of the
object to the clipboard, choose Copy Path.
5. If versioning is enabled on the bucket, choose Latest versions to list the versions of the object. You
can then choose the download icon to download an object version, or choose the trash can icon to
delete an object version.
Important
You can undelete an object only if it was deleted as the latest (current) version. You can't
undelete a previous version of an object that was deleted. For more information, see Object
Versioning and Using Versioning in the Amazon Simple Storage Service Developer Guide.
More Info
• How Do I See the Versions of an S3 Object? (p. 62)
Viewing Object Versions
A versioning-enabled bucket can have many versions of the same object: one current (latest) version
and zero or more noncurrent (previous) versions. Amazon S3 assigns each object a unique version
ID. For information about enabling versioning, see How Do I Enable or Suspend Versioning for an S3
Bucket? (p. 12).
If a bucket is versioning-enabled, Amazon S3 creates another version of an object under the following
conditions:
• If you upload an object that has the same name as an object that already exists in the bucket, Amazon
S3 creates another version of the object instead of replacing the existing object.
• If you update any object properties after you upload the object to the bucket, such as changing the
storage details or other metadata, Amazon S3 creates a new object version in the bucket.
For more information about versioning support in Amazon S3, see Object Versioning and Using
Versioning in the Amazon Simple Storage Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. To see a list of the versions of the objects in the bucket, choose Show. For each object version, the
console shows a unique version ID, the date and time the object version was created, and other
properties. (Objects stored in your bucket before you set the versioning state have a version ID of
null.)
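The same version information can be read programmatically with boto3's list_object_versions; the sketch below works on an abbreviated sample response, and the live call in the comment assumes a placeholder bucket and configured credentials.

```python
def version_ids(response, key):
    # Version IDs for a key, as returned by list_object_versions (newest first).
    # Objects stored before versioning was enabled have the version ID "null".
    return [v["VersionId"] for v in response.get("Versions", []) if v["Key"] == key]

sample = {"Versions": [
    {"Key": "report.csv", "VersionId": "v2", "IsLatest": True},
    {"Key": "report.csv", "VersionId": "null", "IsLatest": False},
]}
# Live call: boto3.client("s3").list_object_versions(Bucket="my-bucket", Prefix="report.csv")
print(version_ids(sample, "report.csv"))  # ['v2', 'null']
```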
You also can view, download, and delete object versions in the object overview panel. For more
information, see How Do I See an Overview of an Object? (p. 59).
Important
You can undelete an object only if it was deleted as the latest (current) version. You can't
undelete a previous version of an object that was deleted. For more information, see Object
Versioning and Using Versioning in the Amazon Simple Storage Service Developer Guide.
More Info
• How Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12)
• How Do I Create a Lifecycle Policy for an S3 Bucket? (p. 82)
Viewing Object Properties
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. In the Name list, choose the name of the object you want to view the properties for.
4. Choose Properties.
5. On the Properties page, you can configure the following properties for the object.
a. Storage class – Each object in Amazon S3 has a storage class associated with it. The storage
class that you choose to use depends on how frequently you access the object. The default
storage class for S3 objects is STANDARD. You choose which storage class to use when you
upload an object. For more information about storage classes, see Storage Classes in the
Amazon Simple Storage Service Developer Guide.
To change the storage class after you upload an object, choose Storage class. Choose the
storage class that you want, and then choose Save.
b. Encryption – You can encrypt your S3 objects. For more information, see How Do I Add
Encryption to an S3 Object? (p. 65).
c. Metadata – Each object in Amazon S3 has a set of name-value pairs that represents its
metadata. For information on adding metadata to an S3 object, see How Do I Add Metadata to
an S3 Object? (p. 67).
d. Tags – You can add tags to an S3 object. For more information, see How Do I Add Tags to an S3
Object? (p. 72).
Adding Encryption to an Object
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. In the Name list, choose the name of the object that you want to add or change encryption for.
4. Choose Properties, and then choose Encryption.
The Encryption dialog box opens, giving you three choices for object encryption.
6. If you want to encrypt your object using keys that are managed by Amazon S3, follow these steps:
a. Choose AES-256.
For more information about using Amazon S3 server-side encryption to encrypt your data,
see Protecting Data with Amazon S3-Managed Encryption Keys in the Amazon Simple
Storage Service Developer Guide.
b. Choose Save.
7. If you want to encrypt your object using AWS KMS, follow these steps:
a. Choose AWS-KMS.
b. Choose an AWS KMS CMK.
The list shows Customer managed CMKs that you have created and your AWS managed CMK
for Amazon S3. For more information about creating a customer managed AWS KMS CMK, see
Creating Keys in the AWS Key Management Service Developer Guide.
c. Choose Save.
Important
To encrypt objects in the bucket, you can use only CMKs that are enabled in the same AWS
Region as the bucket. Amazon S3 only supports symmetric CMKs. Amazon S3 does not
support asymmetric CMKs. For more information, see Using Symmetric and Asymmetric
Keys.
8. To give an external account the ability to use an object that is protected by an AWS KMS CMK, follow
these steps:
a. Choose AWS-KMS.
b. Type the Amazon Resource Name (ARN) for the external account.
c. Choose Save.
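The console's encryption choices correspond to upload parameters in the S3 API. The sketch below shows the parameter shapes boto3 accepts for put_object and copy_object; the KMS key ID in the comment is a placeholder.

```python
def encryption_params(method, kms_key_id=None):
    # AES-256 -> S3-managed keys (SSE-S3); AWS-KMS -> an AWS KMS CMK you choose.
    if method == "AES-256":
        return {"ServerSideEncryption": "AES256"}
    if method == "AWS-KMS":
        params = {"ServerSideEncryption": "aws:kms"}
        if kms_key_id is not None:
            params["SSEKMSKeyId"] = kms_key_id
        return params
    raise ValueError("method must be 'AES-256' or 'AWS-KMS'")

# e.g. s3.put_object(Bucket="my-bucket", Key="data.bin", Body=b"secret",
#                    **encryption_params("AWS-KMS", "<your-cmk-key-id>"))
print(encryption_params("AES-256"))
```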
More Info
• How Do I Enable Default Encryption for an Amazon S3 Bucket? (p. 13)
• Amazon S3 Default Encryption for S3 Buckets in the Amazon Simple Storage Service Developer Guide
• How Do I View the Properties of an Object? (p. 63)
• Uploading, Downloading, and Managing Objects (p. 38)
Object metadata is a set of name-value (key-value) pairs. For example, the metadata for content length,
Content-Length, is the name (key) and the size of the object in bytes (value). For more information
about object metadata, see Object Metadata in the Amazon Simple Storage Service Developer Guide.
There are two kinds of metadata for an S3 object, Amazon S3 system metadata and user-defined
metadata:
• System metadata–There are two categories of system metadata. Metadata such as the Last-
Modified date is controlled by the system. Only Amazon S3 can modify the value. There is also
system metadata that you control, for example, the storage class configured for the object.
• User-defined metadata–You can define your own custom metadata, called user-defined metadata.
You can assign user-defined metadata to an object when you upload the object or after the object has
been uploaded. User-defined metadata is stored with the object and is returned when you download
the object. Amazon S3 does not process user-defined metadata.
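On the wire, the two kinds of metadata behave differently: system metadata uses fixed header names, while user-defined metadata travels as x-amz-meta-* headers. A small sketch of that naming convention (boto3's Metadata parameter adds the prefix for you):

```python
def user_metadata_headers(metadata):
    # User-defined metadata is stored and returned as x-amz-meta-* headers.
    return {"x-amz-meta-" + name: value for name, value in metadata.items()}

print(user_metadata_headers({"alt-name": "quarterly-report"}))
# {'x-amz-meta-alt-name': 'quarterly-report'}
```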
Topics
• Adding System-Defined Metadata to an S3 Object (p. 68)
• Adding User-Defined Metadata to an S3 Object (p. 70)
Adding System-Defined Metadata to an S3 Object
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. In the Name list, choose the name of the object that you want to add metadata to.
5. Choose Add Metadata, and then choose a key from the Select a key menu.
6. Depending on which key you chose, choose a value from the Select a value menu or type a value.
7. Choose Save.
Adding User-Defined Metadata to an S3 Object
User-defined metadata can be as large as 2 KB. Both keys and their values must conform to US-ASCII
standards. For more information, see User-Defined Metadata in the Amazon Simple Storage Service
Developer Guide.
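Those limits can be checked before uploading. This is a rough sketch: the 2 KB accounting below, which sums the bytes of the keys and values, is a simplified reading of the limit.

```python
def user_metadata_ok(metadata):
    # All keys and values must be US-ASCII; combined size at most 2 KB.
    try:
        total = sum(len(k.encode("ascii")) + len(v.encode("ascii"))
                    for k, v in metadata.items())
    except UnicodeEncodeError:
        return False
    return total <= 2048

print(user_metadata_ok({"alt-name": "draft"}))  # True
print(user_metadata_ok({"note": "café"}))       # False: non-ASCII value
print(user_metadata_ok({"big": "x" * 3000}))    # False: over 2 KB
```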
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. In the Name list, choose the name of the object that you want to add metadata to.
5. Choose Add Metadata, and then choose the x-amz-meta- key from the Select a key menu. Any
metadata starting with the prefix x-amz-meta- is user-defined metadata.
6. Type a custom name following the x-amz-meta- key. For example, for the custom name alt-name,
the metadata key would be x-amz-meta-alt-name. Enter a value for the custom key, and then
choose Save.
Adding Tags to an Object
• You can associate up to 10 tags with an object. Tags associated with an object must have unique tag
keys.
• A tag key can be up to 128 Unicode characters in length and tag values can be up to 255 Unicode
characters in length.
• Tag keys and values are case sensitive.
For more information about object tags, see Object Tagging in the Amazon Simple Storage Service
Developer Guide.
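The limits above can be enforced in code. The sketch below is an illustration that also produces the TagSet shape used by boto3's put_object_tagging; bucket and key names in the comment are placeholders.

```python
def tag_set(tags):
    # tags: list of {"Key": ..., "Value": ...} pairs, per the limits above.
    if len(tags) > 10:
        raise ValueError("an object can have at most 10 tags")
    if len({t["Key"] for t in tags}) != len(tags):
        raise ValueError("tag keys must be unique")
    for t in tags:
        if len(t["Key"]) > 128 or len(t["Value"]) > 255:
            raise ValueError("keys: <= 128 chars; values: <= 255 chars")
    return {"TagSet": tags}

# e.g. s3.put_object_tagging(Bucket="my-bucket", Key="photo.jpg",
#                            Tagging=tag_set([{"Key": "project", "Value": "photos"}]))
print(tag_set([{"Key": "project", "Value": "photos"}]))
```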
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. In the Name list, choose the name of the object you want to add tags to.
4. Choose Properties.
6. Each tag is a key-value pair. Type a Key and a Value. Then choose Add Tag to add another tag or
choose Save.
More Info
• How Do I View the Properties of an Object? (p. 63)
• Uploading, Downloading, and Managing Objects (p. 38)
For example, you can create a folder on the console named photos and store an object named
myphoto.jpg in it. The object is then stored with the key name photos/myphoto.jpg, where
photos/ is the prefix.
Topics
• Creating a Folder (p. 76)
• How Do I Delete Folders from an S3 Bucket? (p. 77)
• Making Folders Public (p. 79)
You can have folders within folders, but not buckets within buckets. You can upload and copy objects
directly into a folder. Folders can be created, deleted, and made public, but they cannot be renamed.
Objects can be copied from one folder to another.
Important
The Amazon S3 console treats any object whose key name ends with a forward slash ("/")
character as a folder, for example examplekeyname/. You can't upload an object that has a
key name with a trailing "/" character using the Amazon S3 console. However, you can upload
objects named with a trailing "/" through the Amazon S3 API by using the AWS CLI, the AWS
SDKs, or the REST API.
An object that is named with a trailing "/" appears as a folder in the Amazon S3 console. The
Amazon S3 console does not display the content and metadata for such an object. When you
use the console to copy an object named with a trailing "/", a new folder is created in the
destination location, but the object's data and metadata are not copied.
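That behavior follows from what a console "folder" actually is: a zero-byte object whose key ends in "/". The sketch below illustrates this; the bucket and folder names are placeholders, and the put_object call assumes configured credentials.

```python
def folder_key(name):
    # A console "folder" is just a key with a trailing slash.
    return name if name.endswith("/") else name + "/"

def create_folder(bucket, name):
    # Roughly what the console's Create folder button amounts to.
    import boto3  # assumes AWS credentials are configured
    boto3.client("s3").put_object(Bucket=bucket, Key=folder_key(name), Body=b"")

print(folder_key("favorite-pics"))  # favorite-pics/
```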
Creating a Folder
This section describes how to use the Amazon S3 console to create a folder.
To create a folder
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to create a folder in.
4. Enter a name for the folder (for example, favorite-pics). Choose the encryption setting for the
folder object, and then choose Save.
Deleting Folders
For information about Amazon S3 features and pricing, see Amazon S3.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to delete folders from.
3. In the Name list, select the check box next to the folders and objects that you want to delete, choose
More, and then choose Delete.
In the Delete objects dialog box, verify that the names of the folders you selected for deletion are
listed and then choose Delete.
Related Topics
• How Do I Delete Objects from an S3 Bucket? (p. 50)
Making Folders Public
We recommend blocking all public access to your Amazon S3 folders and buckets unless you specifically
require a public folder or bucket. When you make a folder public, anyone on the internet can view all the
objects that are grouped in that folder. In the Amazon S3 console, you can make a folder public. You can
also make a folder public by creating a bucket policy that limits access by prefix. For more information,
see Setting Bucket and Object Access Permissions (p. 116).
Warning
After you make a folder public in the Amazon S3 console, you can't make it private again.
Instead, you must set permissions on each individual object in the public folder so that the
objects have no public access. For more information, see How Do I Set Permissions on an
Object? (p. 121)
More Info
• How Do I Delete Folders from an S3 Bucket? (p. 77)
• How Do I Set ACL Bucket Permissions? (p. 124)
• How Do I Block Public Access to S3 Buckets? (p. 117)
The following topics explain how to use the Amazon S3 console to configure and run batch operations.
Topics
• Creating an Amazon S3 Batch Operations Job (p. 80)
• Managing Batch Operations Jobs (p. 81)
Creating an Amazon S3 Batch Operations Job
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Batch Operations on the navigation pane of the Amazon S3 console.
3. Choose Create job.
4. Choose the Region where you want to create your job.
5. Under Manifest format choose the type of manifest object to use.
• If you choose S3 Inventory report, enter the path to the manifest.json object that Amazon S3
generated as part of the CSV-formatted Inventory report, and optionally the version ID for the
manifest object if you want to use a version other than the most recent.
• If you choose CSV, enter the path to a CSV-formatted manifest object. The manifest object must
follow the format described in the console. You can optionally include the version ID for the
manifest object if you want to use a version other than the most recent.
6. Under Operation choose the operation that you want to perform on all objects listed in the
manifest. Fill out the information for the operation you chose and then choose Next.
7. Fill out the information for Configure additional options and then choose Next.
8. For Review, verify the settings. If you need to make changes, choose Previous. Otherwise, choose
Create Job.
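A CSV manifest for step 5 is simply one bucket,key row per object, with an optional third column for a version ID. A sketch of generating one (bucket and key names are placeholders):

```python
import csv
import io

def csv_manifest(rows):
    # rows: (bucket, key) or (bucket, key, version_id) tuples.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerows(rows)
    return buf.getvalue()

manifest = csv_manifest([("my-bucket", "photos/cat.jpg"),
                         ("my-bucket", "photos/dog.jpg")])
print(manifest)
```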
More Info
• The Basics: Amazon S3 Batch Operations Jobs in the Amazon Simple Storage Service Developer Guide
• Creating a Batch Operations Job in the Amazon Simple Storage Service Developer Guide
• Operations in the Amazon Simple Storage Service Developer Guide
Storage Management
This section explains how to configure Amazon S3 storage management tools.
Topics
• How Do I Create a Lifecycle Policy for an S3 Bucket? (p. 82)
• How Do I Add a Replication Rule to an S3 Bucket? (p. 85)
• How Do I Manage the Replication Rules for an S3 Bucket? (p. 100)
• How Do I Configure Storage Class Analysis? (p. 102)
• How Do I Configure Amazon S3 Inventory? (p. 106)
• How Do I Configure Request Metrics for an S3 Bucket? (p. 109)
• How Do I Configure a Request Metrics Filter? (p. 111)
• How Do I View Replication Metrics? (p. 114)
Creating a Lifecycle Policy
You can define a lifecycle policy for all objects or a subset of objects in the bucket by using a shared
prefix (that is, objects that have names that begin with a common string).
A versioning-enabled bucket can have many versions of the same object, one current version and zero or
more noncurrent (previous) versions. Using a lifecycle policy, you can define actions specific to current
and noncurrent object versions. For more information, see Object Lifecycle Management and Object
Versioning and Using Versioning in the Amazon Simple Storage Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to create a lifecycle policy for.
3. Choose the Management tab, and then choose Add lifecycle rule.
4. In the Lifecycle rule dialog box, type a name for your rule to help identify the rule later. The name
must be unique within the bucket. Configure the rule as follows:
• To apply this lifecycle rule to all objects with a specified name prefix (that is, objects with names
that begin with a common string), type a prefix in the box, choose the prefix from the drop-down
list, and then press Enter. For more information about object name prefixes, see Object Keys in
the Amazon Simple Storage Service Developer Guide.
• To apply this lifecycle rule to all objects with one or more object tags, type a tag in the box,
choose the tag from the drop-down list, and then press Enter. Repeat the procedure to add
another tag. You can combine a prefix and tags. For more information about object tags, see
Object Tagging in the Amazon Simple Storage Service Developer Guide.
• To apply this lifecycle rule to all objects in the bucket, choose Next.
5. You configure lifecycle rules by defining transitions of objects to the Standard-IA, One Zone-IA,
Glacier, and Glacier Deep Archive storage classes. For more information, see Storage Classes in the
Amazon Simple Storage Service Developer Guide.
You can define transitions for current or previous object versions, or for both current and previous
versions. Versioning enables you to keep multiple versions of an object in one bucket. For more
information about versioning, see How Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12).
a. Select Current version to define transitions that are applied to the current version of the object.
Select Previous versions to define transitions that are applied to all previous versions of the
object.
• Choose Transition to Standard-IA after, and then type the number of days after the creation
of an object that you want the transition to be applied (for example, 30 days).
• Choose Transition to One Zone-IA after, and then type the number of days after the creation
of an object that you want the transition to be applied (for example, 30 days).
• Choose Transition to Glacier after, and then type the number of days after the creation of an
object that you want the transition to be applied (for example, 100 days).
• Choose Transition to Glacier Deep Archive after, and then type the number of days after the
creation of an object that you want the transition to be applied (for example, 100 days).
Important
When you choose the Glacier or Glacier Deep Archive storage class, your objects
remain in Amazon S3. You cannot access them directly through the separate Amazon
S3 Glacier service. For more information, see Transitioning Objects Using Amazon S3
Lifecycle.
7. For this example, select both Current version and Previous versions.
8. Select Expire current version of object, and then enter the number of days after object creation
to delete the object (for example, 395 days). If you select this expire option, you cannot select the
option to clean up expired delete markers.
9. Select Permanently delete previous versions, and then enter the number of days after an object
becomes a previous version to permanently delete the object (for example, 465 days).
10. It is a recommended best practice to always select Clean up incomplete multipart uploads. For
example, type 7 to end and clean up any multipart uploads that are still incomplete 7 days after
they were initiated. For more information about multipart uploads, see Multipart Upload Overview
in the Amazon Simple Storage Service Developer Guide.
11. Choose Next.
12. For Review, verify the settings for your rule. If you need to make changes, choose Previous.
Otherwise, choose Save.
13. If the rule does not contain any errors, it is listed on the Lifecycle page and is enabled.
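The rule built in the console above has a direct API shape. The sketch below mirrors the same example values in the form accepted by boto3's put_bucket_lifecycle_configuration; the rule ID, prefix, and bucket name are placeholders.

```python
def example_lifecycle_rule(rule_id, prefix=""):
    # Mirrors the example values used in the steps above.
    return {
        "ID": rule_id,
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},
            {"Days": 100, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": 395},
        "NoncurrentVersionExpiration": {"NoncurrentDays": 465},
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }

# e.g. s3.put_bucket_lifecycle_configuration(
#          Bucket="my-bucket",
#          LifecycleConfiguration={"Rules": [example_lifecycle_rule("archive-then-expire")]})
print(example_lifecycle_rule("archive-then-expire")["Expiration"])
```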
Replication requires versioning to be enabled on both the source and destination buckets. To review
the full list of requirements, see Requirements for Replication in the Amazon Simple Storage Service
Developer Guide. For more information about versioning, see How Do I Enable or Suspend Versioning for
an S3 Bucket? (p. 12)
The object replicas in the destination bucket are exact replicas of the objects in the source bucket. They
have the same key names and the same metadata—for example, creation time, owner, user-defined
metadata, version ID, access control list (ACL), and storage class. Optionally, you can explicitly specify a
different storage class for object replicas. And regardless of who owns the source bucket or the source
object, you can choose to change replica ownership to the AWS account that owns the destination
bucket. For more information, see Changing the Replica Owner in the Amazon Simple Storage Service
Developer Guide.
You can use S3 Replication Time Control (S3 RTC) to replicate your data in the same AWS Region or
across different AWS Regions in a predictable timeframe. S3 RTC replicates 99.99 percent of new objects
stored in Amazon S3 within 15 minutes and most objects within seconds. For more information, see
Replicating Objects Using S3 Replication Time Control (S3 RTC) in the Amazon Simple Storage Service
Developer Guide.
Note about replication and lifecycle rules
Metadata for an object remains identical between original objects and replica objects. Lifecycle
rules abide by the creation time of the original object, and not by when the replicated object
becomes available in the destination bucket. However, lifecycle does not act on objects that are
pending replication until replication is complete.
You use the Amazon S3 console to add replication rules to the source bucket. Replication rules define
which source bucket objects to replicate and the destination bucket where the replicated objects are
stored. You can create a rule to replicate all the objects in a bucket or a subset of objects with a specific
key name prefix, one or more object tags, or both. A destination bucket can be in the same AWS account
as the source bucket, or it can be in a different account.
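The choices you make in the console correspond to a replication configuration in the S3 API (the V2 schema described later in this section). As a sketch — the role ARN, bucket names, prefix, and tag below are placeholders — a rule that replicates tagged objects under a prefix might look like this in the JSON form used by the S3 API and AWS CLI:

```json
{
    "Role": "arn:aws:iam::111122223333:role/replication-role",
    "Rules": [
        {
            "ID": "replicate-project-logs",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {
                "And": {
                    "Prefix": "logs/",
                    "Tags": [
                        { "Key": "project", "Value": "alpha" }
                    ]
                }
            },
            "DeleteMarkerReplication": { "Status": "Disabled" },
            "Destination": {
                "Bucket": "arn:aws:s3:::destination-bucket"
            }
        }
    ]
}
```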
If the destination bucket is in a different account from the source bucket, you must add a bucket policy
to the destination bucket to grant the source bucket owner permission to replicate objects
in the destination bucket. The Amazon S3 console builds this required bucket policy for you to copy and
add to the destination bucket in the other account.
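The policy that the console generates for the cross-account case follows the general shape below. This is a sketch with a placeholder account ID and bucket name — use the exact policy that the console produces for you:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReplicationFromSourceAccount",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete"
            ],
            "Resource": "arn:aws:s3:::destination-bucket/*"
        }
    ]
}
```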
When you add a replication rule to a bucket, the rule is enabled by default, so it starts working as soon as
you save it.
Topics
• Adding a Replication Rule When the Destination Bucket Is in the Same AWS Account (p. 86)
• Adding a Replication Rule When the Destination Bucket Is in a Different AWS Account (p. 93)
• More Info (p. 100)
Adding a Replication Rule When the Destination Bucket Is in the Same AWS Account
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want.
4. In the Replication rule wizard, under Set source, you have the following options for setting the
replication source:
The new schema supports prefix and tag filtering and the prioritization of rules. For more
information about the new schema, see Replication Configuration Backward Compatibility in the
Amazon Simple Storage Service Developer Guide. The developer guide describes the XML used with
the Amazon S3 API that works behind the user interface. In the developer guide, the new schema is
described as replication configuration XML V2.
5. To replicate objects in the source bucket that are encrypted with AWS Key Management Service
(AWS KMS), under Replication criteria, select Replicate objects encrypted with AWS KMS. Under
Choose one or more keys for decrypting source objects are the source AWS KMS customer master
keys (CMKs) that you allow replication to use. All source CMKs are included by default. You can
choose to narrow the CMK selection.
Objects encrypted with AWS KMS CMKs that you do not select are not replicated. For information
about using AWS KMS with replication, see Replicating Objects Created with Server-Side Encryption
(SSE) Using Encryption Keys Stored in AWS KMS in the Amazon Simple Storage Service Developer
Guide.
Important
When you replicate objects that are encrypted with AWS KMS, the AWS KMS request rate
doubles in the source Region and increases in the destination Region by the same amount.
These increased call rates to AWS KMS are due to the way that data is re-encrypted using
the customer master key (CMK) that you define for the replication destination Region.
AWS KMS has a request rate limit that is per calling account per Region. For information
about the limit defaults, see AWS KMS Limits - Requests per Second: Varies in the AWS Key
Management Service Developer Guide.
If your current Amazon S3 PUT object request rate during replication is more than half
the default AWS KMS rate limit for your account, we recommend that you request an
increase to your AWS KMS request rate limit. To request an increase, create a case in the
AWS Support Center at Contact Us. For example, suppose that your current PUT object
request rate is 1,000 requests per second and you use AWS KMS to encrypt your objects. In
this case, we recommend that you ask AWS Support to increase your AWS KMS rate limit
to 2,500 requests per second, in both your source and destination Regions (if different), to
ensure that there is no throttling by AWS KMS.
To see your PUT object request rate in the source bucket, view PutRequests in the Amazon
CloudWatch request metrics for Amazon S3. For information about viewing CloudWatch
metrics, see How Do I Configure Request Metrics for an S3 Bucket? (p. 109)
Choose Next.
6. To choose a destination bucket from the account that you're currently using, on the Set destination
page, under Destination bucket, choose Buckets in this account. Enter the name of the destination
bucket for the replication, or choose a name in the drop-down list.
If you want to choose a destination bucket from a different AWS account, see Adding a Replication
Rule When the Destination Bucket Is in a Different AWS Account (p. 93).
If versioning is not enabled on the destination bucket, you get a warning message that contains an
Enable versioning button. Choose this button to enable versioning on the bucket.
7. If you chose to replicate objects encrypted with AWS KMS, under Destination encryption settings,
enter the Amazon Resource Name (ARN) of the AWS KMS CMK to use to encrypt the replicas in
the destination bucket. You can find the ARN for your AWS KMS CMK in the IAM console, under
Encryption keys. Or, you can choose a CMK name from the drop-down list.
For more information about creating an AWS KMS CMK, see Creating Keys in the AWS Key
Management Service Developer Guide.
8. If you want to replicate your data into a specific storage class in the destination bucket, on the Set
destination page, under Destination Options, select Change the storage class for the replicated
object(s). Then choose the storage class that you want to use for the replicated objects in the
destination bucket. If you don't select this option, the storage class for replicated objects is the same
class as the original objects.
Similarly, if you want to change Object Ownership in the destination bucket, choose Change object
ownership to the destination bucket owner. For more information about this option, see Adding a
Replication Rule When the Destination Bucket Is in a Different AWS Account (p. 93).
If you want to enable S3 Replication Time Control (S3 RTC) in your replication configuration, select
Replication time control.
Note
When you use S3 RTC, additional per-GB data transfer fees and CloudWatch metrics fees
apply.
Choose Next.
9. Set up an AWS Identity and Access Management (IAM) role that Amazon S3 can assume to replicate
objects on your behalf.
To set up an IAM role, on the Configure options page, under Select role, do one of the following:
• We highly recommend that you choose Create new role to have Amazon S3 create a new IAM
role for you. When you save the rule, a new policy is generated for the IAM role that matches
the source and destination buckets that you choose. The name of the generated role is based on
the bucket names and uses the following naming convention: replication_role_for_source-
bucket_to_destination-bucket.
• You can choose to use an existing IAM role. If you do, you must choose a role that grants Amazon
S3 the necessary permissions for replication. Replication fails if this role does not grant Amazon
S3 sufficient permissions to follow your replication rule.
Important
When you add a replication rule to a bucket, you must have the iam:PassRole permission
to be able to pass the IAM role that grants Amazon S3 replication permissions. For more
information, see Granting a User Permissions to Pass a Role to an AWS Service in the IAM
User Guide.
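If you use an existing role, its permissions policy must cover roughly the following. This is a sketch with placeholder bucket names — see Requirements for Replication in the Developer Guide for the authoritative list of required permissions:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetReplicationConfiguration",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::source-bucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging"
            ],
            "Resource": "arn:aws:s3:::source-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags"
            ],
            "Resource": "arn:aws:s3:::destination-bucket/*"
        }
    ]
}
```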
Under Rule name, enter a name for your rule to help identify the rule later. The name is required
and must be unique within the bucket.
10. If the bucket has existing replication rules, you are instructed to set a priority for the rule. You
must set a priority for the rule to avoid conflicts caused by objects that are included in the scope of
more than one rule. In the case of overlapping rules, Amazon S3 uses the rule priority to determine
which rule to apply. The higher the number, the higher the priority. For more information about rule
priority, see Replication Configuration Overview in the Amazon Simple Storage Service Developer
Guide.
Under Status, Enabled is selected by default. An enabled rule starts to work as soon as you save it. If
you want to enable the rule later, select Disabled.
Choose Next.
11. On the Review page, review your replication rule. If it looks correct, choose Save. Otherwise, choose
Previous to edit the rule before saving it.
12. After you save your rule, you can edit, enable, disable, or delete your rule on the Replication page.

Adding a Replication Rule When the Destination Bucket Is in a Different AWS Account
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want.
4. If you have never created a replication rule before, start with Adding a Replication Rule When the
Destination Bucket Is in the Same AWS Account (p. 86).
On the Replication rule wizard Set destination page, under Destination bucket, choose Buckets in
another account. Then enter the name of the destination bucket and the account ID from a different
AWS account. Choose Save.
After you save the destination bucket name and account ID, you might get a warning message
telling you to add a bucket policy to the destination bucket so that Amazon S3 can verify whether
versioning is enabled on the bucket. In a few steps, you'll be presented with a bucket policy that
you can copy and add to the destination bucket in the other account. For information about adding
a bucket policy to an S3 bucket and versioning, see How Do I Add an S3 Bucket Policy? (p. 127) and
How Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12)
5. If you chose to replicate objects encrypted with AWS KMS, under Destination encryption settings,
enter the Amazon Resource Name (ARN) of the AWS KMS CMK to use to encrypt the replicas in the
destination bucket.
For more information about creating an AWS KMS CMK, see Creating Keys in the AWS Key
Management Service Developer Guide.
• To replicate your data into a specific storage class in the destination bucket, select Change the
storage class for the replicated object(s). Then choose the storage class that you want to use for
the replicated objects in the destination bucket. If you don't select this option, the storage class
for replicated objects is the same class as the original objects.
• To change the object ownership of the replica objects to the destination bucket owner, select
Change object ownership to destination owner. This option enables you to separate object
ownership of the replicated data from the source. If asked, type the account ID of the destination
bucket.
When you select this option, regardless of who owns the source bucket or the source object,
the AWS account that owns the destination bucket is granted full permission to replica objects.
For more information, see Changing the Replica Owner in the Amazon Simple Storage Service
Developer Guide.
• If you want to add S3 Replication Time Control (S3 RTC) to your replication configuration, select
Replication time control.
Note
When you use S3 RTC, additional per-GB data transfer fees and CloudWatch metrics fees
apply.
Choose Next.
7. Set up an AWS Identity and Access Management (IAM) role that Amazon S3 can assume to replicate
objects on your behalf.
To set up an IAM role, on the Configure options page, under Select role, do one of the following:
• We highly recommend that you choose Create new role to have Amazon S3 create a new IAM
role for you. When you save the rule, a new policy is generated for the IAM role that matches
the source and destination buckets that you choose. The name of the generated role is based on
the bucket names and uses the following naming convention: replication_role_for_source-
bucket_to_destination-bucket.
• You can choose to use an existing IAM role. If you do, you must choose a role that allows Amazon
S3 to replicate objects from the source bucket to the destination bucket on your behalf.
8. A bucket policy is provided on the Configure options page that you can copy and add to the
destination bucket in the other account. For information about adding a bucket policy to an S3
bucket, see How Do I Add an S3 Bucket Policy? (p. 127)
9. If you chose to replicate objects encrypted with AWS KMS, an AWS KMS key policy is provided on
the Configure options page. You can copy this policy to add to the key policy for the AWS KMS
CMK that you are using. The key policy grants the source bucket owner permission to use the CMK.
For information about updating the key policy, see Grant the Source Bucket Owner Permission to
Encrypt Using the AWS KMS CMK (p. 99).
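The key policy statement that the console provides follows this general shape. This is a sketch — the placeholder account ID stands for the source bucket owner's account, and the console generates the exact statement for you:

```json
{
    "Sid": "Allow use of the key for replication",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
    },
    "Action": "kms:Encrypt",
    "Resource": "*"
}
```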
10. On the Review page, review your replication rule. If it looks correct, choose Save. Otherwise, choose
Previous to edit the rule before saving it.
11. After you save your rule, you can edit, enable, disable, or delete your rule on the Replication page.
12. Follow the instructions given on the Replication page under the warning message, The replication
rule is saved, but additional settings are required in the destination account. Sign out of the AWS
account that you are currently in, and then sign in to the destination account.
Important
Replication fails until you sign in to the destination account and complete the following
steps.
13. After you sign in to the destination account, choose the Management tab, choose Replication, and
then choose Receive objects on the Actions menu.
14. On the Receive objects page, you can do the following:
Grant the Source Bucket Owner Permission to Encrypt Using the AWS KMS CMK
1. Sign in to the AWS Management Console using the AWS account that owns the AWS KMS CMK.
Open the IAM console at https://console.aws.amazon.com/iam/.
2. In the left navigation pane, choose Encryption keys.
3. For Region, choose the appropriate AWS Region. Do not use the Region selector in the navigation
bar (upper-right corner).
4. Choose the alias of the CMK that you want to encrypt with.
5. In the Key Policy section of the page, choose Switch to policy view.
6. Using the Key Policy editor, insert the key policy provided by Amazon S3 into the existing key policy,
and then choose Save Changes. You might want to add the policy to the end of the existing policy.
For more information about creating and editing AWS KMS CMKs, see Getting Started in the AWS Key
Management Service Developer Guide.
More Info
• How Do I Manage the Replication Rules for an S3 Bucket? (p. 100)
• How Do I Enable or Suspend Versioning for an S3 Bucket? (p. 12)
• Replication in the Amazon Simple Storage Service Developer Guide
How Do I Manage the Replication Rules for an S3 Bucket?
You use the Amazon S3 console to add replication rules to the source bucket. Replication rules define the
source bucket objects to replicate and the destination bucket where the replicated objects are stored.
For more information about replication, see Replication in the Amazon Simple Storage Service Developer
Guide.
You can manage replication rules on the Replication page. You can add, view, enable, disable, delete,
and change the priority of the replication rules. For information about adding replication rules to a
bucket, see How Do I Add a Replication Rule to an S3 Bucket? (p. 85).
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want.
You can change the destination bucket and the IAM role. If needed, you can copy the required
bucket policy for cross-account destination buckets.
• To change a replication rule, select the rule and choose Edit, which starts the Replication wizard
to help you make the change. For information about using the wizard, see How Do I Add a
Replication Rule to an S3 Bucket? (p. 85).
• To enable or disable a replication rule, select the rule, choose More, and in the drop-down list,
choose Enable rule or Disable rule. You can also disable, enable, or delete all the rules in the
bucket from the More drop-down list.
• To change the rule priorities, choose Edit priorities. You can then change the priority for each
rule under the Priority column heading. Choose Save to save your changes.
You set rule priorities to avoid conflicts caused by objects that are included in the scope of more
than one rule. In the case of overlapping rules, Amazon S3 uses the rule priority to determine
which rule to apply. The higher the number, the higher the priority. For more information about
rule priority, see Replication Configuration Overview in the Amazon Simple Storage Service
Developer Guide.
More Info
• How Do I Add a Replication Rule to an S3 Bucket? (p. 85)
• Replication in the Amazon Simple Storage Service Developer Guide
Configuring Storage Class Analysis
For more information about analytics, see Amazon S3 Analytics – Storage Class Analysis in the Amazon
Simple Storage Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket for which you want to configure storage
class analysis.
4. Choose Add.
5. Type a name for the filter. If you want to analyze the whole bucket, leave the Prefix / tags field
empty.
6. In the Prefix / tags field, type text for the prefix or tag for the objects that you want to analyze, or
choose from the drop-down list that appears when you start typing.
7. If you chose tag, enter a value for the tag. You can enter one prefix and multiple tags.
8. Optionally, you can choose Export data to export analysis reports to a comma-separated values
(.csv) flat file. Choose a destination bucket where the file can be stored. You can type a prefix for the
destination bucket. The destination bucket must be in the same AWS Region as the bucket for which
you are setting up the analysis. The destination bucket can be in a different AWS account.
9. Choose Save.
Amazon S3 creates a bucket policy on the destination bucket that grants Amazon S3 write permission.
This allows it to write the export data to the bucket.
If an error occurs when you try to create the bucket policy, you'll be given instructions on how to fix it.
For example, if you chose a destination bucket in another AWS account and do not have permissions
to read and write to the bucket policy, you'll see the following message. You must have the destination
bucket owner add the displayed bucket policy to the destination bucket. If the policy is not added to the
destination bucket, you won’t get the export data because Amazon S3 doesn’t have permission to write
to the destination bucket. If the source bucket is owned by a different account than that of the current
user, then the correct account ID of the source bucket must be substituted in the policy.
For information about the exported data and how the filter works, see Amazon S3 Analytics – Storage
Class Analysis in the Amazon Simple Storage Service Developer Guide.
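The choices in this procedure correspond to an analytics configuration in the S3 API. As a sketch — the ID, filter prefix, and destination bucket below are placeholders — a configuration that analyzes objects under a prefix and exports the results as CSV looks like this:

```json
{
    "Id": "analyze-documents",
    "Filter": {
        "Prefix": "documents/"
    },
    "StorageClassAnalysis": {
        "DataExport": {
            "OutputSchemaVersion": "V_1",
            "Destination": {
                "S3BucketDestination": {
                    "Format": "CSV",
                    "Bucket": "arn:aws:s3:::destination-bucket",
                    "Prefix": "analysis-exports"
                }
            }
        }
    }
}
```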
More Info
Configuring Amazon S3 Inventory
To configure inventory
Note
It may take up to 48 hours to deliver the first report.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket for which you want to configure Amazon S3
inventory.
• Optionally, add a prefix for your filter to inventory only objects whose names begin with the same
string.
• Choose the destination bucket where you want reports to be saved. The destination bucket
must be in the same AWS Region as the bucket for which you are setting up the inventory. The
destination bucket can be in a different AWS account.
• Optionally, choose a prefix for the destination bucket.
• Choose how frequently to generate the inventory.
a. Choose either the CSV, ORC, or Parquet output file format for your inventory. For more
information about these formats, see Amazon S3 Inventory in the Amazon Simple Storage
Service Developer Guide.
b. To include all versions of the objects in the inventory, choose Include all versions in the Object
versions list. By default, the inventory includes only the current versions of the objects.
c. For Optional fields, select one or more of the following to add to the inventory report:
For information about object lock, see Amazon S3 Object Lock Overview in the Amazon
Simple Storage Service Developer Guide.
For more information about the contents of an inventory report, see What's Included in an
Amazon S3 Inventory? in the Amazon Simple Storage Service Developer Guide.
d. For Encryption, choose a server-side encryption option to encrypt the inventory report, or
choose None:
Destination Bucket Policy
If an error occurs when you try to create the bucket policy, you are given instructions on how to fix it. For
example, if you choose a destination bucket in another AWS account and don't have permissions to read
and write to the bucket policy, you see the following message.
In this case, the destination bucket owner must add the displayed bucket policy to the destination
bucket. If the policy is not added to the destination bucket, you won’t get an inventory report because
Amazon S3 doesn’t have permission to write to the destination bucket. If the source bucket is owned by
a different account than that of the current user, the correct account ID of the source bucket must be
substituted in the policy.
For more information, see Amazon S3 Inventory in the Amazon Simple Storage Service Developer Guide.
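The console choices above map to an inventory configuration in the S3 API. As a sketch — the ID, filter prefix, optional fields, and destination bucket are placeholders — a weekly CSV inventory that includes all object versions looks like this:

```json
{
    "Id": "weekly-inventory",
    "IsEnabled": true,
    "IncludedObjectVersions": "All",
    "Schedule": {
        "Frequency": "Weekly"
    },
    "Filter": {
        "Prefix": "photos/"
    },
    "OptionalFields": [
        "Size",
        "LastModifiedDate",
        "StorageClass"
    ],
    "Destination": {
        "S3BucketDestination": {
            "Bucket": "arn:aws:s3:::destination-bucket",
            "Format": "CSV",
            "Prefix": "inventory-reports"
        }
    }
}
```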
Grant Amazon S3 Permission to Encrypt Using Your AWS KMS CMK
1. Sign in to the AWS Management Console using the AWS account that owns the AWS KMS CMK.
Open the IAM console at https://console.aws.amazon.com/iam/.
2. In the left navigation pane, choose Encryption keys.
3. For Region, choose the appropriate AWS Region. Do not use the region selector in the navigation bar
(upper-right corner).
4. Choose the alias of the CMK that you want to encrypt inventory with.
5. In the Key Policy section of the page, choose Switch to policy view.
6. Using the Key Policy editor, insert the following key policy into the existing policy, and then choose
Save Changes. You might want to copy the policy to the end of the existing policy.
{
    "Sid": "Allow Amazon S3 use of the key",
    "Effect": "Allow",
    "Principal": {
        "Service": "s3.amazonaws.com"
    },
    "Action": [
        "kms:GenerateDataKey*"
    ],
    "Resource": "*"
}
For more information about creating and editing AWS KMS CMKs, see Getting Started in the AWS Key
Management Service Developer Guide.
More Info
How Do I Configure Request Metrics for an S3 Bucket?
• Storage metrics are reported once per day and are provided to all customers at no additional cost.
• Replication metrics are available 15 minutes after enabling a replication rule with S3 Replication Time
Control (S3 RTC). For more information, see How Do I View Replication Metrics? (p. 114)
• Request metrics are available at 1-minute intervals after some latency to process, and the metrics are
billed at the standard CloudWatch rate.
For more information about CloudWatch metrics for Amazon S3, see Monitoring Metrics with Amazon
CloudWatch in the Amazon Simple Storage Service Developer Guide.
To get request metrics, you must opt into them by configuring them on the AWS Management Console
or using the Amazon S3 API.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the objects you want request
metrics for.
4. Choose Requests.
5. In the left pane, choose the edit icon next to the name of the bucket.
6. Select the Request metrics check box. This also enables data transfer metrics.
7. Choose Save.
You have now created a metrics configuration for all the objects in an Amazon S3 bucket. About 15
minutes after CloudWatch begins tracking these request metrics, you can see graphs for the metrics on
the Amazon S3 or CloudWatch console.
You can also define a filter so that the metrics are only collected and reported on a subset of objects in
the bucket. For more information, see How Do I Configure a Request Metrics Filter? (p. 111)
How Do I Configure a Request Metrics Filter?
For more conceptual information about CloudWatch metrics for Amazon S3, see Monitoring Metrics with
Amazon CloudWatch in the Amazon Simple Storage Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that has the objects you want to get request
metrics for.
4. Choose Requests.
7. In Prefix / tags, provide one or more prefixes or tags, separated by commas, that you want to
monitor. From the drop-down list, select whether each value you provided is a tag or a prefix.
8. Choose Save.
You have now created a metrics configuration for request metrics on a subset of the objects in an
Amazon S3 bucket. About 15 minutes after CloudWatch begins tracking these request metrics, you can
see graphs for the metrics in both the Amazon S3 and CloudWatch consoles. You can also configure
request metrics at the bucket level. For information, see How Do I Configure Request Metrics for an S3 Bucket? (p. 109)
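A filtered metrics configuration in the S3 API takes the following shape (a sketch — the ID, prefix, and tag are placeholders); request metrics are then reported only for objects that match the filter:

```json
{
    "Id": "photos-metrics",
    "Filter": {
        "And": {
            "Prefix": "photos/",
            "Tags": [
                { "Key": "team", "Value": "media" }
            ]
        }
    }
}
```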
How Do I View Replication Metrics?
Replication metrics track the rule IDs of the replication configuration. A replication rule ID can be specific
to a prefix, a tag, or a combination of both. For more information about S3 Replication Time Control (S3
RTC), see Replicating Objects Using S3 Replication Time Control (S3 RTC) in the Amazon Simple Storage
Service Developer Guide.
For more information about CloudWatch metrics for Amazon S3, see Monitoring Metrics with Amazon
CloudWatch in the Amazon Simple Storage Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the objects you want
replication metrics for.
4. Choose Replication.
5. In the Rule IDs list in the left pane, select the rule IDs that you want. If you have several rule IDs to
choose from, you can search for the IDs that you want.
6. After choosing the rule IDs that you want, choose Display graphs below the Rule IDs selection box.
You can then view the replication metrics Replication Latency (in seconds), Operations pending
replication, and Bytes pending replication for the rules that you selected. Amazon CloudWatch begins
reporting replication metrics 15 minutes after you enable S3 RTC on the respective replication rule.
You can view replication metrics on the Amazon S3 or CloudWatch console. For information, see Using
Replication Metrics to Monitor Replication Configurations in the Amazon Simple Storage Service Developer
Guide.
Setting Bucket and Object Access Permissions
Buckets and objects are Amazon S3 resources. You grant access permissions to your buckets and objects
by using resource-based access policies. You can associate an access policy with a resource. An access
policy describes who has access to resources. The resource owner is the AWS account that creates the
resource. For more information about resource ownership and access policies, see Overview of Managing
Access in the Amazon Simple Storage Service Developer Guide.
Bucket access permissions specify which users are allowed access to the objects in a bucket and which
types of access they have. Object access permissions specify which users are allowed access to the object
and which types of access they have. For example, one user might have only read permission, while
another might have read and write permissions.
Bucket and object permissions are independent of each other. An object does not inherit the permissions
from its bucket. For example, if you create a bucket and grant write access to a user, you can't access that
user’s objects unless the user explicitly grants you access.
To grant access to your buckets and objects to other AWS accounts and to the general public, you use
resource-based access policies known as access control lists (ACLs).
A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that grants other
AWS accounts or IAM users access to an S3 bucket. Bucket policies supplement, and in many cases,
replace ACL-based access policies. For more information about using IAM with Amazon S3, see Managing
Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
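As an illustration of the resource-based style described above — the account ID and bucket name are placeholders — a bucket policy that grants another AWS account read access to all objects in a bucket might look like this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossAccountRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}
```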
For more in-depth information about managing access permissions, see Introduction to Managing Access
Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
This section also explains how to use the Amazon S3 console to add a cross-origin resource sharing
(CORS) configuration to an S3 bucket. CORS allows client web applications that are loaded in one domain
to interact with resources in another domain.
Topics
• How Do I Block Public Access to S3 Buckets? (p. 117)
• How Do I Edit Public Access Settings for S3 Buckets? (p. 118)
• How Do I Edit Public Access Settings for All the S3 Buckets in an AWS Account? (p. 120)
• How Do I Set Permissions on an Object? (p. 121)
• How Do I Set ACL Bucket Permissions? (p. 124)
• How Do I Add an S3 Bucket Policy? (p. 127)
• How Do I Add Cross-Domain Resource Sharing with CORS? (p. 128)
• Using Access Analyzer for S3 (p. 129)
Amazon Simple Storage Service Console User Guide
Blocking Public Access
The following sections explain how to use the Amazon S3 console to view bucket access status, search for buckets by access type, and configure block public access settings.
The Access column shows the access status of each listed bucket:
• Public – Everyone has access to one or more of the following: List objects, Write objects, Read and write permissions.
• Objects can be public – The bucket is not public, but anyone with the appropriate permissions can grant public access to objects.
• Buckets and objects not public – The bucket and objects do not have any public access.
• Only authorized users of this account – Access is isolated to IAM users and roles in this account and AWS service principals because there is a policy that grants public access.
You can also filter bucket searches by access type. Choose an access type from the drop-down list that is
next to the Search for buckets bar.
More Info
• How Do I Edit Public Access Settings for S3 Buckets? (p. 118)
• How Do I Edit Public Access Settings for All the S3 Buckets in an AWS Account? (p. 120)
• Setting Bucket and Object Access Permissions (p. 116)
Topics
• Editing Public Access Settings for an S3 Bucket (p. 118)
• Editing Public Access Settings for Multiple S3 Buckets (p. 119)
• More Info (p. 120)
To edit the Amazon S3 Block Public Access settings for a single S3 bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want.
3. Choose Permissions.
4. Choose Edit to change the public access settings for the bucket. For more information about the
four Amazon S3 Block Public Access Settings, see Block Public Access Settings in the Amazon Simple
Storage Service Developer Guide.
5. Choose the setting that you want to change, and then choose Save.
6. When you're asked for confirmation, enter confirm. Then choose Confirm to save your changes.
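The per-bucket procedure above has a programmatic equivalent. The following sketch, which assumes the boto3 SDK and a placeholder bucket name, builds the same four-setting configuration that the console edits:

```python
# Sketch of the four Amazon S3 Block Public Access settings as an API payload.
# The bucket name is a placeholder; the call at the bottom is shown for
# illustration only and assumes the boto3 SDK and valid AWS credentials.

def build_public_access_block(block_acls=True, ignore_acls=True,
                              block_policy=True, restrict_buckets=True):
    """Return the PublicAccessBlockConfiguration used by
    s3:PutPublicAccessBlock (all four settings default to enabled)."""
    return {
        "BlockPublicAcls": block_acls,             # reject new public ACLs
        "IgnorePublicAcls": ignore_acls,           # ignore existing public ACLs
        "BlockPublicPolicy": block_policy,         # reject public bucket policies
        "RestrictPublicBuckets": restrict_buckets  # restrict cross-account access
    }

config = build_public_access_block()

# SDK equivalent of the console edit (not executed here):
#   import boto3
#   boto3.client("s3").put_public_access_block(
#       Bucket="my-example-bucket",
#       PublicAccessBlockConfiguration=config)
```

Turning off an individual setting is a matter of flipping the corresponding flag, for example `build_public_access_block(block_acls=False)`.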
To edit the Amazon S3 Block Public Access settings for multiple S3 buckets
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the buckets that you want, and then choose Edit public access
settings.
3. Choose the setting that you want to change, and then choose Save.
4. When you're asked for confirmation, enter confirm. Then choose Confirm to save your changes.
You can change Amazon S3 Block Public Access settings when you create a bucket. For more information,
see How Do I Create an S3 Bucket? (p. 3).
More Info
• How Do I Block Public Access to S3 Buckets? (p. 117)
• How Do I Edit Public Access Settings for All the S3 Buckets in an AWS Account? (p. 120)
• Setting Bucket and Object Access Permissions (p. 116)
To edit block public access settings for all the S3 buckets in an AWS account
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. Choose Block public access (account settings).
3. Choose Edit to change the block public access settings for all the buckets in your AWS account.
4. Choose the settings that you want to change, and then choose Save.
5. When you're asked for confirmation, enter confirm. Then choose Confirm to save your changes.
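The account-wide settings edited above are applied through the S3 Control API rather than the per-bucket S3 API. A minimal sketch, with a placeholder account ID and the boto3 call shown for illustration only:

```python
# Sketch of the account-wide Block Public Access settings. Unlike the
# per-bucket call, these are applied through the S3 Control API and
# require the 12-digit AWS account ID (placeholder below).

ACCOUNT_ID = "111122223333"  # placeholder account ID

account_settings = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# SDK equivalent of the console's "Block public access (account settings)"
# page (illustration only; assumes boto3 and valid credentials):
#   import boto3
#   boto3.client("s3control").put_public_access_block(
#       AccountId=ACCOUNT_ID,
#       PublicAccessBlockConfiguration=account_settings)
```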
More Info
• How Do I Block Public Access to S3 Buckets? (p. 117)
• How Do I Edit Public Access Settings for S3 Buckets? (p. 118)
• Setting Bucket and Object Access Permissions (p. 116)
Bucket and object permissions are independent of each other. An object does not inherit the permissions
from its bucket. For example, if you create a bucket and grant write access to a user, you can't access that
user’s objects unless the user explicitly grants you access.
You can grant permissions to other AWS accounts or predefined groups. The user or group that you grant
permissions to is called the grantee. By default, the owner, which is the AWS account that created the
bucket, has full permissions.
Each permission you grant for a user or a group adds an entry in the ACL that is associated with
the object. The ACL lists grants, which identify the grantee and the permission granted. For more
information about ACLs, see Managing Access with ACLs in the Amazon Simple Storage Service Developer
Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that contains the object.
3. In the Name list, choose the name of the object for which you want to set permissions.
4. Choose Permissions.
5. You can manage object access permissions for the following:
a. Owner access
The owner refers to the AWS account root user, and not an AWS Identity and Access Management (IAM) user. For more information about the root user, see The AWS Account Root User in the IAM User Guide.
To change the owner's object access permissions, under Access for object owner, choose Your
AWS Account (owner).
Select the check boxes for the permissions that you want to change, and then choose Save.
b. Access for other AWS accounts
To grant permissions to an AWS user from a different AWS account, under Access for other AWS accounts, choose Add account. In the Enter an ID field, enter the canonical ID of the AWS user that you want to grant object permissions to. For information about finding a canonical ID, see AWS Account Identifiers in the Amazon Web Services General Reference. You can add as many as 99 users.
Select the check boxes for the permissions that you want to grant to the user, and then choose
Save. To display information about the permissions, choose the Help icons.
c. Public access
To grant access to your object to the general public (everyone in the world), under Public
access, choose Everyone. Granting public access permissions means that anyone in the world
can access the object.
Select the check boxes for the permissions that you want to grant, and then choose Save.
Warning
Use caution when granting the Everyone group anonymous access to your Amazon
S3 objects. When you grant access to this group, anyone in the world can access your
object. If you need to grant access to everyone, we highly recommend that you only
grant permissions to Read objects.
We highly recommend that you do not grant the Everyone group write object
permissions. Doing so allows anyone to overwrite the ACL permissions for the object.
You can also set object permissions when you upload objects. For more information about setting
permissions when uploading objects, see How Do I Upload Files and Folders to an S3 Bucket? (p. 38).
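The object-permission choices described above correspond to ACL grants. The sketch below builds a payload of the shape the PutObjectAcl API accepts; the canonical IDs and bucket/object names are placeholders, and the boto3 call is shown only for illustration:

```python
# Sketch of an object ACL as a PutObjectAcl-style payload. The canonical IDs
# and the bucket/object names are placeholders, not real identifiers.

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"  # "Everyone" group

def build_object_acl(owner_id, grantee_id=None, public_read=False):
    """Return an AccessControlPolicy granting the owner FULL_CONTROL,
    optionally READ to another account and/or the Everyone group."""
    grants = [{
        "Grantee": {"Type": "CanonicalUser", "ID": owner_id},
        "Permission": "FULL_CONTROL",
    }]
    if grantee_id:  # "Access for other AWS accounts" in the console
        grants.append({
            "Grantee": {"Type": "CanonicalUser", "ID": grantee_id},
            "Permission": "READ",
        })
    if public_read:  # "Public access" -> Everyone; avoid granting WRITE here
        grants.append({
            "Grantee": {"Type": "Group", "URI": ALL_USERS},
            "Permission": "READ",
        })
    return {"Owner": {"ID": owner_id}, "Grants": grants}

acl = build_object_acl("1234...owner-canonical-id", public_read=True)

# Applied with the SDK (illustration only):
#   boto3.client("s3").put_object_acl(
#       Bucket="my-example-bucket", Key="example.txt", AccessControlPolicy=acl)
```

Consistent with the warning above, the helper only ever grants READ to the Everyone group, never write permissions.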
More Info
• Setting Bucket and Object Access Permissions (p. 116)
• How Do I Set ACL Bucket Permissions? (p. 124)
You can grant permissions to other AWS account users or to predefined groups. The user or group that
you are granting permissions to is called the grantee. By default, the owner, which is the AWS account
that created the bucket, has full permissions.
Each permission you grant for a user or group adds an entry in the ACL that is associated with
the bucket. The ACL lists grants, which identify the grantee and the permission granted. For more
information about ACLs, see Managing Access with ACLs in the Amazon Simple Storage Service Developer
Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to set permissions for.
3. Choose Permissions.
4. You can manage bucket access permissions for the following:
a. Owner access
The owner refers to the AWS account root user, and not an AWS Identity and Access Management (IAM) user. For more information about the root user, see The AWS Account Root User in the IAM User Guide.
To change the owner's bucket access permissions, under Access for your AWS account root user, choose Your AWS Account (owner).
Select the check boxes for the permissions that you want to change, and then choose Save.
b. Access for other AWS accounts
To grant permissions to an AWS user from a different AWS account, under Access for other AWS accounts, choose Add account. In the Enter an ID field, enter the canonical ID of the AWS user that you want to grant bucket permissions to. For information about finding a canonical ID, see AWS Account Identifiers in the Amazon Web Services General Reference. You can add as many as 99 users.
Select the check boxes next to the permissions that you want to grant to the user, and then
choose Save. To display information about the permissions, choose the Help icons.
Warning
When you grant other AWS accounts access to your resources, be aware that the AWS
accounts can delegate their permissions to users under their accounts. This is known as
cross-account access. For information about using cross-account access, see Creating a
Role to Delegate Permissions to an IAM User in the IAM User Guide.
c. Public access
To grant access to your bucket to the general public (everyone in the world), under Public
access, choose Everyone. Granting public access permissions means that anyone in the world
can access the bucket. Select the check boxes for the permissions that you want to grant, and
then choose Save.
To undo public access to your bucket, under Public access, choose Everyone. Clear all the
permissions check boxes, and then choose Save.
Warning
Use caution when granting the Everyone group public access to your S3 bucket. When
you grant access to this group, anyone in the world can access your bucket. We highly
recommend that you never grant any kind of public write access to your S3 bucket.
d. S3 log delivery group
To grant access to Amazon S3 to write server access logs to the bucket, under S3 log delivery
group, choose Log Delivery.
If a bucket is set up as the target bucket to receive access logs, the bucket permissions must
allow the Log Delivery group write access to the bucket. When you enable server access logging
on a bucket, the Amazon S3 console grants write access to the Log Delivery group for the
target bucket that you choose to receive the logs. For more information about server access
logging, see How Do I Enable Server Access Logging for an S3 Bucket? (p. 16).
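The Log Delivery grant that the console adds for server access logging can be expressed as two ACL grants on the target bucket: write access for the log files and read access on the bucket's ACL. A minimal sketch using the documented S3 Log Delivery group URI:

```python
# Sketch of the grants that server access logging needs on the target bucket:
# the S3 Log Delivery group must be able to write log objects (WRITE) and
# read the bucket ACL (READ_ACP).

LOG_DELIVERY = "http://acs.amazonaws.com/groups/s3/LogDelivery"

def log_delivery_grants():
    """Return the ACL grants the console adds to a log target bucket."""
    grantee = {"Type": "Group", "URI": LOG_DELIVERY}
    return [
        {"Grantee": dict(grantee), "Permission": "WRITE"},
        {"Grantee": dict(grantee), "Permission": "READ_ACP"},
    ]

# These grants would be merged into the target bucket's existing ACL and
# applied with put_bucket_acl (illustration only, boto3 assumed).
```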
You can also set bucket permissions when you are creating a bucket. For more information about setting
permissions when creating a bucket, see How Do I Create an S3 Bucket? (p. 3).
More Info
• Setting Bucket and Object Access Permissions (p. 116)
• How Do I Set Permissions on an Object? (p. 121)
• How Do I Add an S3 Bucket Policy? (p. 127)
For examples of Amazon S3 bucket policies, see Bucket Policy Examples in the Amazon Simple Storage
Service Developer Guide.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to create a bucket policy for or whose bucket policy you want to edit.
3. Choose Permissions, and then choose Bucket Policy.
4. In the Bucket policy editor text box, type or copy and paste a new bucket policy, or edit an existing policy. The bucket policy is a JSON file. The text you type in the editor must be valid JSON.
5. Choose Save.
Note
Amazon S3 displays the Amazon Resource Name (ARN) for the bucket next to the Bucket
policy editor title. For more information about ARNs, see Amazon Resource Names (ARNs)
and AWS Service Namespaces in the Amazon Web Services General Reference.
Directly below the bucket policy editor text box is a link to the Policy Generator, which you
can use to create a bucket policy.
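Because the editor requires valid JSON, generating the policy document programmatically can help avoid syntax errors. The following sketch, with a placeholder bucket name and account ID, builds a policy that grants another account read access to objects:

```python
import json

def build_read_policy(bucket, account_id):
    """Return a bucket policy, as the JSON string the editor expects, that
    lets a hypothetical external account read objects in the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",  # all objects in the bucket
        }],
    }
    return json.dumps(policy, indent=2)

doc = build_read_policy("my-example-bucket", "111122223333")
# Paste `doc` into the Bucket policy editor, or apply it with the SDK
# (illustration only, boto3 assumed):
#   boto3.client("s3").put_bucket_policy(Bucket="my-example-bucket", Policy=doc)
```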
More Info
• Setting Bucket and Object Access Permissions (p. 116)
• How Do I Set ACL Bucket Permissions? (p. 124)
To configure your bucket to allow cross-origin requests, you add CORS configuration to the bucket. A
CORS configuration is an XML document that defines rules that identify the origins that you will allow
to access your bucket, the operations (HTTP methods) supported for each origin, and other operation-
specific information. For more information about CORS, see Cross-Origin Resource Sharing (CORS) in the
Amazon Simple Storage Service Developer Guide.
When you enable CORS on the bucket, the access control lists (ACLs) and other access permission policies
continue to apply.
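A minimal CORS configuration of the kind the editor expects can be sketched as follows. The allowed origin is a placeholder domain, and the dict form shows the equivalent structure for the SDK's put_bucket_cors call:

```python
# Minimal CORS configuration sketch. The console's CORS configuration editor
# expects an XML document like CORS_XML; CORS_RULES is the equivalent
# structure for the SDK. The origin is a placeholder domain, and the rule
# shown permits simple cross-origin GET requests.

CORS_XML = """\
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>"""

CORS_RULES = {
    "CORSRules": [{
        "AllowedOrigins": ["https://www.example.com"],
        "AllowedMethods": ["GET"],
        "AllowedHeaders": ["*"],
        "MaxAgeSeconds": 3000,  # how long browsers may cache preflight results
    }]
}

# SDK equivalent of saving the editor contents (illustration only):
#   boto3.client("s3").put_bucket_cors(
#       Bucket="my-example-bucket", CORSConfiguration=CORS_RULES)
```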
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the Bucket name list, choose the name of the bucket that you want to add a CORS configuration to.
3. Choose Permissions, and then choose CORS configuration.
4. In the CORS configuration editor text box, type or copy and paste a new CORS configuration, or edit an existing configuration. The CORS configuration is an XML file. The text that you type in the editor must be valid XML. For more information, see How Do I Configure CORS on My Bucket?
5. Choose Save.
Note
Amazon S3 displays the Amazon Resource Name (ARN) for the bucket next to the CORS
configuration editor title. For more information about ARNs, see Amazon Resource Names
(ARNs) and AWS Service Namespaces in the Amazon Web Services General Reference.
More Info
• Setting Bucket and Object Access Permissions (p. 116)
• How Do I Set ACL Bucket Permissions? (p. 124)
• How Do I Add an S3 Bucket Policy? (p. 127)
Using Access Analyzer for S3
When reviewing an at-risk bucket in Access Analyzer for S3, you can block all public access to the bucket
with a single click. We recommend that you block all access to your buckets unless you require public
access to support a specific use case. Before you block all public access, ensure that your applications
will continue to work correctly without public access. For more information, see Using Amazon S3 Block
Public Access in the Amazon Simple Storage Service Developer Guide.
You can also drill down into bucket-level permission settings to configure granular levels of access.
For specific and verified use cases that require public access, such as static website hosting, public downloads, or cross-account sharing, you can acknowledge and record your intent for the bucket to remain public or shared by archiving the findings for the bucket. You can revisit and modify these bucket configurations at any time. You can also download your findings as a CSV report for auditing purposes.
Access Analyzer for S3 is available at no extra cost on the Amazon S3 console. Access Analyzer for S3
is powered by AWS Identity and Access Management (IAM) Access Analyzer. To use Access Analyzer for
S3 in the Amazon S3 console, you must visit the IAM console and enable IAM Access Analyzer on a per-
Region basis.
For more information about IAM Access Analyzer, see What is Access Analyzer? in the IAM User Guide. For
more information about Access Analyzer for S3, review the following sections.
Important
When a bucket policy or bucket ACL is added or modified, Access Analyzer generates and updates findings based on the change within 30 minutes. Findings related to account-level block public access settings might not be generated or updated for up to 6 hours after you change the settings.
Topics
• What Information Does Access Analyzer for S3 Provide? (p. 130)
• Enabling Access Analyzer for S3 (p. 131)
• Blocking All Public Access (p. 131)
• Reviewing and Changing a Bucket Policy or a Bucket ACL (p. 132)
• Archiving Bucket Findings (p. 132)
• Activating an Archived Bucket Finding (p. 133)
• Viewing Finding Details (p. 133)
• Downloading an Access Analyzer for S3 Report (p. 133)
What Information Does Access Analyzer for S3 Provide?
For each bucket, Access Analyzer for S3 provides the following information:
• Bucket name
• Discovered by Access Analyzer ‐ When Access Analyzer for S3 discovered the public or shared bucket access.
• Shared through ‐ How the bucket is shared—through a bucket policy, a bucket ACL, or both. If you
want to find and review the source for your bucket access, you can use the information in this column
as a starting point for taking immediate and precise corrective action.
• Status ‐ The status of the bucket finding. Access Analyzer for S3 displays findings for all public and
shared buckets.
• Active ‐ Finding has not been reviewed.
• Archived ‐ Finding has been reviewed and confirmed as intended.
• All ‐ All findings for buckets that are public or shared with other AWS accounts, including AWS
accounts outside of your organization.
Enabling Access Analyzer for S3
To enable Access Analyzer for S3, complete the following prerequisites:
• Set permissions.
• Enable IAM Access Analyzer for each Region where you want to use it.
For more information, see Getting Started with Access Analyzer in the IAM User Guide.
If you don't want to block all public access to your bucket, you can edit your block public access settings
on the Amazon S3 console to configure granular levels of access to your buckets. For more information,
see Using Amazon S3 Block Public Access in the Amazon Simple Storage Service Developer Guide.
In rare cases, Access Analyzer for S3 might report no findings for a bucket that an Amazon S3 block public access evaluation reports as public. This happens because Amazon S3 block public access reviews policies both for current actions and for any potential actions that might be added in the future, either of which could lead to a bucket becoming public. In contrast, Access Analyzer for S3 analyzes only the current actions specified for the Amazon S3 service when it evaluates access status.
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://
console.aws.amazon.com/s3/.
2. In the navigation pane on the left, under Dashboards, choose Access analyzer for S3.
3. In Access Analyzer for S3, choose a bucket.
4. Choose Block all public access.
5. To confirm your intent to block all public access to the bucket, in Block all public access (bucket
settings), enter confirm.
Amazon S3 blocks all public access to your bucket. The status of the bucket finding updates to
resolved, and the bucket disappears from the Access Analyzer for S3 listing. If you want to review
resolved buckets, open IAM Access Analyzer on the IAM console.
Reviewing and Changing a Bucket Policy or a Bucket ACL
If you edit or remove a bucket ACL or bucket policy to remove public or shared access, the status
for the bucket findings updates to resolved. The resolved bucket findings disappear from the Access
Analyzer for S3 listing, but you can view them in IAM Access Analyzer.
The finding details open in IAM Access Analyzer on the IAM console.
Downloading an Access Analyzer for S3 Report
To download a report
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. In the navigation pane on the left, under Dashboards, choose Access analyzer for S3.
3. Choose the AWS Region for the report. Access Analyzer for S3 updates to show buckets for the chosen Region.
4. Choose Download report.
Document History
Latest documentation update: March 27, 2019
The following table describes the important changes in each release of the Amazon Simple Storage
Service Console User Guide from June 19, 2018, onward. For notification about updates to this
documentation, you can subscribe to an RSS feed.
• New archive storage class (p. 135), March 27, 2019 – Amazon S3 now offers a new archive storage class, DEEP_ARCHIVE, for storing rarely accessed objects. For more information, see How Do I Restore an S3 Object That Has Been Archived? and Storage Classes in the Amazon Simple Storage Service Developer Guide.
• Blocking public access to S3 buckets (p. 135), November 15, 2018 – Amazon S3 block public access prevents the application of any settings that allow public access to data within S3 buckets. For more information, see Blocking Public Access to S3 Buckets.
• Filtering enhancements in cross-region replication (CRR) rules (p. 135), September 19, 2018 – In a CRR rule, you can specify an object filter to choose a subset of objects to apply the rule to. Previously, you could filter only on an object key prefix. In this release, you can filter on an object key prefix, one or more object tags, or both. For more information, see How Do I Add a Replication Rule to an S3 Bucket?.
• Updates now available over RSS (p. 135), June 19, 2018 – You can now subscribe to an RSS feed to receive notifications about updates to the Amazon Simple Storage Service Console User Guide.
Earlier Updates
The following table describes the important changes in each release of the Amazon Simple Storage
Service Console User Guide before June 19, 2018.
• New storage class, April 4, 2018 – Amazon S3 now offers a new storage class, ONEZONE_IA (IA, for infrequent access), for storing objects. For more information, see Storage Classes in the Amazon Simple Storage Service Developer Guide.
• Support for ORC-formatted Amazon S3 inventory files, November 17, 2017 – Amazon S3 now supports the Apache optimized row columnar (ORC) format in addition to the comma-separated values (CSV) file format for inventory output files. For more information, see How Do I Configure Amazon S3 Inventory? (p. 106).
• Bucket permissions check, November 06, 2017 – Bucket permissions check in the Amazon S3 console checks bucket policies and bucket access control lists (ACLs) to identify publicly accessible buckets. Bucket permissions check makes it easier to identify S3 buckets that provide public read and write access.
• Default encryption for S3 buckets, November 06, 2017 – Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. You can set default encryption on a bucket so that all objects are encrypted when they are stored in the bucket. The objects are encrypted using server-side encryption with either Amazon S3-managed keys (SSE-S3) or AWS KMS-managed keys (SSE-KMS). For more information, see How Do I Enable Default Encryption for an Amazon S3 Bucket? (p. 13).
• Encryption status in Amazon S3 inventory, November 06, 2017 – Amazon S3 now supports including encryption status in Amazon S3 inventory so you can see how your objects are encrypted at rest for compliance auditing or other purposes. You can also configure Amazon S3 inventory to be encrypted with server-side encryption (SSE) or SSE-KMS so that all inventory files are encrypted accordingly. For more information, see How Do I Configure Amazon S3 Inventory? (p. 106).
• Added functionality and documentation, October 19, 2017 – The Amazon S3 console now supports enabling object-level logging for an S3 bucket with AWS CloudTrail data events logging. For more information, see How Do I Enable Object-Level Logging for an S3 Bucket with AWS CloudTrail Data Events? (p. 19).
• Old Amazon S3 console no longer available, August 31, 2017 – The old version of the Amazon S3 console is no longer available, and the old user guide was removed from the Amazon S3 documentation site.
• General availability of new Amazon S3 console, May 15, 2017 – Announced the general availability of the new Amazon S3 console.
AWS Glossary
For the latest AWS terminology, see the AWS Glossary in the AWS General Reference.