AWS Best Practices For Storage Partners
Storage Partners
September 2019
© 2019, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Notices
This document is provided for informational purposes only. It represents AWS’s current
product offerings and practices as of the date of issue of this document, which are subject to
change without notice. Customers are responsible for making their own independent
assessment of the information in this document and any use of AWS’s products or services,
each of which is provided “as is” without warranty of any kind, whether express or implied.
This document does not create any warranties, representations, contractual commitments,
conditions or assurances from AWS, its affiliates, suppliers or licensors. The responsibilities
and liabilities of AWS to its customers are controlled by AWS agreements, and this document
is not part of, nor does it modify, any agreement between AWS and its customers.
Contents
Introduction
General Solution Creation
When to use Amazon S3
Using the SDK
Add User Agent String Header
Use Amazon S3 Cross-Region Replication
Amazon S3 Object Lock
Performance
Align Object Naming
VPC Endpoints
Support AWS Snow Family for Data Transfer
Security
IAM and Least Privilege Model
Use AWS Key Management Service
Monitoring
Support for CloudWatch
Support for AWS CloudTrail and Config
Cost Optimization
Use Lifecycle Policies
Utilize All Amazon S3 Storage Classes
Use Archive Storage Classes through the Amazon S3 API
Use AWS Lambda for Running Code
Conclusion
Contributors
Further Reading
Document Revisions
Abstract
Many storage partners support AWS storage services or provide storage services on premises
that work in a hybrid manner with AWS services. Some of the most common storage services
on which storage partners build their solutions are AWS object storage services, such as
Amazon Simple Storage Service (Amazon S3) and Amazon S3 Glacier. This paper outlines the
best practices for storage partners when adding support for AWS services to their offerings or
building new offerings on AWS.
Introduction
In order to have a well-supported and integrated product, you should follow this Best Practices
guide, which focuses primarily on support for Amazon S3, a highly durable object storage
service. This guide does not directly explore support for other AWS storage services such as
Amazon Elastic Block Store (Amazon EBS), Amazon Elastic File System (Amazon EFS), Amazon
FSx, and AWS Storage Gateway. If you are interested in learning more about creating solutions
that work with these services, please reach out to your AWS Partner Network (APN) contact.
The best way to have a successful solution with AWS is to follow some of our key tenets, which
include security of customer data, operational monitoring, as well as cost and performance
optimization, all of which are covered in this guide. While this guide does not comprehensively
cover every best practice or every type of application, it provides good directional guidance.
You should also follow our Well-Architected Framework whenever supporting or running any
service on AWS.1 Following these guidelines leads to better solutions and better customer
experiences. If you have any technical questions that go beyond the scope of this guide, engage
with an AWS Solutions Architect to get guidance on further best practices and optimized
architectures.
General Solution Creation
When to use Amazon S3
Amazon S3 is object storage: it does not provide a file system and, in most cases, is not
optimized when used as one. AWS provides several services that do offer file protocols, such as
Amazon EFS and Amazon FSx. Partners also can build or tie in their file systems to Amazon S3
for persistent storage, but partners should optimize the way they interact with Amazon S3 to
take advantage of the parallelism that Amazon S3 supports to maximize performance and
minimize impacts of latency. Because of the high durability and low cost of Amazon S3, it makes
a great persistence layer and can be combined with Amazon EBS or Amazon Elastic Compute
Cloud (Amazon EC2) Instance Store as a caching layer.
When storing data on Amazon S3, you should try to make the data self-describing so all the
metadata needed to make the data useful also is persisted in the bucket alongside the data. You
should also build in your application the ability to recover from the data stored on Amazon S3.
With this design, even if you lose your entire compute layer you will still be able to recover the
data for the customer. Your whole application gains the durability benefits of Amazon S3 within
a region or across regions when combined with cross-region replication.
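As a sketch of this pattern using the AWS SDK for Python (boto3), with the bucket name, keys, and metadata fields below being hypothetical, you can persist user-defined metadata and a small manifest alongside the data itself:

import json
import boto3

s3 = boto3.client("s3")

# Store the object together with the metadata needed to make it useful,
# so the data set is self-describing and recoverable from the bucket alone.
s3.put_object(
    Bucket="examplebucket",
    Key="backups/2019/09/item-0001",
    Body=b"...payload...",
    Metadata={"source-host": "filer01", "app-version": "5.0"},
)

# A manifest stored next to the data lets the application rebuild its
# catalog even if the entire compute layer is lost.
manifest = {"items": ["backups/2019/09/item-0001"]}
s3.put_object(
    Bucket="examplebucket",
    Key="backups/2019/09/manifest.json",
    Body=json.dumps(manifest).encode("utf-8"),
)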
Using the SDK
The benefit of using an SDK is that it has many of the best practices and error handling built
in, thereby decreasing the development time and trial and error of integrating directly with
the APIs, including things like retries and utilizing multiple endpoints.
You can go to SDK Tools to get the latest SDKs for the programming language used by your
product or integration code.2
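As a minimal sketch with the AWS SDK for Python (boto3), and with the retry count below as an illustrative value, you can rely on and tune the SDK's built-in retry handling rather than implementing it yourself:

import boto3
from botocore.config import Config

# The SDK provides request signing, endpoint resolution, and retries with
# exponential backoff; the retry budget is configurable.
config = Config(retries={"max_attempts": 10})
s3 = boto3.client("s3", config=config)

# Calls made through the client automatically benefit from this handling.
print([b["Name"] for b in s3.list_buckets()["Buckets"]])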
Add User Agent String Header
AWS uses the user agent string to track Amazon S3 and Amazon S3 Glacier usage driven by partner
products. You should add an APN user agent string of the form APN/1.0 <PARTNER>/1.0
<SOLUTION>/<VERSION> to your requests. For example, if your company name is Cloud Corp and you
have a product called My Solution that has a current version of 5.0, you can use the string below:
APN/1.0 CloudCorp/1.0 MySolution/5.0
<PARTNER> - This should be the name of your company. Please be consistent with the name you
use across all products, including case. This can include letters and numbers; avoid spaces and
do not use any special characters.
<SOLUTION> - The product/solution name. This can include letters and numbers; avoid spaces
and do not use any special characters.
<VERSION> - This should be the numeric version of the product/solution. Try to use just the
major and minor version; there is no need to go down to the level of a specific patch.
If needed, you can incorporate any additional data, such as the normal agent string, after the APN
user agent string. The APN string should always be first, and you should separate it with a comma,
as in the example below:
APN/1.0 CloudCorp/1.0 MySolution/5.0, MyCustomAgent/2.0
To test your UA string, you should make sure it is able to match the regular expression below.
APN\/1\.0\s([^\/]+)\/[0-9]+\.[0-9]+\s([^\/]+)\/([^\/\,\s]+)
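As a quick sanity check, the sketch below (using the illustrative Cloud Corp values above) validates the string against the pattern and installs it with boto3; botocore's Config(user_agent=...) replaces the default user agent, which keeps the APN string first:

import re
import boto3
from botocore.config import Config

# Illustrative partner/solution values following the format above.
APN_UA = "APN/1.0 CloudCorp/1.0 MySolution/5.0"

# Validate against the pattern from this guide.
pattern = re.compile(r"APN\/1\.0\s([^\/]+)\/[0-9]+\.[0-9]+\s([^\/]+)\/([^\/\,\s]+)")
assert pattern.match(APN_UA), "user agent string does not match the APN format"

# Config(user_agent=...) replaces the default user agent string entirely,
# so the APN portion comes first; append anything else after a comma.
s3 = boto3.client("s3", config=Config(user_agent=APN_UA))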
In addition to the User Agent String, which helps track usage of Amazon S3 and Amazon S3
Glacier, the APN storage team has begun tracking Amazon EC2 and Amazon EBS influenced
usage by partners. To get more information about participating in this program, talk to your APN
contact.
Use Amazon S3 Cross-Region Replication
Objects also should be configured to replicate to a bucket in another region using the AWS
backend network, a feature called Cross-Region Replication (CRR).4 CRR can be configured at
the bucket level and is the preferred method of replicating objects across regions, rather than
requiring you to incorporate replication into your product and manage uploads to multiple
regions.
CRR requires bucket versioning to be enabled on the buckets on both sides of the
replication.5 Make sure to review how bucket versioning works before enabling this feature on
your bucket or requesting that your customer enable it on their bucket. Once bucket versioning is
enabled, it cannot be disabled, only suspended; make sure this is the right configuration for your
application before enabling it.
It is important to note that CRR only replicates new objects added to the bucket after being
enabled. Therefore, it is important to enable CRR before objects start being uploaded to a bucket
if it is going to be used as part of your product's design.
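A minimal boto3 sketch of this setup, where the bucket names, regions, and replication role ARN are placeholders and the destination bucket is assumed to already exist in another region:

import boto3

src = boto3.client("s3", region_name="us-east-1")
dst = boto3.client("s3", region_name="us-west-2")

# CRR requires versioning on both the source and destination buckets.
src.put_bucket_versioning(
    Bucket="source-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
dst.put_bucket_versioning(
    Bucket="dest-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate every new object from the source bucket to the destination bucket.
src.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/crr-role",
        "Rules": [
            {
                "ID": "ReplicateAll",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::dest-bucket"},
            }
        ],
    },
)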
Amazon S3 Object Lock
Even when write once, read many (WORM) protection is not needed for compliance reasons,
partners should consider enabling this feature in Governance Mode for any buckets storing
data that needs protection from being deleted underneath your application without your
knowledge. By enabling this feature, partners reduce the risk of data that has been persisted
to Amazon S3 being removed by any user or application other than the one intended.
For more information on this feature, see the Amazon S3 Object Lock developer guide.6
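An illustrative boto3 sketch, where the bucket name and the 30-day retention period are assumptions:

import boto3

s3 = boto3.client("s3")

# Object Lock can only be turned on when the bucket is created.
s3.create_bucket(Bucket="examplebucket", ObjectLockEnabledForBucket=True)

# A Governance Mode default retention prevents deletion by any principal
# that has not been granted the explicit bypass permission.
s3.put_object_lock_configuration(
    Bucket="examplebucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "GOVERNANCE", "Days": 30}},
    },
)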
Performance
Align Object Naming
Amazon S3 utilizes partitions on the backend to optimize performance. While partitioning is
transparent to many workflows, higher performance workloads benefit from optimizing to
ensure data exists in multiple partitions, thereby avoiding any IO contention of a single partition.
Since your product may be used by customers with various workloads and performance
requirements, optimizing to avoid possible performance bottlenecks is a best practice.
The best way to ensure content is spread between partitions is by not using sequential naming
in your Amazon S3 object key names. Remember, in Amazon S3 all objects are referenced by
key name, which can be one or more levels emulating a folder structure, but there are no real
folders. For example, an object may have a key name of myfile.txt, giving it an Amazon S3 path
of s3://mybucket/myfile.txt. Another object may have a key name of images/2017/myimage.jpg,
which would give it an Amazon S3 path that incorporates the whole key:
s3://mybucket/images/2017/myimage.jpg. Each level separated by a slash will present as a
virtual folder in the Amazon S3 console.
If all the object keys in the bucket were prefixed with images/2017, then they would likely fall
in the same partition, which could limit performance. Consider using a random hash or other
methodology to create a non-sequential distribution at the beginning of your key names.
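One possible approach, sketched below with a hypothetical helper function, is to prefix each key with the first few characters of a hash of the natural key:

import hashlib

def hashed_key(natural_key: str, prefix_len: int = 4) -> str:
    """Prepend a short, deterministic hash so otherwise-sequential key
    names spread across partitions instead of clustering under one prefix."""
    digest = hashlib.md5(natural_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{natural_key}"

# Produces something like 'ab12/images/2017/myimage.jpg';
# the hash prefix varies by key.
print(hashed_key("images/2017/myimage.jpg"))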
For additional information, please review Request Rate and Performance Considerations.7
VPC Endpoints
For components that are part of an Amazon Virtual Private Cloud (VPC), communication should
remain within the VPC. This has many benefits, including providing a secure connection to
Amazon S3 that does not require a gateway or network address translation (NAT). In addition,
performance is improved by using Amazon VPC endpoints for services such as Amazon S3. This
allows services running within the customer's VPC, such as Amazon EC2 and AWS Lambda, to
communicate on a more direct path to an internal Amazon S3 endpoint that exists within your
Amazon VPC. After configuring Amazon S3 VPC endpoints, your services that run outside the
VPC or outside of AWS can continue to access the bucket in the same way with no changes.
Please note that this works by using the default AWS DNS server that generally runs at .2 on
your subnet. If the customer points instances only at a custom DNS server, there may be
additional configuration required to implement this feature.
Jeff Barr has a good blog post about setting up a VPC endpoint for Amazon S3.8
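A gateway endpoint can also be created programmatically; in this boto3 sketch the region, VPC ID, and route table ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint for Amazon S3 keeps traffic on the AWS network,
# with no internet gateway or NAT required.
response = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])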
Support AWS Snow Family for Data Transfer
The AWS Snowball Edge comes in two options: compute optimized and storage optimized.
Unlike the original AWS Snowball, AWS Snowball Edge can also run compute rather than just
storage. The AWS Snowball Edge also has both an Amazon S3 interface and a
Network File System (NFS) interface and can also run one or more Amazon EC2 instances,
thereby enabling partners to run their software on the device. Running your software on AWS
Snowball Edge enables many different workflows for customers, including data pre-processing
as well as running software at dark sites. Both options of AWS Snowball Edge offer both
compute and storage, but at different ratios. For more information on the latest AWS Snowball
Edge offerings, visit the AWS Snowball Device Differences page. 9
Talk to your APN team contact to discuss the possibility of credits, which may be available in
certain cases for new solutions that support AWS Snowball. When testing AWS Snowball for
your application, ensure that you perform not only functional testing but also scale testing
with both many files and large amounts of data, as large-scale transfer is the most common reason customers
use AWS Snowball. Make sure to review the AWS Snowball best practices guide.10
Security
IAM and Least Privilege Model
AWS Identity and Access Management (IAM) roles should always be used if your application is
running on Amazon EC2, Lambda, or other supported AWS services.11 AWS access keys and secret
access keys should never be stored on your Amazon EC2 instances or within your application if
your application runs on Amazon EC2. The only time you should be storing access keys is if you
need to access AWS services from outside AWS, such as from code running on premises.
IAM roles should be assigned to your instances at the time of creation and should be incorporated
as part of your AWS CloudFormation template. For more information, see Create an
IAM role in your template12 and Creating an IAM managed policy in your template13. The
combination of IAM roles associated with your instances and managed policies associated with your
roles allows a scalable and secure way for your instances to access other AWS resources
such as Amazon S3.
Please take a look at the IAM best practices guide for other best practices that you should
incorporate into your application, if applicable to your configuration.14
Another important best practice is the least privilege access policy, which you should always
follow when creating your solution's access to Amazon S3 or any other AWS services. See below
for examples of good and bad policies.
Suppose your solution needs to upload objects and read back those objects in a bucket named
appbucket. Example 1 below shows a bad policy that would allow access to all S3 buckets with
all access privileges. Example 2 below shows a good policy that just grants the privileges needed
to get the job done and nothing extra.
"Version": "2012-10-17",
"Statement": [
"Sid": "BadPolicy",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
"Version": "2012-10-17",
"Statement": [
"Sid": "GoodPolicy1099",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject"
],
"Resource": "arn:aws:s3:::appbucket/*"
Notice in Example 2 that access is granted to only get and put object API calls. This grants access
to only read and write objects. You may need additional privileges depending on what your
application is doing, but you should only grant what you need. Notice that it also only allows
access to objects under the bucket called “appbucket.” You can further lock it down to a key
prefix as well if needed.
Use AWS Key Management Service
When encrypting data that is going to be stored in AWS, you should utilize AWS Key Management
Service (AWS KMS) for all encryption key management. You should make use of data keys for each
object using envelope encryption. Check out the AWS KMS Guide for more details on using AWS
KMS.15
Amazon S3 directly integrates with AWS KMS for data-at-rest protection. Your application
should always strive to keep customers' data protected and use both data-in-transit and
data-at-rest encryption to provide an end-to-end, always-protected data security strategy. For more
information on Amazon S3 data encryption with KMS see How Amazon Simple Storage Service
(Amazon S3) Uses AWS KMS16 and Protecting Data Using Server-Side Encryption with AWS KMS–
Managed Keys (SSE-KMS)17.
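A minimal boto3 sketch of an SSE-KMS upload, where the bucket, key, and KMS key ARN are placeholders:

import boto3

s3 = boto3.client("s3")

# Request SSE-KMS so Amazon S3 performs envelope encryption with a data key
# generated under the specified customer master key in AWS KMS.
s3.put_object(
    Bucket="examplebucket",
    Key="protected/data.bin",
    Body=b"...payload...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/00000000-0000-0000-0000-000000000000",
)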
It also is an AWS best practice to keep all data related to the customer, including metadata, on
the AWS Cloud or in the customer's location. If there is a separate control plane, metadata store
that hosts data for AWS customers, or any other external dependency that might prevent
availability or durability of the customer's data set, there should be a variant of that control
plane, metadata store, or other dependency that runs on AWS for AWS customers, even if it
runs in a partner's AWS account.
Monitoring
Support for CloudWatch
Monitoring application, service and component health is critical to AWS customers.
Amazon CloudWatch is the main service AWS uses to collect and monitor various AWS and
custom metrics.18 You should incorporate any metrics relevant to your solution into
Amazon CloudWatch; this enables customers to monitor health on a dashboard that they are
accustomed to using and are likely utilizing for all their other AWS metrics.
It is easy to incorporate custom metrics from your solution; see Publishing Custom Metrics for
more details.19
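For example, a boto3 sketch like the following publishes a hypothetical solution metric; the namespace and metric name are illustrative:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a solution-specific metric so customers can watch it on the same
# dashboards they use for their other AWS metrics.
cloudwatch.put_metric_data(
    Namespace="CloudCorp/MySolution",
    MetricData=[
        {
            "MetricName": "ReplicationQueueDepth",
            "Value": 42.0,
            "Unit": "Count",
        }
    ],
)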
In addition to metrics, Amazon CloudWatch has many other features. One of these is
Amazon CloudWatch Logs, which enables you to store logs for your solution in an easily
accessible, centralized location.20 Amazon CloudWatch Logs helps customers save time
searching for logs across appliances or within your application, and it enables logs to persist
beyond the life of instances.
Additionally, you should publish all important events in your application to Amazon CloudWatch
Events.21 This will allow customers to build or utilize their existing event-driven actions to
respond to changes in your application, whether that is component failure, security events,
thresholds being hit, or any other event your application generates.
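A sketch of publishing such an event with boto3, where the event source and detail fields are hypothetical:

import json
import boto3

events = boto3.client("events")

# Publish an application event so customers can attach their own rules and
# event-driven actions to failures, thresholds, and other state changes.
events.put_events(
    Entries=[
        {
            "Source": "cloudcorp.mysolution",
            "DetailType": "ComponentFailure",
            "Detail": json.dumps({"component": "indexer", "severity": "critical"}),
        }
    ]
)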
Cost Optimization
Use Lifecycle Policies
Lifecycle management of data is critical for optimizing customers' costs. AWS offers several
classes of storage that have different price points, including but not limited to Amazon S3
Standard, which is the default Amazon S3 tier, Amazon S3 Standard-Infrequent Access (Amazon S3
Standard-IA), and Amazon S3 Glacier. Similar to an on-premises multi-tiered storage array or a
hierarchical storage management system, Amazon S3 is able to seamlessly tier content to
different backend storage systems while being virtually transparent.*
*If objects are moved to Amazon S3 Glacier, the move itself is transparent, but upon being read,
the object must first be restored from Amazon S3 Glacier. This restore is an automatic process
when tiered from Amazon S3 using the Amazon S3 API, but it is subject to Glacier retrieval times,
which can range from minutes to hours depending on the retrieval option selected.24
On Amazon S3, this automated movement is handled by lifecycle policies.25 These policies are
XML documents that specify a set of one or more rules regarding what content is moved, where
it is moved, and in what timeframe it is moved. You can manage lifecycle policies through
the Amazon S3 Lifecycle APIs26 or you can use one of the AWS SDKs to set a lifecycle policy27.
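For example, a lifecycle configuration can be set with boto3 as sketched below; the bucket, prefix, and transition timings are illustrative:

import boto3

s3 = boto3.client("s3")

# Move objects under the logs/ prefix to Standard-IA after 30 days and to
# Amazon S3 Glacier after 90 days, then expire them after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="examplebucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "TierThenExpireLogs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)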
Utilize All Amazon S3 Storage Classes
The Amazon S3 Standard class is the default class and is ideal for data that is small, transient,
and/or frequently accessed. There are no minimum size or minimum time requirements. Like
most other Amazon S3 storage classes, it provides 11 nines of durability and stores data across
three or more Availability Zones (AZs).
The Amazon S3 Standard-IA storage class may be a good fit for many use cases. It offers the
same durability as Amazon S3 Standard but at a lower storage cost. However, this class is designed
for use cases where, as the name suggests, the objects are not accessed frequently. Because
data is infrequently accessed, this storage class has access costs that need to be taken into
account. There are also other considerations when using Amazon S3 Standard-IA, like minimum
object size and minimum duration of storage. However, for the right workflow and use case,
Amazon S3 Standard-IA can make a huge cost difference to your customers.
The Amazon S3 Intelligent-Tiering storage class is designed to move data between a frequently
accessed and infrequently accessed tier. It is important to note that this does not move objects
between Amazon S3 Standard and Amazon S3 Standard-IA storage classes, but instead the
object stays on the Amazon S3 Intelligent-Tiering storage class and moves transparently on the
backend between storage tiers. Unlike with Amazon S3 lifecycle policies, object movement on
this storage class is based on when the objects are accessed and not when the objects are
created. The movement is handled automatically by AWS, so the customer or partner doesn't
need to be aware of the usage pattern ahead of time. Since AWS is doing the data movement,
unlike with Amazon S3 Standard-IA there is no additional access charge even if the data is on an
infrequently accessed tier. However, there is a small per-object monitoring charge that should
be taken into account when selecting this storage class. This class is ideal for deduplicated data
sets where using lifecycle policies is not practical because objects are shared by new and old
data and it is hard to predict what data will need to be accessed at any given time.
The Amazon S3 One Zone-IA storage class shares most of the same characteristics as the Amazon
S3 Standard-IA storage class including minimum object size and duration characteristics. The
difference which sets Amazon S3 One Zone-IA apart from Amazon S3 Standard-IA as well as all
other Amazon S3 storage classes is that objects are stored in only a single AZ. Even
though data is stored in a single AZ, the storage class is still designed to provide 11 nines of
durability, with one less nine of availability compared to Amazon S3 Standard-IA, but it is not
protected against AZ failure. The storage class provides customers with a cost-effective solution
for secondary copies of data or data that can be recreated easily.
There is a seventh storage class intentionally not mentioned above, which is Reduced
Redundancy Storage (RRS). We no longer recommend using this storage class. If RRS is
listed as an option or used in some way by your application, we recommend removing it and
replacing it with Amazon S3 One Zone-IA or another appropriate storage class.
(Table: Amazon S3 storage class comparison. RRS: not recommended. *Availability for archive
storage classes is based on after objects are restored.)
Please note that while minimum durations and minimum object sizes do apply to certain storage
classes, they are not hard constraints and are simply the minimum that would be billed for
objects in that storage class. For example, if you store data in the Amazon S3 Standard-IA storage
class for 20 days, you can still delete it, but the customer will be billed for 30 days for
that object regardless. The same is true for the minimum object size: with the IA storage
classes, the customer will be billed for a minimum of 128KB per object regardless of object size.
Amazon S3 lifecycle policies, however, do enforce these rules, so you will not be
able to configure a policy to move objects out of the IA storage classes before 30 days. You also
cannot move objects to the IA storage classes before they have been on Amazon S3 Standard for
30 days, so it is important to specify the correct storage class as part of the initial upload. This
avoids these constraints and avoids having customers pay for 30 days of the Amazon S3 Standard
class plus lifecycle costs when they really just need their data on one of the IA storage classes.
There are two ways to place objects in an Amazon S3 storage class. You can either utilize the
lifecycle policies detailed in the previous section, or you can use the Amazon S3 API or SDKs to
specify the storage class directly in the request, so the object is uploaded directly to that storage
class and does not need to first be put on the Amazon S3 Standard class and then transitioned to
the other storage class.
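A boto3 sketch of uploading directly to a target storage class, with placeholder bucket and key names:

import boto3

s3 = boto3.client("s3")

# Uploading straight to the target storage class avoids paying for 30 days
# of Amazon S3 Standard plus a lifecycle transition.
s3.put_object(
    Bucket="examplebucket",
    Key="archive/2019/dataset.tar",
    Body=b"...payload...",
    # Other values include ONEZONE_IA, INTELLIGENT_TIERING,
    # GLACIER, and DEEP_ARCHIVE.
    StorageClass="STANDARD_IA",
)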
The archive storage classes, such as Amazon S3 Glacier and Amazon S3 Glacier Deep Archive,
have slightly different characteristics than the other Amazon S3 storage classes; more details
about these storage classes can be found in the next section.
For more information on Amazon S3 storage classes, read the Storage Class Intro. 29
Use Archive Storage Classes through the Amazon S3 API
It is recommended that if your application is currently using the Glacier API, you discontinue
that support for new data and migrate to using only the Amazon S3 API. The Amazon S3 Glacier
Deep Archive storage class is only available through the Amazon S3 API.
Using the Amazon S3 API also will make it easier to support these storage classes from a
development perspective, because partners only will have to support one API and sending
data to an archive storage class is virtually identical to sending to any other Amazon S3 storage
class. The archive storage classes differ from other Amazon S3 storage classes in how objects
are read. The archive storage classes are asynchronous, which means that objects are not
available for reading immediately.
There is a two-step process in which the object must first be restored and then it can be
accessed like any other object on Amazon S3. Restoration times depend on the object size and
the retrieval option selected. Amazon S3 Glacier supports three different retrieval options:
Expedited, which typically makes objects available in 1-5 minutes for all but the largest
objects (250MB+); Standard, which typically makes objects available in 3-5 hours; and Bulk,
which typically makes objects available within 5-12 hours. Amazon S3 Glacier Deep Archive
supports two different retrieval options: Standard, which typically makes objects available in
12 hours; and Bulk, which typically makes objects available within 48 hours. It is recommended
that you support all the retrieval options in your application, as customers may need them to
make the most cost-efficient storage class choice for their use case. For example, supporting
only the Standard retrieval option will limit your application's use case for customers who want
to use Amazon S3 Glacier but require the ability to have access to data within minutes on
occasion.
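A boto3 sketch of this two-step restore flow, where the bucket, key, retrieval tier, and restore window are illustrative:

import boto3

s3 = boto3.client("s3")

# Step 1: request a restore; the Tier selects Expedited, Standard, or Bulk.
s3.restore_object(
    Bucket="examplebucket",
    Key="archive/2019/dataset.tar",
    RestoreRequest={
        "Days": 7,  # how long the restored copy remains available
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# Step 2: poll until the restore completes, then read the object as usual.
head = s3.head_object(Bucket="examplebucket", Key="archive/2019/dataset.tar")
if 'ongoing-request="false"' in head.get("Restore", ""):
    body = s3.get_object(Bucket="examplebucket", Key="archive/2019/dataset.tar")["Body"]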
Use AWS Lambda for Running Code
An AWS Lambda function also can be run on a schedule or invoked manually, which can be of
benefit for running certain backend functions that you might normally run on your on-premises
system. You can move processes like verification, garbage collection, and general maintenance
to run on the cloud and prevent any unnecessary costs for outbound data transfers or time to
read over the network. As part of your integrations, your on-premises system can invoke an
AWS Lambda function for execution closer to Amazon S3.
An important consideration when using AWS Lambda functions is that a function can only run for
a maximum of 15 minutes. However, with AWS Step Functions support, you can have
a much longer execution time by using state machine logic between functions. AWS Lambda
also has two powerful features for partners: Lambda Layers,31 which allow you to package up
shared code and libraries; and custom runtimes,32 which allow you to write your functions in
any language.
Please read Using AWS Lambda with Amazon S3 for additional information on this powerful
integration.33
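As an illustrative sketch rather than a complete integration, a Python Lambda handler invoked by an S3 event notification might verify newly written objects close to the data:

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Sketch of a Lambda function triggered by an S3 event notification:
    it runs a verification pass next to the data instead of reading it
    back over the network to an on-premises system."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)
        # Illustrative check: confirm the stored size matches the event.
        assert head["ContentLength"] == record["s3"]["object"]["size"]
    return {"status": "ok"}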
Conclusion
Following these best practices is critical for a highly successful Amazon S3 integration.
Additionally, there are many details about each service that may be beneficial to your particular
integration. It is beyond the scope of this document to provide details of every feature and every
possible integration point. AWS services also are constantly being improved, so it is
recommended that partners regularly review documentation and check out the What’s New with
AWS site.34 Also review the APN site and blogs for the latest details on new features and contact
your APN team if you have any questions.
Contributors
The following individuals and organizations contributed to this document:
Further Reading
For additional information, see the following:
Document Revisions
Date Description
Notes
1 https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
2 https://aws.amazon.com/tools/
3 https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
4 https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
5 https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
6 https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html
7 https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
8 https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
9 https://docs.aws.amazon.com/snowball/latest/developer-guide/device-differences.html
10 https://docs.aws.amazon.com/snowball/latest/developer-guide/BestPractices.html
11 https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
12 https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-role.html
13 https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-iam-managedpolicy.html
14 https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
15 https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
16 https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html
17 https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
18 https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/Welcome.html
19 https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/publishingMetrics.html
20 https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
21 https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
22 https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html
23 https://aws.amazon.com/config/
24 https://aws.amazon.com/glacier/faqs/
25 https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-lifecycle-rules.html
26 https://docs.aws.amazon.com/AmazonS3/latest/dev/manage-lifecycle-using-rest.html
27 https://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-set-lifecycle-configuration-intro.html
28 https://aws.amazon.com/s3/storage-classes/
29 https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html
30 https://docs.aws.amazon.com/AmazonS3/latest/dev/restoring-objects.html
31 https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
32 https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html
33 https://docs.aws.amazon.com/lambda/latest/dg/with-s3.html
34 https://aws.amazon.com/new
35 https://aws.amazon.com/whitepapers/storage-options-aws-cloud/
36 https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf
37 https://d1.awsstatic.com/whitepapers/Storage/Backup_and_Recovery_Approaches_Using_AWS.pdf
38 https://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html