Using Amazon S3

The document discusses how to create and configure an Amazon S3 bucket. It outlines steps to name a bucket, set permissions and access controls, enable versioning, add tags, upload and work with objects, and delete objects. Additional features covered include lifecycle rules, replication between buckets, and security considerations.

a. Create a bucket.
1. The first step to create a bucket is to give the bucket a name.
2. The name that you assign to your bucket must be globally unique even
though the bucket is created inside a region.
3. Bucket names can be 3–63 characters long and can contain only
lowercase letters, numbers, dots (.) and hyphens (-).
4. Bucket names must begin and end with a letter or number.
5. When you store an object in a bucket, the combination of bucket name,
key and version ID uniquely identifies the object. For example, if you
store an object with the key 1111-11-11/anobject.zip in a bucket called
abucket, the URL would be as follows:
https://abucket.s3.amazonaws.com/1111-11-11/anobject.zip.
6. You use this URL to reference objects within the bucket.
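The naming rules and URL format above can be sketched in code. A minimal illustration (the helper names are ours, and the regex covers only the basic rules listed in steps 2–4, not every restriction that S3 enforces):

```python
import re

# 3-63 chars; lowercase letters, numbers, dots and hyphens;
# must begin and end with a letter or number.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Check the basic S3 bucket naming rules described above."""
    return bool(BUCKET_NAME_RE.match(name))

def object_url(bucket: str, key: str) -> str:
    """Build the URL that identifies an object in a bucket."""
    return f"https://{bucket}.s3.amazonaws.com/{key}"

print(is_valid_bucket_name("abucket"))   # True
print(is_valid_bucket_name("ABucket"))   # False: uppercase not allowed
print(object_url("abucket", "1111-11-11/anobject.zip"))
```

Note that the real rules include further restrictions (for example, names must not be formatted as IP addresses), so treat this as a first-pass check only.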
b. Configure the bucket.
1. After giving the bucket a name, you can make a series of configurations.
2. If you leave all the defaults, the bucket will be secure and accessible
only by you.
3. First, you choose a Region for the bucket.
4. The Region will default to the Region that’s currently selected for your
AWS account.
5. Choose a region close to you to minimize latency and costs.
6. You can control who has access to your bucket and your objects by
setting ownership and access controls.
7. Object ownership is controlled through access control lists (ACLs).
8. ACLs are lists that specify which AWS accounts can access the objects
in your bucket and whether other AWS accounts can write to and use the
objects.
9. By default, ACLs are turned off, and you, as the bucket owner, own all of
the objects in your bucket.
10. Public access allows people and applications outside your AWS account
to view and use your objects.
11. This access will be needed when you are hosting a static website, but
usually you will want to keep your objects private.
12. You have multiple controls for blocking and granting public access.
13. By default, new buckets, access points and objects don’t allow public
access. The default settings keep your bucket and objects private
and owned by only you.
14. However, users can modify bucket policies, access point policies or
object permissions to allow public access.
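The four block-public-access controls referred to in steps 12–14 correspond to a configuration like the following JSON used by the S3 API; for new buckets all four settings default to true, which keeps the bucket private:

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```

Setting any of these to false is what opens the door to public access, for example when hosting a static website.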
15. Versioning gives you the ability to keep multiple variants of an object in
the same bucket.
16. You can use versioning to preserve, retrieve and restore every version
of every object stored in your S3 bucket.
17. With versioning, you can recover from both unintended user actions and
application failures.
18. The versioning state applies to all of the objects in that bucket.
19. When you enable versioning in a bucket, all new objects are versioned
and given a unique version ID.
20. Objects that already existed in the bucket when versioning was enabled
keep a null version ID.
21. They are given a unique version ID only when they are modified by future
requests.
22. Versioning is disabled by default and can be enabled later.
23. You enable and suspend versioning at the bucket level.
24. After you version-enable a bucket, it can never return to an un-versioned
state.
25. But you can suspend versioning on that bucket.
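The versioning behavior in steps 15–25 can be illustrated with a toy in-memory model. This is a conceptual sketch only, not the S3 API: version IDs here are simple counters, whereas real S3 version IDs are opaque strings, and pre-versioning objects carry the literal version ID null:

```python
import itertools

class VersionedBucket:
    """Toy model of S3 versioning semantics (not the real API)."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._versions = {}   # key -> list of (version_id, data)

    def put(self, key, data, versioned=True):
        # Objects written before versioning is enabled get the "null" ID.
        vid = f"v{next(self._ids)}" if versioned else "null"
        self._versions.setdefault(key, []).append((vid, data))
        return vid

    def get(self, key, version_id=None):
        versions = self._versions[key]
        if version_id is None:
            return versions[-1][1]          # latest version wins
        return dict(versions)[version_id]   # fetch a specific version

bucket = VersionedBucket()
bucket.put("a.txt", b"old", versioned=False)  # existed before versioning
v = bucket.put("a.txt", b"new")               # new write gets a unique ID
print(v, bucket.get("a.txt"), bucket.get("a.txt", "null"))
```

The key point the model shows: enabling versioning never rewrites existing objects; it only changes how future writes are recorded, which is why both the old and the new data remain retrievable.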
26. Object Tags – You can add tags to your bucket to track storage costs
or other criteria for individual projects.
27. The tag key is the name of the tag and must be unique.
28. The tag value is optional and does not need to be unique. For example,
in the tag project/Alpha, project is the key and Alpha is the value.
Another tag could be cost-center/Alpha.
29. You can add tags to a bucket from the properties page at any point.
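Tags such as project/Alpha are plain key-value pairs. As a small illustration (the helper function is hypothetical, but the TagSet shape matches the one used by the S3 tagging APIs):

```python
def to_tag_set(tags: dict) -> list:
    """Convert {key: value} pairs into the TagSet list shape used by S3 APIs."""
    return [{"Key": k, "Value": v} for k, v in tags.items()]

# The two example tags from the text: keys are unique, values need not be.
tag_set = to_tag_set({"project": "Alpha", "cost-center": "Alpha"})
print(tag_set)
```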
30. After you have configured your bucket, choose Create bucket and your
bucket will be created.
31. Upload objects – Now to upload objects, choose the Upload button or
drag and drop objects into the bucket.
32. Amazon S3 will store an unlimited amount of data per bucket, but single
objects can be no larger than 5 TB.
33. Object size for objects that are uploaded by using the console is limited
to 160 GB.
34. You can upload larger objects by using the command line interface (CLI).
35. You will receive a notification with the status of the upload so that you
can retry any items that failed to upload.
36. Storage class – When you upload objects, you can choose the storage
class for the objects that are part of the upload (see
https://aws.amazon.com/s3/storage-classes-infographic/).
37. The default class is S3 Standard.
38. After the upload, you will see the storage class of each object in the
console.
39. You can change the storage class of objects at any time by choosing the
objects and then choosing Edit storage class from the Actions menu.
40. This menu gives you the list of storage classes to choose from.
41. Multipart upload – You can use multipart upload to upload a single
object as a set of parts.
42. Each part is a contiguous portion of the object’s data.
43. You can upload these object parts independently and in any order.
44. If transmission of any part fails, you can retransmit that part without
affecting the other parts.
45. After all parts of your object are uploaded, Amazon S3 assembles these
parts and creates the object.
46. Consider using multipart upload when you have objects with a size
over 100 MB.
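The part arithmetic behind multipart upload can be sketched as follows, assuming the commonly documented S3 limits of at most 10,000 parts and a 5 MiB minimum part size for every part except the last (the helper is ours, not an AWS API):

```python
MIB = 1024 * 1024
MAX_PARTS = 10_000
MIN_PART_SIZE = 5 * MIB   # minimum for all parts except the last

def plan_parts(object_size: int, part_size: int = 100 * MIB) -> list:
    """Split object_size bytes into contiguous (offset, length) part ranges."""
    if part_size < MIN_PART_SIZE:
        raise ValueError("part size below the 5 MiB minimum")
    parts = [(off, min(part_size, object_size - off))
             for off in range(0, object_size, part_size)]
    if len(parts) > MAX_PARTS:
        raise ValueError("too many parts; increase the part size")
    return parts

# A 250 MiB object uploaded in 100 MiB parts -> 3 parts, last one 50 MiB.
parts = plan_parts(250 * MIB)
print(len(parts), parts[-1])
```

Because each part carries its own offset and length, any part can be uploaded independently and retried on its own, which is exactly what makes multipart upload resilient for large objects.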
47. Work with objects – As soon as your objects are uploaded, you can
work with them.
48. You can copy an object for the following reasons:
• To create additional copies of objects.
• To rename objects by copying them and deleting the original
ones.
• To move objects across Amazon S3 locations.
• To change object metadata.
49. You can create a copy of your object by choosing the object and then
choosing Copy from the Actions menu.
50. To reference an object in a bucket, you can copy the URL of the object,
which is the unique identifier for the object.
51. You can download individual objects in the console by choosing the
object and then choosing Download.
52. Delete Objects – Because all objects in your S3 bucket incur storage
costs, you should delete objects that you no longer need.
53. You can delete objects from the console by choosing the object and
choosing ‘Delete’.
54. In a bucket with S3 versioning enabled, you can protect your object from
accidental or malicious deletion by enabling multi-factor authentication
(MFA) delete.
55. MFA delete requires two forms of authentication to change a bucket
versioning state or permanently delete an object version.

56. The following are the two forms of authentication:
• Your security credentials
• A valid serial number, a space, and the six-digit code displayed on
an approved authentication device.
57. Additional Features: Amazon S3 has additional features to consider:
• Lifecycle policy rules
• Replication rules
• Security
58. Lifecycle rules:
• Transition actions define when objects transition from one
storage class to another.
• Expiration actions define when objects expire and are deleted.
59. Replication rules:
• Replication offers automatic copying of objects across S3 buckets.
• You can set up replication rules to replicate objects to the
following destinations:
o A bucket owned by your AWS account or by another AWS account
o A single destination bucket or multiple destination buckets
o A bucket within the same Region or in a different Region
60. Using replication can help you to efficiently maintain copies of objects in
multiple Regions, in different storage classes, and under different
ownership.
61. You can create replication rules from the console by selecting the
Management tab inside your bucket.
62. Cross-Region and Same-Region Replication: When considering
replication, you must decide whether you want to replicate objects within
the same Region or across Regions. Both strategies have benefits, and
you should base your decision on the business needs and constraints of
your workloads:
• Cross-Region Replication: You can use Cross-Region replication
(CRR) to copy objects across Amazon S3 buckets in different AWS
Regions. Consider this type of replication if you need to:
o Meet compliance requirements that dictate that you store
data at even greater distances for disaster recovery.
o Minimize latency by maintaining object copies in AWS
Regions that are geographically closer to your users.
o Increase operational efficiency for compute clusters in two
different AWS Regions by maintaining object copies in
those regions.
• Same-Region Replication: Same-Region Replication (SRR) is
used to copy objects across Amazon S3 buckets in the same AWS
Region. SRR can help you do the following:
o Aggregate logs into a single bucket for processing of logs
in a single location.
o Configure live replication between production and test
accounts that use the same data. You can replicate objects
between those accounts while maintaining object
metadata.
o Abide by data sovereignty laws by storing multiple copies
of your data in separate AWS accounts within a certain
region.
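A replication rule is written as a JSON configuration attached to the source bucket. A hedged sketch of one such rule (the role and bucket ARNs are placeholders, and real replication also requires versioning to be enabled on both buckets):

```json
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "ID": "ReplicateEverything",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::destination-bucket",
        "StorageClass": "STANDARD_IA"
      }
    }
  ]
}
```

The Destination block is where the choices above are made: the bucket ARN can point to a bucket in another account or Region, and StorageClass lets replicas land in a cheaper class than the source objects.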
63. Bucket Security – Cloud security at AWS is the highest priority. Security
is a shared responsibility between AWS and you (see
https://aws.amazon.com/compliance/shared-responsibility-model/).
64. As the customer, you are responsible for security in the cloud as
outlined by the shared responsibility model.
65. Amazon S3 default settings create a bucket that is secure. However, as
you upload and work with objects, you might want to employ other
security features.
66. Bucket Security – IAM Policy – By default, all Amazon S3 resources are
private, and only the resource owner can access them.
67. You can optionally grant access permissions to others by writing an IAM
access policy.
68. You can grant users, groups, and roles controlled access to Amazon S3
and your objects.
69. Bucket Security – Bucket Policy – Another way to grant access to an S3
bucket is to use a bucket policy.
70. This policy is attached to the bucket and can grant other AWS accounts
or users access to the objects that are stored within the bucket.
71. You can specify the type of access within the policy. Policies are written
in JSON (JavaScript Object Notation).
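As an illustration of steps 70–71, a bucket policy that grants another AWS account read access to the objects in a bucket might look like the following (the account ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::abucket/*"
    }
  ]
}
```

The Principal element is what distinguishes a bucket policy from an IAM policy: because the policy is attached to the bucket, it must name who is being granted access.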
72. Bucket Security: You can protect your data while in transit and while at
rest by using different types of encryption.
73. When you upload and download objects, you can protect your data by
using client-side encryption.
74. Data is encrypted before you upload it, and you hold all the encryption
keys.
75. You can protect your Amazon S3 data at rest with client-side encryption
or with server-side encryption.
76. With server-side encryption, Amazon S3 encrypts your data at the object
level when you upload it and decrypts the data when you download the
object.
77. You can configure default encryption for an Amazon S3 bucket.
78. You can use server-side encryption with Amazon S3 managed keys (SSE-
S3) (the default), server-side encryption with AWS Key Management
Service (AWS KMS) keys (SSE-KMS), or dual-layer server-side encryption
with AWS KMS keys (DSSE-KMS).
79. With the default option (SSE-S3), Amazon S3 uses one of the strongest
block ciphers, 256-bit Advanced Encryption Standard (AES-256), to
encrypt each object uploaded to the bucket.
80. With SSE-KMS, you have more control over your key.
81. If you use SSE-KMS, you can choose an AWS KMS customer managed
key or use the default AWS managed key (aws/s3).
82. SSE-KMS also provides you with an audit trail that shows when your KMS
key was used and by whom.
83. With DSSE-KMS, Amazon S3 applies two individual layers of object-level
encryption to satisfy compliance requirements for highly regulated
customers.
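Default bucket encryption (step 77) is expressed as a server-side-encryption configuration. A sketch using SSE-KMS (the KMS key ARN is a placeholder; leaving out KMSMasterKeyID falls back to the AWS managed key aws/s3):

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/11111111-2222-3333-4444-555555555555"
      },
      "BucketKeyEnabled": true
    }
  ]
}
```

Once this is set on the bucket, objects uploaded without an explicit encryption header are encrypted with the specified key automatically.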
