AWS Free Tier
o The following screen appears after clicking on the "Complete Sign Up" button. If you already have an AWS
account, enter the email address of your AWS account; otherwise, click on "Create an AWS account".
o On clicking on the "Create an AWS account" button, the following screen appears, which requires some fields to
be filled in by the user.
o Now, fill your contact information.
o After providing the contact information, fill your payment information.
o After providing your payment information, confirm your identity by entering your phone number and security
check code, and then click on the "Contact me" button.
o AWS will contact you to verify whether the provided contact number is correct or not.
o When the number is verified, the following message appears on the screen.
o The final step is the confirmation step. Click on the link to log in again; it redirects you to the "Management
Console".
AWS account ID
An AWS account ID is a 12-digit number, such as 123456780123, which can be used to construct Amazon Resource
Names (ARNs). When we refer to resources such as an IAM user, the AWS account ID distinguishes those resources
from resources in other AWS accounts.
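For example, the account ID is embedded in every ARN. A small sketch (the account ID and the IAM user name are made-up placeholders, not real values):

```shell
# Construct an IAM user ARN from a (made-up) 12-digit account ID.
# ARN format: arn:partition:service:region:account-id:resource
ACCOUNT_ID="123456780123"   # hypothetical account ID from the text
IAM_USER="Alice"            # hypothetical IAM user name
ARN="arn:aws:iam::${ACCOUNT_ID}:user/${IAM_USER}"
echo "$ARN"                 # prints arn:aws:iam::123456780123:user/Alice
```

IAM is a global service, so its ARNs leave the region field empty, which is why two colons appear in a row.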
We can find the AWS account ID from AWS Management Console. The following steps are taken to view your account
ID:
o Log in to the AWS account by entering your email address and password; you will then move to the
management console.
o Now, click on the account name, a dropdown menu appears.
o Click on "My Account" in the dropdown menu of account name to view your account ID.
Canonical User ID
o A canonical user ID is a 64-character, hexadecimal-encoded 256-bit number.
o A canonical user ID is used in an Amazon S3 bucket policy for cross-account access, meaning that one AWS
account can access resources in another AWS account. For example, if you want another AWS account to
access your bucket, you need to specify that account's canonical user ID in your bucket's policy.
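As a sketch, a bucket policy granting read access to another account via its canonical user ID might look like the following (the canonical ID, bucket name, and key pattern are placeholders, not real values):

```shell
# Write a sample S3 bucket policy that grants another AWS account
# (identified by its canonical user ID) read access to the objects
# in a bucket. Every identifier below is a hypothetical placeholder.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "CanonicalUser": "79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be"
      },
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
EOF
python3 -m json.tool policy.json > /dev/null && echo "policy.json is valid JSON"
```

The policy would then be attached to the bucket (for example through the console's bucket policy editor).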
o Firstly, visit the website https://aws.amazon.com, and log in to the AWS account by entering your email
address and password.
o From the right side of the management console, click on the account name.
o Click on the "My Security Credentials" from the dropdown menu of the account name. The screen appears
which is shown below:
Click on the Account identifiers to view the Canonical user ID.
S3-101
o S3 is one of the first services produced by AWS.
o S3 stands for Simple Storage Service.
o S3 provides developers and IT teams with secure, durable, highly scalable object storage.
o It is easy to use with a simple web services interface to store and retrieve any amount of data from anywhere
on the web.
What is S3?
o S3 is a safe place to store the files.
o It is object-based storage, i.e., you can store images, Word files, PDF files, etc.
o The files which are stored in S3 can be from 0 Bytes to 5 TB.
o It has unlimited storage, meaning that you can store as much data as you want.
o Files are stored in buckets. A bucket is like a folder in S3 that stores the files.
o S3 uses a universal namespace, i.e., bucket names must be unique globally. Each bucket name forms part of
a DNS address; therefore, the bucket must have a unique name to generate a unique DNS address.
o If you upload a file to an S3 bucket, you will receive an HTTP 200 status code, which means that the upload
was successful.
Advantages of Amazon S3
o Create Buckets: Firstly, we create a bucket and provide a name for it. Buckets are the containers in
S3 that store the data. Buckets must have a unique name to generate a unique DNS address.
o Storing data in buckets: A bucket can be used to store an infinite amount of data. You can upload as many
files as you want into an Amazon S3 bucket, i.e., there is no maximum limit on the number of files. Each object
can contain up to 5 TB of data. Each object can be stored and retrieved by using a unique, developer-assigned key.
o Download data: You can also download your data from a bucket and can also give permission to others to
download the same data. You can download the data at any time whenever you want.
o Permissions: You can also grant or deny access to others who want to download or upload the data from
your Amazon S3 bucket. Authentication mechanism keeps the data secure from unauthorized access.
o Standard interfaces: S3 can be used through the standard REST and SOAP interfaces, which are designed
in such a way that they can work with any development toolkit.
o Security: Amazon S3 offers security features that protect your data from unauthorized access.
An object stored in Amazon S3 consists of the following parts:
o Key: It is simply the name of the object, for example, hello.txt or spreadsheet.xlsx. You can use the key to
retrieve the object.
o Value: It is simply the data, which is made up of a sequence of bytes. It is the actual data inside the file.
o Version ID: Version ID uniquely identifies the object. It is a string generated by S3 when you add an object to
the S3 bucket.
o Metadata: It is data about the data you are storing, i.e., a set of name-value pairs with which you can store
information regarding an object. Metadata can be assigned to the objects in an Amazon S3 bucket.
o Subresources: Subresource mechanism is used to store object-specific information.
o Access control information: You can put the permissions individually on your files.
Amazon S3 Concepts
o Buckets
o Objects
o Keys
o Regions
o Data Consistency Model
o Buckets
o A bucket is a container used for storing the objects.
o Every object is incorporated in a bucket.
o For example, if the object named photos/tree.jpg is stored in the treeimage bucket, then it can be
addressed by using the URL http://treeimage.s3.amazonaws.com/photos/tree.jpg.
o A bucket has no limit on the number of objects that it can store. No bucket can exist inside other
buckets.
o S3 performance remains the same regardless of how many buckets have been created.
o The AWS user that creates a bucket owns it, and no other AWS user can own it. Therefore, we can
say that the ownership of a bucket is not transferable.
o The AWS account that creates a bucket can delete a bucket, but no other AWS user can delete the
bucket.
o Objects
o Objects are the entities which are stored in an S3 bucket.
o An object consists of object data and metadata, where metadata is a set of name-value pairs that
describes the data.
o An object consists of some default metadata such as date last modified, and standard HTTP metadata,
such as Content type. Custom metadata can also be specified at the time of storing an object.
o It is uniquely identified within a bucket by key and version ID.
o Key
o A key is a unique identifier for an object.
o Every object in a bucket is associated with one key.
o An object can be uniquely identified by using a combination of bucket name, the key, and optionally
version ID.
o For example, in the URL http://jtp.s3.amazonaws.com/2019-01-31/Amazons3.wsdl where "jtp" is the
bucket name, and key is "2019-01-31/Amazons3.wsdl"
o Regions
o You can choose a geographical region in which you want to store the buckets that you have created.
o A region is chosen in such a way that it optimizes latency, minimizes costs, or addresses regulatory
requirements.
o Objects will not leave the region unless you explicitly transfer the objects to another region.
Creating an S3 Bucket
o Sign in to the AWS Management Console. After signing in, the screen shown below appears:
o Move to the S3 service. After clicking on S3, the screen shown below appears:
o To create an S3 bucket, click on "Create bucket". On clicking the "Create bucket" button, the screen
shown below appears:
o Enter the bucket name, which should look like a DNS address, and it should be resolvable. A bucket is like a
folder that stores the objects. A bucket name should be unique, should start with a lowercase letter or a
number, must not contain any invalid characters, and should be 3 to 63 characters long.
o Click on the "Create" button. Now, the bucket is created.
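The naming rules above can be checked with a small shell sketch (a simplification; the full AWS rules have a few more restrictions, such as forbidding names that look like IP addresses):

```shell
# Rough check of the S3 bucket-naming rules described above:
# 3-63 characters, lowercase letters, digits, hyphens and dots,
# starting and ending with a letter or digit.
valid_bucket_name() {
  echo "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

valid_bucket_name "javatpointbucket" && echo "javatpointbucket: valid"
valid_bucket_name "MyBucket" || echo "MyBucket: invalid (uppercase)"
valid_bucket_name "ab" || echo "ab: invalid (too short)"
```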
We can see from the above screen that the bucket and its objects are not public, as by default all the objects are
private.
o Now, click on the "javatpointbucket" to upload a file into this bucket. On clicking, the screen shown below
appears:
o Click on the "Upload" button to add the files to your bucket.
o Click on the "Add files" button.
o Add the jtp.jpg file.
o Click on the "upload" button.
From the above screen, we observe that "jtp.jpg" has been successfully uploaded to the bucket "javatpointbucket".
o Move to the properties of the object "jtp.jpg" and click on the object URL, which appears on the right side
of the screen, to open the file.
o On clicking the object URL, the screen shown below appears:
From the above screen, we observe that we are not allowed to access the objects of the bucket.
o To overcome the above problem, we need to edit the public access settings of the bucket, i.e.,
"javatpointbucket", and uncheck all of them.
o Save these permissions.
o Enter "confirm" in a textbox, then click on the "confirm" button.
S3 Storage Classes
o S3 Standard
o S3 Standard IA
o S3 One Zone-Infrequent Access
o S3 Glacier
S3 Standard
o Standard storage class stores the data redundantly across multiple devices in multiple facilities.
o It is designed to sustain the loss of 2 facilities concurrently.
o Standard is the default storage class if no storage class is specified during upload.
o It provides low latency and high throughput performance.
o It is designed for 99.99% availability and 99.999999999% durability.
S3 Standard IA
o IA stands for infrequently accessed.
o Standard IA storage class is used when data is accessed less frequently but requires rapid access when
needed.
o It has a lower storage fee than S3 Standard, but you will be charged a retrieval fee.
o It is designed to sustain the loss of 2 facilities concurrently.
o It is mainly used for larger objects greater than 128 KB that are kept for at least 30 days.
o It provides low latency and high throughput performance.
o It is designed for 99.9% availability and 99.999999999% durability.
S3 Glacier
o S3 Glacier storage class is the cheapest storage class, but it can be used for archive only.
o You can store any amount of data at a lower cost than other storage classes.
o S3 Glacier provides three types of models:
o Expedited: In this model, data is retrieved within a few minutes, and it has a very high fee.
o Standard: The retrieval time of the standard model is 3 to 5 hours.
o Bulk: The retrieval time of the bulk model is 5 to 12 hours.
o You can upload the objects directly to the S3 Glacier.
o It is designed for 99.999999999% durability of objects across multiple availability zones.
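As a toy illustration of how these classes are usually chosen, the sketch below maps an access pattern to a storage class name. The mapping is a simplification of the guidance above, not an AWS API; real storage class selection also depends on pricing, object size, and retention requirements:

```shell
# Toy helper that maps an access pattern to an S3 storage class,
# following the rules of thumb described above. Illustrative only.
choose_storage_class() {
  case "$1" in
    frequent)   echo "STANDARD" ;;      # default class, low latency
    infrequent) echo "STANDARD_IA" ;;   # rapid access, lower storage fee, retrieval fee
    archive)    echo "GLACIER" ;;       # cheapest, slow retrieval
    *)          echo "STANDARD" ;;      # fall back to the default
  esac
}

choose_storage_class frequent    # prints STANDARD
choose_storage_class archive     # prints GLACIER
```

The echoed names match the values accepted by the AWS CLI's `--storage-class` option.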
Versioning
Versioning is a means of keeping multiple versions of an object in the same S3 bucket. Versioning can be used to
retrieve, preserve, and restore every version of an object in an S3 bucket.
For example, a bucket may contain two objects with the same key but different version IDs, such as photo.jpg
(version ID 11) and photo.jpg (version ID 12).
Versioning-enabled buckets allow you to recover objects from accidental deletion or overwrite. It serves two purposes:
o If you delete an object, instead of deleting the object permanently, S3 creates a delete marker, which becomes
the current version of the object.
o If you overwrite an object, S3 creates a new version of the object and preserves the previous version of the
object.
Note: Once you enable the versioning of a bucket, then it cannot be disabled. You can suspend the
versioning.
The versioning state applies to all the objects in a bucket. Once versioning is enabled, all the objects in the
bucket remain versioned, and each one is given a unique version ID. The following are the important points:
o If the versioning state is not enabled, then the version ID of the objects is set to null; existing objects are
not changed or affected.
o The bucket owner can suspend versioning to stop accruing new object versions. When you suspend versioning,
existing objects are not affected.
o Now, to see the objects in a bucket, we need to make the bucket public. Move to the "Edit public
access settings", uncheck all the settings, and then save them.
o After saving the settings, the screen shown below appears:
Type "confirm" in the textbox to confirm the settings, then click on the "confirm" button.
o When the settings are confirmed, the screen appears as shown below:
The above screen shows that the objects in a bucket have become public.
o Now, we add the versioning to our bucket. Move to the properties of a bucket, i.e., jtpbucket and click on the
versioning.
o On clicking on the versioning, the screen appears as shown below:
We can either enable or suspend the versioning. Suppose we enable the versioning and save this setting; this adds
versioning to our bucket.
o Now, we create the second version of the file. Suppose I change the content of the file and re-upload it; it
then becomes the second version of the file.
In the above screen, we change the content from "version 1" to "version 2" and then save the file.
o When we click on the "show", we can see all the versions of a file.
From the above screen, we can see both versions of the file, and the currently uploaded file becomes the latest
version. Both files have the same size, i.e., 18.0 B, and the same storage class, i.e., Standard.
o Now, we delete an object. Move to the Actions dropdown menu and click on the Delete.
o On deleting the object, the screen appears as shown below:
We observe that the bucket becomes empty.
o However, when we click on the Show Version, we can see all the versions of a file, i.e., Delete marker and
other two versions of a file.
We observe from the above screen that the object is not permanently deleted; it can still be restored. Therefore, the
versioning concept is used to restore objects.
o If you want to restore the object, delete the "delete marker" by clicking on the Actions dropdown menu
and click on the Delete.
o Click on "Hide Versions", and we will observe that the file has been restored.
Important points to be remembered:
o It stores all versions of an object (including all writes and even if you delete an object).
o It is a great backup tool.
o Once versioning is enabled, it cannot be disabled, only suspended.
o It is integrated with lifecycle rules.
o Versioning's MFA Delete capability uses multi-factor authentication that can be used to provide the additional
layer of security.
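The same versioning setting can also be applied from the command line. A sketch of the configuration commands (shown for illustration only: it assumes configured AWS credentials and the example bucket jtpbucket, and it changes a live bucket, so it cannot be run locally):

```shell
# Apply a versioning configuration to the bucket. Status can be
# "Enabled" or "Suspended" -- there is no way to fully disable
# versioning once it has been enabled.
aws s3api put-bucket-versioning --bucket jtpbucket \
    --versioning-configuration Status=Enabled

# List every version of every object, including delete markers.
aws s3api list-object-versions --bucket jtpbucket
```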
Cross Region Replication
o Create two buckets: Create two buckets within the AWS Management Console, where one bucket is the source
bucket, and the other is the destination bucket.
o Enable versioning: Cross Region Replication can be implemented only when versioning is enabled on both
buckets.
o Amazon S3 encrypts the data in transit across AWS regions using SSL: This provides security while the
data traverses different regions.
o Already uploaded objects will not be replicated: If any kind of data already exists in the bucket, then that
data will not be replicated when you perform the cross region replication.
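Behind the console, Cross Region Replication is expressed as a replication configuration attached to the source bucket. A sketch of what such a configuration might look like (the role ARN, account ID, and bucket names are placeholders):

```shell
# Sample replication configuration (placeholders throughout). It would
# be attached to the source bucket, for example with:
#   aws s3api put-bucket-replication --bucket jtpbucket \
#       --replication-configuration file://replication.json
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456780123:role/replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {
        "Bucket": "arn:aws:s3:::jtp1bucket"
      }
    }
  ]
}
EOF
python3 -m json.tool replication.json > /dev/null && echo "replication.json is valid JSON"
```

The empty prefix means the rule applies to every object added to the source bucket after replication is enabled.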
o Compliance Requirements
By default, Amazon S3 stores the data across different geographical regions or availability zone to have the
availability of data. Sometimes there could be compliance requirements that you want to store the data in
some specific region. Cross Region Replication allows you to replicate the data at some specific region to
satisfy the requirements.
o Minimize Latency
Suppose your customers are in two geographical regions. To minimize latency, you need to maintain the
copies of data in AWS region that are geographically closer to your users.
o Maintain object copies under different ownership: Regardless of who owns the source bucket, you can
tell Amazon S3 to change the ownership of the replicas to the AWS account user that owns the destination
bucket. This is referred to as the owner override option.
o Now, we will see whether the files have been replicated from jtpbucket to the jtp1bucket. Click on
the jtp1bucket.
The above screen shows that the bucket is empty. Therefore, we can say that existing objects are not replicated
from one bucket to another automatically; we can copy them by using the AWS CLI (Command Line Interface). To use
the AWS CLI, you need to install the CLI tool.
o Copy the access key and secret access key to the cmd.
o To view the S3 buckets, run the command aws s3 ls.
o To copy the objects of jtpbucket to jtp1bucket, run the command aws s3 cp s3://jtpbucket
s3://jtp1bucket --recursive.
The above screen shows that the objects of jtpbucket have been copied to the jtp1bucket.
o Click on the "jtp1bucket".
From the above screen, we observed that all the files in the original bucket have been replicated to another bucket,
i.e., jtp1bucket.
Note: Any further changes made in the original bucket will always be copied to its replicated bucket.
EC2
o EC2 stands for Amazon Elastic Compute Cloud.
o Amazon EC2 is a web service that provides resizable compute capacity in the cloud.
o Amazon EC2 reduces the time required to obtain and boot new user instances to minutes. In the older
days, if you needed a server, you had to put in a purchase order, and cabling had to be done to get a new
server, which was a very time-consuming process. Now, Amazon provides EC2, a virtual machine in the cloud,
and that has completely changed the industry.
o You can scale the compute capacity up and down as per the computing requirement changes.
o Amazon EC2 changes the economics of computing by allowing you to pay only for the resources that you
actually use. Previously, you would buy a physical server with more CPU and RAM capacity than you needed,
over a 5-year term, so you had to plan 5 years in advance. People spent a lot of capital on such
investments. EC2 allows you to pay only for the capacity that you actually use.
o Amazon EC2 provides the developers with the tools to build resilient applications that isolate themselves from
some common scenarios.
On Demand
o It allows you to pay a fixed rate by the hour or even by the second with no commitment.
o Linux instances are billed by the second, and Windows instances are billed by the hour.
o On Demand is perfect for the users who want low cost and flexibility of Amazon EC2 without any up-front
investment or long-term commitment.
o It is suitable for the applications with short term, spiky or unpredictable workloads that cannot be interrupted.
o It is useful for the applications that have been developed or tested on Amazon EC2 for the first time.
o On Demand instance is recommended when you are not sure which instance type is required for your
performance needs.
Reserved
o It is a way of making a reservation with Amazon, or we can say that we make a contract with Amazon. The
contract can be 1 or 3 years in length.
o With a Reserved instance, making a contract means paying some of the cost upfront, which gives you a
significant discount on the hourly charge for the instance.
o It is useful for applications with steady state or predictable usage.
o It is used for those applications that require reserved capacity.
o Users can make up-front payments to reduce their total computing costs. For example, you get the maximum
discount only if you pay everything upfront on a 3-year contract; if you do not pay all upfront and take a
1-year contract, you will not get as large a discount as with a 3-year, all-upfront contract.
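To make the discount reasoning concrete, here is some illustrative arithmetic with entirely made-up prices; real AWS rates and discount percentages differ by instance type, region, and payment option:

```shell
# Compare 3 years of On-Demand usage to a 3-year all-upfront
# reservation, using hypothetical numbers (not real AWS rates).
ON_DEMAND_HOURLY_CENTS=10          # hypothetical: $0.10 per hour
HOURS_3_YEARS=$((24 * 365 * 3))    # 26280 hours in 3 years
on_demand_total=$((ON_DEMAND_HOURLY_CENTS * HOURS_3_YEARS))

DISCOUNT_PERCENT=60                # hypothetical all-upfront discount
reserved_total=$((on_demand_total * (100 - DISCOUNT_PERCENT) / 100))

echo "On-Demand over 3 years: ${on_demand_total} cents"
echo "Reserved (all upfront): ${reserved_total} cents"
```

With these toy numbers the reservation costs 105120 cents instead of 262800, which is the kind of saving that makes reservations attractive for steady, predictable workloads.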
Spot Instances
o It allows you to bid whatever price you want for instance capacity, providing better savings if your
applications have flexible start and end times.
o Spot Instances are useful for those applications that have flexible start and end times.
o It is useful for those applications that are feasible at very low compute prices.
o It is useful for those users who have an urgent need for large amounts of additional computing capacity.
o EC2 Spot Instances provide significant discounts compared to On Demand prices.
o Spot Instances are used to optimize your costs on the AWS cloud and scale your application's throughput up to
10X.
o EC2 Spot Instances will continue to run until you terminate them, or until AWS reclaims the capacity when
the Spot price rises above your bid.
Dedicated Hosts
o A dedicated host is a physical server with EC2 instance capacity which is fully dedicated to your use.
o The physical EC2 server is the dedicated host, which can help you reduce costs by allowing you to use your
existing server-bound software licenses, for example, VMware, Oracle, or SQL Server, depending on the licenses
that you can bring over to AWS and use on the Dedicated Host.
o Dedicated Hosts are used to address compliance requirements and reduce costs by allowing you to use your
existing server-bound software licenses.
o It can be purchased as a Reservation for up to 70% off On-Demand price.
Number of Instances: It defines how many EC2 instances you want to create. I leave it as 1 as I want to create
only one instance.
Purchasing Option: In the purchasing option, you need to set the price, request from, request to, and persistent
request. Right now, I leave it as unchecked.
Tenancy: Choose "Shared - Run a shared hardware instance" from the dropdown menu, as we are sharing
hardware.
Network: Choose your network; set it as the default, i.e., vpc-dacbc4b2 (default), where VPC is a virtual private
cloud in which we can launch AWS resources such as EC2 instances.
Subnet: It is a range of IP addresses in a virtual cloud. In a specified subnet, you can add new AWS resources.
Shutdown behavior: It defines the behavior of the instance on shutdown. You can either stop or terminate the
instance when you shut down the machine. For now, I leave it as Stop.
Enable Termination Protection: It protects the instance against accidental termination.
Monitoring: We can monitor things such as CPU utilization. Right now, I uncheck the Monitoring.
User data: In Advanced details, you can pass bootstrap scripts to the EC2 instance, for example telling it to
download PHP, download and install Apache, etc.
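For illustration, a minimal user-data script might look like the following. This is an assumption of a typical bootstrap script for an Amazon Linux instance (package names and service commands vary by AMI); it runs as root on first boot, so it cannot be tested locally:

```shell
#!/bin/bash
# Hypothetical user-data bootstrap script for an Amazon Linux instance.
yum update -y                       # update installed packages
yum install -y httpd                # install the Apache web server
service httpd start                 # start Apache now
chkconfig httpd on                  # start Apache on every reboot
echo "<h1>Bootstrapped by user data</h1>" > /var/www/html/index.html
```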
o Now, add the EBS volume and attach it to the EC2 instance. Root is the default EBS volume. Click on
the Next.
Volume Type: We select Magnetic (standard); note that only Magnetic (standard), General Purpose SSD, and
Provisioned IOPS SSD volume types are bootable.
Delete on termination: If checked, terminating the EC2 instance will also delete the EBS volume.
o Configure Security Group. The security group allows some specific traffic to access your instance.
o Review an EC2 instance that you have just configured, and then click on the Launch button.
o Create a new key pair and enter the name of the key pair. Download the Key pair.
o Click on the Launch Instances button.
o To connect to an EC2 instance from Windows, you need to install both PuTTY and PuTTYgen.
o Download the putty.exe and puttygen.exe files.
o In order to use the key pair which we downloaded previously, we need to convert the .pem file to a .ppk file.
PuTTYgen is used to convert the .pem file to a .ppk file.
o Open the Puttygen software.
o Click on the Load.
o Open the key-pair file, i.e., ec2instance.pem.
o Click on the OK button.
o Click on the Save private key. Change the file extension from pem to ppk.
o Click on the Save button.
o Move to the download directory where the ppk file is downloaded.
o Open the Putty.
o Move to the EC2 instance that you have created and copy its IP address.
o Now, move to the PuTTY configuration and type ec2-user@, and then paste the IP address that you copied
in the previous step. Enter a name under Saved Sessions to save the Host Name.
Now, your Host Name is saved in the default settings.
o Click on the SSH category appearing on the left side of the Putty, then click on the Auth.
o Click on the Browse to open the ppk file.
o Move to the Session category, click on the Save to save the settings.
o Click on the Open button to open the Putty window.
The above screen shows that we are connected to the EC2 instance.
o Run the command sudo su, and then run the command yum update -y to update the EC2 instance.
Note: sudo su is a command used to gain root-level privileges.
o Now, we install the Apache web server, so that the EC2 instance becomes a web server, by running the
command yum install httpd -y.
o We create a web page by opening a text editor with the command nano index.html, where index.html is the
name of the web page.
o The text editor is shown below, where we write the HTML code.
After writing the HTML code, press Ctrl+X to exit, then press 'Y' to save the page, and press Enter.
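Putting the commands above together, the whole interactive session on the instance might look like this sketch. It assumes an Amazon Linux instance reachable over SSH, and it adds one step the walkthrough leaves implicit: starting Apache. The document root /var/www/html is Apache's default on that AMI:

```shell
# Sketch of the interactive session on the EC2 instance (Amazon Linux).
sudo su                          # gain root privileges
yum update -y                    # update the instance
yum install httpd -y             # install the Apache web server

# Write the web page into Apache's default document root.
echo "<h1>Hello from EC2</h1>" > /var/www/html/index.html

service httpd start              # start Apache; the page is then served
                                 # at http://<instance-public-ip>/
```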