
AWS Free Tier

How to Sign Up for the AWS Platform

o First, visit the website https://aws.amazon.com.


o The following screen appears after opening the website. Click on Complete Sign Up to create an account and fill in the required details.


o The following screen appears after clicking on the "Complete Sign Up" button. If you already have an AWS account, enter the email address of your AWS account; otherwise, click "Create an AWS account".
o On clicking the "Create an AWS account" button, the following screen appears, which requires some fields to be filled in by the user.
o Now, fill in your contact information.
o After providing the contact information, fill in your payment information.
o After providing your payment information, confirm your identity by entering your phone number and the security check code, and then click on the "Contact me" button.

o AWS will contact you to verify that the provided contact number is correct.
o When the number is verified, the following message appears on the screen.

o The final step is the confirmation step. Click on the link to log in again; it redirects you to the AWS Management Console.

AWS Account Identifiers


AWS assigns two unique IDs to each account:
o An AWS account ID
o A canonical user ID

AWS account ID
An AWS account ID is a 12-digit number, such as 123456780123, which is used to construct Amazon Resource Names (ARNs). When referring to resources such as an IAM user, the AWS account ID distinguishes those resources from resources in other AWS accounts.

Finding the AWS account ID

We can find the AWS account ID from the AWS Management Console. Take the following steps to view your account ID:

o Log in to the AWS account by entering your email address and password; you will then be taken to the Management Console.
o Now, click on the account name; a dropdown menu appears.

o Click on "My Account" in the dropdown menu of the account name to view your account ID.

Canonical User ID
o A canonical user ID is a 64-character hexadecimal encoding of a 256-bit number.
o A canonical user ID is used in an Amazon S3 bucket policy for cross-account access, i.e., when one AWS account accesses resources in another AWS account. For example, if you want another AWS account to access your bucket, you specify that account's canonical user ID in your bucket's policy.

Finding the canonical user ID

o First, visit the website https://aws.amazon.com and log in to the AWS account by entering your email address and password.
o From the right side of the Management Console, click on the account name.
o Click on "My Security Credentials" in the dropdown menu of the account name. The screen shown below appears.
o Click on Account identifiers to view the canonical user ID.

S3-101
o S3 was one of the first services offered by AWS.
o S3 stands for Simple Storage Service.
o S3 provides developers and IT teams with secure, durable, highly scalable object storage.
o It is easy to use with a simple web services interface to store and retrieve any amount of data from anywhere
on the web.

What is S3?
o S3 is a safe place to store files.
o It is object-based storage, i.e., you can store images, Word files, PDF files, etc.
o Files stored in S3 can range from 0 bytes to 5 TB.
o It offers unlimited storage, meaning you can store as much data as you want.
o Files are stored in buckets. A bucket is like a folder in S3 that stores the files.
o S3 uses a universal namespace, i.e., bucket names must be globally unique. Each bucket name forms part of a DNS address; therefore, the bucket must have a unique name to generate a unique DNS address.

If you create a bucket, the URL looks like https://[bucket-name].s3.amazonaws.com.

o If you upload a file to an S3 bucket, you will receive an HTTP 200 status code, which means the file was uploaded successfully.

Advantages of Amazon S3
o Create buckets: First, we create a bucket and give it a name. Buckets are the containers in S3 that store the data. Buckets must have a unique name to generate a unique DNS address.
o Storing data in buckets: A bucket can be used to store a virtually unlimited amount of data. You can upload as many files as you want into an Amazon S3 bucket, i.e., there is no maximum limit on the number of files. Each object can contain up to 5 TB of data. Each object is stored and retrieved using a unique, developer-assigned key.
o Download data: You can download your data from a bucket at any time and can also give others permission to download the same data.
o Permissions: You can grant or deny access to others who want to download data from or upload data to your Amazon S3 bucket. The authentication mechanism keeps the data secure from unauthorized access.
o Standard interfaces: S3 provides standard REST and SOAP interfaces which are designed to work with any development toolkit.
o Security: Amazon S3 offers security features that protect your data from unauthorized access.

S3 is a simple key-value store


S3 is object-based. Objects consist of the following:

o Key: It is simply the name of the object, for example, hello.txt or spreadsheet.xlsx. You can use the key to retrieve the object.
o Value: It is the data, made up of a sequence of bytes; it is the actual content of the file.
o Version ID: The version ID uniquely identifies the object. It is a string generated by S3 when you add an object to an S3 bucket.
o Metadata: It is data about the data you are storing: a set of name-value pairs with which you can store information about an object. Metadata can be assigned to objects in an Amazon S3 bucket.
o Subresources: The subresource mechanism is used to store object-specific information.
o Access control information: You can set permissions individually on your files.
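
To see these pieces in practice, here is a minimal sketch using the boto3 Python SDK; the bucket name my-example-bucket and the metadata values are placeholders, not part of the tutorial:

import boto3

s3 = boto3.client("s3")

# Store an object: the key names it, the body is the value,
# and custom metadata travels with it as name-value pairs.
s3.put_object(
    Bucket="my-example-bucket",          # placeholder bucket name
    Key="hello.txt",
    Body=b"Hello, S3!",
    Metadata={"department": "docs"},
)

# Retrieve the object by its key; the response also exposes
# the version ID (if versioning is enabled) and the metadata.
response = s3.get_object(Bucket="my-example-bucket", Key="hello.txt")
print(response["Body"].read())        # the value
print(response.get("VersionId"))      # version ID; absent if unversioned
print(response["Metadata"])           # the custom metadata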

Amazon S3 Concepts

o Buckets
o Objects
o Keys
o Regions
o Data Consistency Model

o Buckets
o A bucket is a container used for storing objects.
o Every object is contained in a bucket.
o For example, if the object named photos/tree.jpg is stored in the treeimage bucket, then it can be addressed by using the URL http://treeimage.s3.amazonaws.com/photos/tree.jpg.
o A bucket has no limit on the number of objects that it can store. No bucket can exist inside another bucket.
o S3 performance remains the same regardless of how many buckets have been created.
o The AWS user that creates a bucket owns it, and no other AWS user can own it. Therefore, we can say that the ownership of a bucket is not transferable.
o The AWS account that creates a bucket can delete it, but no other AWS account can delete the bucket.

o Objects
o Objects are the entities which are stored in an S3 bucket.
o An object consists of object data and metadata, where metadata is a set of name-value pairs that describe the data.
o An object has some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. Custom metadata can also be specified at the time of storing an object.
o It is uniquely identified within a bucket by key and version ID.

o Key
o A key is a unique identifier for an object.
o Every object in a bucket is associated with one key.
o An object can be uniquely identified by using a combination of bucket name, the key, and optionally
version ID.
o For example, in the URL http://jtp.s3.amazonaws.com/2019-01-31/Amazons3.wsdl, "jtp" is the bucket name and the key is "2019-01-31/Amazons3.wsdl".

o Regions
o You can choose a geographical region in which you want to store the buckets that you have created.
o A region is chosen in such a way that it optimizes latency, minimizes costs, or addresses regulatory requirements.
o Objects will not leave the region unless you explicitly transfer them to another region.

o Data Consistency Model


Amazon S3 replicates the data to multiple servers to achieve high availability.
There are two consistency models:
o Read-after-write consistency for PUTs of new objects.
o For a PUT request, S3 stores the data across multiple servers to achieve high availability.
o When a process stores a new object to S3, the object is immediately available to be read.
o When a process stores a new object to S3, the object immediately appears when listing the keys within the bucket.
o There is no propagation delay; the changes are reflected immediately.
o Eventual consistency for overwrite PUTs and DELETEs.
o For PUTs and DELETEs of existing objects, the changes are reflected eventually; they are not available immediately.
o If a process replaces an existing object with a new object and you try to read it immediately, S3 might return the prior data until the change is fully propagated.
o If a process deletes an existing object and you immediately try to read it, S3 might return the deleted data until the change is fully propagated.
o If a process deletes an existing object and you immediately list all the keys within the bucket, S3 might include the deleted key in the list until the change is fully propagated.

Creating an S3 Bucket
o Sign in to the AWS Management Console. After signing in, the screen shown below appears:

o Move to the S3 service. After clicking on S3, the screen shown below appears:
o To create an S3 bucket, click on "Create bucket". On clicking the "Create bucket" button, the screen shown below appears:
o Enter the bucket name, which should look like a DNS address and should be resolvable. A bucket is like a folder that stores the objects. A bucket name should be unique, should start with a lowercase letter, must not contain any invalid characters, and should be 3 to 63 characters long.
o Click on the "Create" button. Now, the bucket is created.
We can see from the above screen that the bucket and its objects are not public, since by default all the objects are private.

o Now, click on the "javatpointbucket" to upload a file in this bucket. On clicking, the screen appears is shown
below:
o Click on the "Upload" button to add the files to your bucket.
o Click on the "Add files" button.
o Add the jtp.jpg file.
o Click on the "upload" button.
From the above screen, we observe that the "jtp.jpg" has been successfully uploaded to the bucket "javatpoint".

o Move to the properties of the object "jtp.jpg" and click on the object URL, which appears on the right side of the screen, to run the file.
o On clicking the object URL, the screen shown below appears:
From the above screen, we observe that we are not allowed to access the objects of the bucket.

o To overcome the above problem, we need to edit the public access settings of the bucket, i.e., "javatpointbucket", and uncheck all of them.
o Save these permissions.
o Enter "confirm" in the textbox, then click on the "confirm" button.

o Click on the "Actions" dropdown and then click on "Make public".


o Now, click on the object URL of the object to run the file.
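
The "Make public" action can likewise be sketched with boto3, assuming the bucket's public access settings have already been relaxed as described above:

import boto3

s3 = boto3.client("s3")

# Grant everyone read access to the object (the "Make public" action).
# This only succeeds if the bucket's public access settings allow ACLs.
s3.put_object_acl(
    Bucket="javatpointbucket",
    Key="jtp.jpg",
    ACL="public-read",
)
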
Important points to remember
o Buckets share a universal namespace, i.e., bucket names must be unique.
o If uploading an object to an S3 bucket is successful, we receive an HTTP 200 code.
o S3, S3-IA, and S3 Reduced Redundancy Storage are the storage classes.
o Encryption is of two types, i.e., Client-Side Encryption and Server-Side Encryption.
o Access to the buckets can be controlled by using either an ACL (Access Control List) or bucket policies.
o By default, buckets are private and all the objects stored in a bucket are also private.

AWS Storage Classes


o S3 storage classes are designed to sustain the concurrent loss of data in one or two facilities.
o S3 storage classes maintain the integrity of the data using checksums.
o S3 provides lifecycle management for the automatic migration of objects for cost savings.

S3 contains four types of storage classes:

o S3 Standard
o S3 Standard IA
o S3 one zone-infrequent access
o S3 Glacier

S3 Standard
o The Standard storage class stores data redundantly across multiple devices in multiple facilities.
o It is designed to sustain the loss of 2 facilities concurrently.
o Standard is the default storage class if no storage class is specified during upload.
o It provides low latency and high throughput performance.
o It is designed for 99.99% availability and 99.999999999% durability.

S3 Standard IA
o IA stands for Infrequent Access.
o The Standard IA storage class is used when data is accessed less frequently but requires rapid access when needed.
o It has a lower storage fee than S3 Standard, but you are charged a retrieval fee.
o It is designed to sustain the loss of 2 facilities concurrently.
o It is mainly used for larger objects greater than 128 KB kept for at least 30 days.
o It provides low latency and high throughput performance.
o It is designed for 99.9% availability and 99.999999999% durability.

S3 one zone-infrequent access


o The S3 One Zone-Infrequent Access storage class is used when data is accessed less frequently but requires rapid access when needed.
o It stores the data in a single availability zone, while the other storage classes store the data in a minimum of three availability zones. For this reason, it costs 20% less than the Standard IA storage class.
o It is an optimal choice for less frequently accessed data that does not require the availability of the Standard or Standard IA storage class.
o It is a good choice for storing backup data.
o It is cost-effective storage for data that is replicated from another AWS region using S3 Cross-Region Replication.
o It offers the same durability, high performance, and low latency, with a lower storage price and a low retrieval fee.
o It is designed for 99.5% availability and 99.999999999% durability of objects in a single availability zone.
o It provides lifecycle management for the automatic migration of objects to other S3 storage classes.
o The data can be lost if the availability zone is destroyed, as the data is stored in a single availability zone.

S3 Glacier
o The S3 Glacier storage class is the cheapest storage class, but it can be used for archiving only.
o You can store any amount of data at a lower cost than the other storage classes.
o S3 Glacier provides three retrieval models:
o Expedited: In this model, data is retrieved within a few minutes, and it has a much higher fee.
o Standard: The retrieval time of the standard model is 3 to 5 hours.
o Bulk: The retrieval time of the bulk model is 5 to 12 hours.
o You can upload objects directly to S3 Glacier.
o It is designed for 99.999999999% durability of objects across multiple availability zones.
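
A storage class is chosen per object at upload time. A minimal boto3 sketch (bucket name, keys, and data are placeholders):

import boto3

s3 = boto3.client("s3")

# Upload the same data under each of the four storage classes.
for storage_class in ["STANDARD", "STANDARD_IA", "ONEZONE_IA", "GLACIER"]:
    s3.put_object(
        Bucket="my-example-bucket",            # placeholder bucket name
        Key=f"backups/report-{storage_class.lower()}.csv",
        Body=b"col1,col2\n1,2\n",
        StorageClass=storage_class,
    )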

Performance across the Storage classes


                                     S3 Standard    S3 Standard IA    S3 One Zone-IA    S3 Glacier
Designed for durability              99.999999999%  99.999999999%     99.999999999%     99.999999999%
Designed for availability            99.99%         99.9%             99.5%             N/A
Availability SLA                     99.9%          99%               99%               N/A
Availability zones                   >=3            >=3               1                 >=3
Minimum capacity charge per object   N/A            128 KB            128 KB            40 KB
Minimum storage duration charge      N/A            30 days           30 days           90 days
Retrieval fee                        N/A            per GB retrieved  per GB retrieved  per GB retrieved
First byte latency                   milliseconds   milliseconds      milliseconds      select minutes or hours
Storage type                         Object         Object            Object            Object
Lifecycle transitions                Yes            Yes               Yes               Yes

Versioning
Versioning is a means of keeping multiple variants of an object in the same S3 bucket. Versioning can be used to retrieve, preserve, and restore every version of an object in an S3 bucket.

For example, a bucket may contain two objects with the same key but different version IDs, such as photo.jpg (version ID 11) and photo.jpg (version ID 12).

Versioning-enabled buckets allow you to recover objects from accidental deletion or overwrite. It serves two purposes:
o If you delete an object, instead of deleting the object permanently, S3 inserts a delete marker, which becomes the current version of the object.
o If you overwrite an object, S3 creates a new version of the object and preserves the previous version of the object.

Note: Once you enable versioning on a bucket, it cannot be disabled; you can only suspend it.

The versioning state applies to all the objects in a bucket. Once versioning is enabled, all the objects added to the bucket will remain versioned and are given a unique version ID. The following points are important:

o If versioning has never been enabled, the version ID of the objects is set to null. Enabling versioning does not change or otherwise affect existing objects.
o The bucket owner can suspend versioning to stop accruing new object versions. When you suspend versioning, existing objects are not affected.
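
For reference, versioning can also be enabled outside the console; a minimal boto3 sketch, using the bucket name from the example below:

import boto3

s3 = boto3.client("s3")

# Enable versioning on the bucket; use Status="Suspended" to suspend it
# later (it can never be fully disabled again).
s3.put_bucket_versioning(
    Bucket="jtpbucket",
    VersioningConfiguration={"Status": "Enabled"},
)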

Let's understand the concept of versioning through an example.

o Sign in to the AWS Management Console.


o Move to the S3 services.
o Now, click on "Create bucket" to create a new bucket.
o Enter the bucket name, which must be unique.
o Click on the "Create" button.
In the above screen, we observe that the bucket "jtpbucket" is created with the default settings, i.e., bucket and
objects are not public.

o Now, I want to see some objects in a bucket; we need to make a bucket public. Move to the "Edit public
access settings", uncheck all the settings, and then save the settings.
o After saving the settings, the screen appears is shown below:
Type the "confirm" in a textbox to confirm the settings. Click on the "confirm" button.

o When the settings are confirmed, the screen appears as shown below:

The above screen shows that the objects in a bucket have become public.

o Now, we add versioning to our bucket. Move to the properties of the bucket, i.e., jtpbucket, and click on Versioning.
o On clicking on Versioning, the screen appears as shown below:
We can either enable or suspend versioning. Suppose we enable versioning and save this setting; this adds versioning to our bucket.

o We now upload the files to our bucket.


o Now, we click on the "Add files" to add the files in our bucket. When a file is uploaded, the screen appears as
shown below:
In the above screen, we observe that version.txt file is uploaded.

o To run the version.txt file, we have to make it public from the Actions dropdown menu.


o When a file becomes public, we can run the file by clicking on its object URL. On clicking on the object URL, the
screen appears as shown below:

o Now, we create the second version of the file. Suppose I change the content of the file and re-upload it; it then becomes the second version of the file.
In the above screen, we change the content from "version 1" to "version 2" and then save the file.

o Now, we upload the above file to our bucket.


o After uploading the file, two versions of the file exist.
From the above screen, we observe that we can either hide or show the versions.

o When we click on "Show", we can see all the versions of the file.
From the above screen, we can see both versions of the file; the currently uploaded file becomes the latest version. Both files have the same size, i.e., 18.0 B, and the same storage class, i.e., Standard.

o To run the version.txt file, we have to make it public from the Actions dropdown menu.


o Now, move to the properties of the file and click on the object URL.
o On clicking on the object URL, we can see the output, i.e., the content of the currently uploaded file.

o Now, we delete the object. Move to the Actions dropdown menu and click on Delete.
o On deleting the object, the screen appears as shown below:
We observe that the bucket appears to be empty.

o However, when we click on "Show" versions, we can see all the versions of the file, i.e., the delete marker and the other two versions of the file.
We observe from the above screen that the object is not permanently deleted; it can be restored. Therefore, the versioning concept is used to restore objects.

o If you want to restore the object, delete the "delete marker" by clicking on the Actions dropdown menu and then clicking on Delete.
o Click on "Hide" versions; we will observe that the file has been restored.
Important points to be remembered:
o S3 stores all versions of an object (including all writes, even if you delete an object).
o Versioning is a great backup tool.
o Once versioning is enabled, it cannot be disabled, only suspended.
o It is integrated with lifecycle rules.
o Versioning's MFA Delete capability uses multi-factor authentication to provide an additional layer of security.
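
The delete-marker behavior described above can also be inspected and undone programmatically; a sketch with boto3, using the bucket and file names from the example:

import boto3

s3 = boto3.client("s3")

# List every version of the object, including delete markers.
versions = s3.list_object_versions(Bucket="jtpbucket", Prefix="version.txt")
for marker in versions.get("DeleteMarkers", []):
    print("delete marker:", marker["VersionId"], "latest:", marker["IsLatest"])
for version in versions.get("Versions", []):
    print("version:", version["VersionId"], "size:", version["Size"])

# "Restore" the object by deleting its delete marker (the latest version).
latest_marker = next(
    m for m in versions.get("DeleteMarkers", []) if m["IsLatest"]
)
s3.delete_object(
    Bucket="jtpbucket",
    Key="version.txt",
    VersionId=latest_marker["VersionId"],
)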

Cross Region Replication


o Cross-Region Replication is a feature that replicates data from one bucket to another bucket, which can be in a different region.
o It provides asynchronous copying of objects across buckets. Suppose X is a source bucket and Y is a destination bucket; if X replicates its objects to Y, the objects are not copied immediately.

Some points to be remembered for Cross-Region Replication

o Create two buckets: Create two buckets within the AWS Management Console, where one bucket is the source bucket and the other is the destination bucket.
o Enable versioning: Cross-Region Replication can be implemented only when versioning is enabled on both buckets.
o Amazon S3 encrypts the data in transit across AWS regions using SSL: This provides security while data traverses different regions.
o Already uploaded objects will not be replicated: If any data already exists in the bucket, it will not be replicated when you set up cross-region replication.
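
The replication rule itself can also be defined with boto3; a minimal sketch, assuming an IAM role with the required replication permissions already exists (the role ARN below is illustrative):

import boto3

s3 = boto3.client("s3")

# Replicate everything from jtpbucket to jtp1bucket (both buckets must
# have versioning enabled). The role grants S3 permission to replicate.
s3.put_bucket_replication(
    Bucket="jtpbucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456780123:role/S3CRR",  # illustrative ARN
        "Rules": [
            {
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = replicate the whole bucket
                "Destination": {"Bucket": "arn:aws:s3:::jtp1bucket"},
            }
        ],
    },
)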

Use cases of Cross Region Replication

o Compliance Requirements
By default, Amazon S3 stores data across multiple availability zones within a geographical region. Sometimes compliance requirements dictate that you also store a copy of the data in a specific region. Cross-Region Replication allows you to replicate data to that region to satisfy such requirements.
o Minimize Latency
Suppose your customers are in two geographical regions. To minimize latency, you can maintain copies of data in AWS regions that are geographically closer to your users.
o Maintain object copies under different ownership: Regardless of who owns the source bucket, you can tell Amazon S3 to change the ownership of the replicas to the AWS account that owns the destination bucket. This is referred to as the owner override option.

Let's understand the concept of Cross Region Replication through an example.

o Sign in to the AWS Management Console.


o Now, we upload the files to jtpbucket. The jtpbucket is an S3 bucket we created earlier.
o Add the files to the bucket.
o Now, we add two files to the bucket, i.e., version.txt and download.jpg.
o Now, we create a new bucket named jtp1bucket in a different region.
Now, we have two buckets in S3, i.e., jtpbucket and jtp1bucket.

o Click on the jtpbucket and then move to the Management of the jtpbucket.


o Click on Replication. On clicking, the screen appears as shown below:
o Click on the Get started button.
o Enable versioning on both buckets.
o You can replicate either the entire bucket or only objects with specific prefixes or tags to the destination bucket. Suppose you want to replicate the entire bucket; then click on Next.
o Enter your destination bucket, i.e., jtp1bucket.
o Create a new IAM role with the role name S3CRR, and then click on Next.
o After saving the settings, the screen appears as shown below:
The above screen shows that the cross-region replication has been set up successfully. We can also see the source bucket and destination bucket with their associated permissions.

o Now, we will check whether the files have been replicated from jtpbucket to jtp1bucket. Click on the jtp1bucket.
The above screen shows that the bucket is empty. Therefore, we can say that existing objects are not replicated from one bucket to another automatically; they can be copied using the AWS CLI (Command Line Interface). To use the AWS CLI, you need to install the CLI tool.

o After installation, open the command prompt and type aws configure.


o Now, we need to generate an Access Key ID, which serves as a user name, and a Secret Access Key, which serves as a password. To achieve this, we first need to create an IAM group.
o Set the Group Name, i.e., javatpoint.

o Check the AdministratorAccess policy to access the AWS console through AWS CLI.


o Now, create an IAM User.
o Add the user name with programmatic access.
o Add the user to a group, i.e., javatpoint.
o Finally, the user is created.
From the above screen, we observe that the access key and secret access key have been generated.

o Copy the access key and secret access key into the command prompt.
o To view the S3 buckets, run the command aws s3 ls.
o To copy the objects of jtpbucket to jtp1bucket, run the command aws s3 cp s3://jtpbucket s3://jtp1bucket --recursive.
The above screen shows that the objects of jtpbucket have been copied to the jtp1bucket.

o Click on the "jtp1bucket".
From the above screen, we observe that all the files in the original bucket have been replicated to the other bucket, i.e., jtp1bucket.

Note: Any further changes made in the original bucket will be copied automatically to its replicated bucket.

Important points to be remembered:


o Versioning must be enabled on both the source and destination buckets.
o The regions of the two buckets must be different.
o Files already present in the source bucket are not replicated automatically; they can be copied using the AWS CLI. All subsequent files are replicated automatically.
o Files cannot be replicated to multiple destination buckets.
o Delete markers are not replicated.
o Deletions of individual versions or delete markers are not replicated.

EC2
o EC2 stands for Amazon Elastic Compute Cloud.
o Amazon EC2 is a web service that provides resizable compute capacity in the cloud.
o Amazon EC2 reduces the time required to obtain and boot new user instances to minutes. In earlier days, if you needed a server, you had to submit a purchase order and have cabling done to get a new server, which was a very time-consuming process. Now, Amazon provides EC2, a virtual machine in the cloud, and this has completely changed the industry.
o You can scale the compute capacity up and down as your computing requirements change.
o Amazon EC2 changes the economics of computing by allowing you to pay only for the resources that you actually use. Previously, when buying physical servers, you would look for a server with enough CPU and RAM capacity and buy it over a 5-year term, so you had to plan 5 years in advance. People spent a lot of capital on such investments. EC2 allows you to pay only for the capacity that you actually use.
o Amazon EC2 provides developers with the tools to build resilient applications that isolate themselves from common failure scenarios.

EC2 Pricing Options

On Demand
o It allows you to pay a fixed rate by the hour (or even by the second) with no commitment.
o Linux instances are billed by the second, and Windows instances by the hour.
o On Demand is perfect for users who want the low cost and flexibility of Amazon EC2 without any up-front investment or long-term commitment.
o It is suitable for applications with short-term, spiky, or unpredictable workloads that cannot be interrupted.
o It is useful for applications being developed or tested on Amazon EC2 for the first time.
o On Demand instances are recommended when you are not sure which instance type is required for your performance needs.

Reserved
o It is a way of making a reservation with Amazon, or we can say that we make a contract with Amazon. The contract can be for 1 or 3 years in length.
o With a Reserved instance, you make a contract and pay some amount upfront, which gives you a significant discount on the hourly charge for an instance.
o It is useful for applications with steady-state or predictable usage.
o It is used for applications that require reserved capacity.
o Users can make up-front payments to reduce their total computing costs. For example, if you pay all upfront on a 3-year contract, you get the maximum discount; if you pay nothing upfront on a 1-year contract, you will not get as much of a discount as with a 3-year all-upfront contract.

Types of Reserved Instances:


o Standard Reserved Instances
o Convertible Reserved Instances
o Scheduled Reserved Instances

Standard Reserved Instances


o It provides a discount of up to 75% off the On Demand price, for example, when you pay all upfront on a 3-year contract.
o It is useful when your Application is at the steady-state.

Convertible Reserved Instances


o It provides a discount of up to 54% off the On Demand price.
o It provides the capability to change the attributes of the Reserved Instance, as long as the exchange results in the creation of Reserved Instances of equal or greater value.
o Like Standard Reserved Instances, it is also useful for steady-state applications.

Scheduled Reserved Instances


o Scheduled Reserved Instances are available to launch within the specified time window you reserve.
o It allows you to match your capacity reservation to a predictable recurring schedule that only requires a
fraction of a day, a week, or a month.

Spot Instances
o It allows you to bid whatever price you want for instance capacity, providing better savings if your applications have flexible start and end times.
o Spot Instances are useful for applications that have flexible start and end times.
o They are useful for applications that are only feasible at very low compute prices.
o They are useful for users who have an urgent need for large amounts of additional computing capacity.
o EC2 Spot Instances provide steep discounts compared to On Demand prices.
o Spot Instances are used to optimize your costs on the AWS cloud and to scale your application's throughput up to 10X.
o EC2 Spot Instances will continue to run until you terminate them or until AWS interrupts them to reclaim the capacity.

Dedicated Hosts
o A Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use.
o The physical EC2 server is the Dedicated Host, which can help you reduce costs by allowing you to use your existing server-bound software licenses, for example, VMware, Oracle, or SQL Server licenses, depending on which licenses you can bring over to AWS for use on the Dedicated Host.
o Dedicated Hosts are used to address compliance requirements and reduce costs by allowing you to use your existing server-bound software licenses.
o It can be purchased as a Reservation for up to 70% off the On-Demand price.

Creating an EC2 instance


o Sign in to the AWS Management Console.
o Click on the EC2 service.
o Click on the Launch Instance button to create a new instance.
o Now, we have different Amazon Machine Images (AMIs). These are snapshots of different virtual machines. We will be using Amazon Linux AMI 2018.03.0 (HVM), as it has built-in tools such as Java, Python, Ruby, Perl, and especially the AWS command line tools.
o Choose an instance type, and then click on Next. Suppose I choose t2.micro as the instance type.
o The main setup page of EC2, where we define the setup configuration, is shown below:
Where,

Number of Instances: It defines how many EC2 instances you want to create. I leave it as 1, as I want to create only one instance.

Purchasing Option: This option lets you request Spot instances by setting the maximum price and the request's validity period, as well as whether the request is persistent. Right now, I leave it unchecked.

Tenancy: Choose Shared - Run a shared hardware instance from the dropdown menu, as we are sharing hardware.

Network: Choose your network; set it as the default, i.e., vpc-dacbc4b2 (default), where a VPC is a virtual private cloud in which we can launch AWS resources such as EC2 instances.

Subnet: It is a range of IP addresses in a virtual private cloud. You can add new AWS resources in a specified subnet.

Shutdown behavior: It defines the behavior of the instance when you shut down the Linux machine: the instance can either stop or terminate. For now, I leave it as Stop.

Enable Termination Protection: It protects the instance against accidental termination.

Monitoring: We can monitor metrics such as CPU utilization. Right now, I leave Monitoring unchecked.

User data: Under Advanced Details, you can pass bootstrap scripts to the EC2 instance, for example, a script that downloads PHP and Apache and installs Apache.
o Now, add the EBS volume and attach it to the EC2 instance. Root is the default EBS volume. Click on
the Next.

Volume Type: We select Magnetic (standard), as it is the only bootable disk type here.

Delete on termination: When checked, it means that terminating the EC2 instance will also delete the EBS volume.

o Now, Add the Tags and then click on the Next.


In the above screen, we observe that we add two tags, i.e., the name of the server and the department. Create as many tags as you need, as tags make it easier to track resources and their costs.

o Configure Security Group. The security group allows some specific traffic to access your instance.
o Review an EC2 instance that you have just configured, and then click on the Launch button.
o Create a new key pair and enter the name of the key pair. Download the Key pair.
o Click on the Launch Instances button.
o To connect to an EC2 instance from Windows, you need to install both PuTTY and PuTTYgen.
o Download the putty.exe and puttygen.exe files.
o In order to use the key pair which we downloaded previously, we need to convert the .pem file to a .ppk file. PuTTYgen is used for this conversion.
o Open the PuTTYgen software.
o Click on the Load.
o Open the key-pair file, i.e., ec2instance.pem.
o Click on the OK button.

o Click on Save private key; the key is saved with the .ppk extension instead of .pem.
o Click on the Save button.
o Move to the download directory where the ppk file is downloaded.
o Open the Putty.

o Move to the EC2 instance that you have created and copy its IP address.

o Now, move to the PuTTY configuration and type ec2-user@, then paste the IP address that you copied in the previous step. Copy the Host Name into Saved Sessions.
Now, your Host Name is saved in the default settings.

o Click on the SSH category appearing on the left side of the Putty, then click on the Auth.
o Click on the Browse to open the ppk file.
o Move to the Session category, click on the Save to save the settings.
o Click on the Open button to open the Putty window.
The above screen shows that we are connected to the EC2 instance.

o Run the command sudo su, and then run the command yum update -y to update the EC2 instance.
Note: sudo su is a command used to elevate privileges to the root level.

o Now, we install the Apache web server so that the EC2 instance becomes a web server, by running the command yum install httpd -y.

o Run the command cd /var/www/html.


o To list the files available in the html folder, run the command ls.
We observe that running the command ls produces no output, which means the folder does not contain any files.

o We open a text editor by running the command nano index.html, where index.html is the name of the web page.
o The text editor is shown below, where we write the HTML code.
After writing the HTML code, press Ctrl+X to exit, press 'Y' to save the page, and then press Enter.

o Start the Apache server by running the command service httpd start.


o Go to the web browser and paste the IP address that you used to connect to your EC2 instance. You will see the web page that you have created.
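
The whole walkthrough, from launching the instance to serving the web page, can also be scripted. Below is a minimal boto3 sketch; the AMI ID is a placeholder (look up a current Amazon Linux AMI for your region), and the key pair name follows the example above:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Bootstrap script (user data): update, install Apache, write a page,
# and start the web server, mirroring the manual steps above.
user_data = """#!/bin/bash
yum update -y
yum install httpd -y
echo "<h1>Hello from EC2</h1>" > /var/www/html/index.html
service httpd start
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t2.micro",
    KeyName="ec2instance",             # assumed existing key pair
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
print(response["Instances"][0]["InstanceId"])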
