AWS - Amazon Web Services
Using cloud computing, organizations can use shared computing and storage
resources rather than building, operating, and improving infrastructure on
their own.
There are three types of clouds − Public, Private, and Hybrid cloud.
Public Cloud
In a public cloud, third-party service providers make resources and services available to their customers over the Internet. The customers' data and related security reside on the service provider's infrastructure.
Private Cloud
A private cloud provides much the same features as a public cloud, but the
data and services are managed by the organization itself, or by a third
party, solely for the customer's organization. In this type of cloud, the
organization has major control over the infrastructure, so security-related
issues are minimized.
Hybrid Cloud
A hybrid cloud is the combination of both private and public cloud. The
decision to run on private or public cloud usually depends on various
parameters like sensitivity of data and applications, industry certifications
and required standards, regulations, etc.
There are three types of service models in cloud − IaaS, PaaS, and SaaS.
IaaS
IaaS stands for Infrastructure as a Service. Here, the service provider
offers virtualized computing resources such as virtual machines, storage,
and networking, and the customers manage the operating systems and
applications that run on them.
PaaS
PaaS stands for Platform as a Service. Here, the service provider provides
various services like databases, queues, workflow engines, e-mails, etc. to
their customers. The customers can then use these components for building
their own applications. The services, availability of resources, and data
backup are handled by the service provider, which helps the customers
focus more on their application's functionality.
SaaS
SaaS stands for Software as a Service. As the name suggests, here
third-party providers deliver end-user applications to their customers,
along with some administrative capability at the application level, such as
the ability to create and manage their users. Some level of customization is
also possible; for instance, customers can use their own corporate logos,
colors, etc.
Security issues
Security is a major issue in cloud computing. Cloud service providers
implement the best security standards and industry certifications; however,
storing data and important files with external service providers always
carries a risk.
Technical issues
Cloud service providers promise that the cloud will be flexible to use and
integrate; however, switching cloud services is not easy. Most
organizations may find it difficult to host and integrate their current cloud
applications on another platform. Interoperability and support issues may
arise; for example, applications developed on the Linux platform may not
work properly on the Microsoft .NET Framework.
This is the basic structure of AWS EC2, where EC2 stands for Elastic
Compute Cloud. EC2 allows users to use virtual machines of different
configurations as per their requirement. It allows various configuration
options, mapping of individual servers, various pricing options, etc. We will
discuss these in detail in the AWS Products section. Following is the
diagrammatic representation of the architecture.
Note − In the above diagram S3 stands for Simple Storage Service. It
allows the users to store and retrieve various types of data using API calls. It
doesn’t contain any computing element. We will discuss this topic in detail in
AWS products section.
Load Balancing
AWS provides the Elastic Load Balancing service. It distributes incoming
traffic to EC2 instances across multiple Availability Zones, and supports
dynamic addition and removal of Amazon EC2 hosts from the
load-balancing rotation.
Elastic Load Balancing can dynamically grow and shrink the load-balancing
capacity to adjust to traffic demands, and it also supports sticky sessions
to address more advanced routing needs.
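The round-robin and sticky-session behavior described above can be sketched in plain Python. This is an illustrative model, not ELB's actual implementation; the instance IDs and session IDs are made up:

```python
# Illustrative sketch: round-robin distribution over registered instances,
# with sticky sessions pinning a session ID to the instance that first
# served it.
from itertools import cycle

class LoadBalancer:
    def __init__(self, instances):
        self._rotation = cycle(instances)   # round-robin over registered hosts
        self._sticky = {}                   # session ID -> pinned instance

    def route(self, session_id=None):
        # A sticky session keeps returning to the same backend.
        if session_id in self._sticky:
            return self._sticky[session_id]
        instance = next(self._rotation)
        if session_id is not None:
            self._sticky[session_id] = instance
        return instance

lb = LoadBalancer(["i-0a1", "i-0b2", "i-0c3"])
print(lb.route())        # i-0a1
print(lb.route())        # i-0b2
print(lb.route("s42"))   # i-0c3, now pinned to session "s42"
print(lb.route("s42"))   # i-0c3 again (sticky)
```

Requests without a session cycle through the fleet, while a returning session is always routed to the backend it was pinned to.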
Amazon CloudFront
Amazon CloudFront is a content-delivery web service. It caches copies of
content, such as static files and streaming media, at edge locations close
to the end users, which speeds up delivery and reduces the load on the
origin servers.
Security Management
Each EC2 instance can be assigned one or more security groups, each of
which allows the appropriate traffic to reach the instance. Security groups
can be configured to permit access only from specific subnets or IP
addresses, which limits access to the EC2 instances.
Amazon ElastiCache
Amazon ElastiCache is a web service that manages in-memory caches in the
cloud. Caching plays a very important role here: it helps reduce the load on
the services and improves performance and scalability at the database tier
by keeping frequently used information in memory.
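The caching pattern described here can be sketched as a read-through cache. This is a toy stand-in: `query_database`, the key names, and the counter are invented, and a real ElastiCache deployment runs Memcached or Redis over the network rather than a local dict:

```python
# Illustrative read-through cache: serve repeated reads from memory and
# fall back to the (slow) database tier only on a miss.
db_reads = 0

def query_database(key):
    global db_reads
    db_reads += 1            # stands in for an expensive database query
    return f"value-for-{key}"

cache = {}

def get(key):
    if key not in cache:     # cache miss: hit the database and remember it
        cache[key] = query_database(key)
    return cache[key]

get("user:1"); get("user:1"); get("user:1")
print(db_reads)              # 1 -- two of the three reads never reached the DB
```

Only the first read reaches the database tier; the repeats are absorbed by the cache, which is how the load reduction mentioned above comes about.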
Amazon RDS
Amazon EC2 uses Amazon EBS (Elastic Block Storage), which is similar to
network-attached storage. All data and logs of databases running on EC2
instances should be placed on Amazon EBS volumes, which remain
available even if the database host fails.
Using Amazon RDS, the service provider manages the storage and we only
focus on managing the data.
AWS cloud provides various options for storing, accessing, and backing up
web application data and assets. Amazon S3 (Simple Storage Service)
provides a simple web-services interface that can be used to store and
retrieve any amount of data, at any time, from anywhere on the web.
Amazon S3 stores data as objects within resources called buckets. The user
can store as many objects as required within a bucket, and can read, write,
and delete objects from the bucket.
Amazon EBS is effective for data that needs to be accessed as block storage
and requires persistence beyond the life of the running instance, such as
database partitions and application logs.
Amazon EBS volumes can be up to 1 TB in size, and these volumes can be
striped together for larger capacity and increased performance. Provisioned
IOPS volumes are designed to meet the needs of database workloads that
are sensitive to storage performance and consistency.
Amazon EBS currently supports up to 1,000 IOPS per volume. We can stripe
multiple volumes together to deliver thousands of IOPS per instance to an
application.
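The striping arithmetic implied above is straightforward; a quick sketch (the 1,000 IOPS per-volume figure comes from the text, the volume count is an example):

```python
# Back-of-envelope arithmetic: if one EBS volume delivers at most
# 1,000 IOPS, striping N volumes together (e.g. RAID 0) can aggregate
# roughly N times that for a single instance.
PER_VOLUME_IOPS = 1_000

def striped_iops(volume_count):
    return volume_count * PER_VOLUME_IOPS

print(striped_iops(4))   # 4000 IOPS from four striped volumes
```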
Auto Scaling
The difference between AWS cloud architecture and the traditional hosting
model is that AWS can dynamically scale the web application fleet on
demand to handle changes in traffic.
In AWS, network devices like firewalls, routers, and load-balancers for AWS
applications no longer reside on physical devices and are replaced with
software solutions.
Multiple options are available to ensure quality software solutions. For load
balancing choose Zeus, HAProxy, Nginx, Pound, etc. For establishing a VPN
connection choose OpenVPN, OpenSwan, Vyatta, etc.
No security concerns
AWS provides a more secure model, in which every host is locked down. In
Amazon EC2, security groups are designed for each type of host in the
architecture, and a large variety of simple and tiered security models can be
created to enable minimum access among hosts within your architecture as
per requirement.
Availability of data centers
EC2 instances are available in most Availability Zones in an AWS Region,
which provides a model for deploying your application across data centers
for both high availability and reliability.
This console provides an inbuilt user interface to perform AWS tasks like
working with Amazon S3 buckets, launching and connecting to Amazon EC2
instances, setting Amazon CloudWatch alarms, etc.
Step 3 − Select the service of your choice and the console of that service
will open.
Customizing the Dashboard
Click the Edit menu on the navigation bar and a list of services appears. We
can create shortcuts for them by simply dragging the services from the
menu bar to the navigation bar.
When we drag the service from the menu bar to the navigation bar, the
shortcut will be created and added. We can also arrange them in any order.
In the following screenshot, we have created shortcuts for the S3, EMR,
and DynamoDB services.
Deleting Services Shortcuts
To delete the shortcut, click the edit menu and drag the shortcut from the
navigation bar to the service menu. The shortcut will be removed. In the
following screenshot, we have removed the shortcut for EMR services.
Selecting a Region
Many of the services are region specific and we need to select a region so
that resources can be managed. Some of the services do not require a
region to be selected, like AWS Identity and Access Management (IAM).
Step 1 − Click the account name on the left side of the navigation bar.
Step 2 − Choose Security Credentials and a new page will open having
various options. Select the password option to change the password and
follow the instructions.
Click the account name in the navigation bar and select the 'Billing & Cost
Management' option.
Now a new page will open containing all billing-related information. Using
this service, we can pay AWS bills, and monitor our usage and budget
estimates.
The AWS Console mobile app, provided by Amazon Web Services, allows its
users to view resources for select services and also supports a limited set of
management functions for select resource types.
Following are the various services and supported functions that can be
accessed using the mobile app.
S3
Route 53
Auto Scaling
Elastic Beanstalk
DynamoDB
View tables and their details like metrics, index, alarms, etc.
CloudFormation
OpsWorks
CloudWatch
Services Dashboard
To have access to the AWS Mobile App, we must have an existing AWS
account. Simply create an identity using the account credentials and select
the region in the menu. This app allows us to stay signed in to multiple
identities at the same time.
Root accounts cannot be deactivated via the mobile console. While using
AWS Multi-Factor Authentication (MFA), it is recommended to use either a
hardware MFA device or a virtual MFA on a separate mobile device for
account security reasons.
The latest version is 1.14. There is a feedback link in the App's menu to
share our experiences and for any queries.
Amazon provides a fully functional free account for one year for users to use
and learn the different components of AWS. You get access to AWS services
like EC2, S3, DynamoDB, etc. for free. However, there are certain limitations
based on the resources consumed.
If we already have an account, then we can sign-in using the existing AWS
password.
Step 2 − After providing an email-address, complete this form. Amazon
uses this information for billing, invoicing and identifying the account. After
creating the account, sign-up for the services needed.
Step 3 − To sign-up for the services, enter the payment information.
Amazon executes a minimal transaction against the card on file to check
that it is valid. This charge varies with the region.
Step 4 − Next, is the identity verification. Amazon does a call back to verify
the provided contact number.
Step 5 − Choose a support plan. Subscribe to one of the plans like Basic,
Developer, Business, or Enterprise. The basic plan costs nothing and has
limited resources, which is good to get familiar with AWS.
Step 6 − The final step is confirmation. Click the link to login again and it
redirects to AWS management console.
Now the account is created and can be used to avail AWS services.
An AWS account ID
A canonical user ID
AWS Account ID
To know the AWS account number, click Support on the upper right side of
the navigation bar in AWS management console as shown in the following
screenshot.
Canonical User ID
Account Alias
The account alias is the URL for your sign-in page, which contains the
account ID by default. We can customize this URL with the company name,
and doing so overwrites the previous one.
Step 1 − Sign in to the AWS management console and open the IAM
console using the following link https://console.aws.amazon.com/iam/
Step 3 − To delete the alias, click the customize link, then click the Yes,
Delete button. This deletes the alias and it reverts to the Account ID.
Multi Factor Authentication
Requirements
To use MFA services, the user has to assign a device (hardware or virtual)
to the IAM user or the AWS root account. Each MFA device assigned to a
user must be unique, i.e. a user cannot enter a code from another user's
device to authenticate.
Step 2 − On the web page, choose Users from the navigation pane on the
right side to view the list of user names.
Step 3 − Scroll down to security credentials and choose MFA. Click activate
MFA.
Step 4 − Follow the instructions and the MFA device will get activated with
the account.
There are three ways to enable an MFA device −
In this method, MFA requires us to configure the IAM user with the phone
number of the user's SMS-compatible mobile device. When the user signs in,
AWS sends a six-digit code by SMS text message to the user's mobile
device. The user is required to enter the same code on a second web page
during sign-in to authenticate the right user. This SMS-based MFA cannot be
used with AWS root account.
In this method, MFA requires us to assign a virtual MFA device to the IAM
user or the AWS root account. A virtual device is a software application
(mobile app) running on a mobile device that emulates a physical device.
The device generates a six-digit numeric code based upon a
time-synchronized one-time password algorithm. The user has to enter the
same code from the device on a second web page during sign-in to
authenticate the right user.
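The time-synchronized one-time password algorithm mentioned above is standardized as TOTP (RFC 6238, built on HOTP from RFC 4226). A minimal sketch using only the Python standard library, verified against the RFC test vector:

```python
# Sketch of TOTP (RFC 6238), the algorithm behind virtual MFA devices:
# an HMAC over a 30-second time counter, truncated to six digits.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, unix_time=None, step=30, digits=6):
    counter = int(unix_time if unix_time is not None else time.time()) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

# RFC test secret; at T=59s the time counter is 1 and the code is 287082.
print(totp(b"12345678901234567890", unix_time=59))   # 287082
```

Because the app and AWS share the secret and the clock, both sides compute the same six-digit code without any network round trip.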
An IAM user is an entity which we create in AWS to represent a person who
uses it, with limited access to resources. Hence, we do not have to use the
root account in our day-to-day activities, as the root account has
unrestricted access to our AWS resources.
How to Create Users in IAM?
Step 2 − Select the Users option on the left navigation pane to open the list
of all users.
Step 3 − We can create new users using the Create New Users option; a
new window will open. Enter the user name which we want to create, then
select the Create option and a new user will be created.
Step 4 − We can also see Access Key IDs and secret keys by selecting Show
Users Security Credentials link. We can also save these details on the
computer using the Download Credentials option.
Step 5 − We can manage the user’s own security credentials like creating
password, managing MFA devices, managing security certificates,
creating/deleting access keys, adding user to groups, etc.
There are many more features that are optional and are available on the
web page.
AWS - ELASTIC COMPUTE CLOUD
Amazon EC2 (Elastic Compute Cloud) is a web service that provides
resizable compute capacity in the AWS cloud. It is designed to give
developers complete control over web-scaling and computing resources.
EC2 instances can be resized and the number of instances scaled up or down
as per our requirement. These instances can be launched in one or more
geographical locations or Regions, and Availability Zones (AZs). Each
Region comprises several AZs at distinct locations, connected by
low-latency networks in the same Region.
EC2 Components
In AWS EC2, users must be aware of the EC2 components, their operating
system support, security measures, pricing structure, etc.
Operating System Support
Security
Users have complete control over the visibility of their AWS account. In AWS
EC2, the security systems allow users to create groups and place running
instances into them as per requirement. You can specify the groups with
which other groups may communicate, as well as the IP subnets on the
Internet with which the groups may talk.
Pricing
Fault tolerance
Amazon EC2 allows the users to access its resources to design fault-tolerant
applications. EC2 also comprises geographic regions and isolated locations
known as availability zones for fault tolerance and stability. It doesn’t share
the exact locations of regional data centers for security reasons.
When users launch an instance, they must select an AMI that is in the
same region where the instance will run. Instances are distributed across
multiple Availability Zones to provide continuous service during failures,
and Elastic IP (EIP) addresses are used to quickly remap the address of a
failed instance to a running instance in another zone, to avoid delay in
services.
Migration
This service allows users to move existing applications into EC2. It costs
$80.00 per storage device and $2.49 per hour for data loading. This service
suits users who have a large amount of data to move.
Features of EC2
Step 1 − Sign-in to AWS account and open IAM console by using the
following link https://console.aws.amazon.com/iam/.
Step 3 − Create IAM user. Choose users in the navigation pane. Then create
new users and add users to the groups.
Step 4 − Create a Virtual Private Cloud using the following instructions.
Step 5 − Create WebServerSG security groups and add rules using the
following instructions.
Step 6 − Launch EC2 instance into VPC using the following instructions.
Step 7 − On the Tag Instances page, provide a tag with a name to the
instances. Select Next: Configure Security Group.
Step 8 − On the Configure Security Group page, choose the Select an
existing security group option. Select the WebServerSG group that we
created previously, and then choose Review and Launch.
Step 9 − Check Instance details on Review Instance Launch page then click
the Launch button.
Step 10 − A pop up dialog box will open. Select an existing key pair or
create a new key pair. Then select the acknowledgement check box and click
the Launch Instances button.
As the name suggests, auto scaling allows you to scale your Amazon EC2
instances up or down automatically as per the instructions set by the user.
Parameters like minimum and maximum number of instances are set by the
user. Using this, the number of Amazon EC2 instances you’re using
increases automatically as the demand rises to maintain the performance,
and decreases automatically as the demand decreases to minimize the cost.
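The scaling decision described above can be modeled in a few lines. This is a toy sketch, not the Auto Scaling service's actual policy engine; the capacity figures are invented for illustration:

```python
import math

# Toy model of the scaling decision: provision enough instances for the
# current demand, clamped to the user-defined minimum and maximum.
def desired_capacity(requests_per_sec, per_instance_capacity, minimum, maximum):
    needed = math.ceil(requests_per_sec / per_instance_capacity)
    return max(minimum, min(maximum, needed))

print(desired_capacity(950, 100, minimum=2, maximum=8))  # 8 (10 needed, capped)
print(desired_capacity(50, 100, minimum=2, maximum=8))   # 2 (1 needed, floored)
```

Rising demand grows the fleet toward the maximum to maintain performance; falling demand shrinks it toward the minimum to cut cost, exactly the behavior the paragraph describes.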
Load Balancer
This includes monitoring and handling the requests coming in through the
Internet/intranet, and distributing them to the EC2 instances registered
with it.
Control Service
SSL Termination
ELB provides SSL termination, which saves precious CPU cycles otherwise
spent encoding and decoding SSL within your EC2 instances attached to
the ELB. An X.509 certificate needs to be configured within the ELB. The
SSL connection to the EC2 instance is optional; we can also terminate SSL
at the ELB.
Features of ELB
Step 2 − Select your load balancer region from the region menu on the
right side.
Step 3 − Select Load Balancers from the navigation pane and choose Create
Load Balancer option. A pop-up window will open and we need to provide
the required details.
Step 4 − In the Load Balancer name box, enter the name of your load
balancer.
Step 5 − In the Create LB inside box, select the same network which you
have selected for the instances.
Step 7 − Click the Add button and a new pop-up will appear to select
subnets from the list of available subnets as shown in the following
screenshot. Select only one subnet per availability zone. This window will not
appear if we do not select Enable advanced VPC configuration.
Step 8 − Choose Next; a pop-up window will open. After selecting a VPC as
your network, assign security groups to Load Balancers.
Step 12 − Adding tags to your load balancer is optional. To add tags, open
the Add Tags page and fill in the details such as a key and a value for the
tag. Then choose the Create Tag option and click the Review and Create
button.
A review page opens on which we can verify the setting. We can even
change the settings by choosing the edit link.
Step 13 − Click Create to create your load balancer and then click the Close
button.
Step 4 − Click the Delete button. An alert window will appear, click the Yes,
Delete button.
AMAZON WEB SERVICES - WORKSPACES
How It Works?
User Requirements
An Internet connection with TCP and UDP open ports is required at the
user’s end. They have to download a free Amazon WorkSpaces client
application for their device.
There will be a message to confirm the account, after which we can use
WorkSpaces.
Step 4 − Test your WorkSpaces using the following steps.
Download and install the Amazon WorkSpaces client application using the
following link − https://clients.amazonworkspaces.com/.
Run the application. For the first time, we need to enter the
registration code received in email and click Register.
Connect to the WorkSpace by entering the user name and password
for the user. Select Sign In.
Now the WorkSpace desktop is displayed. Open the link
http://aws.amazon.com/workspaces/ in the web browser.
Navigate and verify that the page can be viewed.
A message saying “Congratulations! Your Amazon WorkSpaces cloud
directory has been created, and your first WorkSpace is working
correctly and has Internet access” will be received.
This AWS WorkSpaces feature allows users to access their WorkSpace
without entering their credentials every time they disconnect. The
application installed on the client's device saves an access token in a
secure store, which remains valid for 12 hours and is used to authenticate
the right user.
Users click the Reconnect button on the application to get access to their
WorkSpace. Users can disable this feature at any time.
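The reconnect check implied above can be sketched as a token-age test. The 12-hour lifetime comes from the text; the function and field names are invented for illustration:

```python
# Sketch of the reconnect check: a cached access token is reused only
# while it is younger than 12 hours; after that the user must sign in again.
from datetime import datetime, timedelta

TOKEN_LIFETIME = timedelta(hours=12)

def token_valid(issued_at, now):
    return now - issued_at < TOKEN_LIFETIME

issued = datetime(2024, 1, 1, 8, 0)
print(token_valid(issued, now=datetime(2024, 1, 1, 19, 0)))  # True: 11 h old
print(token_valid(issued, now=datetime(2024, 1, 1, 21, 0)))  # False: 13 h old
```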
Auto Resume Session
This AWS WorkSpaces feature allows the client to resume a session that
was disconnected due to a loss of network connectivity within 20 minutes
(by default; this can be extended up to 4 hours). Users can disable this
feature at any time in the group policy section.
Console Search
AWS Lambda is a responsive cloud service that inspects actions within the
application and responds by deploying the user-defined codes, known
as functions. It automatically manages the compute resources across
multiple availability zones and scales them when new actions are triggered.
AWS Lambda supports code written in Java, Python, and Node.js, and the
service can launch processes in languages supported by Amazon Linux
(including Bash, Go, and Ruby).
Following are some recommended tips while using AWS Lambda.
Follow these steps to configure AWS Lambda for the first time.
Now, when we select the Lambda service and select the Event Sources tab,
there will be no records. Add at least one source to the Lambda function
for it to work. Here, we are adding a DynamoDB table to it.
Step 7 − Select the stream tab and associate it with the Lambda function.
You will see this entry in Event Sources Tab of Lambda Service page.
Step 8 − Add some entries into the table. When the entry gets added and
saved, then Lambda service should trigger the function. It can be verified
using the Lambda logs.
Step 9 − To view logs, select the Lambda service and click the Monitoring
tab. Then click the View Logs in CloudWatch.
Throttle Limit
The throttle limit is 100 concurrent Lambda function executions per
account, and it applies to the total concurrent executions across all
functions within the same region.
The formula to calculate the number of concurrent executions for a
function is −
concurrent executions = (average duration of the function execution, in
seconds) × (number of requests or events processed by AWS Lambda per
second)
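The formula above is a simple product; as arithmetic (the duration and event rate below are example values, not defaults):

```python
# Concurrent executions = average function duration (seconds)
#                         x events processed per second.
def concurrent_executions(avg_duration_s, events_per_s):
    return avg_duration_s * events_per_s

# e.g. 0.5 s per invocation at 150 events/s:
print(concurrent_executions(0.5, 150))   # 75.0 -- under the 100 default limit
```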
Resources Limit
The following table shows the list of resources limits for a Lambda function.
Resource − Default Limit
Service Limit
The following table shows the list of services limits for deploying a Lambda
function.
Item − Default Limit
Total size of all the deployment packages that can be uploaded per region
− 1.5 GB
Amazon Virtual Private Cloud (VPC) allows users to use AWS resources in a
virtual network. Users can customize their virtual networking environment
as they like, such as selecting their own IP address range, creating
subnets, and configuring route tables and network gateways.
The list of AWS services that can be used with Amazon VPC are −
Amazon EC2
Amazon Route 53
Amazon WorkSpaces
Auto Scaling
Elastic Beanstalk
Amazon EMR
Amazon OpsWorks
Amazon RDS
Amazon Redshift
Create VPC
Step 1 − Open the Amazon VPC console by using the following link
− https://console.aws.amazon.com/vpc/
Step 2 − Select creating the VPC option on the right side of the navigation
bar. Make sure that the same region is selected as for other services.
Step 3 − Click the start VPC wizard option, then click VPC with single public
subnet option on the left side.
Step 4 − A configuration page will open. Fill in the details like VPC name,
subnet name and leave the other fields as default. Click the Create VPC
button.
Step 5 − A dialog box will open, showing the work in progress. When it is
completed, select the OK button.
The Your VPCs page opens, showing a list of available VPCs. The settings
of a VPC can be changed here.
Select/Create VPC Group
Step 1 − Open the Amazon VPC console by using the following link
− https://console.aws.amazon.com/vpc/
Step 2 − Select the security groups option in the navigation bar, then
choose create security group option.
Step 3 − A form will open; enter the details like group name, name tag,
etc. Select the ID of your VPC from the VPC menu, then select the Yes,
Create button.
Step 4 − The list of groups opens. Select the group name from the list and
set rules. Then click the Save button.
Launch Instance into VPC
Step 1 − Open the Amazon VPC console using the following link
− https://console.aws.amazon.com/vpc/
Step 2 − Select the same region as while creating VPC and security group.
Step 3 − Now select the Launch Instance option in the navigation bar.
Step 5 − A new page opens. Choose an Instance Type and select the
hardware configuration. Then select Next: Configure Instance Details.
Step 6 − Select the recently created VPC from the Network list, and the
subnet from the Subnet list. Leave the other settings as default and click
Next till the Tag Instance page.
Step 7 − On the Tag Instance page, tag the instance with the Name tag.
This helps to identify your instance from the list of multiple instances. Click
Next: Configure Security Group.
Step 8 − On the Configure Security Group page, select the recently created
group from the list. Then, select Review and Launch button.
Step 9 − On the Review Instance Launch page, check your instance details,
then select Launch.
Step 10 − A dialog box appears. Choose the option Select an existing key
pair or create a new key pair, then click the Launch Instances button.
Step 11 − The confirmation page opens, showing all the details related to
the instances.
Step 1 − Open the Amazon VPC console using the following link
− https://console.aws.amazon.com/vpc/
Step 3 − Select Allocate New Address. Then select Yes, Allocate button.
Step 4 − Select your Elastic IP address from the list, then select Actions,
and then click the Associate Address button.
Step 5 − A dialog box will open. First select the Instance from the Associate
with list. Then select your instance from the Instance list. Finally click the
Yes, Associate button.
Delete a VPC
There are several steps to delete a VPC without losing any resources
associated with it. Following are the steps to delete a VPC.
Step 1 − Open the Amazon VPC console using the following link
− https://console.aws.amazon.com/vpc/
Step 3 − Select the Instance from the list, then select the Actions →
Instance State → Terminate button.
Step 4 − A new dialog box opens. Expand the Release attached Elastic IPs
section, and select the checkbox next to the Elastic IP address. Click the Yes,
Terminate button.
Step 5 − Again open the Amazon VPC console using the following link
− https://console.aws.amazon.com/vpc/
Step 6 − Select the VPC from the navigation bar. Then select Actions and
finally click the Delete VPC button.
Step 7 − A confirmation message appears. Click the Yes, Delete button.
Features of VPC
Step 2 − Click create hosted zone option on the top left corner of the
navigation bar.
Step 3 − A form page opens. Provide the required details such as domain
name and comments, then click the Create button.
Step 4 − A hosted zone for the domain will be created. There will be four
DNS endpoints called a delegation set, and these endpoints must be
updated in the domain name's name server settings.
Step 7 − To create your record set, select the Create Record Set option.
Fill in the required details such as Name, Type, Alias, TTL (seconds),
Value, Routing policy, etc. Click the Save Record Set button.
Step 8 − Create one more record set for some other region so that there
are two record sets with the same domain name pointing to different IP
addresses with your selected routing policy.
Once completed, the user requests will be routed based on the network
policy.
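A weighted routing policy, one of the routing policies available for the record sets above, can be sketched like this. It is a deterministic toy model; real Route 53 weighting is probabilistic, and the IPs and weights are invented:

```python
# Deterministic sketch of weighted routing between record sets: answers
# are spread in proportion to each record's weight.
def weighted_targets(records):
    """records: list of (ip, weight); yields IPs in a weight-proportional cycle."""
    while True:
        for ip, weight in records:
            for _ in range(weight):
                yield ip

records = [("192.0.2.10", 3), ("198.51.100.20", 1)]
router = weighted_targets(records)
answers = [next(router) for _ in range(8)]
print(answers.count("192.0.2.10"))    # 6 of 8 requests (weight 3 out of 4)
```

With weights 3 and 1, roughly three quarters of the requests land on the first record set, which is how traffic can be split between the two regions mentioned above.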
Features of Route 53
AWS Direct Connect permits us to create a private network connection
from our network to an AWS location. It uses 802.1q VLANs, which can be
partitioned into multiple virtual interfaces to access public resources over
the same connection. This results in reduced network cost and increased
bandwidth.
Virtual interfaces can be reconfigured at any time as per the requirement.
Requirements to Use AWS Direct Connect
Our network must meet one of the following conditions to use AWS Direct
Connect −
Our network should be in the AWS Direct Connect location. Visit this
link to know about the available AWS Direct Connect
locations https://aws.amazon.com/directconnect/.
We should be working with an AWS Direct Connect partner who is a
member of the AWS Partner Network (APN). Visit this link to know
the list of AWS Direct Connect partners
− https://aws.amazon.com/directconnect/
Our service provider must be able to connect us to an AWS Direct
Connect location.
Step 1 − Open the AWS Direct Connect console using this link
− https://console.aws.amazon.com/directconnect/
Step 2 − Select the AWS Direct Connect region from the navigation bar.
Step 3 − The welcome page of AWS Direct Connect opens. Select Get
Started with Direct Connect.
Step 4 − The Create a Connection dialog box opens. Fill in the required
details and click the Create button.
AWS will send a confirmation email to the authorized user within 72 hours.
When an instance is running, get its private IP address and ping the IP
address to get a response.
Reduces bandwidth costs − The cost gets reduced in both ways, i.e.
it transfers the data to and from AWS directly. The data transferred
over your dedicated connection is charged at reduced AWS Direct
Connect data transfer rate rather than Internet data transfer rates.
Compatible with all AWS services − AWS Direct Connect is a
network service and supports all the AWS services that are accessible over
the Internet, like Amazon S3, Amazon EC2, Amazon VPC, etc.
Private connectivity to Amazon VPC − AWS Direct Connect can be
used to establish a private virtual interface from our home-network to
Amazon VPC directly with high bandwidth.
Elastic − AWS Direct Connect provides 1 Gbps and 10 Gbps
connections, having provision to make multiple connections as per
requirement.
Easy and simple − Easy to sign up on AWS Direct Connect using the
AWS Management Console. Using this console, all the connections and
virtual interfaces can be managed.
A prompt window will open. Click the Create Bucket button at the
bottom of the page.
Create a Bucket dialog box will open. Fill the required details and click
the Create button.
Click the start upload button. The files will get uploaded into the
bucket.
Step 2 − Select the Files & Folders option in the panel. Right-click on the
object that is to be moved and click the Cut option.
Step 3 − Open the location where we want this object. Right-click on the
folder/bucket where the object is to be moved and click the Paste Into
option.
Step 2 − Select the files & folders option in the panel. Right-click on the
object that is to be deleted. Select the delete option.
Step 2 − Right-click on the bucket that is to be emptied and click the empty
bucket option.
Low cost and Easy to Use − Using Amazon S3, the user can store a
large amount of data at very low charges.
Secure − Amazon S3 supports data transfer over SSL and the data
gets encrypted automatically once it is uploaded. The user has
complete control over their data by configuring bucket policies using
AWS IAM.
Scalable − Using Amazon S3, there need not be any worry about
storage concerns. We can store as much data as we have and access it
anytime.
Higher performance − Amazon S3 is integrated with Amazon
CloudFront, which distributes content to end users with low latency
and provides high data transfer speeds without any minimum usage
commitments.
Integrated with AWS services − Amazon S3 is integrated with other
AWS services, including Amazon CloudFront, Amazon CloudWatch,
Amazon Kinesis, Amazon RDS, Amazon Route 53, Amazon VPC, AWS
Lambda, Amazon EBS, Amazon DynamoDB, etc.
This volume type is suitable for small and medium workloads like root disk
EC2 volumes, small and medium database workloads, frequently accessed
logs, etc. By default, General Purpose SSD supports 3 IOPS (Input/Output
Operations per Second) per GB, which means a 1 GB volume gives 3 IOPS
and a 10 GB volume gives 30 IOPS. The storage capacity of one volume
ranges from 1 GB to 1 TB. The cost of one volume is $0.10 per GB for one
month.
This volume type is suitable for the most demanding I/O-intensive and
transactional workloads, and for large relational, EMR, and Hadoop
workloads, etc. By default, Provisioned IOPS SSD supports 30 IOPS/GB,
which means a 10 GB volume gives 300 IOPS. The storage capacity of one
volume ranges from 10 GB to 1 TB. The cost of one volume is $0.125 per GB
for one month for provisioned storage and $0.10 per provisioned IOPS for
one month.
It was formerly known as standard volumes. This volume type is suitable for
workloads where data is accessed infrequently, such as data backups for
recovery, log storage, etc. The storage capacity of one volume ranges from
10 GB to 1 TB. The cost of one volume is $0.05 per GB for one month for
provisioned storage and $0.05 per million I/O requests.
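The per-volume figures above can be turned into a small calculator. The sketch below uses the illustrative rates quoted in this tutorial, not current AWS pricing, and the function names are ours:

```python
def gp_ssd(size_gb):
    """General Purpose SSD: 3 IOPS per GB, $0.10 per GB-month."""
    return {"iops": 3 * size_gb, "monthly_cost": 0.10 * size_gb}

def piops_ssd(size_gb, provisioned_iops):
    """Provisioned IOPS SSD: up to 30 IOPS per GB,
    $0.125 per GB-month plus $0.10 per provisioned IOPS-month."""
    assert provisioned_iops <= 30 * size_gb, "at most 30 IOPS per GB"
    return {"iops": provisioned_iops,
            "monthly_cost": 0.125 * size_gb + 0.10 * provisioned_iops}

def magnetic(size_gb, million_io_requests):
    """Magnetic (standard): $0.05 per GB-month plus $0.05 per million I/O."""
    return {"monthly_cost": 0.05 * size_gb + 0.05 * million_io_requests}

print(gp_ssd(10))          # a 10 GB General Purpose SSD volume: 30 IOPS
print(piops_ssd(10, 300))  # 10 GB with 300 provisioned IOPS
print(magnetic(100, 2))    # 100 GB magnetic volume, 2 million I/O requests
```

For example, a 10 GB Provisioned IOPS SSD volume at 300 IOPS costs 0.125 × 10 + 0.10 × 300 = $31.25 per month at these rates.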
In EC2 instances, we store data in local storage, which is available only
while the instance is running. When we shut down the instance, the data
gets lost. Thus, when we need to persist anything, it is advised to save it on
Amazon EBS, as we can access and read EBS volumes anytime once we
attach the volume to an EC2 instance.
Step 2 − Restore the EBS volume from a snapshot using the following steps.
An Attach Volume dialog box will open. Enter the name/ID of instance
to attach the volume in the Instance field or select it from the list of
suggestion options.
Click the Attach button.
Connect to instance and make the volume available.
AWS Storage Gateway offers two types of storage, i.e. volume based and
tape based.
Volume Gateways
Gateway-cached Volumes
Cache storage disk − Every application requires storage volumes to store its
data. This disk type is used to initially store data that is to be written to the
storage volumes in AWS; from there, the data waits in the upload buffer to
be uploaded to Amazon S3. The cache storage disk also keeps the most
recently accessed data for low-latency access. When the application needs
data, the cache storage disk is checked before Amazon S3.
Upload buffer disk − This type of storage disk is used to store the data
before it is uploaded to Amazon S3 over SSL connection. The storage
gateway uploads the data from the upload buffer over an SSL connection to
AWS.
Gateway-stored Volumes
When the virtual machine (VM) is activated, gateway volumes are created
and mapped to the on-premises direct-attached storage disks. Hence, when
applications write/read data from the gateway storage volumes, the data is
written to and read from the mapped on-premises disk.
Virtual Tape Library (VTL) − Each gateway-VTL comes with one VTL. A
VTL is similar to a physical tape library available on-premises with tape
drives. The gateway first stores data locally, then asynchronously uploads it
to the virtual tapes of the VTL.
Tape Drive − A VTL tape drive is similar to a physical tape drive that can
perform I/O operations on tape. Each VTL consists of 10 tape drives that are
used for backup applications as iSCSI devices.
Virtual Tape Shelf (VTS) − A VTS is used to archive tapes from the gateway
VTL to the VTS and vice versa.
Archiving Tapes − When the backup software ejects a tape, the gateway
moves the tape to the VTS for storage. It is used for data archiving and
backups.
Retrieving Tapes − Tapes archived to the VTS cannot be read directly, so
to read an archived tape, we need to retrieve it into the gateway VTL,
either by using the AWS Storage Gateway console or the AWS Storage
Gateway API.
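The archive/retrieve lifecycle described above can be sketched as a small state model. This is purely conceptual; the class and method names are ours and are not part of any AWS SDK:

```python
class TapeLibrary:
    """Conceptual model of the gateway-VTL tape lifecycle."""

    def __init__(self):
        self.vtl = set()  # tapes in the Virtual Tape Library (readable)
        self.vts = set()  # tapes archived in the Virtual Tape Shelf

    def add_tape(self, tape):
        self.vtl.add(tape)

    def eject(self, tape):
        """Backup software ejects a tape: gateway moves it VTL -> VTS."""
        self.vtl.remove(tape)
        self.vts.add(tape)

    def retrieve(self, tape):
        """Archived tapes cannot be read directly; retrieve VTS -> VTL first."""
        self.vts.remove(tape)
        self.vtl.add(tape)

    def can_read(self, tape):
        return tape in self.vtl

lib = TapeLibrary()
lib.add_tape("backup-001")
lib.eject("backup-001")        # archived to the VTS, no longer readable
lib.retrieve("backup-001")     # back in the VTL, readable again
```

The key point the model captures is that reads are only possible while a tape sits in the VTL.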
Step 2 − DNS routes your request to the nearest CloudFront edge location
to serve the user request.
Step 3 − At the edge location, CloudFront checks its cache for the requested
files. If found, it returns them to the user; otherwise, it does the following −
CloudFront forwards the request for the object to the origin server to
check whether the edge location's version is up to date.
If the edge location's version is up to date, CloudFront delivers it to
the user.
If the edge location's version is not up to date, the origin sends the
latest version to CloudFront. CloudFront delivers the object to the user
and stores the latest version in the cache at that edge location.
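The steps above can be sketched as a tiny cache simulation. Real CloudFront uses cache TTLs and conditional requests against the origin; a simple version comparison stands in for that here, and all names are illustrative:

```python
def serve(path, edge_cache, origin):
    """Return (object_body, source) for a requested path, updating the cache."""
    cached = edge_cache.get(path)
    if cached is not None and cached["version"] == origin[path]["version"]:
        return cached["body"], "edge-cache"   # cache hit, up-to-date copy
    # Cache miss or stale copy: fetch the latest version from the origin
    # and store it at the edge location for subsequent requests.
    latest = origin[path]
    edge_cache[path] = latest
    return latest["body"], "origin"

origin = {"/logo.png": {"version": 2, "body": b"PNG..."}}
cache = {"/logo.png": {"version": 1, "body": b"old"}}  # stale edge copy

body, source = serve("/logo.png", cache, origin)    # refreshed from origin
body2, source2 = serve("/logo.png", cache, origin)  # now served from the edge
```

The first request goes back to the origin because the edge copy is stale; the second is served from the edge cache.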
Features of CloudFront
Fast − CloudFront's broad network of edge locations caches copies of
content close to the end users, resulting in low latency, high data transfer
rates, and reduced network traffic. All of this makes CloudFront fast.
Step 1 − Copy the following HTML code to a new file and write the domain
name that CloudFront assigned to the distribution in place of domain-name.
Write the name of a file in your Amazon S3 bucket in place of object-name.
<html>
   <body>
      <p>My CloudFront.</p>
      <p><img src = "http://domain-name/object-name" alt = "test image"/></p>
   </body>
</html>
Step 3 − Open the web page in a browser to test the links to see if they are
working correctly. If not, then crosscheck the settings.
Step 1 − Login to AWS management console. Use the following link to open
Amazon RDS console − https://console.aws.amazon.com/rds/
Step 4 − The Launch DB Instance Wizard opens. Select the type of instance
as required to launch and click the Select button.
Step 5 − On the Specify DB Details page, provide the required details and
click the Continue button.
Step 6 − On the Additional configuration page, provide the additional
information required to launch the MySQL DB instance and click the Continue
button.
Step 7 − On Management options page, make the choices and click the
Continue button.
Step 8 − On the Review page, verify the details and click the Launch DB
Instance button.
The new DB instance now appears in the list of DB instances.
After completing the task, we should delete the DB instance so that we are
not charged for it. Follow these steps to delete a DB instance −
Step 1 − Sign in to the AWS Management Console and use the following link
to open the Amazon RDS console.
https://console.aws.amazon.com/rds/
Step 3 − Click the Instance Actions button and then select the Delete option
from the dropdown menu.
When using Amazon RDS, you pay only for what you use, with no minimum
or setup charges. Billing is based on the following criteria −
For the latest price structure and other details, visit the following link
− https://aws.amazon.com/rds/pricing/
The table name is now visible in the list, and the DynamoDB table is ready
to use.
Benefits of Amazon DynamoDB
Flexible − Amazon DynamoDB allows creation of dynamic tables, i.e. a table
can have any number of attributes, including multi-valued attributes.
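The flexibility can be illustrated with plain Python dicts standing in for DynamoDB items; only the key attribute has to be present on every item, while everything else can vary (the table and attribute names below are made up for the example):

```python
# Items in the same DynamoDB table need not share the same attributes;
# only the key schema (here, "user_id") is fixed.
items = [
    {"user_id": "u1", "name": "Asha"},                 # two attributes
    {"user_id": "u2", "name": "Ben", "age": 30},       # an extra attribute
    {"user_id": "u3", "name": "Chao",
     "tags": {"admin", "beta"}},                       # multi-valued attribute
]

# Every item carries the key attribute...
assert all("user_id" in item for item in items)
# ...but the remaining attributes differ from item to item.
assert {"age" in item for item in items} == {True, False}
```

In a relational table, adding the `age` or `tags` column would require a schema change; in DynamoDB it does not.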
Step 1 − Sign in and launch a Redshift Cluster using the following steps.
Sign in to AWS Management console and use the following link to open
Amazon Redshift console − https://console.aws.amazon.com/redshift/
Select the region where the cluster is to be created using the Region
menu on the top right side corner of the screen.
Click the Launch Cluster button.
The Cluster Details page opens. Provide the required details and click
the Continue button till the review page.
A confirmation page opens. Click the Close button to finish so that
cluster is visible in the Clusters list.
Select the cluster in the list and review the Cluster Status information.
Step 2 − Configure a security group to authorize client connections to the
cluster. How access to Redshift is authorized depends on whether the client
connects from an EC2 instance or not.
Click the Edit button. Set the fields as shown below and click the Save
button.
o Type − Custom TCP Rule.
o Protocol − TCP.
o Port Range − Type the same port number used while launching
the cluster. The default port for Amazon Redshift is 5439.
o Source − Select Custom IP, then type 0.0.0.0/0.
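The same inbound rule can be expressed programmatically. The dict below follows the EC2 IpPermission shape (the structure accepted by, e.g., boto3's authorize_security_group_ingress); it is shown as a plain data structure here, and no AWS call is made. Note that 0.0.0.0/0 opens the port to the whole Internet, so restrict the CIDR in production:

```python
REDSHIFT_DEFAULT_PORT = 5439  # use the port chosen when launching the cluster

ingress_rule = {
    "IpProtocol": "tcp",                    # Custom TCP Rule
    "FromPort": REDSHIFT_DEFAULT_PORT,      # Port Range start
    "ToPort": REDSHIFT_DEFAULT_PORT,        # Port Range end
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # Source: Custom IP (wide open!)
}
```
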
There are two ways to connect to Redshift Cluster − Directly or via SSL.
Connect the cluster by using a SQL client tool. It supports SQL client
tools that are compatible with PostgreSQL JDBC or ODBC drivers.
ODBC − https://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_08_04_0200.zip
or http://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_09_00_0101x64.zip
for 64 bit machines
Click the folder icon and navigate to the driver location. Finally, click
the Open button.
Leave the Classname box and Sample URL box blank. Click OK.
Choose the Driver from the list.
In the URL field, paste the JDBC URL you copied.
Enter the username and password to their respective fields.
Select the Autocommit box and click Save profile list.
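The JDBC URL pasted above follows the pattern jdbc:redshift://&lt;endpoint&gt;:&lt;port&gt;/&lt;database&gt;. A small helper can assemble it; the cluster endpoint below is a made-up example:

```python
def redshift_jdbc_url(endpoint, database, port=5439):
    """Build a Redshift JDBC URL; 5439 is the default Redshift port."""
    return f"jdbc:redshift://{endpoint}:{port}/{database}"

# Example endpoint (illustrative, not a real cluster):
url = redshift_jdbc_url(
    "examplecluster.abc123.us-east-1.redshift.amazonaws.com", "dev")
```

In practice, copy the ready-made JDBC URL from the cluster's detail page in the Redshift console rather than assembling it by hand.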
Features of Amazon Redshift
Supports VPC − The users can launch Redshift within VPC and control
access to the cluster through the virtual networking environment.
Encryption − Data stored in Redshift can be encrypted; encryption is
configured while creating tables in Redshift.
SSL − SSL encryption is used to encrypt connections between clients
and Redshift.
Scalable − With a few simple clicks, the number of nodes in your
Redshift data warehouse can be easily scaled as per requirement.
Storage capacity can also be scaled without any loss in performance.
Cost-effective − Amazon Redshift is a cost-effective alternative to
traditional data warehousing practices. There are no up-front costs, no
long-term commitments and on-demand pricing structure.
It is used to capture, store, and process data from large, distributed streams
such as event logs and social media feeds. After processing the data, Kinesis
distributes it to multiple consumers simultaneously.
How to Use Amazon Kinesis?
Data log and data feed intake − We need not wait to batch up the
data; we can push data to an Amazon Kinesis stream as soon as the
data is produced. This also protects against data loss in case the data
producer fails. For example, system and application logs can be
continuously added to a stream and be available within seconds when
required.
Real-time graphs − We can extract graphs/metrics using Amazon
Kinesis stream to create report results. We need not wait for data
batches.
Real-time data analytics − We can run real-time streaming data
analytics by using Amazon Kinesis.
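When planning such streams, a back-of-the-envelope shard count can be derived from Kinesis's published per-shard limits: 1 MB/s and 1,000 records/s for writes, and 2 MB/s for reads. The sketch below applies those limits; the function name and example numbers are ours:

```python
import math

WRITE_MB_PER_SHARD = 1.0    # max ingest per shard, MB/s
RECORDS_PER_SHARD = 1000    # max records per shard per second
READ_MB_PER_SHARD = 2.0     # max egress per shard, MB/s

def shards_needed(write_mb_s, records_s, read_mb_s):
    """Smallest shard count satisfying all three per-shard limits."""
    return max(
        math.ceil(write_mb_s / WRITE_MB_PER_SHARD),
        math.ceil(records_s / RECORDS_PER_SHARD),
        math.ceil(read_mb_s / READ_MB_PER_SHARD),
    )

# e.g. 5 MB/s in, 4,500 records/s, 8 MB/s out:
# write needs 5 shards, records need 5, reads need 4 -> 5 shards
```

The binding constraint is whichever dimension demands the most shards, so streams with many small records can be record-limited rather than bandwidth-limited.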
Following are certain limits that should be kept in mind while using Amazon
Kinesis Streams −
Step 2 − Set up users on the Kinesis stream. Create new users and assign a
policy to each user. (We have discussed the procedure to create users and
assign policies to them above.)
Open the Amazon EMR console and select the desired cluster.
Move to the Steps section and expand it. Then click the Add step
button.
The Add Step dialog box opens. Fill the required fields, then click the
Add button.
To view the output of Hive script, use the following steps −
o Open the Amazon S3 console and select S3 bucket used for the
output data.
o Select the output folder.
o The query writes the results into a separate folder. Select
os_requests.
o The output is stored in a text file. This file can be downloaded.
AWS Data Pipeline is a web service designed to make it easier for users to
integrate data spread across multiple AWS services and analyze it from a
single location.
Using AWS Data Pipeline, data can be accessed from the source, processed,
and then the results can be efficiently transferred to the respective AWS
services.
How to Set Up Data Pipeline?
Amazon Machine Learning reads data through Amazon S3, Redshift and
RDS, then visualizes the data through the AWS Management Console and
the Amazon Machine Learning API. This data can be imported or exported to
other AWS services via S3 buckets.
Step 1 − Sign in to AWS account and select Machine Learning. Click the Get
Started button.
Step 2 − Select Standard Setup and then click Launch.
Step 3 − In the Input data section, fill the required details and select the
choice for data storage, either S3 or Redshift. Click the Verify button.
Step 4 − After S3 location verification is completed, Schema section opens.
Fill the fields as per requirement and proceed to the next step.
Cost-efficient − Pay only for what we use, with no setup charges and no
upfront commitments.
Amazon CloudSearch
Amazon SWF
Step 3 − Run a Sample Workflow window opens. Click the Get Started
button.
Step 4 − In the Create Domain section, click the Create a new Domain radio
button and then click the Continue button.
Step 7 − In the Run an Execution section, choose the desired option and
click the Run this Execution button.
Finally, the workflow will be created and will be available in the list.
Its migration tool allows moving mailboxes from on-premises email servers
to the service, and it works with any device that supports the Microsoft
Exchange ActiveSync protocol, such as Apple's iPad and iPhone, Google
Android, and Windows Phone.
Step 1 − Sign in to AWS account and open the Amazon WorkMail console
using the following link − https://console.aws.amazon.com/workmail/
Step 3 − Select the desired option and choose the Region from the top right
side of the navigation bar.
Step 4 − Fill the required details and proceed to the next step to configure
an account. Follow the instructions. Finally, the mailbox will look like as
shown in the following screenshot.
Features of Amazon WorkMail
Managed − Amazon WorkMail offers complete control over email, and there
is no need to worry about installing software or maintaining and managing
hardware. Amazon WorkMail automatically handles all these needs.