Amazon Web Services - Clean

The document provides an overview of cloud computing and Amazon Web Services. It defines cloud computing and the different types of cloud models including public, private and hybrid clouds. It also gives examples of popular cloud services and describes the different cloud service models of SaaS, PaaS and IaaS.


Amazon Web Services

Introduction

Cloud computing is a way of storing and accessing programs and data over the internet rather than on your computer's local drive. The cloud is often used loosely as a metaphor for the internet, but cloud computing has nothing to do with your hard disk. Once your data and programs live in remote computing and storage, accessing them becomes faster and easier regardless of where you are. Local disk storage, on the other hand, is how PCs have worked for decades, and some still argue that it remains preferable to cloud computing in certain respects, for reasons discussed below.

What Is Cloud Computing?

Although "cloud computing" is a buzzword, what it does and how it makes your life easier is not a new phenomenon.

Cloud computing denotes the on-demand availability of computer system resources, particularly computing power and data storage, without the user having to manage that availability actively. The term typically describes data centers that many users can access over the internet. Today, the functionality of large clouds is distributed across multiple locations rather than served from a single centralized server.

Types of Clouds

Clouds generally fall into the three categories below.

Public Cloud

In a public cloud, third-party operators make services and resources available to their clients over the internet. The infrastructure is owned and managed by the operator, who is also responsible for securing the customers' data.
Private Cloud

A private cloud delivers similar services, but the infrastructure is dedicated to a single organization and is managed either by that organization or by a third-party operator on its behalf. Security risks are drastically reduced under this model because most of the control over the infrastructure stays with the organization.

Hybrid Cloud

This category combines elements of both public and private clouds, which is why many organizations consider it the most practical choice of the three.

Common Examples of Cloud Computing

Google Drive: Often cited as a textbook example of cloud computing, it is ideal for working alongside cloud apps such as Google Docs, Sheets and Slides. It can be used on laptops and desktops as well as on smart devices such as smartphones. In fact, most of Google's offerings can be viewed as cloud services: Google Maps, Google Calendar and, of course, the ubiquitous Gmail.
Apple iCloud: Apple's cloud service is used mainly for backup, online storage and keeping your calendar, contacts and email in sync. All of that data is available whether you are on macOS, Windows or iOS (Windows users access it through the iCloud client). Understandably, Apple refuses to play second fiddle and offers its own cloud versions of its programs and applications through iCloud. The service is especially popular with iPhone users, for whom a lost or replaced phone no longer means lost data.
Hybrid services: Examples include SugarSync and Dropbox, which count as cloud services because they sync your files over the internet, while also keeping local copies in device storage. They amount to cloud computing whenever several people with separate devices need identical data kept in sync, whether for collaboration or for staying connected with family and loved ones, and they are a good example of what cloud computing is useful for.
Cloud Hardware
The Chromebook is what comes to mind first when you think of a completely cloud-centered device. These laptops are deliberately designed with only modest local storage and processing power, essentially turning the Google Chrome browser into the operating system. A Chromebook nevertheless lets you do just about everything over the internet: play games, connect with friends, listen to music and use a wide range of apps.
Amazon Cloud Drive: Amazon's storage service is particularly useful for music, especially MP3s bought from the retail giant, and it also retains content you purchase for the Kindle. If you happen to be an Amazon Prime subscriber, you are in for a treat in the form of unlimited image storage.
Amazon Web Services entered the IT market back in 2006 with the launch of its web services, i.e. cloud computing. Thanks to the cloud, we no longer need to worry about provisioning infrastructure and servers, which is time- and effort-intensive. Instead, these services can spin up any number of servers within minutes, delivering results far faster. What adds to the cost efficiency of AWS is that you pay only for what you use.

Cloud service model

When it comes to cloud service models, there are three options:
1) Software as a Service (SaaS)
2) Platform as a Service (PaaS)
3) Infrastructure as a Service (IaaS)

Each model has its own strengths, weaknesses and distinctive features, so it is worth understanding them well enough to choose the option that best fits your company's needs.
Many businesses use SaaS when they simply need to access an application over the internet (such as Salesforce.com). PaaS comes into play whenever a company wants to build its own customized applications for corporate purposes. Then there is the increasingly important IaaS, where companies such as Google, Rackspace and Amazon provide the backbone infrastructure that many other firms rent. Netflix, for example, delivers its service as a customer of Amazon's cloud services.

Overview of SaaS
Also called cloud application services, SaaS is the most commonly used option for corporations in the cloud segment. SaaS uses the internet to deliver applications that a third party manages on behalf of its customers. Most SaaS applications run directly in the web browser, which means customers do not need to perform any downloads or installations.
Delivery of SaaS
Because of its delivery model, SaaS removes the need for IT staff to download and install applications on every individual computer. With SaaS, the vendor handles all the potential technical issues such as middleware, data and storage, paving the way for low-effort maintenance.

Advantages:
SaaS offers employers and businesses several benefits by significantly reducing the time and money spent on tedious tasks such as installing, running and updating software. This frees technical staff to spend their time on more pressing problems within the organization.

Key traits:

 Managed from a central location
 Hosted on a remote server
 Accessible over the internet
 Users are not responsible for software or hardware updates
When should SaaS be used?
 Small organizations or startups that need to launch ecommerce quickly
 Teams that need easy, quick collaboration
 Applications that are not needed very often, such as tax software
 Applications that need both web and mobile access

Examples
 Google G Suite (Apps)
 Dropbox
 Cisco WebEx
 Salesforce
 SAP Concur
 GoToMeeting

Overview of PaaS
PaaS delivers the cloud components of software, serving mainly as a platform for applications. It provides developers with a dependable framework they can use for building customized applications. The third-party provider or enterprise manages the networking, storage and servers, while the developers manage the applications themselves.
Delivery of PaaS
Its delivery model is similar to that of SaaS, except that instead of delivering software over the internet, PaaS delivers a platform for creating software over the web. This leaves developers free to concentrate on building their software without worrying about infrastructure, software updates, operating systems or storage.
Advantages
PaaS imparts several benefits to a company, regardless of its size. These include:
 Simplicity
 Efficacious preparation and development of applications
 Highly accessible and scalable
 Can be customized by the developers
 It has the ability of automating business policies
 Can be easily migrated to hybrid models
Key traits:
It leverages virtualization technology; this means that it is feasible to scale up or
down your business with the passage of time
Offers a wide range of offerings to aid app preparing/testing/events
Users can easily access it in wake of constant development
Integrates databases into internet services
When should PaaS be used?
PaaS is known to outline workflows after several developers work on a development-
based project. In case it is necessary to encompass alternate vendors, PaaS would
be advantageous to this approach by lending adaptability and speed. It is particularly
helpful for preparing customized apps.
Examples
 AWS Elastic Beanstalk
 Heroku
 Windows Azure
 Force.com
 OpenShift
 Google App Engine

Overview of IaaS
Also referred to as cloud infrastructure services, IaaS is a self-service model for accessing and monitoring storage, compute, networking and other services. It enables corporations to obtain resources on demand and as needed, rather than having to buy hardware outright at considerable expense.
Delivery of IaaS
IaaS is known to provide servers, computer-related infrastructure, networks, storage,
and operation systems via virtualization. Typically, organizations are provided with
IaaS over API or a dashboard. This is ana advantageous approach in that it provides
buyers of IaaS with the ability to manage the entire infrastructure with great efficacy.
The level, scale and quality of IaaS is similar to that of knowledge centers, but it also
offers the added benefit of not needing to ensue physical maintenance of it in its
entirety. Buyers of IaaS would still be able to directly avail their storage and servers;
however, all of it would be outsourced over the cloud on a “virtual knowledge center.
Another notable fact is that IaaS suppliers manage networking, laborious drives,
servers, storage as well as virtualization with a great deal of efficacy. Certain
suppliers also provide other offerings such as message queuing or appropriate
handling of databases.
Advantages:
 Easily the most flexible and versatile cloud computing model
 Easy to adapt to changing requirements for processing power, servers, networking and storage
 Hardware purchases can be based on actual consumption
 Customers keep complete control over their infrastructure
 Highly scalable
 Resources can be purchased on-demand and as needed
Key traits of IaaS:
 Resources are provided as a service
 Cost varies depending on usage
 Resources are highly scalable
 Multiple users share a single piece of hardware
 Versatile, flexible and dynamic
 The organization retains management control over the infrastructure
When should IaaS be used?
Like its above counterparts, IaaS offers a great deal of benefits on specific aspects.
Small firms and startups may prefer IaaS for averting reimbursement on time and
money for making computer codes and/or hardware. On the other hand, bigger firms
can find IaaS advantageous for ensuring absolute management on infrastructure and
applications. Corporations that are currently witnessing a great deal of growth are
fond of its measurability and flexibility. Basically, those who ae not sure of the
demands of a new application do not need to look further than IaaS, which is
primarily attributed to its measurability as well as flexibility.

Examples
 DigitalOcean
 Rackspace
 Linode
 Cisco Metacloud
 AWS
 Google Compute Engine (GCE)
 Microsoft Azure

AWS - Basic Architecture

The basic structure of AWS is as follows. Elastic Compute Cloud (EC2) lets you use virtual machines with varying configurations to suit your requirements. Via EC2, you can choose among pricing options, map individual servers, select configurations, and so on; each of these is elaborated upon in the section on AWS Products. The architecture is represented diagrammatically as follows:

As the diagram shows, S3 stands for Simple Storage Service, which makes it possible to store and retrieve several data types via an API. S3 has no computing element.

Load Balancing

In simple terms, load balancing distributes load across web servers using hardware or software, which improves the efficiency of the server and the application.

A common network appliance used in conventional web application architectures is the hardware load balancer.

The Elastic Load Balancing (ELB) service provided by AWS directs traffic to EC2 instances across multiple availability sources, and also allows EC2 hosts to be added to and removed from the load-balancing rotation.

ELB can dynamically grow and shrink its load-balancing capacity to cope with varying traffic demands. It also supports sticky sessions to accommodate more advanced routing requirements.
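The rotation behaviour described above can be sketched as a toy round-robin balancer in a few lines of Python. The host names are hypothetical and a real ELB also performs health checks; this is a conceptual sketch, not AWS's implementation:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model of a load balancer distributing requests across hosts."""
    def __init__(self, hosts):
        self.hosts = list(hosts)
        self._rotation = cycle(self.hosts)

    def route(self, request):
        # Each request goes to the next host in the rotation.
        return next(self._rotation)

    def register(self, host):
        # Mirrors adding an EC2 host into the load-balancing rotation.
        self.hosts.append(host)
        self._rotation = cycle(self.hosts)

lb = RoundRobinBalancer(["ec2-a", "ec2-b"])
assert [lb.route(r) for r in range(4)] == ["ec2-a", "ec2-b", "ec2-a", "ec2-b"]
```

Registering a new host restarts the rotation over the enlarged fleet, which is the essence of "growing capacity dynamically."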

Amazon CloudFront

CloudFront is responsible for content delivery; in other words, it is used for delivering a website's content. It can serve dynamic, static and streaming content through a worldwide network of edge locations. Requests for content from the user's end are automatically routed to the nearest edge location, which improves performance.
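Routing a request to the closest edge essentially means picking the location with the lowest latency for that viewer. A minimal sketch, with entirely made-up edge names and latency figures:

```python
# Hypothetical latency measurements (ms) from one viewer to each edge location.
edge_latencies = {"Frankfurt": 12.0, "Mumbai": 95.0, "Virginia": 110.0}

def nearest_edge(latencies):
    # CloudFront-style routing: choose the edge with the lowest latency.
    return min(latencies, key=latencies.get)

assert nearest_edge(edge_latencies) == "Frankfurt"
```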

Amazon CloudFront is optimized to work with other AWS services such as Amazon EC2 and Amazon S3. It also works with any non-AWS origin server, and it caches copies of the original files in the same way.

Amazon Web Services involve no contracts or monthly commitments, so you pay for only as much content as you actually deliver through the service.

Security Management

Security groups are a feature of EC2. A security group works much like an inbound network firewall: the user specifies the protocols, ports and source IP ranges that are permitted to reach the EC2 instances.

Each EC2 instance can be assigned multiple security groups, each of which routes the appropriate traffic to the instance. Security groups can also be configured with specific IP addresses, which restricts access to the EC2 instances.
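Conceptually, a security group is a whitelist of protocol/port/source-range rules. The check can be sketched with Python's `ipaddress` module; the rule values below are illustrative, not AWS defaults:

```python
import ipaddress

# Each rule permits a protocol/port combination from a source CIDR range.
rules = [
    {"protocol": "tcp", "port": 443, "cidr": "0.0.0.0/0"},       # HTTPS: anywhere
    {"protocol": "tcp", "port": 22,  "cidr": "203.0.113.0/24"},  # SSH: office range
]

def is_allowed(protocol, port, source_ip, rules=rules):
    # Inbound firewall semantics: traffic passes only if some rule matches.
    src = ipaddress.ip_address(source_ip)
    return any(
        r["protocol"] == protocol
        and r["port"] == port
        and src in ipaddress.ip_network(r["cidr"])
        for r in rules
    )

assert is_allowed("tcp", 443, "198.51.100.7")     # HTTPS open to the world
assert is_allowed("tcp", 22, "203.0.113.10")      # SSH from the permitted range
assert not is_allowed("tcp", 22, "198.51.100.7")  # SSH blocked elsewhere
```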

Amazon ElastiCache

This web service manages an in-memory cache in the cloud. In-memory caching plays an important role in lowering the load on services, and in improving performance and scalability at the database tier, by caching frequently used information.
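The pattern described, often called cache-aside, can be sketched as follows. The dictionary stands in for a cache cluster, and the function names and TTL value are assumptions for illustration:

```python
import time

cache = {}   # stands in for an in-memory cache cluster
TTL = 60.0   # seconds a cached entry stays fresh (illustrative value)

def query_database(key):
    # Placeholder for an expensive database read.
    return f"row-for-{key}"

def get(key):
    # Cache-aside: serve from cache while fresh, else read through and store.
    entry = cache.get(key)
    if entry and time.monotonic() - entry[1] < TTL:
        return entry[0]
    value = query_database(key)
    cache[key] = (value, time.monotonic())
    return value

assert get("user:42") == "row-for-user:42"  # miss: reads the database
assert "user:42" in cache                   # now cached for subsequent reads
```

Subsequent calls within the TTL never touch the database, which is exactly how the cache lowers load on the database tier.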

Amazon Relational Database Service

Amazon RDS provides access to a familiar database engine such as Microsoft SQL Server, Oracle or MySQL, so the same tools, applications and queries you already use can be used with Amazon RDS.

In addition to patching the database software automatically and managing backups according to the user's instructions, it supports speedy recovery. There is no upfront investment; users pay only for the resources they use.

Hosting an RDBMS

Via Amazon RDS, users can deploy their preferred Relational Database Management System, such as SQL Server, Oracle, MySQL or DB2, on EC2 instances.

Amazon EC2 uses Amazon EBS in much the same way as network-attached storage. All data for a database running on EC2 instances should be located on Amazon EBS volumes, which remain available even if the database host fails.

Amazon EBS volumes automatically provide redundancy within their Availability Zone, which increases their availability over simple disks. Moreover, if a single volume is insufficient for the database's needs, further volumes can be added to increase database performance. With Amazon RDS, the service provider manages the storage on your behalf.

Backups and Storage

The AWS cloud offers several options for storing, accessing and backing up web application data and assets. Amazon S3 provides a simple web-services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web.

In Amazon S3, data is stored as objects within buckets. The user can store as many objects as needed within these resources, and can write, read and delete objects as well.
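The bucket/object model above can be pictured as a two-level key-value store. Below is a toy in-memory stand-in for that model (not the real S3 API, whose calls go over HTTP via SDKs such as boto3; the bucket and key names are made up):

```python
# In-memory stand-in for S3's bucket/object model (illustrative only).
buckets = {}

def put_object(bucket, key, data):
    # Writing an object creates or overwrites the key within the bucket.
    buckets.setdefault(bucket, {})[key] = data

def get_object(bucket, key):
    # Reading an object retrieves the data stored under that key.
    return buckets[bucket][key]

def delete_object(bucket, key):
    del buckets[bucket][key]

put_object("my-assets", "logs/2020-01-01.txt", b"hello")
assert get_object("my-assets", "logs/2020-01-01.txt") == b"hello"
delete_object("my-assets", "logs/2020-01-01.txt")
assert "logs/2020-01-01.txt" not in buckets["my-assets"]
```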

Amazon EBS is effective for data that needs block-storage access and requires persistence, such as database partitions and application logs.

Amazon EBS volumes can be created at up to 1 TB each, and volumes can be striped together for larger capacity and improved performance. Currently, EBS supports 1,000 IOPS per volume; multiple volumes can be striped together to deliver thousands of IOPS per instance to an application.
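The striping arithmetic implied above is simple multiplication. A sketch using the 1,000 IOPS/volume figure quoted in the text (actual per-volume limits depend on the volume type):

```python
# Per-volume ceiling assumed from the figure quoted in the text.
IOPS_PER_VOLUME = 1_000

def striped_iops(volume_count, iops_per_volume=IOPS_PER_VOLUME):
    # Striping N volumes together multiplies the aggregate IOPS available
    # to a single instance by N (ignoring instance-level throughput caps).
    return volume_count * iops_per_volume

assert striped_iops(4) == 4_000  # four striped volumes ≈ 4,000 IOPS
```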

The difference between a conventional hosting model and the AWS cloud architecture is that the latter can dynamically scale the web application fleet on demand to adjust to changes in traffic.

In the conventional hosting model, traffic-forecasting models are used to provision hosts ahead of the projected traffic. AWS instead allows instances to be provisioned on the fly, via triggers that scale the fleet out and in. Amazon Auto Scaling can create capacity groups of servers that grow or shrink with demand.
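A trigger-driven scaling rule of the kind just described can be sketched as a threshold check. The CPU thresholds and fleet bounds below are illustrative assumptions, not AWS defaults:

```python
# Threshold-based scaling decision, as an Auto Scaling trigger might apply it.
def desired_capacity(current, cpu_percent, low=30.0, high=70.0,
                     minimum=2, maximum=10):
    if cpu_percent > high:      # heavy traffic: scale out
        return min(current + 1, maximum)
    if cpu_percent < low:       # demand has dropped: scale in
        return max(current - 1, minimum)
    return current              # within the band: hold steady

assert desired_capacity(4, 85.0) == 5
assert desired_capacity(4, 10.0) == 3
assert desired_capacity(2, 10.0) == 2   # never shrinks below the minimum
```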

AWS Web Hosting – Key Considerations


With regard to web hosting, the important considerations are as follows:

No need for physical network devices

One of the key benefits of AWS is that network devices such as the routers, firewalls and load balancers for your AWS applications no longer need to sit on physical hardware; they can be replaced with software-based solutions.

There are several quality software options to choose from. For load balancing you can select Pound, Nginx or Zeus, among others, while Vyatta, Openswan and OpenVPN can be used to set up VPN connections.

Security
The AWS model is secure by default, with all hosts locked down. Security groups are designed for each host type in the Amazon EC2 architecture, and a broad array of simple and tiered security models can be created to grant the minimum required level of access between hosts.

Data centers

EC2 instances are available in multiple Availability Zones within each AWS Region, which provides a model for deploying applications across data centers for both high availability and reliability.

AWS - Management Console

This web-based application facilitates the management of AWS. It presents a number of services to choose from, and it also provides all the information related to our account, such as billing. The AWS Management Console further offers a built-in user interface for tasks such as working with S3 buckets, launching and connecting to Amazon EC2 instances, and setting up Amazon CloudWatch alarms, among others.

Below is a screenshot of the console for the Amazon EC2 service.


Steps for gaining access to AWS
1) Click on services; a list of AWS services appears.
2) Choose an option from the list of categories. It expands into sub-categories, such as Compute and Database, as shown in the screenshot below.

3) Choose your preferred service, and its console will open.

How to Customize the Dashboard

Create Services Shortcuts

To begin with, click on the Edit menu on the navigation bar; a list of options will appear. Shortcuts can be created simply by dragging services from the menu to the navigation bar.
Add Services Shortcuts

After taking the steps above, you will have created and added the shortcut. Shortcuts can be arranged in any order. The screenshot below shows shortcuts created for the DynamoDB, EMR and S3 services.

Delete Services Shortcuts

To delete a shortcut, click on Edit, then drag the shortcut from the navigation bar back to the service menu, as shown in the screenshot below.
Selecting a Region

Since many services are region-specific, we must choose a region in order to manage resources there. Some services, such as AWS Identity and Access Management (IAM), do not require a region to be selected.

To select a region, you must first choose a service. For example, click on the region selector, e.g. US West (Oregon), on the console and then choose a region.

How to Change the Password

Take the steps below to change your AWS account password.

1) On the left-hand side of the navigation bar, click on the account name, which in this case is 'Narayan.'
2) Select Security Credentials; a new page opens with several options. Click on the option to change the password, then follow the instructions.

3) After logging in, a page opens again with options for changing the password. Follow the instructions shown.

You will see a confirmation message once the password change is successful.

How to Know Billing-Related Information

On the navigation bar, select the account name and then choose the option 'Billing & Cost Management.'
You will be taken to a page containing all the financial information for the account. This service lets you track usage, pay AWS bills and estimate budgets.

AWS - Console Mobile App

Amazon Web Services provides the AWS Console Mobile App, which lets users view resources for a chosen set of services and also supports a limited set of management functions.

The mobile app gives access to the following services and functions:

EC2
Browse, filter and search instances, and see configuration details
View the status of CloudWatch metrics and alarms
Perform operations on instances, such as start, stop, reboot and terminate; manage security group rules and Elastic IP addresses
View attached block devices

Elastic Load Balancing


Browse, filter and search load balancers.
See configuration details of attached instances, and add or remove instances from load balancers.

S3
Browse buckets and see their properties; view the properties of objects as well.

Route 53
Browse and see hosted zones as well as various details relating to record sets.

RDS
Browse, filter, search and reboot instances.
View network and security settings along with configuration details.

Auto Scaling
View group details, policies, metrics and alarms. Change the number of managed instances as the situation demands.

Elastic Beanstalk
See events as well as applications.
Restart app servers. Swap environment CNAMEs and view environment
configurations.

DynamoDB
See details of tables such as alarms, index and metrics, among others.

CloudFormation
See tags, stack status, resources, events/ output and parameters.

OpsWorks
See details of stacks, layers, instances and applications. View instances and their logs, and reboot them.

CloudWatch
View graphs of resources.
List alarms by time and status. Set configurations for various alarms.

Services Dashboard
This dashboard shows the status of all available services, along with the user's billing information.
Switch identities to view resources in more than one account.

AWS Mobile App Features


To use this app, you must have an existing AWS account. Simply create an identity with your credentials and choose a region from the menu. The AWS Mobile App lets you stay signed in to multiple identities simultaneously.

For security, it is recommended to secure the device with a passcode and to sign in to the app with the credentials of an IAM user. If the device is lost, that IAM user can then be deactivated so that no unauthorized person gains access.

The mobile console cannot be used with root accounts. If you use Multi-Factor Authentication, it is recommended to keep the hardware or virtual MFA device on a separate device, for the account's security.

The app's menu includes a feedback link that users can use to ask questions or share their experiences.

Amazon Web Services - Account

Steps for Using AWS Account


1) After creating an account, sign up for the AWS offerings you need.

2) Set your password, after which you are ready to use your account.
Services can be activated in the credits section.

Steps for Creating an AWS Account

Amazon provides a fully functional free account for one year so that users can explore the different features and components of AWS. Users get free access to AWS services such as EC2, S3 and DynamoDB, among others, although limits do apply to the resources that can be consumed.

1) Open https://aws.amazon.com and sign up for a new AWS account by entering the required details.

Existing AWS account holders can sign in directly with their password.

2) Enter your email address and fill out the form. Amazon uses this information for billing and invoicing, as well as for identifying the account. After creating the account, sign up for the services you need.
3) The next step is to enter payment information. To verify the card's validity, Amazon makes a small test transaction against it; the amount charged varies by region.

4) The next step is identity verification, in which Amazon places a call to verify the contact number provided.

5) Select a support plan: Basic, Developer, Business or Enterprise. If you just want to get acquainted with AWS, choose the Basic plan, which is free of charge, albeit with limited resources.

6) The final step is confirmation. Click the link to log back in, and you will be directed to the management console.
The account has now been created, and you can begin using AWS services.

Account Identifiers
Each AWS account is assigned two unique IDs, described below.

AWS Account ID

This 12-digit account ID is used when constructing Amazon Resource Names (ARNs). The number distinguishes our resources from resources in other AWS accounts.

To find it, click on Support on the navigation bar (upper right) in the management console, as the screenshot below shows.
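An ARN embeds this account ID in a fixed colon-separated layout: `arn:partition:service:region:account-id:resource`. A small illustrative helper (the account ID and resource values below are made up, and some services leave the region or account field empty):

```python
# ARN layout: arn:partition:service:region:account-id:resource
def build_arn(service, region, account_id, resource, partition="aws"):
    return f"arn:{partition}:{service}:{region}:{account_id}:{resource}"

# IAM is a global service, so its ARNs leave the region field empty.
arn = build_arn("iam", "", "123456789012", "user/Narayan")
assert arn == "arn:aws:iam::123456789012:user/Narayan"
```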

Canonical User ID

This ID is a string of alphanumeric characters, such as 1274abcdef3234, and is used in S3 bucket policies to grant another AWS account access to resources.

Account Alias
This is essentially the URL for the user's sign-in page, which by default contains the account ID. The URL can be customized with the company's name, overwriting the previous one.

Steps for Creating and Deleting Your AWS Account Alias

1) Sign in to the AWS management console and open the IAM console at https://console.aws.amazon.com/iam/
2) Choose the customize link and create an account alias of your choice.

3) To delete the alias, click the customize link and choose the button that reads 'Yes, Delete.'

Multi Factor Authentication


MFA adds a layer of security by requiring users to enter an authentication code, delivered via an MFA device or an SMS message, when accessing AWS websites and services. Users gain access to these offerings only if the code entered is correct.

Requirements

To use MFA, a device (hardware or virtual) must be assigned to the IAM user or the AWS root account. Each MFA device must be uniquely assigned: a user cannot enter a code from another user's device.
Enabling MFA Device
1) Visit the URL: https://console.aws.amazon.com/iam/

2) Select Users in the navigation pane to see the list of users.

3) Scroll down to Security Credentials, select MFA, and then click on activate MFA.

4) Follow the instructions shown on the screen.

Three ways of enabling such devices include:

SMS

In this method, the IAM user is configured with the phone number of the user's SMS-compatible mobile device. When signing in, AWS sends a six-digit code by SMS to that device, and the user must enter the code on a second web page during sign-in so that the correct user is authenticated. This method is available for IAM users, not for the AWS root account.

Hardware

In this method, a hardware MFA device is assigned to the IAM user or the AWS root account. The device generates a six-digit numeric code based on a one-time password algorithm, and the user must enter that code on a second web page while signing in so that the correct user is authenticated.
Virtual

In this method, a virtual MFA device is assigned. This is a software app, typically running on a mobile device, that emulates a physical device. It generates a six-digit numeric code based on a time-synchronized one-time password algorithm, and the user must enter that code on a second web page while signing in so that the correct user is authenticated.
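Virtual MFA apps commonly implement the standard TOTP algorithm (RFC 6238), which builds on HOTP (RFC 4226). A minimal stdlib-only sketch of how the six-digit code is derived; the secret below is the RFC 4226 test key, not a real credential:

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, then dynamic truncation (RFC 4226).
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    # TOTP (RFC 6238): the counter is the number of 30-second steps since epoch,
    # which is why the displayed code changes every half minute.
    return hotp(secret, int(time.time()) // step)

# RFC 4226 test vectors for the underlying HOTP function:
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

Because both sides compute the same code from a shared secret and the current time, the server can verify the user without any message being sent to the device.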

AWS IAM

An IAM user is an entity created in AWS to represent an individual who uses it, without granting that individual complete access to all resources. This means the root account need not be used for everyday activities, since the root account has complete access to all AWS resources.
Steps for Creating Users

1) Visit the URL: https://console.aws.amazon.com/iam/ for logging into the AWS
Management Console.

2) Choose the option of ‘Users’ on the navigation pane (on the left side) to view the
list of users.

3) Create a new user via the Create New Users option. When the new window opens,
enter the intended user name and choose Create to create the new user.
4) It is possible to view Access Key IDs by choosing the 'Show User Security
Credentials' link. If you want, you can save the details on your PC via the
Download Credentials option.

5) You can now manage the security credentials of the user, such as managing MFA
devices, generating passwords, creating and/or deleting access keys, and adding
the user to new groups, among others.
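The access keys mentioned above are what clients use to sign AWS API requests. As an illustration of one piece of that process (not the full signing protocol), the Signature Version 4 signing-key derivation chains four HMAC-SHA256 operations; the secret key below is a placeholder, never hard-code real credentials:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS Signature Version 4 signing key (a chain of HMAC-SHA256)."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(b"AWS4" + secret_key.encode("utf-8"), date)  # date as YYYYMMDD
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Placeholder credentials purely for illustration.
key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20240101", "us-east-1", "iam")
```

Because the key is scoped to a date, region, and service, a leaked signature cannot be replayed elsewhere; in practice the AWS SDKs perform this derivation for you.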
AWS - EC2

This web service interface offers resizable compute capacity in the AWS cloud.
Using this interface, developers can get complete control over computing
resources and web-scale computing.

Depending on your requirement, you can increase or lower the number of instances.
It is possible to launch such instances in multiple geographical regions. All regions
consist of many Availability Zones across specific locations linked by networks (low
latency) across the same regions.
Components of EC2
It is important for users to know more about the components of EC2, security
measures, support for operating systems and pricing structures, among others.

Security Measures

Under AWS EC2, security groups can be created and running instances placed in
them as required. You must specify both the groups that may communicate with one
another and the Internet IP subnets that may talk to each group.
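The subnet rules described above boil down to CIDR matching. A toy check of whether a source address falls inside any of a rule's allowed ranges, using the standard library (the addresses are made up):

```python
import ipaddress

def is_allowed(source_ip, allowed_cidrs):
    """Return True if the source address matches any allowed CIDR rule."""
    ip = ipaddress.ip_address(source_ip)
    return any(ip in ipaddress.ip_network(cidr) for cidr in allowed_cidrs)
```

A real security group also matches on protocol and port range, but the address check is the same membership test shown here.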

OS Support

Amazon EC2 allows users to gain access to multiple OSs for which additional
licensing fees must be paid: these include SUSE Enterprise, Red Hat Enterprise,
UNIX, Windows Server, and Oracle Enterprise Linux. It is necessary to deploy
these OSs alongside Amazon's Virtual Private Cloud (VPC).

Pricing Features

AWS provides a wide range of pricing options based on the kind of database,
applications, and resources. Users can configure resources, and charges are
computed accordingly.

Tolerance for Faults

Via Amazon EC2, users can avail themselves of the resources for building
fault-tolerant applications. Additionally, EC2 consists of both isolated
locations and geographic regions for stability and fault tolerance. For security
purposes, it does not pinpoint where local data centers are located.

Upon launching the instance, users are required to choose an AMI which is situated in
the same area wherein the instance would operate.

Migration

By gaining access to this service, users can migrate existing applications to
EC2. Its cost is $80.00 per storage device plus $2.49 per hour of data loading.
The service is particularly suited to users who need to migrate copious amounts
of data.

Features of EC2
On-Demand – You can access it from anywhere, regardless of your location.
Resource pooling – Put succinctly, a massive data center is offered via
different channels.
Elasticity – The number of instances can be scaled up or down as the demand
changes.
Flexibility – It is capable of accommodating many OSs. Additionally, it is quite
secure thanks to proactive elements such as private key files. Amazon EC2 works
in a VPC to offer a secure network for accessing resources.
Affordable – Users pay only for what they use. Purchase options include Reserved
Instances, On-Demand Instances, etc.
Using AWS EC2

1) Upon signing in to your AWS account, visit the following URL for opening the
IAM console: https://console.aws.amazon.com/iam/

2) Choose Groups in the navigation panel to create and view groups.

3) Select Users in the navigation pane, create new IAM users, and add them to
the groups.

4) Follow these steps to create a VPC.

Visit the following URL for opening the VPC console:
https://console.aws.amazon.com/vpc/

In the panel, select VPC. After that, choose the same region for which the
key pair has been created.

Choose Start VPC Wizard.

On the VPC configuration page, select the VPC that has only one public
subnet. Subsequently, choose the Select option.

After that, the page for a VPC with only one public subnet opens up. Enter
the name of the VPC in the corresponding field, but make sure that the other
configurations are left untouched.

Choose Create VPC, then select OK.

5) Create the WebServerSG security group and then add the relevant rules by
following these instructions.

On the navigation panel, choose Security groups.

Click on Create Security Group and enter the necessary fields on your
screen. From the menu, select your VPC ID, then choose the 'Yes, Create'
button.
After the creation of a new group, choose the edit option for creating rules
(this option is on the inbound rules tab).

6) Launch EC2 instances into the VPC by following these instructions.

Visit the following URL for opening the EC2 console:
https://console.aws.amazon.com/ec2/ and then choose the Launch Instance option.

On the new page, select the Instance Type and provide the desired
configuration. Thereafter, choose Next.

After a new page opens, choose the VPC from the list of networks and the
subnet from the list of subnets, leaving the other settings unchanged.

Choose Next until you see the Tag Instances page.

7) On the page of Tag Instances, give a tag along with the name to the instances.
Click on Configure Security Group.

8) When the next page opens, choose the option to select an existing security
group. Choose the previously created WebServerSG group, then select Review and
Launch.

9) Click on “Instance details” on the Review Instance Launch page, then select
the Launch button.

10) After a pop-up dialog box shows up, either choose a current key pair or
create a new one.

Then click on the Launch Instances button.

Elastic Load Balancing


ELB distributes incoming traffic over multiple Amazon EC2 instances and helps
achieve greater fault tolerance. It automatically detects unfit instances and
reroutes traffic to healthy ones until the unfit instances have been restored.
However, you may want to select offerings such as Amazon Route 53 if the need of
the hour is more complicated routing algorithms.
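As a rough mental model only (ELB's actual algorithm also weighs load and health-check results), the behavior described above can be sketched as a round-robin cycle that skips instances marked unfit; the instance IDs are made up:

```python
import itertools

class ToyLoadBalancer:
    """Toy round-robin balancer: cycles requests across healthy instances."""

    def __init__(self, instances):
        self.healthy = list(instances)
        self._cycle = itertools.cycle(self.healthy)

    def route(self):
        """Return the instance that should receive the next request."""
        return next(self._cycle)

    def mark_unfit(self, instance):
        # Stop routing to an unhealthy instance until it is restored.
        self.healthy.remove(instance)
        self._cycle = itertools.cycle(self.healthy)

lb = ToyLoadBalancer(["i-0aaa", "i-0bbb", "i-0ccc"])
```

Restoring an instance would simply add it back to `healthy` and rebuild the cycle, mirroring how ELB resumes traffic once health checks pass again.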

Three components of ELB include:

Load Balancer

This tracks and oversees requests incoming via the Internet or an intranet and
distributes them to registered EC2 instances.

Control Service

This automatically adjusts handling capacity in response to incoming traffic by
adding and removing load balancers. Ascertaining the health of instances is
another feature of the Control Service.

SSL Termination
This saves the CPU cycles otherwise spent encoding and decoding SSL in the EC2
instances linked with the ELB. You need an X.509 certificate to get it
configured with the ELB. You can also optionally terminate the SSL connection at
the EC2 instance instead.
ELB Attributes

ELB handles an unlimited number of requests each second under a gradually rising load pattern.

EC2 instances as well as load balancers can be configured to accept traffic.

Based on the requirement, load balancers can be added or removed without
impeding the flow of information.

It is not ideally suited to handle sudden spikes in requests, such as those from
online trading and online exams.

ELB can be enabled in a single AZ or across more than one zone to ensure
consistency in application performance.

Creating Load Balancers


1) Use the following URL to access the Amazon EC2 console:
https://console.aws.amazon.com/ec2/

2) Choose your load balancer region on the right-hand side, from the region menu.

3) Choose Load Balancers and then Create Load Balancer option. Enter the necessary
details after a pop-up window opens up.

4) Enter your load balancer’s name in the LB field.

5) Choose the network you have for instances in the box ‘create LB inside’.

6) Choose “Enable advanced VPC configuration”.


7) After you click on the add button, a new pop-up would emerge where you can
choose subnets from a list of subnets. Choose a single subnet for each AZ.

8) Select Next. After choosing a VPC as the network, you can assign groups to LB.

9) Ensure compliance with the instructions for assigning security groups to LB. Now,
click on Next.

10) This should open up a new pop-up box showing the health check-up
configuration details along with default values. You can set your own values,
although doing so is completely optional. Now, click on “Next” and add EC2
Instances.

11) A pop-up box opens with information pertaining to instances, such as
registered instances. Now is the time to add instances to the LB by choosing the
Add EC2 Instance option and entering the required information. Click on Add Tags.

12) You can add tags to the LB if you want to. In order to do so, go to the Add
Tags page and enter details such as the tag key and value. Subsequently, select
the Create Tag option, followed by the Review and Create button.
13) Click on Create for setting up your LB and then click on Close.
Deleting a Load Balancer

1) Visit this URL to open the Amazon EC2 console: https://console.aws.amazon.com/ec2/

2) On the navigation pane, select Load Balancers option.

3) Choose the load balancer before clicking on Actions.

4) Finally, click on the Delete button. After you see an alert window seeking your
confirmation, choose Yes, Delete button.

AWS - WorkSpaces
This fully managed, on-cloud desktop service lets customers deliver cloud-based
desktops to end-users, so that the latter can access various resources using
their preferred device, such as Android tablets, laptops, Kindle Fire, or iPads.
This offering was intended to meet the growing demand for DaaS, or Desktop as a
Service. Desktops are streamed to users through PCoIP, and by default, data is
backed up every 12 hours.

Requirements

You need a web connection with the UDP and TCP ports open at your end. You are
also required to download a free app, the Amazon WorkSpaces client.

Creating Amazon Workspaces


1) Create as well as configure the VPC. (We shall discuss this in detail in the VPC
chapter.)

2) Take the steps mentioned below to create an AD Directory.

Open the Amazon WorkSpace Console by visiting this URL:
https://console.aws.amazon.com/workspaces/

On the navigation panel, choose Directories, followed by Setup Directory

After you see a new page, choose Create Simple AD button before entering
the necessary details.
Enter the VPC details in the VPC section before clicking on “Next step”.

You will come across a review page where the entire information can be
reviewed. Incorporate changes if anything is incorrect, and then select the
Create Simple AD button.
3) Take the steps mentioned below for creating a WorkSpace.

For accessing the Amazon WorkSpace Console,
visit https://console.aws.amazon.com/workspaces/

Choose WorkSpaces, then the Launch WorkSpaces option.

Click on the cloud directory. In this directory, enable or disable WorkDocs
for all the users, before clicking on “Yes, Next.”

On the new page, enter all details that are required for a new user before
choosing Create Users. Select Next after ensuring that a user has been
added to the list.

Put in the number of required bundles on the WorkSpaces Bundles page before
clicking on Next.

On the newly opened review page, ascertain all the details. If necessary,
incorporate the changes and click on Launch WorkSpaces.

After you are shown a message confirming the account, you can start using
WorkSpaces.

4) Examine WorkSpaces by following the instructions below.

Download and install the Amazon WorkSpaces client app
from here: https://clients.amazonworkspaces.com/
Start off by running the app. Those doing it for the first time would be
required to put in the registration code that they will receive via email. Now,
click on Register.

Get connected to WorkSpace by putting in the user name as well as


password. Click on Sign In.
After you see the WorkSpace desktop on your screen, visit this URL:
http://aws.amazon.com/workspaces/

Navigate and confirm that you can see this page.

You will see the following message: “Congratulations! Your Amazon


WorkSpaces cloud directory has been created, and your first WorkSpace is
working correctly and has Internet access.”

Features of Amazon WorkSpaces

Health Check-Up of Network

This attribute of AWS WorkSpaces helps ascertain whether the Internet and
network connections are functioning, as well as whether WorkSpaces and their
related registration services can be accessed. It also helps determine whether
or not port 4172 is available for TCP or UDP access.

Client Reconnect

Notably, this attribute of AWS WorkSpaces lets users access their WorkSpace
without having to enter their credentials each time they get disconnected. The
app deployed on the client's device saves a token in a secure store that remains
valid for a period of 12 hours and helps authenticate the correct user. Users can
access their WorkSpace by clicking on the Reconnect button. This attribute can be
disabled at any time.
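The 12-hour token validity described above amounts to a simple expiry check; a sketch under the assumption that the client compares the token's issue time against the current time (the timestamps are illustrative):

```python
from datetime import datetime, timedelta

TOKEN_LIFETIME = timedelta(hours=12)  # validity window described in the text

def token_valid(issued_at, now):
    """True while the cached reconnect token is still inside its window."""
    return timedelta(0) <= now - issued_at < TOKEN_LIFETIME
```

Once the window lapses, the client falls back to prompting for full credentials, exactly as on a first sign-in.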

Auto Resume

This attribute of AWS WorkSpaces helps the client restore a session disconnected
for any reason within a span of 20 minutes (this is the default time-span and can
be extended up to four hours). This feature can also be disabled at any time in
the group policy section.

Console Search

This attribute of AWS WorkSpaces lets administrators search for WorkSpaces on
the basis of directory, bundle type, or name.
Amazon WorkSpaces - Advantages

Provides Data Security


Amazon VPC facilitates the deployment of Amazon WorkSpaces. Users are able
to gain access to storage volumes on the cloud, and it integrates with
important AWS management services.

Better Utilization of Resources


AWS WorkSpaces saves the time otherwise needed to ascertain the number
of desktops to be implemented. Additionally, it helps in configuring the
desktop that you need. More specifically, it is capable of providing CPU and
storage configurations which can be modified so as to optimize your
application's performance. Finally, it lowers the price of the hardware that
needs to be bought.

Compatibility with Multiple OS


It is compatible with Amazon Linux 2, Windows 7, and Windows 10. It is also
possible to select from several productivity application bundles.

Management of User Access


Users are able to ensure easy management of user access control by
leveraging IP access control groups, which in turn makes access easier to
control. Additionally, it enables users to manage access to their own
WorkSpaces via existing tools.

Remote Management
From a single AWS console, it is possible to manage the launch of several
Workspaces. Since it can be availed across 11 regions and delivers top-quality
cloud computing anytime and anywhere, it is also possible to augment the
deployment of global desktops.
AWS Lambda

This cloud service investigates actions inside the application before responding
through the deployment of functions, also referred to as user-defined code. It
helps ensure automatic management of computing resources across several AZs.

AWS Lambda is compatible with code in Node.js, Python, and Java. Additionally,
the service is capable of launching processes in all languages that Amazon Linux
supports.

When it comes to utilizing AWS Lambda, follow these recommended steps.

Consider writing your Lambda function code in a stateless style.

Do not declare any function variable outside the handler's scope.

Retain +rx permissions on your files (in the uploaded ZIP) to make sure that
Lambda is capable of executing code on behalf of the user.

Delete old functions when you no longer require them.
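The stateless style recommended above can be illustrated with a minimal handler in Python (one of the supported runtimes): all state arrives in the event, and nothing is stored outside the handler's scope. The event shape below imitates a DynamoDB stream record and is an assumption for illustration:

```python
def handler(event, context):
    """A stateless Lambda-style handler: output depends only on the input event."""
    records = event.get("Records", [])
    # Count only hypothetical INSERT events, as an example filter.
    inserts = [r for r in records if r.get("eventName") == "INSERT"]
    return {"received": len(records), "inserts": len(inserts)}
```

Because the function keeps no state between invocations, Lambda can freely run many copies in parallel or recycle the execution environment without changing the result.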

Configuring AWS Lambda

1) Sign-in to the AWS account.

2) Choose Lambda from the AWS services section.

3) Choose an optional Blueprint and click on ‘Skip’


4) As demonstrated in the screenshot below, enter all the relevant details for
setting up a Lambda function, then paste the Node.js code, which will be
triggered automatically when a new item gets added to DynamoDB. Choose the
requisite permissions.
5) Get your details verified after clicking on “Next”

6) Click on “Create Function”

Choose the Lambda service and select the Event Sources tab, where you will
see that there are no records. In order for the Lambda function to run, add a
minimum of one event source to the function. In this case, a DynamoDB table is
being added to it.

7) After choosing the stream tab, it is time to link it to the Lambda function.

This entry can be viewed on Lambda Service page’s Event Sources Tab.

8) Add some entries to the table. After an entry is added and saved, the Lambda
service triggers the function. Use the Lambda logs to verify this.

9) In order to see the logs, just choose the Lambda service before selecting the tab for
Monitoring. Subsequently, click on View Logs.
AWS Lambda – Advantages

Unlike the activity types of Amazon SWF, there is no need to register
Lambda tasks.

It is possible to use any Lambda functions which have already been defined
in the workflows.

Amazon SWF directly calls the Lambda functions; in other words, you do
not need to get a program designed in order to implement them.

Lambda provides logs and metrics for monitoring function executions.

AWS Lambda Limits


The three kinds of Lambda limits are as follows.

Throttle Limit

The default is 100 simultaneous executions of Lambda functions per account,
applied to the total simultaneous executions across all functions in the same
region.

It is calculated as follows: average execution duration of the function × the
number of events/requests that AWS Lambda processes per second.

Upon reaching this limit, a throttling error shows up with the code 429.
However, you can resume work after 15-20 minutes, or increase this limit by
getting in touch with the AWS Support Center.
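The formula above is an application of Little's law; a small helper makes the arithmetic concrete (the 100-execution default is the figure from the text, current AWS quotas may differ):

```python
DEFAULT_THROTTLE_LIMIT = 100  # default simultaneous executions per account

def concurrent_executions(avg_duration_s, requests_per_s):
    """Estimated simultaneous executions = average duration x request rate."""
    return avg_duration_s * requests_per_s

def would_throttle(avg_duration_s, requests_per_s, limit=DEFAULT_THROTTLE_LIMIT):
    """True when the estimate exceeds the limit (AWS responds with code 429)."""
    return concurrent_executions(avg_duration_s, requests_per_s) > limit
```

For example, a function averaging 0.5 s per invocation at 100 requests/second occupies about 50 concurrent executions, comfortably under the default limit.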

Limit on Resources

This table illustrates the resource limits for a Lambda function.

Resource: Limit (Default)

Disk capacity: 512 MB

Number of threads and processes: 1,024

Total file descriptors: 1,024

Peak execution duration per request: 5 minutes

Service Limit

This table illustrates the service limits for a Lambda function.

Item: Limit (Default)

Deployment package size of a Lambda function: 50 MB

Total size of code and dependencies that can be zipped into a deployment
package: 250 MB

Overall size of all deployment packages that can be uploaded per region: 1.5 GB

Total number of unique Lambda functions that can be linked to all Scheduled
Events: 5

To know more about the most recent limits and associated information,
please check out https://docs.aws.amazon.com/lambda/latest/dg/limits.html

AWS - Virtual Private Cloud

VPC enables users to make use of AWS resources within a virtual network. If they
wish to, users are allowed to customize their virtual networking environment and
make changes. This includes forming subnets, choosing their IP addresses, and
preparing network gateways and route tables.
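Carving a VPC's address range into subnets, as mentioned above, is plain CIDR arithmetic that the standard library can model; the 10.0.0.0/16 range and /24 split are hypothetical choices for illustration:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")      # hypothetical VPC CIDR block
subnets = list(vpc.subnets(new_prefix=24))     # split into /24 subnets

first = subnets[0]  # the lowest subnet, 10.0.0.0/24
```

Each /24 holds 256 addresses; in a real VPC, AWS reserves a handful of addresses in every subnet, so the usable count per subnet is slightly lower.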

Amazon VPC works with the following AWS offerings −

o Amazon Redshift

o Amazon WorkSpaces

o Auto Scaling

o ELB

o Amazon OpsWorks

o Amazon RDS

o AWS Data Pipeline

o Elastic Beanstalk

o Amazon ElastiCache

o Amazon EMR

Steps for Using Amazon VPC

1) Visit this URL to open the Amazon VPC console −
https://console.aws.amazon.com/vpc/

2) On the navigation bar, choose the Create VPC option and select the same
region that was chosen for the other offerings.

3) After selecting the VPC wizard, click on the option that shows a single public subnet.

4) You will be directed to a configuration page. Enter subnet name and VPC name
but keep the other fields unchanged (default value). Now, click on Create VPC.

5) You will now be shown a dialog box indicating the work in progress. Choose
the OK button upon its completion.

Your VPCs page displays a list of VPCs that are available. You can change VPC
settings.
Creating VPC Group

1) Visit the following URL − https://console.aws.amazon.com/vpc/

2) On the navigation bar, choose the Security Groups option before selecting
Create Security Group.

3) After a form page opens up, put in details such as name tag and group name.
Choose your VPC’s ID from the VPC menu. Next, press on Yes, create.

4) After the list of groups shows up, choose its name from the list before
setting the rules. Next, press on Save.

Launching Instance to VPC

1) Visit the following URL − https://console.aws.amazon.com/vpc/

2) Ensure that the same region is selected that was chosen when creating security
group/VPC.

3) On the navigation bar, choose the option for Launch Instance.

4) When a new page shows up, select the desired AMI.

5) Now, select the Instance Type followed by the hardware configuration. Next,
click on Next: Configure Instance Details.

6) From the list of networks, choose the VPC that was created recently, as well
as the subnet from the list of subnets. Keeping the other settings in the
default mode, choose Next until the Tag Instance page.

7) After you reach the Tag page, tag the instance with a Name in order to
pinpoint your instance and distinguish it from the list of other instances.
Then, press the following button: “Next: Configure Security Group.”

8) Upon reaching this page, choose the group that was recently created from the
chosen list. After doing that, press on “Review and Launch”

9) As you reach the page of Review Instance Launch, choose Launch after
ascertaining your details regarding the instance.

10) You will come across a dialog box. After creating a new key pair or choosing a
current one, press on “Launch Instances”.

11) You will now reach the confirmation page showing the details concerning
instances.

Steps for Assigning Elastic IP Address

1) Visit the following URL − https://console.aws.amazon.com/vpc/

2) On the navigation bar, choose the option of Elastic IP.

3) After choosing Allocate New Address from the list, press on Yes, Allocate.
4) From the available list, choose your Elastic IP address, before choosing Actions.
Follow it up by pressing on Associate Address.

5) In the newly opened dialog box, choose the Instance as shown in the
screenshot below. After doing that, choose your instance from the list of
instances. As a last step, press the Yes, Associate button.

Deleting a VPC

If you want to delete a VPC and release its associated resources, just follow
these steps.

1) Visit this link: https://console.aws.amazon.com/vpc/

2) On the navigation bar, choose the option of Instances.


3) From the available list, choose the concerned Instance, and choose Actions.
Then, select Instance State and finally, press on Terminate.

4) You will come across a dialog box. After expanding the section titled Release
attached Elastic IPs, press on the checkbox near the Elastic IP address option. Now,
press on Yes, Terminate.

5) Reopen the Amazon VPC console −
https://console.aws.amazon.com/vpc/

6) On the navigation bar, choose the VPC. After choosing Actions, press on Delete
VPC.

7) You will now come across a confirmation message. Press on Yes, Delete.
VPC Features
Multiple connectivity options − When it comes to Amazon VPC, several
connectivity options are available. You can use public subnets to
link directly to the web.

Similarly, use Network Address Translation through private subnets
to link to the web.

Establish a secure connection to your corporate datacenter through
the highly secure IPsec hardware VPN.

You can also establish private link with other VPCs for the purpose
of sharing resources on more than one virtual network.

Establish a connection with Amazon S3 even in the absence of a web
gateway. As you do that, it is still possible to retain control over S3
buckets as well as their groups and user requests, among others.

User-friendliness − It is very easy to create a VPC. Just follow a few steps,
choosing a network setup based on individual requirements. After pressing
Start VPC Wizard, the subnets, IP ranges, route tables, and security groups
are created automatically.

Backup data easily − On a regular basis, you can back up data from the
datacenter to Amazon EC2 instances through the use of Amazon EBS volumes.

Use Cloud for extending networks − Migrate applications, unveil additional
web servers, and boost storage capacity by linking them to a VPC.

AWS - Route 53

This is a scalable and highly available Domain Name System (DNS) service
intended for corporates and developers alike, who can leverage it for routing
end users to web applications.

Steps for Configuring AWS Route 53


Please take the steps outlined below for configuring Route 53:

1) Visit this URL for opening the Route 53 console −
https://console.aws.amazon.com/route53/

2) On the navigation bar (top left corner), press on the Create Hosted Zone option.

3) As a form page shows up, enter the requisite details like comments and domain
name before pressing on Create.
4) You have now created a hosted zone for the domain. You now need to update the
four DNS endpoints, also known as the delegation set, under the domain name's
nameserver settings, as shown in the screenshot below.

5) Return to the Route 53 console before choosing the Go to Record Sets option.
You will come across a list of record sets, including two specific record sets:
NS and SOA, as shown in the screenshot below.
6) For creating your own record set, choose the Create Record Set option. Enter
the requisite details shown on the screen and press on Save Record Set.

7) Now you may want to form another record set for a different region, in order
to have at least two record sets with the same domain name pointing to different
IP addresses under your selected routing policy.

Upon completion, requests placed by the user would be processed on the basis of
concerned policy.
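One of the routing policies that can be selected for the record sets above is weighted routing, where requests split across record sets in proportion to their weights. A deterministic sketch of that selection (the targets and weights are made up; real Route 53 also folds in health-check status):

```python
def weighted_pick(records, roll):
    """Pick the record whose cumulative weight bucket contains `roll`.

    `records` is a list of (target, weight) pairs;
    `roll` lies in [0, total weight).
    """
    threshold = 0
    for target, weight in records:
        threshold += weight
        if roll < threshold:
            return target
    raise ValueError("roll outside total weight")

# Hypothetical record sets: same domain name, different targets.
records = [("192.0.2.10", 70), ("198.51.100.20", 30)]
```

With these weights, roughly 70% of uniformly random rolls land on the first target and 30% on the second, which is the traffic split a 70/30 weighted policy aims for.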

Route 53 Features
Domain registration is deeply simplified. This means that as a user, you can
buy all domain levels, such as .net and .com, directly from Route 53.

Since it is built on AWS infrastructure, the distributed nature of its DNS
servers goes a long way in making sure that end users' applications are routed
reliably on a consistent basis.

The scalable design of Route 53 enables it to tackle voluminous queries
without necessitating user interaction.

Route 53 is also compatible with other AWS offerings. For example, it can
be utilized for mapping domain names to Amazon EC2 instances as well as a
plethora of other AWS resources.

Ease of use, configuration, sign up, and provision of speedy responses for
DNS-related questions is another key advantage.

Route 53 is also unique in that it proactively tracks the application's overall
health. If Route 53 traces an outage, it redirects users automatically to
another resource that is healthy.

Users only need to pay for the domain service along with the total number of
queries answered by the service per domain, which makes it a viable and
cost-effective option.

The fact that it integrates with IAM means that Route 53 is able to control all
users who form part of AWS, which includes deciding which user can access which
portion.

AWS - Direct Connect

This feature makes it possible to use your network to establish a private
network link to an AWS location. The fact that it uses 802.1q VLANs means that
it is possible to partition it into several virtual interfaces for the purpose
of accessing resources without having to change your connection. The end result
is heightened bandwidth and lower network costs. It is also possible to
reconfigure virtual interfaces at any point in time.

Prerequisites for usage


It is necessary for the network to comply with at least one condition for the purpose
of using AWS Direct Connect −

The network must be available at one of the AWS Direct Connect locations.
To know more about the available locations, please visit
https://aws.amazon.com/directconnect/.

Now, visit the following URL to know more about AWS Direct Connect
partners, because it is necessary to collaborate with a member of the AWS
Partner Network − https://aws.amazon.com/directconnect/

You must be able to work with your service provider in order to get connected
to AWS Direct Connect.

In addition, it is also important for the network to be able to comply with the
following guidelines:

In order to connect with AWS Direct Connect, you need 1000BASE-LX (1310 nm)
for 1 Gigabit Ethernet or 10GBASE-LR (1310 nm) for 10 Gigabit Ethernet. In
addition to disabling auto-negotiation on the port, you must make available
support for 802.1Q VLANs on all these connections.

The network should also be able to support Border Gateway Protocol (BGP) MD5
authentication. Another option is to get Bidirectional Forwarding Detection
configured.

Steps for Configuring AWS Direct Connect


1) Visit the following URL for opening the AWS Direct Connect console −
https://console.aws.amazon.com/directconnect/

2) On the navigation bar, choose AWS Direct Connect region.

3) After you see the Welcome page, press on Get Started with Direct Connect as
shown in the below screenshot:

4) After a dialog box (Create a Connection) opens up, enter the details and
press on Create.

Authorized users will be sent a confirmation email within 72 hours.

5) Take these steps for creating a Virtual Interface.

Re-open the page of AWS console.

On the navigation bar, choose Connections before pressing on Create a
Virtual Interface. Now, fill in all the fields with the necessary details and
press on Continue.

After choosing Download Router Configuration, press on Download.


As an optional step, get the Virtual Interface verified. In order to get the
AWS Direct Connect links verified, follow these steps:

For the purpose of ascertaining the virtual interface connection, run
traceroute. Then, make sure that you are able to find the AWS Direct Connect
identifier in the network trace.

For the purpose of ascertaining the connection of the virtual interface with
Amazon VPC, utilize any pingable AMI and launch an EC2 instance in the VPC
linked with the virtual private gateway.

Direct Connect Features


Direct Connect is able to lower bandwidth expenses by transferring data
directly to and from AWS. Data that is moved across a dedicated link incurs
a lower Direct Connect data transfer rate as compared to web data transfer
rates.

This network service is compatible with all AWS offerings which can be
accessed on the web. Examples include Amazon EC2, Amazon S3, and
Amazon VPC, among others.

Users can also take advantage of AWS Direct Connect for setting up a virtual
private interface from home-based network directly to Amazon VPC.

Elasticity is another key feature of AWS Direct Connect, which offers 1 Gbps
as well as 10 Gbps connections. It is also possible to establish more than a
single connection, depending on the requirement.

Simplicity and ease of use is the hallmark of AWS Direct Connect. You can
manage all virtual networks and other connections via the AWS Management
Console.

AWS - Amazon S3

This high-speed, cost-effective, and scalable internet storage service facilitates not
only web-based backup, but also archiving of data and application programs.
Furthermore, using this service, you can store, download as well as upload any file
type whose size is up to 5 TB. Another key aspect is that subscribers get access to the
same systems that Amazon uses for operating its own sites. Furthermore, subscribers
are also able to control whether their data is publicly or privately accessible.
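The 5 TB per-object limit mentioned above is worth validating before attempting an upload. Below is a minimal Python sketch; the function name and the check itself are illustrative, not part of any AWS SDK.

```python
# Hypothetical pre-upload check for the S3 per-object size limit quoted
# in the text (up to 5 TB per object). Not an AWS API call.

S3_MAX_OBJECT_BYTES = 5 * 1024**4   # 5 TB

def can_upload(size_bytes):
    """Return True if a single object of this size fits within the limit."""
    return 0 < size_bytes <= S3_MAX_OBJECT_BYTES

print(can_upload(100 * 1024**2))    # 100 MB file -> True
print(can_upload(6 * 1024**4))      # 6 TB file -> False
```

Payloads larger than the limit would have to be split across multiple objects before uploading.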

Steps for Configuring S3


1) Visit the following link for opening the S3 console −
https://console.aws.amazon.com/s3/home

2) Follow these steps for Creating a Bucket

As shown in the below screenshot, you will come across a prompt window.
On the bottom, press on Create Bucket.

After you see a dialog box opening up, enter the requisite details and press
on Create.

After the successful creation of this bucket, you will see a list of buckets
along with their attributes.
Press on Static Website Hosting. After doing that, press on the Enable website
hosting button. Now, enter all the necessary details in the fields.

3) Take these steps for adding a new Object.

Visit the following URL − https://console.aws.amazon.com/s3/home

Press on Upload.
Select the option titled Add files. Now, you must choose files that must be
uploaded before clicking on Open.

In order to upload the files in the bucket, press on Start Upload.

For downloading or opening an object − Focus your attention on the list of
Objects & Folders, right-click the intended object, and then choose the option to
download or open it.

Steps for Moving an S3 Object:

1) In the Amazon S3 console, choose the option of files & folders. After right-clicking
on the object you want to move, press on Cut.
2) Navigate to the location where you want the object to be. Right-click on the bucket
or folder to which you want to move the object and press on Paste Into.

Steps for Deleting an Object


1) Login to Amazon S3.

2) On the panel shown on your screen, choose the files and folders option. Right-click
on the object which needs deletion and select the Delete option.

3) After a pop-up window shows up for confirmation, press OK.


Steps for Emptying a Bucket
1) Press the right-click button on the bucket which must be emptied. After that,
press on empty bucket.

2) On the confirmation message that appears on your screen, read what is written
carefully before pressing on Empty bucket.
AWS - Elastic Block Store

Amazon EBS refers to a block storage system that is utilized for storing persistent data.
It is deemed suitable for EC2 instances through the provision of storage volumes that
are highly available at the block level.

Types of EBS Volume


Three kinds of EBS Volume are available based on their cost, characteristics, etc.

EBS General Purpose


This is ideal for small and medium workloads such as root disk EC2 volumes, as well
as workloads with frequently accessed logs, among others. By default, General Purpose
SSD delivers 3 IOPS per GB, which implies that a 1 GB volume would provide 3 IOPS,
whereas a 10 GB volume would provide 30 IOPS. One volume's storage capacity
ranges between 1 GB and 1 TB, and it costs $0.10 per GB per month.

Provisioned IOPS

This is best suited for transactional workloads, demanding I/O-intensive workloads,
and big-sized EMR/Hadoop workloads, among others. By default, Provisioned IOPS SSD
supports 30 IOPS per GB, which in turn implies that a 10 GB volume yields 300 IOPS. The
storage capacity ranges between 10 GB and 1 TB. The price of one volume is
$0.125/GB per month for provisioned storage as well as $0.10 per month for each
provisioned IOPS.
Magnetic Volumes

Previously called standard volumes, this type of volume is best suited for workloads
such as data-log storage and backups for recovery, among others. For one volume, the
storage capacity ranges between 10 GB and 1 TB. The price of one volume is
$0.05/GB per month for provisioned storage.
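The IOPS ratios and per-month prices quoted above can be turned into a quick estimator. The following Python sketch uses only the figures from the text; real AWS rates vary by region and change over time, so treat the constants as placeholders.

```python
# Illustrative EBS cost/IOPS calculator using the rates quoted in the text.
# These are placeholder figures, not current AWS prices.

def baseline_iops(size_gb, ratio=3):
    """General Purpose SSD delivers 3 IOPS per GB by default."""
    return size_gb * ratio

def general_purpose_cost(size_gb):
    return size_gb * 0.10                    # $0.10 per GB-month

def provisioned_iops_cost(size_gb, iops):
    return size_gb * 0.125 + iops * 0.10     # storage plus provisioned IOPS

def magnetic_cost(size_gb):
    return size_gb * 0.05                    # $0.05 per GB-month

print(baseline_iops(10))                     # 10 GB volume -> 30 IOPS
print(provisioned_iops_cost(10, 300))        # 31.25 (dollars per month)
```

For instance, a 10 GB Provisioned IOPS volume with 300 provisioned IOPS would come to 10 × 0.125 + 300 × 0.10 = $31.25 per month at the quoted rates.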

Benefits of Amazon EBS


Secure and Reliable − Each EBS volume is replicated automatically within its
Availability Zone in order to be safeguarded from component failure.

Security − Due to Amazon's access control policies, it becomes easy to pinpoint who
would be able to gain access to EBS volumes. Access control plus encryption
offers a robust, detailed data security strategy.

Strong performance − SSD technology is used by Amazon EBS for providing
results that are in consonance with stable I/O application performance.

Ease of data backup − This can be achieved by taking point-in-time snapshots
of Amazon EBS volumes.

Steps for creating Amazon EBS


1) Implement the steps listed below for creating EBS volume.

Launch the console of Amazon EC2.

On the navigation bar, choose the area where you want to create the volume.
Choose Volumes on the navigation pane before choosing Create Volume.

Enter the necessary information such as Size, Volume Type list, Availability
zone, and IOPS, among others before pressing on Create.

You can see the names of volumes in the list, as shown in the screenshot
below.

2) Take the steps outlined below for storing EBS Volume.

Repeat the aforementioned steps for creating a volume.

In the Snapshot ID field, type the snapshot ID from which you want to restore the
volume, or choose it from the list of options suggested on the screen.

If you need more storage, alter the size of storage before pressing on the
button that reads Yes, Create.

3) Follow the steps outlined below for attaching EBS Volume to an Instance.

Open the console of Amazon EC2.

On the navigation pane, press on Volumes. Then select a volume before
choosing the option of Attach Volume to open a dialog box.

On this newly opened dialog box, fill in the instance ID and name for linking
the volume within the Instance field; else, you can choose it from a list of
suggestions.

Press on the button that reads Attach.


After linking it to the instance, ensure that the volume is available.

4) Detaching volume from Instance.

For unmounting the device, run the umount /dev/sdh command. Now, launch
the Amazon EC2 console.

Choose the option of Volumes on the navigation pane. The next step entails
selecting the option of Detach Volumes.

When a confirmation dialog box shows up on your screen, press on Yes, Detach.

AWS - Storage Gateway

This paves the way for integration between the AWS storage infrastructure and the
on-premises IT environment. You can get the data stored in the AWS cloud for
cost-efficient, secure, and scalable storage.

The two kinds of storage offered by AWS Gateway include tape based and volume
based.


Volume-Based
This type of storage on the cloud is capable of being mounted as devices of Internet
Small Computer System Interface from application servers (on-premises).

Gateway-cached
The entire on-premises application data gets stored by AWS Storage Gateway inside a
storage volume in Amazon S3. Each storage volume varies between 1 GB and 32 TB in
size, with up to 20 volumes and an overall 150 TB of storage per gateway. These
volumes can be attached from application servers as iSCSI devices. Its two
categories are as follows −

Cache Storage

All applications get their data stored using storage volumes. Cache storage is
generally used for initially holding data before it is written from the upload
buffer to the AWS storage volumes in Amazon S3. The cache storage disk retains
the most recently accessed data for low-latency access: when data is needed by the
application, the cache storage disk gets checked prior to Amazon S3. A minimum of 20%
of the existing file store should be allocated as cache storage, and it needs to be
larger than the upload buffer.
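The cache-first read path described above can be sketched as follows; the dict-backed cache and S3 store are stand-ins for the real disks and service, used only to illustrate the lookup order.

```python
# Toy model of the gateway read path: check the local cache disk first,
# fall back to Amazon S3 on a miss, then retain the data locally.

cache = {}                                     # most-recently-accessed data
s3 = {"vol1/block7": b"persisted data"}        # durable store (stand-in)

def read_block(key):
    if key in cache:                           # cache hit: served locally
        return cache[key]
    data = s3[key]                             # cache miss: fetch from S3
    cache[key] = data                          # retain for low-latency reads
    return data

print(read_block("vol1/block7"))               # first read: fetched from S3
print("vol1/block7" in cache)                  # True - now cached locally
```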

Snapshots − There are times when storage volumes need to be backed up
incrementally in Amazon S3. An incremental backup implies that a new snapshot only
backs up data which has changed since the previous snapshot. Such backups are called
snapshots and get stored in the form of Amazon EBS snapshots in Amazon S3.
Snapshots can be taken either at a fixed interval or in line with the requirement.

Upload Buffer − The upload buffer disk is used for storing the data before it is
uploaded to Amazon S3. The storage gateway uploads the data from the upload buffer
to AWS over an SSL connection.

Volumes Stored on Gateway

Upon the activation of the VM, gateway volumes get mapped to the attached storage
disks. For this reason, when applications read and write data from gateway
storage volumes, they do so from the on-premises disks that are mapped.

A volume stored in a gateway makes it possible to get primary data stored locally besides
providing on-premises applications with low-latency access to whole datasets. They can be
mounted in the form of iSCSI devices to application servers (on-premises), with their
size ranging between 1 GB and 16 TB. Each gateway supports up to 12 volumes, with
192 TB being the peak storage.

Virtual Tape Library


Notably, this type of storage offers a virtualized tape infrastructure which is able to
seamlessly scale in accordance with business needs, thus obviating the need to
provision, scale and maintain a physical tape infrastructure. All gateway-VTLs are
preconfigured with tape drives and media changers, which are available to existing
client backup applications in the form of iSCSI devices. If required, it is possible to
add tape cartridges later for data archiving.

In VTL architecture parlance, some terms are commonly used, as listed below.

Virtual Tape − Resembling its physical counterpart, it is capable of being stored on the
AWS cloud and can be created in two ways: through the use of the AWS Storage
Gateway API or the AWS Storage Gateway console. Every virtual tape's size ranges
between 100 GB and 2.5 TB. One gateway's total capacity can reach 150 TB, and it is
possible to simultaneously have 1500 tapes at the most.
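The tape and gateway limits quoted above (100 GB to 2.5 TB per tape, 1500 tapes and 150 TB per gateway) can be captured in a small validation sketch; the helper is hypothetical, not part of the Storage Gateway API.

```python
# Validation sketch for the VTL limits quoted in the text.

GB = 1024**3
MIN_TAPE, MAX_TAPE = 100 * GB, int(2.5 * 1024 * GB)   # 100 GB .. 2.5 TB
MAX_TAPES, MAX_TOTAL = 1500, 150 * 1024 * GB          # per gateway

def can_add_tape(existing_sizes, new_size):
    """Check whether one more virtual tape fits within the gateway limits."""
    if not MIN_TAPE <= new_size <= MAX_TAPE:
        return False
    if len(existing_sizes) >= MAX_TAPES:
        return False
    return sum(existing_sizes) + new_size <= MAX_TOTAL

print(can_add_tape([], 200 * GB))       # True
print(can_add_tape([], 50 * GB))        # False - below the 100 GB minimum
```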

VTL − All gateway-VTLs are accompanied by a single VTL, which again bears
resemblance with a physical tape library. After storing data locally, the gateway
uploads it asynchronously to VTL’s virtual tapes.

Media Changer − This bears resemblance to a robot which moves tapes between the tape
drives and storage slots of a physical tape library. Each VTL is associated with a single
media changer utilized by backup applications.

Tape Drive − This is capable of performing I/O operations on a tape. Every VTL comprises
10 tape drives utilized by backup applications.

Virtual Tape Shelf (VTS) − This is utilized for archiving tapes from the gateway VTL and
retrieving them back into it.

Archiving Tapes − In the event a tape is ejected by the backup software, it is moved by
the gateway into the VTS for storage purposes.

Retrieving Tapes − Since it is not possible to directly read the tapes that are archived into
VTS, we must get the tape retrieved from thegateway VTL by either making use of AWS
Storage Gateway API or AWS Storage Gateway console.

AWS - CloudFront

CloudFront sources data from an Amazon S3 bucket before distributing it to multiple
datacenter locations. This CDN delivers data through a cluster of data centers
referred to as edge locations. Importantly, a user's request gets routed to the closest
edge location when the user seeks to retrieve data, which leads to fast data access,
reduced network traffic as well as minimized latency.

AWS CloudFront provides content using these steps.

1) After visiting a website, the user places a request for the object to be downloaded
in the form of an image file.

2) In order to serve this request, DNS routes it to the closest location of CloudFront
edge.

3) CloudFront gets its cache verified for the files that have been requested. If they are
found, they are returned to the user. Else, the following happens −

CloudFront evaluates the request against the stipulated specifications before
forwarding it to the origin server for the corresponding type of file.

The origin server dispatches these files to the CloudFront edge location.

The moment the first byte arrives, CloudFront forwards it to the user, also
adding the files into the cache at the edge location for the next time someone
seeks access to the same file.
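The request flow above can be modeled in a few lines; the per-edge caches and the origin dict are simplified stand-ins used to show why only the edge that served a request ends up caching the object.

```python
# Toy model of the CloudFront flow: route to an edge location, serve from
# its cache on a hit, otherwise pull from the origin and cache at that edge.

edges = {"us-east": {}, "eu-west": {}}           # per-edge caches
origin = {"logo.png": b"\x89PNG..."}             # the S3 origin (stand-in)

def get(edge, key):
    cache = edges[edge]
    if key not in cache:                         # miss: forward to origin
        cache[key] = origin[key]                 # cache at this edge only
    return cache[key]                            # subsequent hits are local

get("eu-west", "logo.png")                       # first request: origin fetch
print("logo.png" in edges["eu-west"])            # True - cached at that edge
print("logo.png" in edges["us-east"])            # False - other edges unaffected
```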

CloudFront Attributes
High-Speed − The vast network of CloudFront edge locations caches content in
close proximity to end users, which in turn leads to high rates of data transfer,
reduced latency, and reduced network traffic. These factors collectively increase
the speed of CloudFront.

Ease of use is another key feature of CloudFront.

Compatibility with various AWS Services − A key advantage of Amazon CloudFront
is its design which allows it to be integrated easily with AWS services such as
Amazon EC2 and Amazon S3.

Cost-efficiency − Amazon CloudFront requires you to only pay for what you get
delivered via the network, with no hidden rates or up-front fee.

Reliability − Since it is constructed on the extremely dependable infrastructure of
Amazon, CloudFront automatically re-routes end users to the next closest edge
location, if the need arises.

Elasticity − Users of Amazon CloudFront do not need to get concerned about its
maintenance. This is attributed to the fact that the service responds automatically in
case any step needs to be initiated, for example, if the demand grows or reduces.

Global Network − It utilizes a worldwide cluster of edge locations that are situated in
many regions.

Steps for Setting Up AWS CloudFront

1) Visit the following URL to login to the AWS Management Console −
https://console.aws.amazon.com/

2) After uploading your content to Amazon S3, set every permission on it to public.

3) Complete the steps mentioned below for creating a CloudFront Web Distribution

Use the following link for opening CloudFront −
https://console.aws.amazon.com/cloudfront/. Press on Get Started in the
Web section of the Select a delivery method page.

After the Create Distribution page shows up, select the Amazon S3 bucket
that was created earlier in the Origin Domain Name field. After doing that, keep
the rest of the fields in default mode.

You will now see the page of Default Cache Behavior Settings opening up. Do
not change the values and proceed to the subsequent page.

After you see a Distribution settings page on your screen, enter all the details
based on your requirement before pressing on Create Distribution.

The Status column changes from In Progress to Deployed. The next
step entails choosing the Enable option for enabling your distribution. The
domain name would feature in the Distributions list within a time span of 15
minutes.

Testing the Links


Upon the creation of the distribution, CloudFront knows the location of the Amazon S3
server whereas the end user knows the domain name linked with the
aforementioned distribution. It is also possible to establish a link to the
Amazon S3 bucket content with that domain name and have CloudFront serve it, thus
saving time.

Linking an object includes three steps −

1) Copy the HTML code shown below into a new file, replacing domain-name with the
domain name that CloudFront assigned to the distribution. In the space of object-name,
enter the name of the object in your Amazon S3 bucket.

<html>
<head><title>CloudFront Testing link</title></head>
<body>
<p>My CloudFront.</p>
<p><img src = "http://domain-name/object-name" alt = "test image"/></p>
</body>
</html>
2) Get the text saved in a file that has .html extension.

3) Now launch this page in an internet browser to see if the links are working
correctly. If they are not, get the settings crosschecked.
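The test page can also be generated programmatically. A small Python sketch follows; the sample domain and object names are made-up placeholders to be replaced with your distribution's values.

```python
# Build the CloudFront test page from a domain name and object key.
# The domain passed in below is a hypothetical example, not a real one.

def build_test_page(domain, object_name):
    return (
        "<html>\n"
        "<head><title>CloudFront Testing link</title></head>\n"
        "<body>\n"
        "<p>My CloudFront.</p>\n"
        f'<p><img src="https://{domain}/{object_name}" alt="test image"/></p>\n'
        "</body>\n"
        "</html>\n"
    )

page = build_test_page("d1234abcd.cloudfront.net", "test-image.jpg")
print("https://d1234abcd.cloudfront.net/test-image.jpg" in page)   # True
```

Save the returned string to a file with an .html extension, as in step 2.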

Amazon Relational Database Service

As a fully managed SQL database service, Amazon Relational Database Service (RDS)
allows the creation of relational databases. The use of RDS lets you access your
databases and files cost-effectively and in a manner that is highly scalable.

Amazon RDS Attributes


Features of Amazon RDS include −

Scalability − Amazon RDS makes it possible to scale the relational database
through the use of the RDS-specific API or the AWS Management Console. Your RDS
needs can be met and addressed within a span of a few minutes.

Host replacement − Oftentimes, situations take place where the hardware
underlying Amazon RDS fails. However, Amazon proactively takes care of such
concerns by replacing the host automatically.

Affordable − You only pay for what you use when it comes to Amazon RDS,
which means you don’t need to pay up-front fees or stay committed for the
long-haul.

Security − You have total control over network access to your database as well
as related offerings.

Automatic backups − One of the best aspects of Amazon RDS is its ability to
back up everything in the database, including transaction logs up to the last
five minutes, while also managing automatic backup timings.

Software patch − Amazon RDS gets you the most recent patches
automatically for the database software. It is also possible to specify when the
software needs to be patched through the use of DB Engine Version Management.

Steps for Setting up Amazon RDS


1) Visit the following URL to launch the console of Amazon RDS −
https://console.aws.amazon.com/rds/

2) Choose the area where you need to create the DB instance, at console’s top right
corner.

3) On the navigation pane, choose Instances and then press on Launch DB Instance.

4) After you see the Launch DB Instance Wizard opening up, choose the type of
instance based on what you require in order to launch before clicking on Select.

5) Upon reaching the page of Specify DB Details, enter all the details shown on the
screen and click on Continue.

6) On the page pertaining to Additional configuration, put in the necessary


information for opening the MySQL DB instance before pressing on Continue.

7) Select the options you want to on the page of Management Options and again
click on Continue.

8) After you reach the Review page, get the details verified before pressing on the
Launch DB Instance button.

The DB instance can now be seen in the list of DB instances.

Steps for Connecting Database to MySQL DB Instance


You can take the following steps to link a database on MySQL DB instance −

1) Type the command shown below in the command prompt on the client computer for
linking to a database on the MySQL DB instance.

2) Substitute <myDBI> with your DB instance's DNS name, <myusername> with
your user name, and <mypassword> with your password.

PROMPT> mysql -h <myDBI> -P 3306 -u <myusername> -p

After the aforementioned command is run, the output looks as follows −

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 350
Server version: 5.2.33-log MySQL Community Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>

Deleting a DB Instance
Upon the completion of the task, it is important to delete the DB instance in order to
avoid being billed for it. Use the following steps:

1) Open the AWS Management Console and launch the Amazon RDS console via this
URL.

https://console.aws.amazon.com/rds/

2) In the list of DB Instances, choose the DB instances that need deletion.

3) Press on Instance Actions before choosing the option of Delete from the
dropdown menu.

4) Press on Yes, Delete for deleting the DB instance.

Amazon RDS Cost Considerations


When you use Amazon RDS, you only pay for the usage with no setup or minimum
costs. Billing is premised on the following factors:

Instance class − Billing is based on the class of the DB instance being used.

Running time − Here, the price is arrived at on the basis of instance-hours, one
instance-hour being equivalent to a single instance running for one hour.

Monthly I/O requests − The billing structure is also inclusive of the number of
storage I/O requests made in one billing cycle.

Backup storage − No extra charges are incurred for backup storage of up to
100% of the database size. This free allowance is available only for active DB
instances.
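The billing factors above can be combined into a rough estimator. All rates in this Python sketch are made-up placeholders, not actual AWS prices; only the structure (instance-hours plus I/O requests plus billable backup storage) follows the text.

```python
# Back-of-the-envelope RDS bill, combining the factors listed above.
# Every rate passed in is a hypothetical placeholder.

def rds_monthly_cost(hours, hourly_rate, io_requests, io_rate_per_million,
                     backup_gb, db_size_gb, backup_rate_per_gb):
    cost = hours * hourly_rate                            # instance-hours
    cost += (io_requests / 1_000_000) * io_rate_per_million  # I/O requests
    billable_backup = max(0, backup_gb - db_size_gb)      # free up to DB size
    cost += billable_backup * backup_rate_per_gb
    return round(cost, 2)

# 720 instance-hours at a placeholder $0.02/h, 5M I/O requests at $0.10 per
# million, and 120 GB of backups for a 100 GB database at $0.09/GB:
print(rds_monthly_cost(720, 0.02, 5_000_000, 0.10, 120, 100, 0.09))   # 16.7
```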

AWS - DynamoDB

This 100%-managed NoSQL database service allows for the creation of database
tables which are capable of storing and retrieving any volume of data. In addition to
managing table traffic across multiple servers while maintaining performance, it
also alleviates the operating burden on the part of customers.

It is for this reason that Amazon manages setup and configuration, hardware
provisioning, software patching, replication, and cluster scaling, among others.

Steps for Setting Up and Running DynamoDB

1) Use one of the following URLs to download DynamoDB (.jar file). It provides support
to several operating systems such as Linux, Windows, and Mac, among others.

.tar.gz format − http://dynamodb-local.s3-website-us-west-2.amazonaws.com/dynamodb_local_latest.tar.gz

.zip format − http://dynamodb-local.s3-website-us-west-2.amazonaws.com/dynamodb_local_latest.zip

After the download is complete, simply get the contents extracted and copy
the directory (extracted) to a preferred location.

In the command prompt, go to the directory where DynamoDBLocal.jar
has been extracted, and execute the following −

java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

You can now access the built-in JavaScript shell.

2) Complete the steps mentioned below for creating a table.

Launch the AWS Management Console before choosing DynamoDB.

Choose the region where you want it to be created before pressing on
Create Table.

On the window shown below, enter the required details before clicking on
Continue.

You will now reach a review page where you can see all the details.

As seen in the above screenshot, the table name can be seen in the list,
which means that you can now start using the DynamoDB table.

Amazon DynamoDB Advantages


Amazon DynamoDB is essentially a managed service, which obviates the need for
hiring experts to manage a NoSQL installation. This also implies that developers
no longer need to be concerned about setting up, configuring, or overseeing the
operations of a distributed database cluster. Amazon DynamoDB is adept at tackling
all intricacies of scaling and partitioning data over more machine resources in
order to meet the requirements of I/O performance.

Since Amazon DynamoDB is scalable, you no longer need to be concerned about a
possible limit to the quantum of data that can be stored or retrieved by the table.
DynamoDB spreads data automatically over more resources as the table grows.

Amazon DynamoDB is unique in that it is capable of providing high throughput while
maintaining reduced latency. Latencies continue to be stable despite the growth in
datasets, owing to DynamoDB's distributed data placement.

Amazon DynamoDB is capable of replicating data across a minimum of three data
centers. Notably, the system is capable of operating and serving data under several
failure conditions as well.

It also allows dynamic table creation, which basically means that a table's items can
have any number of attributes.

You only need to pay for what you use; moreover, the payment structure of Amazon
DynamoDB is easy to understand.

AWS - Redshift

This 100%-managed data warehouse made available on the cloud is increasing in
popularity. Redshift's datasets range between hundreds of gigabytes and a petabyte.
As part of the initial process, a data warehouse gets created by launching a set of
compute resources referred to as nodes, which are organized into clusters. Thereafter,
you can proceed towards getting your queries processed.

Steps for Setting up Amazon Redshift



1) After signing in, open the Redshift Cluster by taking the steps listed below.

Visit the following URL to launch the console of Amazon Redshift −
https://console.aws.amazon.com/redshift/

Choose the area where you want to create the cluster via the Region menu
at the top right-hand corner of your computer screen.

Press on Launch Cluster, as shown in the below screenshot.

After the page for Cluster Details opens up, enter the necessary details
before pressing on the Continue button until the review page is reached.

After you come across a confirmatory page, press on Close in order to view
the cluster in the list of Clusters.

After choosing the cluster, get the information on Cluster Status reviewed.
The page on your screen will display the status of the Cluster.

2) Configure the security group for authorizing client connections to the cluster.
Notably, how access is authorized depends on whether or not the client is an EC2
instance.

Take the steps mentioned below.

After opening the Amazon Redshift console, choose Clusters on the
navigation pane shown on your screen.
Choose your preferred Cluster. As you do that, its configuration tab would
open up.

Select the Security group.

Thereafter, press on Inbound tab.

After pressing on Edit, choose the fields shown in the screenshot above
and click on Save.

3) Linking to Redshift Cluster.

The two ways of linking to the Redshift Cluster include doing it directly or via SSL. If
you want to connect directly, take the steps mentioned below:

Connect to the cluster via a SQL client tool, as Redshift supports SQL client
tools compatible with PostgreSQL JDBC or ODBC drivers.

Use the following links to download the drivers −

JDBC − https://jdbc.postgresql.org/download/postgresql-8.4-703.jdbc4.jar

ODBC − https://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_08_04_0200.zip
or http://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_09_00_0101x64.zip
for 64-bit machines

Use the following steps to get the Connection String.

Open the Amazon Redshift console and select Clusters in the navigation
pane. Select the cluster of choice and click the Configuration tab.

A page opens as shown in the following screenshot with the JDBC URL
under Cluster Database Properties. Copy the URL.

Click the folder icon and navigate to the driver location. Finally, click the
Open button.

Leave the Classname box and Sample URL box blank. Click OK. Choose the
driver from the list.

In the URL field, paste the JDBC URL copied.

Enter the username and password in their respective fields. Select the
Autocommit box and click Save profile list.

Features of Amazon Redshift

Following are the features of Amazon Redshift −

Supports VPC − The users can launch Redshift within a VPC and control
access to the cluster through the virtual networking environment.

Encryption − Data stored in Redshift can be encrypted and configured while


creating tables in Redshift.

SSL − SSL encryption is used to encrypt connections between clients and


Redshift.

Scalable − With a few simple clicks, the number of nodes in your Redshift data
warehouse can be easily scaled as per requirement. It also allows scaling
storage capacity without any loss in performance.

Cost-effective − Amazon Redshift is a cost-effective alternative to traditional
data warehousing practices, with no up-front costs, no long-term
commitments, and an on-demand pricing structure.

Amazon Web Services - Kinesis

Amazon Kinesis is a managed, scalable, cloud-based service that allows real-time
processing of large amounts of streaming data per second. It is designed for real-time
applications and allows developers to take in any amount of data from several sources,
scaling up and down as needed, with consuming applications that can be run on EC2
instances.

It is used to capture, store, and process data from large, distributed streams such as
event logs and social media feeds. After processing the data, Kinesis distributes it to
multiple consumers simultaneously.

When to Use Amazon Kinesis?


It is used in situations where we require rapidly moving data and its continuous
processing. Amazon Kinesis can be used in the following situations −

Data log and data feed intake − We need not wait to batch up the data; we can
push data to an Amazon Kinesis stream as soon as the data is produced. It also
protects against data loss in case the data producer fails. For example, system
and application logs can be continuously added to a stream and be available
within seconds when required.

Real-time graphs − We can extract graphs/metrics using an Amazon Kinesis
stream to create report results. We need not wait for data batches.

Real-time data analytics − We can run real-time streaming data analytics by
using Amazon Kinesis.

Limits of Amazon Kinesis
Following are certain limits that should be kept in mind while using Amazon Kinesis
Streams −

Records of a stream are accessible for up to 24 hours by default, which can be
extended up to 7 days by enabling extended data retention.

The maximum size of a data blob (the data payload before Base64-encoding)
in one record is 1 megabyte (MB).

One shard supports up to 1000 PUT records per second.

For more information related to limits, visit the following link −
https://docs.aws.amazon.com/kinesis/latest/dev/service-sizes-and-limits.html
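The record-size and per-shard PUT limits above lend themselves to simple client-side sanity checks; the helper functions in this sketch are illustrative, not part of the Kinesis API.

```python
# Sanity checks for the Kinesis limits quoted in the text: 1 MB max
# payload per record and 1000 PUT records per second per shard.

MAX_RECORD_BYTES = 1 * 1024 * 1024        # 1 MB before Base64 encoding
MAX_PUTS_PER_SHARD = 1000                 # PUT records/second/shard

def record_fits(payload: bytes) -> bool:
    return len(payload) <= MAX_RECORD_BYTES

def shards_needed(puts_per_second: int) -> int:
    """Minimum shard count to absorb a given PUT rate."""
    return -(-puts_per_second // MAX_PUTS_PER_SHARD)   # ceiling division

print(record_fits(b"x" * 1024))   # True - a 1 KB record
print(shards_needed(2500))        # 3
```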

How to Use Amazon Kinesis?


Following are the steps to use Amazon Kinesis −

Step 1 − Set up Kinesis Stream using the following steps −

Sign into AWS account. Select Amazon Kinesis from Amazon Management
Console.

Click the Create stream and fill the required fields such as stream name and
number of shards. Click the Create button.

The Stream will now be visible in the Stream List.

Step 2 − Set up users on the Kinesis stream. Create new users and assign a policy to
each user. (We have discussed the procedure of creating users and assigning policies
to them above.)

Step 3 − Connect your application to Amazon Kinesis; here we are connecting
Zoomdata to Amazon Kinesis. Following are the steps to connect −

Log in to Zoomdata as Administrator and click Sources in menu.

Select the Kinesis icon and fill the required details. Click the Next button.

Select the desired Stream on the Stream tab.

On the Fields tab, create unique label names, as required and click the Next
button.

On the Charts Tab, enable the charts for data. Customize the settings as
required and then click the Finish button to save the setting.

Features of Amazon Kinesis


Real-time processing − It allows collecting and analyzing information in real
time, like stock trade prices; otherwise, we would need to wait for a data-out report.

Easy to use − Using Amazon Kinesis, we can create a new stream, set its
requirements, and start streaming data quickly.

High throughput, elastic − Amazon Kinesis scales up and down seamlessly to
match the throughput and volume of the streaming data.

Integration with other Amazon services − It can be integrated with Amazon
Redshift, Amazon S3, and Amazon DynamoDB.

Build Kinesis applications − Amazon Kinesis provides developers with
client libraries that enable the design and operation of real-time data
processing applications. Add the Amazon Kinesis Client Library to your Java
application, and it will notify you when new data is available for processing.

Cost-efficient − Amazon Kinesis is cost-efficient for workloads of any scale. You
pay as you go for the resources used and pay hourly for the throughput required.
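As a companion to the feature list, the sketch below shows the shape of a single PutRecord request such as a stock-trade producer might build. The stream name, payload fields, and the choice of the ticker symbol as partition key are illustrative; the partition key is what Kinesis hashes to decide which shard receives the record.

```python
import json

def build_put_record(stream_name, payload, partition_key):
    """Assemble parameters for a Kinesis PutRecord call.

    The record data must be bytes; the partition key is hashed by
    Kinesis to choose the shard that receives the record.
    """
    return {
        "StreamName": stream_name,
        "Data": json.dumps(payload).encode("utf-8"),
        "PartitionKey": partition_key,
    }

record = build_put_record("stock-trades",
                          {"symbol": "AMZN", "price": 181.5},
                          "AMZN")
```

Using the symbol as the partition key keeps all trades for one stock on the same shard, which preserves their ordering for a consumer.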

Amazon Web Services - Elastic MapReduce


Amazon Elastic MapReduce (EMR) is a web service that provides a managed
framework to run data processing frameworks such as Apache Hadoop, Apache Spark,
and Presto in an easy, cost-effective, and secure manner.

It is used for data analysis, web indexing, data warehousing, financial analysis, scientific
simulation, etc.

How to Set Up Amazon EMR?


Follow these steps to set up Amazon EMR −

Step 1 − Sign in to your AWS account and select Amazon EMR on the management console.

Step 2 − Create an Amazon S3 bucket for cluster logs and output data. (The procedure is
explained in detail in the Amazon S3 section.)

Step 3 − Launch Amazon EMR cluster.

Following are the steps to create a cluster and launch it on EMR.

Use this link to open the Amazon EMR
console − https://console.aws.amazon.com/elasticmapreduce/home

Select Create cluster and provide the required details on the Cluster
Configuration page.

Leave the Tags section options as default and proceed.

On the Software Configuration section, leave the options as default.

On the File System Configuration section, leave the options for EMRFS as set
by default. EMRFS is an implementation of HDFS that allows Amazon EMR
clusters to store data on Amazon S3.

On the Hardware Configuration section, select m3.xlarge in EC2 instance
type field and leave other settings as default. Click the Next button.

On the Security and Access section, select your key pair from the list in the
EC2 key pair field and leave the other settings as default.

On the Bootstrap Actions section, leave the fields as set by default and click the
Add button. Bootstrap actions are scripts that are executed during setup,
before Hadoop starts on every cluster node.

On the Steps section, leave the settings as default and proceed.

Click the Create Cluster button and the Cluster Details page opens. This is
where we should run the Hive script as a cluster step and use the Hue web
interface to query the data.
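The console choices above (instance type, key pair, log bucket) correspond to fields of EMR's RunJobFlow request. The helper below only assembles such a request as plain data to show the mapping; the bucket and key-pair names are placeholders.

```python
def build_emr_cluster_request(name, log_bucket, key_pair,
                              instance_type="m3.xlarge", instance_count=3):
    """Assemble a RunJobFlow-style request mirroring the console walkthrough."""
    return {
        "Name": name,
        "LogUri": f"s3://{log_bucket}/logs/",   # cluster logs land in S3
        "Instances": {
            "MasterInstanceType": instance_type,
            "SlaveInstanceType": instance_type,
            "InstanceCount": instance_count,    # one master plus core nodes
            "Ec2KeyName": key_pair,             # enables SSH access to nodes
        },
    }

cluster = build_emr_cluster_request("demo-cluster", "my-emr-bucket", "my-key-pair")
```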

Step 4 − Run the Hive script using the following steps.

Open the Amazon EMR console and select the desired cluster.

Move to the Steps section and expand it. Then click the Add step button.

The Add Step dialog box opens. Fill the required fields, then click the Add
button.

To view the output of Hive script, use the following steps −

Open the Amazon S3 console and select the S3 bucket used for the output
data. Then select the output folder.

The query writes the results into a separate folder. Select os_requests.

The output is stored in a text file. This file can be downloaded.
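The Add Step dialog can also be described programmatically. The sketch below builds a Hive step definition of the kind passed to EMR's AddJobFlowSteps API; the script path, output path, and exact argument layout are illustrative.

```python
def build_hive_step(script_s3_path, output_s3_path):
    """Describe one Hive step for an EMR cluster.

    command-runner.jar invokes the named tool on the master node;
    -f points Hive at the script, -d passes a substitution variable.
    """
    return {
        "Name": "Run Hive script",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["hive-script", "--run-hive-script", "--args",
                     "-f", script_s3_path,
                     "-d", f"OUTPUT={output_s3_path}"],
        },
    }

step = build_hive_step("s3://my-bucket/hive/os_requests.q",
                       "s3://my-bucket/output/")
```

The OUTPUT variable is how the script knows which S3 folder to write its results to, which is the folder inspected in the steps above.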

Benefits of Amazon EMR


Following are the benefits of Amazon EMR −

Easy to use − Amazon EMR is easy to use, i.e. it is easy to set up a cluster,
configure Hadoop, provision nodes, etc.

Reliable − It is reliable in the sense that it retries failed tasks and


automatically replaces poorly performing instances.

Elastic − Amazon EMR allows you to provision as many instances as needed to
process data at any scale, and it easily increases or decreases the number of
instances.

Secure − It automatically configures Amazon EC2 firewall settings, controls
network access to instances, launches clusters in an Amazon VPC, etc.

Flexible − It allows complete control over the clusters and root access to
every instance. It also allows you to install additional applications and
customize your cluster as per requirement.

Cost-efficient − Its pricing is easy to estimate. It charges hourly for every


instance used.

Amazon Web Services - Data Pipeline

AWS Data Pipeline is a web service designed to make it easier for users to integrate
data spread across multiple AWS services and analyze it from a single location.

Using AWS Data Pipeline, data can be accessed from the source, processed, and then
the results can be efficiently transferred to the respective AWS services.

How to Set Up Data Pipeline?


Following are the steps to set up data pipeline −

Step 1 − Create the Pipeline using the following steps.

Sign-in to AWS account.

Use this link to open the AWS Data Pipeline
console − https://console.aws.amazon.com/datapipeline/

Select the region in the navigation bar. Click the Create New Pipeline button.

Fill the required details in the respective fields.

In the Source field, choose Build using a template and then select
this template − Getting Started using ShellCommandActivity.

The Parameters section opens only when the template is selected.


Leave the S3 input folder and Shell command to run with their default
values. Click the folder icon next to S3 output folder, and select the
buckets.

In Schedule, leave the values as default.

In Pipeline Configuration, leave the logging as enabled. Click the folder icon
under S3 location for logs and select the buckets.

In Security/Access, leave the IAM roles values as default. Click the Activate
button.
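Behind the console, a template such as ShellCommandActivity produces a pipeline definition: a list of objects, each holding a set of key/value fields. The minimal sketch below mimics that shape as plain data; the object ids, command, and S3 paths are illustrative.

```python
def build_shell_pipeline(command, input_s3, output_s3):
    """Build a minimal pipeline definition with one ShellCommandActivity."""
    def obj(obj_id, fields):
        # Every pipeline object is an id/name plus a list of typed fields.
        return {"id": obj_id, "name": obj_id,
                "fields": [{"key": k, "stringValue": v} for k, v in fields]}

    return [
        obj("Default", [("scheduleType", "ondemand")]),
        obj("ShellActivity", [
            ("type", "ShellCommandActivity"),
            ("command", command),
            ("input", input_s3),
            ("output", output_s3),
        ]),
    ]

pipeline = build_shell_pipeline("grep -rc GET ${INPUT1_STAGING_DIR}",
                                "s3://my-bucket/input/",
                                "s3://my-bucket/output/")
```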

How to Delete a Pipeline?


Deleting the pipeline will also delete all associated objects.

Step 1 − Select the pipeline from the pipelines list.

Step 2 − Click the Actions button and then choose Delete.

Step 3 − A confirmation prompt window opens. Click Delete.

Features of AWS Data Pipeline


Simple and cost-efficient − Its drag-and-drop feature makes it easy to create a pipeline
on the console. Its visual pipeline creator provides a library of pipeline templates. These
templates make it easier to create pipelines for tasks like processing log files, archiving
data to Amazon S3, etc.

Reliable − Its infrastructure is designed for fault-tolerant execution of activities. If failures
occur in the activity logic or data sources, AWS Data Pipeline automatically retries
the activity. If the failure persists, it sends a failure notification. We can even
configure notification alerts for situations like successful runs, failures, delays in
activities, etc.

Flexible − AWS Data Pipeline provides various features like scheduling, tracking, error
handling, etc. It can be configured to take actions like running Amazon EMR jobs,
executing SQL queries directly against databases, and executing custom applications
running on Amazon EC2.

Amazon Web Services - Machine Learning

Amazon Machine Learning is a service that allows you to develop predictive applications
using algorithms and mathematical models based on the user's data.

Amazon Machine Learning reads data through Amazon S3, Redshift and RDS, then
visualizes the data through the AWS Management Console and the Amazon Machine
Learning API. This data can be imported or exported to other AWS services via S3
buckets.

It uses “industry-standard logistic regression” algorithm to generate models.

Types of Tasks Performed by Amazon Machine Learning


Three different types of tasks can be performed by the Amazon Machine Learning
service −

A binary classification model can predict one of the two possible results, i.e.
either yes or no.

A multi-class classification model can predict one of several possible outcomes.
For example, it can track the status of a customer's online orders.

A regression model predicts an exact numeric value. Regression models can
predict the best selling price for a product or the number of units that will
sell.
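The three model types correspond to the shape of the target attribute. The toy heuristic below is not part of the Amazon ML API; it simply illustrates the distinction: exactly two distinct values suggest binary classification, a numeric target suggests regression, and anything else suggests multi-class.

```python
def choose_model_type(target_values):
    """Toy heuristic mapping a target column to an Amazon ML model type."""
    distinct = set(target_values)
    if len(distinct) == 2:
        return "BINARY"       # yes/no style outcomes
    if all(isinstance(v, (int, float)) for v in distinct):
        return "REGRESSION"   # numeric target -> predict an exact value
    return "MULTICLASS"       # several categorical outcomes
```

For example, a yes/no churn column yields "BINARY", a column of prices yields "REGRESSION", and an order-status column with several states yields "MULTICLASS".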

How to Use Amazon Machine Learning?


Step 1 − Sign in to AWS account and select Machine Learning. Click the Get Started
button.

Step 2 − Select Standard Setup and then click Launch.

Step 3 − In the Input data section, fill the required details and select the choice for data
storage, either S3 or Redshift. Click the Verify button.

Step 4 − After S3 location verification is completed, Schema section opens. Fill the
fields as per requirement and proceed to the next step.

Step 5 − In Target section, reselect the variables selected in Schema section and
proceed to the next step.

Step 6 − Leave the values as default in Row ID section and proceed to the Review
section. Verify the details and click the Continue button.
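The input-data and schema steps above correspond to a CreateDataSourceFromS3-style request in the Amazon ML API. The sketch below assembles one as plain data; the ids, S3 path, and schema attributes are illustrative.

```python
import json

def build_s3_datasource(datasource_id, s3_uri, target, attributes):
    """Assemble a data source spec: where the CSV lives plus its schema."""
    schema = {
        "version": "1.0",
        "targetAttributeName": target,   # the variable chosen in the Target step
        "dataFormat": "CSV",
        "attributes": [{"attributeName": n, "attributeType": t}
                       for n, t in attributes],
    }
    return {
        "DataSourceId": datasource_id,
        "DataSpec": {
            "DataLocationS3": s3_uri,
            # The schema travels as a JSON string inside the request.
            "DataSchema": json.dumps(schema),
        },
    }

ds = build_s3_datasource("ds-demo-001", "s3://my-bucket/banking.csv", "y",
                         [("age", "NUMERIC"), ("job", "CATEGORICAL"),
                          ("y", "BINARY")])
```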

Following are some screenshots of Machine Learning services.

Data Set Created by Machine Learning

Summary Made by Machine Learning

Exploring Performance Using Machine Learning

Features of Amazon Machine Learning


Easy to create machine learning models − It is easy to create ML models from data
stored in Amazon S3, Amazon Redshift, Amazon RDS and query these models for
predictions by using Amazon ML APIs and wizards.

High performance − Amazon ML prediction APIs can be used to generate
billions of predictions for applications. We can use them within interactive web,
mobile, or desktop applications.

Cost-efficient − Pay only for what you use, with no setup charges and no upfront
commitments.

AWS - Simple WorkFlow Service

The following services fall under the Application Services section −

Amazon CloudSearch

Amazon Simple Queue Service (SQS)

Amazon Simple Notification Service (SNS)

Amazon Simple Email Service (SES)

Amazon SWF

In this chapter, we will discuss Amazon SWF.

Amazon Simple Workflow Service (SWF) is a task-based API that makes it easy to
coordinate work across distributed application components. It provides a
programming model and infrastructure for coordinating distributed components and
maintaining their execution state in a reliable way. Using Amazon SWF, we can focus on
building the aspects of the application that differentiate it.

A workflow is a set of activities that carry out some objective, including logic that
coordinates the activities to achieve the desired output.

The workflow history consists of a complete and consistent record of each event that
occurred since the workflow execution started. It is maintained by SWF.

How to Use SWF?


Step 1 − Sign in to AWS account and select SWF on the Services dashboard.

Step 2 − Click the Launch Sample Walkthrough button.

Step 3 − Run a Sample Workflow window opens. Click the Get Started button.

Step 4 − In the Create Domain section, click the Create a new Domain radio button
and then click the Continue button.

Step 5 − In Registration section, read the instructions then click the Continue button.

Step 6 − In the Deployment section, choose the desired option and click the
Continue button.

Step 7 − In the Run an Execution section, choose the desired option and click the
Run this Execution button.

Finally, the workflow will be created and will be available in the list.
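The walkthrough's Create Domain and Registration steps correspond to SWF's RegisterDomain and RegisterWorkflowType calls. The sketch below assembles both requests as plain data; the names, version, timeout, and retention period are illustrative.

```python
def build_swf_registration(domain, workflow, version, retention_days=7):
    """Assemble RegisterDomain and RegisterWorkflowType style requests.

    SWF keeps each execution's history for the domain's retention
    period, which is what lets the application itself stay stateless.
    """
    domain_req = {
        "name": domain,
        "workflowExecutionRetentionPeriodInDays": str(retention_days),
    }
    workflow_req = {
        "domain": domain,
        "name": workflow,
        "version": version,
        "defaultTaskStartToCloseTimeout": "300",  # seconds, illustrative
    }
    return domain_req, workflow_req

domain_req, workflow_req = build_swf_registration("demo-domain",
                                                  "image-resize", "1.0")
```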

Benefits of Amazon SWF

It enables applications to be stateless, because all information about a workflow
execution is stored in its workflow history.

For each workflow execution, the history provides a record of which activities were
scheduled, along with their current statuses and results. The workflow execution uses
this information to determine the next steps.

The history provides detailed steps that can be used to monitor running workflow
executions and verify completed workflow executions.

Amazon Web Services - WorkMail

Amazon WorkMail was formerly known as Zocalo.

It is a managed email and calendaring service that runs in the cloud. It provides security
controls and is designed to work with your existing PC and Mac-based Outlook clients,
including the prepackaged Click-to-Run versions. It also works with mobile clients that
speak the Exchange ActiveSync protocol.

Its migration tool allows you to move mailboxes from on-premises email servers to the
service, and it works with any device that supports the Microsoft Exchange ActiveSync
protocol, such as Apple's iPad and iPhone, Google Android, and Windows Phone.

How to Use Amazon WorkMail?


Step 1 − Sign in to your AWS account and open the Amazon WorkMail console using the
following link − https://console.aws.amazon.com/workmail/

Step 2 − Click the Get Started button.

Step 3 − Select the desired option and choose the Region from the top right side of
the navigation bar.

Step 4 − Fill in the required details and proceed to the next step to configure the account.
Follow the instructions. Finally, the mailbox will look as shown in the following
screenshot.

Amazon WorkMail Attributes


Secure − Amazon WorkMail automatically encrypts all data with encryption keys
managed by the AWS Key Management Service.

Managed − Amazon WorkMail offers complete control over email, and there is no need
to worry about installing software or maintaining and managing hardware. Amazon
WorkMail automatically handles all these needs.

Accessibility − Amazon WorkMail supports Microsoft Outlook on both Windows and
Mac OS X. Hence, users can use their existing email client without any additional
requirements.

Availability − Users can synchronize emails, contacts and calendars with iOS, Android,
Windows Phone, etc. using the Microsoft Exchange ActiveSync protocol anywhere.

Cost-efficient − Amazon WorkMail charges $4 per user per month, with up to 50 GB of
storage.
