Amazon Web Services
Introduction
Cloud computing is a means of storing and accessing programs and data over the internet rather than on your computer's hard drive. The cloud is often used simply as a metaphor for the internet; cloud computing does not involve your local hard drive at all. Once you store data in, or run programs from, remote computing and storage, accessing that information becomes faster and easier no matter where it is located. Working from a local hard drive is, however, how the PC industry operated for decades, and some will contend that local storage is still better than cloud computing, for reasons that will be discussed shortly.
Cloud computing may be a buzzword, but what it is, how it impacts you and how it makes your life easier is not a new phenomenon.
Cloud computing denotes the on-demand availability and accessibility of computer system resources, particularly computing power and data storage, with the added advantage that the user does not need to manage this availability actively. The term typically describes data centers that can be accessed by a large number of users over the internet. Today, the functions of large clouds are distributed across multiple locations rather than served from one centralized server.
Types of Clouds
Clouds fall under the following three categories.
Public Cloud
Services and infrastructure are hosted by a third-party provider, delivered over the public internet, and shared across multiple customers.
Private Cloud
It is very similar to the public cloud; however, in this case the third-party operator or the organization itself manages services and data exclusively for the concerned customer. Under this category, security risks are drastically reduced because most of the control over the infrastructure stays with that customer.
Hybrid Cloud
This category combines the best of the public and private clouds, which is why it is often considered the optimal choice of the three.
Some common examples of cloud services include the following.
Google Drive: Often considered the epitome of cloud computing, it is ideal for working alongside cloud apps such as Google Docs, Sheets and Slides. It can be used on laptops and desktops as well as on smart devices such as smartphones. In fact, most of Google's offerings can be viewed as extensions of the cloud: Google Maps, Google Calendar and, of course, the ubiquitous Gmail.
Apple iCloud: Apple's service is here to stay, mainly because of how well it handles backup, online storage and keeping your calendar, contacts and email in sync. All the information you care about is available whether you are on macOS, a Windows device, or iOS (Windows users would do well to install the iCloud client). Understandably, Apple refuses to play second fiddle and offers its own cloud versions of its programs, applications and much more through iCloud. The service is also a major hit among iPhone users, for whom the phone itself becomes almost redundant once everything lives in the cloud.
Hybrid services: Examples include SugarSync and Dropbox, which count as cloud services because they sync your files over the internet; that said, they also keep copies of those files in local storage. Likewise, this is cloud computing at work whenever people with separate devices need the same data synced across all of them, whether for collaboration or for staying connected with family and loved ones. It is a great example of the practical value of cloud computing.
Cloud Hardware
Currently, the Chromebook comes to mind straight away when you are asked to think of a completely cloud-centered tool. These laptops are designed with only enough local storage and power to run the Chrome browser, which essentially becomes the operating system. Even so, a Chromebook lets you do just about anything on the internet: play games, connect with friends, listen to music and use a wealth of apps.
Amazon Cloud Drive: Amazon's storage offering is particularly useful for music, especially MP3s bought from Amazon itself. Subscribers to Amazon Prime also get unlimited storage for images as part of the deal. It likewise retains whatever you purchase for, and from, the Kindle.
Amazon Web Services forayed into IT offerings back in 2006 by launching web services, now commonly known as cloud computing. Thanks to the cloud, we no longer need to worry about planning and maintaining servers and other infrastructure, which is time- and effort-intensive. Instead, such services can spin up countless servers within minutes, delivering results faster. What adds to the cost efficiency of AWS is that you only pay for what you use.
When it comes to comparing cloud service models, there are three options:
1) Software as a Service (SaaS)
2) Platform as a Service (PaaS)
3) Infrastructure as a Service (IaaS)
Each model has its share of pluses and minuses as well as unique features, which is why it is useful to understand them all before choosing the option that best fits your company's needs.
A large number of businesses use SaaS when they need to access an application over the internet (such as Salesforce.com). PaaS comes into the fray whenever a company or a business wants to build its own customized applications for corporate purposes. Then, of course, there is the increasingly important IaaS, where companies such as Google, Rackspace and Amazon provide the backbone infrastructure that other firms rent. Netflix, as a case in point, can offer you its services because it is itself a customer of Amazon's cloud services.
Overview of SaaS
Also called cloud application services, SaaS denotes the option most commonly used by corporations in the cloud segment. SaaS uses the internet to deliver applications that a third-party vendor manages for its customers. Most SaaS applications run directly in the web browser, which means customers do not need to perform any installations or downloads.
Delivery of SaaS
Owing to its delivery model, SaaS obviates the need for IT staff to download and deploy applications on each individual computer. With SaaS, vendors take care of all potential technical issues, such as middleware, data and storage, thus paving the way for streamlined maintenance.
Advantages:
SaaS provides employees as well as businesses with several benefits by significantly lowering the money and time spent on tedious tasks such as installing, running and updating software. This frees technical experts to spend their time on more pressing problems faced by the corporation they work for.
Key traits:
Examples
Google GSuite (Apps)
Dropbox
Cisco WebEx
Salesforce
SAP Concur
GoToMeeting
Overview of PaaS
PaaS delivers cloud components to certain software while being used mainly for applications. PaaS provides developers with a dependable framework that can be used to build customized applications. The third-party provider or the enterprise manages the servers, storage and networking, while the developers take care of managing the applications.
Delivery of PaaS
Its delivery model is similar to that of SaaS, except that instead of delivering software over the internet, PaaS provides a platform for creating software, which is then delivered over the web. This gives developers more time to concentrate on building software without needing to fret about infrastructure, operating systems, software updates or storage.
Advantages
PaaS imparts several benefits to a company, regardless of its size. These include:
Simplicity
Simple and cost-effective development and deployment of applications
Highly available and scalable
Applications can be customized by developers
Ability to automate business policy
Easy migration to hybrid models
Key traits:
It builds on virtualization technology, meaning your business can be scaled up or down easily as requirements change over time
Offers a wide range of services to assist with the development, testing and deployment of apps
Accessible to numerous users working on the same development project
Integrates web services and databases
When should PaaS be used?
PaaS can streamline workflows when several developers are working on the same development project. If other vendors must be included, PaaS lends great speed and flexibility to the whole process. It is particularly helpful when preparing customized apps.
Examples
AWS Elastic Beanstalk
Heroku
Windows Azure
Force.com
OpenShift
Google App Engine
Overview of IaaS
Also referred to as cloud infrastructure services, IaaS is a self-service model for accessing and monitoring computers, networking, storage and other services. IaaS lets corporations purchase resources on demand and as needed, instead of forcing them to buy hardware outright at a hefty upfront cost.
Delivery of IaaS
IaaS provides servers, computing infrastructure, networks, storage and operating systems via virtualization. Typically, organizations are given access to IaaS through a dashboard or an API, an advantageous approach in that it provides IaaS buyers with complete control over the entire infrastructure. The level, scale and quality of IaaS is similar to that of a traditional data center, but with the added benefit of not having to physically maintain any of it. IaaS buyers can still access their storage and servers directly; however, all of it is outsourced through a "virtual data center" in the cloud.
Another notable fact is that IaaS suppliers manage the networking, hard drives, servers, storage and virtualization with a great deal of efficiency. Certain suppliers also provide offerings beyond the virtualization layer, such as message queuing or managed databases.
Advantages:
Easily the most flexible and versatile model of cloud computing
Can be easily adapted to requirements for processing power, servers, networking and storage
Hardware purchases can be based on actual consumption
Customers keep complete control over their infrastructure
Highly scalable
Resources can be bought on an as-needed, on-demand basis
Key traits of IaaS:
Resources are provided as a service
Cost varies depending on usage
Resources are highly scalable
Multiple users can share a single piece of hardware
Dynamic, versatile and flexible
The company or corporation keeps control over the infrastructure for management purposes
When should IaaS be used?
Like its counterparts above, IaaS offers a great deal of benefit in specific situations. Small firms and startups may prefer IaaS to avoid spending time and money on buying and building software and hardware. Bigger firms, on the other hand, may find IaaS advantageous for keeping complete control over their applications and infrastructure. Corporations experiencing rapid growth are fond of its scalability and flexibility. Basically, anyone who is not sure of the demands of a new application need look no further than IaaS, primarily because of its scalability and flexibility.
Examples
DigitalOcean
Rackspace
Linode
Cisco Metacloud
AWS
Google Compute Engine (GCE)
Microsoft Azure
AWS's basic structure is as follows: Elastic Compute Cloud (or EC2) makes it possible to use virtual machines with varying configurations depending on your requirements. Via EC2, you can choose among different pricing options, map individual servers, pick various configurations, and so on. Each of these is elaborated upon in the section on AWS Products. The architecture's diagrammatic representation is as follows:
As shown in the diagram above, S3 is the acronym of Simple Storage Service, which makes it possible to store and retrieve several data types via APIs. S3 is bereft of any computing element of its own.
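To give a concrete feel for how these two services are typically driven from code, here is a minimal sketch using the boto3 Python SDK that lists S3 buckets and running EC2 instances. It assumes AWS credentials and a default region are already configured on the machine.

import boto3

# Assumes credentials and a default region are configured (e.g. via `aws configure`).
s3 = boto3.client("s3")
ec2 = boto3.client("ec2")

# S3 stores objects in buckets; list the buckets owned by this account.
for bucket in s3.list_buckets()["Buckets"]:
    print("bucket:", bucket["Name"])

# EC2 provides resizable compute capacity; list the running instances.
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        print("instance:", instance["InstanceId"], instance["InstanceType"])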
Load Balancing
In simple terms, it entails distributing load across web servers using hardware or software, which helps enhance the efficiency of the application and the servers.
The Elastic Load Balancing (ELB) service provided by AWS directs traffic toward EC2 instances across multiple Availability Zones, and it also facilitates adding EC2 hosts to, and removing them from, the load-balancing rotation.
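As a rough sketch of how EC2 hosts can be added to and removed from that rotation programmatically, the example below uses the boto3 client for Classic Load Balancers; the load balancer name and instance ID are placeholders.

import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Hypothetical names used purely for illustration.
lb_name = "my-load-balancer"
instance_id = "i-0123456789abcdef0"

# Add the EC2 host to the load-balancing rotation...
elb.register_instances_with_load_balancer(
    LoadBalancerName=lb_name,
    Instances=[{"InstanceId": instance_id}],
)

# ...and later remove it again.
elb.deregister_instances_from_load_balancer(
    LoadBalancerName=lb_name,
    Instances=[{"InstanceId": instance_id}],
)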
Amazon CloudFront
Amazon CloudFront is responsible for content delivery; in other words, it is used for delivering websites. It can deliver dynamic, static and streaming content through a worldwide network of edge locations. When a user requests content, the request is automatically routed to the nearest edge location, thus enhancing performance.
Amazon CloudFront is optimized to work with other AWS services such as Amazon EC2 and Amazon S3. It is also compatible with any non-AWS origin server and stores the original files in a similar manner.
Amazon Web Services are also advantageous in that they entail no contracts or monthly commitments. You pay only for as much or as little content as you actually deliver through the offering.
Security Management
Security groups are a feature of EC2. A security group is similar to an inbound network firewall, in which the user specifies the protocols, ports and source IP ranges that are allowed to reach the EC2 instances.
Each EC2 instance can be assigned multiple security groups, all of which route the appropriate traffic to that instance. Security groups can also be configured with specific IP addresses, which restricts access to the EC2 instances.
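The following boto3 sketch shows what specifying an allowed protocol, port and IP range can look like; the VPC ID and CIDR block are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Hypothetical VPC ID used for illustration.
response = ec2.create_security_group(
    GroupName="web-server-sg",
    Description="Allow inbound HTTP from a trusted range",
    VpcId="vpc-0123456789abcdef0",
)
group_id = response["GroupId"]

# Permit TCP port 80 from a specific IP range only.
ec2.authorize_security_group_ingress(
    GroupId=group_id,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)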
Amazon ElastiCache
This web service manages an in-memory cache in the cloud. When it comes to memory management, the cache plays a key role in lowering the load on services and improving performance and scalability at the database tier by caching frequently used information.
Amazon RDS
Amazon RDS provides access to the capabilities of a MySQL, Oracle or Microsoft SQL Server database engine. The same queries, applications and tools can be used with Amazon RDS.
Hosting RDBMS
Via Amazon RDS, users can deploy their preferred relational database management system, such as SQL Server, Oracle, MySQL or DB2, on EC2 instances.
Amazon EC2 uses Amazon EBS in much the same way as network-attached storage. All data for applications running on EC2 instances should be placed on Amazon EBS volumes, which remain available even if the database host fails.
Amazon EBS volumes are also helpful in that they automatically provide redundancy within their Availability Zone, which increases their availability compared with simple disks. Moreover, if a single volume is insufficient for the database's needs, more volumes can be added to enhance database performance. When Amazon RDS is used, the service manages the storage for you.
The AWS cloud offers a number of options for saving, accessing and backing up web application data and assets. Amazon S3 delivers a simple web-based interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the internet.
Via Amazon S3, data is stored in the form of objects within buckets. The user can store as many objects as needed within these resources, and can write, read and erase objects as well.
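A minimal boto3 sketch of those three operations follows; the bucket name is a placeholder and the bucket is assumed to exist already.

import boto3

s3 = boto3.client("s3")
bucket = "example-bucket-name"  # hypothetical, must already exist

# Write an object into the bucket.
s3.put_object(Bucket=bucket, Key="notes/hello.txt", Body=b"hello from S3")

# Read the object back.
body = s3.get_object(Bucket=bucket, Key="notes/hello.txt")["Body"].read()
print(body.decode())

# Delete it when it is no longer needed.
s3.delete_object(Bucket=bucket, Key="notes/hello.txt")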
Amazon EBS remains effective for data that must be accessed as block storage and that requires persistence, such as application logs and database partitions. Amazon EBS volumes can be as large as 1 TB, and volumes can be striped together for larger capacity and improved performance. At present, EBS supports up to 1,000 IOPS per volume; multiple volumes can be striped together to deliver thousands of IOPS per instance to an application.
The difference between a conventional hosting model and AWS cloud architecture is that the latter can dynamically adjust the web application fleet on demand to cope with changes in traffic.
Under a conventional hosting model, traffic forecasting models are used to provision hosts ahead of the projected traffic. With AWS, instances can be provisioned on the fly using triggers that scale the fleet out and back in. Amazon Auto Scaling can create capacity groups of servers that grow or shrink on demand.
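As a sketch of how such a capacity group might be defined with boto3, the example below creates an Auto Scaling group from an existing launch template; the template name, group name and subnet ID are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical names; the launch template and subnet must already exist.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-fleet",
    LaunchTemplate={"LaunchTemplateName": "web-server-template"},
    MinSize=2,          # never shrink below two servers
    MaxSize=10,         # cap the fleet at ten servers
    DesiredCapacity=2,  # start small and let scaling policies grow the fleet
    VPCZoneIdentifier="subnet-0123456789abcdef0",
)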
One of the key benefits of AWS is that network devices such as routers, firewalls and
load-balancers related to AWS applications do not need to be placed on physical
devices. Instead, they can be substituted using software-based solutions.
Several proven options exist for such software solutions. For load balancing, one can select Pound, Nginx or Zeus, among others, while Vyatta, OpenSwan or OpenVPN can be chosen to set up VPN connections.
Foolproof security
The AWS model is very secure, with all hosts locked down. Security groups are designed for every host type in the Amazon EC2 architecture. In addition, a broad array of simple as well as tiered security models can be created to provide the minimum level of access between hosts, as required.
Data centers
EC2 instances can easily be deployed in most Availability Zones across an AWS region, providing a model for deploying applications over multiple data centers to ensure reliability and high availability.
3) Choose your preferred service, after which its console would open.
To begin with, click on the Edit menu on the navigation bar. As you do that, a list of options will show up. You can create shortcuts simply by dragging them from the menu bar to the navigation bar.
Add Services Shortcuts
After you take the aforementioned steps, the shortcut will have been added and created. Shortcuts can also be arranged in any order. The screenshot below shows the shortcuts created for the DynamoDB, EMR and S3 services.
To delete a shortcut, click on Edit and then drag the desired shortcut from the navigation bar back to the service menu, as shown in the screenshot below.
Selecting a Region
Since many services are specific to one region, a region must be chosen in order to manage its resources. However, services such as AWS Identity and Access Management (IAM) do not require such a selection.
To select a region, you must first choose a service. Then click on the current region, for example US West (Oregon), in the console and choose the region you want.
Take the steps mentioned below to change the AWS account's password.
1) On the left-hand side of the navigation bar, click on the account name, which in this case is 'Narayan.'
2) Select Security Credentials, after which a new page opens with several options. Click on the option to change the password and follow the instructions below.
3) After logging in, a page will re-open with options for altering the password. Again, follow the instructions shown.
You will see a confirmation message once the password has been changed successfully.
On the navigation bar, select the account name and then choose the option 'Billing & Cost Management.'
You will be led to a page that contains all the information concerning billing. This service allows you to track usage, pay AWS bills and estimate budgets.
Amazon Web Services provides the AWS Console Mobile App, which enables users to view resources for selected services and also supports a limited set of management functions.
This mobile app gives you access to the following functions and services:
EC2
See configuration details; browse, filter and search for instances
Check the status of CloudWatch metrics and alarms
Perform operations on instances, such as start, stop, reboot and terminate; the app also facilitates management of security group rules and Elastic IP addresses
View attached block devices
S3
See properties of buckets after browsing them. View objects’ properties as well.
Route 53
Browse and see hosted zones as well as various details relating to record sets.
RDS
Browse, filter, search for and reboot instances
View network and security settings along with configuration details
Auto Scaling
View group details, policies, metrics and alarms. Manage the number of instances as the situation demands.
Elastic Beanstalk
See events as well as applications.
Restart app servers. Swap environment CNAMEs and view environment
configurations.
DynamoDB
See details of tables such as alarms, index and metrics, among others.
CloudFormation
See tags, stack status, resources, events, output and parameters.
OpsWorks
See details of applications, instances, layers and stacks. View instances and their logs, and reboot them.
CloudWatch
View graphs of resources.
List alarms by time and status. Set configurations for various alarms.
Services Dashboard
This dashboard contains information about the available services along with their status, as well as the user's billing information.
Switch identities to view resources in more than one account.
For security, users are advised to secure the device with a passcode and to log in to this app using IAM user credentials. If the device happens to get lost for some reason, that user can be deactivated so that no unauthorized person gains access.
The mobile console cannot be used to activate root accounts. Those who are using Multi-Factor Authentication (MFA) are advised to use a virtual MFA or a hardware device on a separate device to keep the account secure.
The app's menu includes a feedback link that enables users to ask questions or share their experiences.
2) Set your password after which you are ready to use your account details.
Services can be activated in the credits section.
Amazon provides users with a functional account, free of cost, for a span of one year so that they can get to know the different features and elements of AWS. More specifically, users can access AWS services such as S3, EC2 and DynamoDB, among others, without having to pay a fee. Having said that, certain limits do exist on the resources that can be consumed.
Existing AWS account holders can directly sign in via their password.
2) Fill in the form after entering your email address. Amazon uses this information for invoicing and billing purposes as well as to identify the account. Sign up for the necessary services after you create the account.
3) Entering payment information is the next step in signing up for the services. To ensure the card's validity, Amazon runs a transaction of a minimal amount against it; this charge varies depending on the region.
4) Verifying your identity is the next step, in which Amazon calls back to verify the contact number given.
5) Select a support plan such as Basic, Developer, Business or Enterprise. If you simply want to get acquainted with AWS, choose the Basic plan, which is free of cost, albeit with limited resources.
6) Confirmation is the final step. Click on the link to log in again, after which you will be directed to the Management Console.
The account has now been created, which means that the users can start accessing
AWS services.
Account Identifiers
A couple of unique IDs are assigned to each account of AWS, as listed below.
AWS Account ID
This 12-digit account ID is used for constructing Amazon Resource Names (ARNs). The number distinguishes our resources from the resources in other AWS accounts.
To find it, click on Support on the navigation bar (upper right side) in the Management Console, as the screenshot below shows.
Account Alias
This is basically the URL of the user's sign-in page, which by default contains the account ID. The URL can be customized with the company's name, which overwrites the previous alias.
3) For deleting this alias, click the customize link, and choose the button that reads
‘Yes, Delete.’
Requirements
For the purpose of using MFA, a device (virtual or hardware) needs to be assigned to the AWS root account or to an IAM user. Each MFA device must be uniquely assigned; a user cannot enter a code from another user's device.
Enabling MFA Device
1) Visit the URL: https:// console.aws.amazon.com/iam/
2) Select users on the right side of the navigation pane to see the list of users.
3) Scroll down to the security credentials section and select MFA. Then click on Activate MFA.
SMS
This method requires the IAM user to be configured with the phone number of the user's SMS-compatible mobile device. When signing in, AWS sends a six-digit code via SMS to that mobile device, and the user must enter it on a second web page during sign-in so that the correct user is authenticated. This method applies to IAM users and cannot be used with the AWS root account.
Hardware
Here, a hardware MFA device is assigned to the AWS root account or to an IAM user. The device generates a six-digit numeric code based on a one-time password algorithm. The user must enter this code on a second web page while signing in so that the correct user is authenticated.
Virtual
Under this method, a virtual MFA device is assigned. This device is essentially a software app that runs on a mobile device and emulates a physical device. It generates a six-digit numeric code based on a one-time password algorithm.
The user must enter this code on a second web page while signing in so that the correct user is authenticated.
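For completeness, the sketch below shows how a virtual MFA device might be created and assigned to an IAM user with boto3; the user name and the two consecutive codes are placeholders that would come from the authenticator app.

import boto3

iam = boto3.client("iam")

# Create a virtual MFA device; the response includes a Base32 seed /
# QR code image that must be loaded into an authenticator app.
device = iam.create_virtual_mfa_device(VirtualMFADeviceName="example-user-mfa")
serial = device["VirtualMFADevice"]["SerialNumber"]

# Enable the device for a user by proving possession with two
# consecutive codes produced by the authenticator app (placeholders here).
iam.enable_mfa_device(
    UserName="example-user",
    SerialNumber=serial,
    AuthenticationCode1="123456",
    AuthenticationCode2="654321",
)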
AWS IAM
An IAM user denotes an entity created in AWS to represent an individual who uses it, without giving that individual complete access to the resources. For this reason, the root account does not need to be used for everyday activities, since the root account offers complete access to all AWS resources.
Steps for Creating Users
2) Choose the option of ‘Users’ on the navigation pane (on the left side) to view the
list of users.
3) New users can be created via the Create New Users option. When the new window opens, enter the intended user name and choose the Create option to create the new user.
4) Access Key IDs can be viewed by choosing the 'Show Users Security Credentials' link. If you want, you can save the details to your PC via the 'Download Credentials' option.
5) You can now manage the user's security credentials, such as managing MFA devices, generating passwords, creating and/or deleting access keys, and adding the user to new groups, among other tasks.
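A minimal boto3 sketch of the same flow, creating a user, an access key, and a group membership, is shown below; the user and group names are placeholders.

import boto3

iam = boto3.client("iam")

# Create the IAM user (name is purely illustrative).
iam.create_user(UserName="example-developer")

# Generate programmatic credentials for the new user.
key = iam.create_access_key(UserName="example-developer")["AccessKey"]
print("Access Key ID:", key["AccessKeyId"])
print("Secret Access Key:", key["SecretAccessKey"])  # store this securely

# Add the user to an existing group (group name is a placeholder).
iam.add_user_to_group(GroupName="developers", UserName="example-developer")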
AWS - EC2
This web service interface offers resizable compute capacity in the AWS cloud. Using this interface, developers gain complete control over their computing resources and can scale them on the web.
Depending on your requirements, you can increase or lower the number of instances. Instances can be launched in multiple geographical regions. Each region consists of several Availability Zones at distinct locations, linked by low-latency networks within the same region.
Components of EC2
It is important for users to know more about the components of EC2, security
measures, support for operating systems and pricing structures, among others.
Security Measures
Under AWS EC2, the security system allows groups to be created and running instances to be placed in them according to the requirement. You specify the groups with which other groups may communicate, as well as which IP subnets on the internet may talk to them.
OS Support
Amazon EC2 allows users to gain access to multiple operating systems for which additional licensing fees apply, including SUSE Linux Enterprise, Red Hat Enterprise Linux, UNIX, Windows Server and Oracle Enterprise Linux. These operating systems must be deployed in conjunction with Amazon Virtual Private Cloud (VPC).
Pricing Features
AWS provides a wide range of pricing options based on the kind of database, application and resources involved. Users can configure their resources and compute the charges accordingly.
Via Amazon EC2, users can use the available resources to build fault-tolerant applications. EC2 comprises both geographic regions and isolated locations (Availability Zones) for stability and fault tolerance. For security purposes, AWS does not disclose exactly where its local data centers are located.
Upon launching an instance, users are required to choose an AMI located in the same region where the instance will operate.
Migration
This service lets users migrate existing applications to EC2. It costs $80.00 per storage device, plus $2.49 per hour for loading data. The service is particularly suited to users who need to migrate copious amounts of data.
Features of EC2
On-Demand – Resources can be accessed on demand from anywhere, regardless of your location
Resource pooling – Put succinctly, a massive data center is offered through different channels
Elasticity – Capacity can be scaled up or down as demand changes, which is another great feature of EC2
Flexibility – It is capable of accommodating many operating systems. Additionally, it is quite secure thanks to protective elements such as private key files, and Amazon EC2 works with VPC to offer a secure network for accessing resources
Affordable – Users pay only for what they use. Purchase options include Reserved Instances, On-Demand Instances, etc.
Using AWS EC2
1) Upon signing in to AWS account, visit the following URL for opening the
IAM console: https://console.aws.amazon.com/iam/.
3) Select new IAM users on the navigation pane and add the new users to groups.
In the panel, select VPC. After that, choose the same region for which the key pair has been created.
Go to the VPC configuration page and select the VPC that has only one public subnet. Subsequently, choose the Select option.
The 'VPC with a Single Public Subnet' page will then open. Enter the name of the VPC in the corresponding field, but make sure that the other configurations are left untouched.
5) Create a security group named WebServerSG and then add the required rules by following these instructions.
Click on 'Create Security Group' and fill in the necessary fields on your screen. From the menu, select your VPC ID and then choose the 'Yes, Create' button.
After the new group has been created, choose the Edit option on the Inbound Rules tab to create the rules.
On the new page, select the Instance Type and provide the desired configuration. Thereafter, choose Next.
After a new page opens, choose your VPC from the list of networks and your subnet from the list of subnets, leaving the other settings untouched.
7) On the Tag Instances page, give the instances a tag along with a name. Then click on Configure Security Group.
8) When the next page opens, choose the option to select an existing security group. Choose the previously created WebServerSG group and then select Review and Launch.
10) After a pop-up dialog box shows up, either choose an existing key pair or create a new one.
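The console steps above can also be expressed with boto3; the sketch below launches a single instance into a subnet using a security group like the one created earlier. The AMI ID, key pair name, subnet ID and group ID are placeholders.

import boto3

ec2 = boto3.client("ec2")

# All IDs below are hypothetical placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",       # AMI in the same region
    InstanceType="t2.micro",
    KeyName="my-key-pair",                  # existing key pair
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",    # public subnet in the VPC
    SecurityGroupIds=["sg-0123456789abcdef0"],  # e.g. the WebServerSG group
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "web-server-1"}],
    }],
)
print(response["Instances"][0]["InstanceId"])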
Load Balancer
Control Service
SSL Termination
SSL termination helps save CPU cycles otherwise spent encoding and decoding SSL within the EC2 instances attached to the ELB. An X.509 certificate is required to configure it on the ELB. The SSL connection to the EC2 instance is optional; you can also terminate it there.
ELB Attributes
ELB handles an unlimited number of requests per second under a gradually increasing load pattern.
EC2 instances and load balancers can be registered so that they accept traffic.
ELB can be enabled within a single Availability Zone or across multiple zones to ensure consistent application performance.
2) Choose your load balancer's region from the region menu on the right-hand side.
3) Choose Load Balancers and then Create Load Balancer option. Enter the necessary
details after a pop-up window opens up.
5) In the 'Create LB Inside' box, choose the network where your instances reside.
8) Select Next. After choosing a VPC as the network, you can assign groups to LB.
9) Ensure compliance with the instructions for assigning security groups to LB. Now,
click on Next.
10) This should open up a new pop-up box showing the health check configuration details along with their default values. You can set your own values, although doing so is completely optional. Now click on Next: Add EC2 Instances.
11) A pop-up box opens with information pertaining to instances, such as registered instances. Now is the time to add instances to the LB by choosing the Add EC2 Instance option and entering the required information. Then click on Add Tags.
12) You can add tags to the LB if you want to. In order to do so, click on the Add Tags page and enter details such as a key and a value for the tag. Subsequently, select the Create Tag option followed by the Review and Create button.
13) Click on Create to set up your LB, and then click on Close.
Deleting a Load Balancer
4) Finally, click on the Delete button. When an alert window appears seeking your confirmation, choose the Yes, Delete button.
AWS - WorkSpaces
This fully managed, cloud-based desktop service lets customers deliver cloud-hosted desktops to end users, so that the latter can access their resources from their preferred device, such as a laptop, Android tablet, Kindle Fire or iPad. The offering was intended to meet the growing demand for Desktop as a Service (DaaS). Desktops are streamed to users via PCoIP, and by default the data is backed up every 12 hours.
Requirements
You need an internet connection with open TCP and UDP ports at your end. You are also required to download the free Amazon WorkSpaces client app.
After you see a new page, choose the Create Simple AD button and enter the necessary details.
Enter the VPC details in the VPC section and then click on 'Next Step'.
You will come across a review page where all the information can be reviewed. Make changes if anything is incorrect, and then select the Create Simple AD button.
3) Take the steps mentioned below to create a WorkSpace.
Click on the cloud directory. In this directory, enable or disable WorkDocs for all the users, and then click on 'Yes, Next.'
On the new page, enter all details that are required for a new user before
choosing Create Users. Select Next after ensuring that a user has been
added to the list.
On the newly opened review page, verify all the details. If necessary, incorporate changes and click on Launch WorkSpaces.
After you are shown a message confirming the account, you can start using WorkSpaces.
This attribute of AWS WorkSpaces helps ascertain whether the internet and network connections are functioning, as well as whether WorkSpaces and their related registration services can be accessed. It also helps determine whether or not port 4172 is open for TCP and UDP access.
Client Reconnect
Notably, this attribute of AWS WorkSpaces lets users access their WorkSpace without having to enter their credentials every time they get disconnected. The app deployed on the client's device saves a token in a secure store; the token remains valid for a period of 12 hours and helps authenticate the correct user. Users can then access their WorkSpace by clicking on the Reconnect button. This attribute can be disabled at any time.
Auto Resume
This attribute of AWS WorkSpaces can help the client restore a session disconnected for any reason within a span of 20 minutes (this is the default time span and can be extended up to four hours). The feature can also be disabled at any time in the Group Policy section.
Console Search
This attribute of AWS WorkSpaces lets administrators search for WorkSpaces on the basis of name, bundle type or directory.
Amazon WorkSpaces - Advantages
Remote Management
From a single AWS console, it is possible to manage the launch of several WorkSpaces. Since the service is available across 11 regions and delivers high-quality cloud desktops anytime and anywhere, it is also possible to scale up global desktop deployments.
AWS Lambda
This compute service inspects actions inside the application and responds by deploying user-defined code, known as functions. It automatically manages the computing resources across several Availability Zones.
AWS Lambda is compatible with code written in Node.js, Python and Java, and the service is capable of launching processes in any language that Amazon Linux supports.
Do not declare any function variables outside the scope of the handler.
Make sure you have a set of +rx permissions on the files in your uploaded ZIP so that Lambda is capable of executing code on behalf of the user.
When you do not require them any longer, it is better to delete old functions.
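A minimal Python handler of the kind these guidelines refer to is sketched below; it simply logs the records of an incoming DynamoDB stream event, which is the standard event shape Lambda passes for that source.

import json

def lambda_handler(event, context):
    # Per the guideline above, keep function variables inside the handler's scope.
    # For a DynamoDB stream source, the event carries a list of records.
    records = event.get("Records", [])
    for record in records:
        print("event name:", record.get("eventName"))
        print("keys:", json.dumps(record.get("dynamodb", {}).get("Keys", {})))
    return {"processed": len(records)}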
Choose the Lambda service and then select the Event Sources tab, where you will see that there are no records yet. For the Lambda function to work, at least one source must be added to the function. In this case, a DynamoDB table is being added to it.
7) After choosing the Stream tab, link it to the Lambda function.
This entry can be viewed on the Event Sources tab of the Lambda service page.
8) Now add some entries to the table. After an entry is added and saved, the Lambda service triggers the function. This can be verified using the Lambda logs.
9) In order to see the logs, choose the Lambda service and then select the Monitoring tab. Subsequently, click on View Logs.
AWS Lambda – Advantages
It is possible to use any Lambda functions that have already been defined in your workflows.
Amazon SWF calls the Lambda functions directly; in other words, you do not need to design a program in order to implement them.
Throttle Limit
It is calculated with the following formula: throttle limit (concurrent executions) = average execution duration of the function x the number of events/requests that AWS Lambda processes per second.
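As a rough worked example with made-up numbers: if a function takes 2 seconds to run on average and AWS Lambda receives 50 requests per second, about 2 x 50 = 100 concurrent executions are needed; once the account's throttle limit is lower than that, further requests are throttled.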
Upon reaching this limit, a throttling error shows up with error code 429. You can typically resume work after 15-20 minutes, and the limit can be increased by getting in touch with the AWS Support Center.
Limit on Resources
This table illustrates the resource limits for a Lambda function.
Resource: Default Limit
Disk capacity: 512 MB
Number of threads and processes: 1,024
Service Limit
This table illustrates the service limits for a Lambda function.
Item: Default Limit
Lambda function deployment package size: 50 MB
Total size of code and dependencies that can be zipped into a deployment package: 250 MB
To know more about the most recent structure of limits and associated information,
please check out https://docs.aws.amazon.com/lambda/latest/dg/limits.html/
Amazon VPC enables users to use AWS resources within a virtual network. If they wish to, users are allowed to customize their virtual networking environment, which includes forming subnets, choosing their own IP address ranges, and preparing route tables and network gateways.
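A compact boto3 sketch of that kind of customization, creating a VPC with one public subnet and an internet gateway, is shown below; the CIDR blocks are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Create the VPC with a placeholder address range.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Carve out one public subnet inside it.
subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24"
)["Subnet"]["SubnetId"]

# Attach an internet gateway so the subnet can reach the internet.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

print("VPC:", vpc_id, "subnet:", subnet_id, "gateway:", igw_id)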
2) On the navigation bar (right side), choose the 'Create VPC' option and select the same region that was chosen for the other offerings.
3) After selecting the VPC wizard, click on the option for a VPC with a single public subnet.
4) You will be directed to a configuration page. Enter the subnet name and VPC name, but keep the other fields unchanged (default values). Now click on Create VPC.
5) You will then be shown a dialog box indicating the work in progress. Choose the OK button upon its completion.
Your VPCs page displays a list of VPCs that are available. You can change VPC
settings.
Creating VPC Group
2) On the navigation bar, choose the Security Groups option and then select 'Create Security Group'.
3) After a form page opens up, fill in details such as the name tag and group name. Choose your VPC's ID from the VPC menu, then press Yes, Create.
4) After the list of groups shows up, choose your group's name from the list and set the rules. Then press Save.
2) Ensure that the same region is selected that was chosen when creating security
group/VPC.
5) Now select the Instance Type followed by the hardware configuration. Next, click on Next: Configure Instance Details.
6) From the list of networks, choose the VPC that was created recently, and choose the subnet from the list of subnets. After keeping the other settings in their default mode, keep choosing Next until you reach the Tag Instance page.
7) After you reach the page of Tag, get the instance tagged with Name in order to
pinpoint your instance and distinguish it from the list of various instances. Then,
press the following button: “Next: Configure Security Group.”
8) Upon reaching this page, choose the group that was recently created from the
chosen list. After doing that, press on “Review and Launch”
9) As you reach the page of Review Instance Launch, choose Launch after
ascertaining your details regarding the instance.
10) You will come across a dialog box. After creating a new key pair or choosing a
current one, press on “Launch Instances”.
11) You will now reach the confirmation page showing the details concerning
instances.
3) After choosing Allocate New Address from the list, press on Yes, Allocate.
4) From the available list, choose your Elastic IP address, before choosing Actions.
Follow it up by pressing on Associate Address.
5) On the newly opened dialog box, choose Instance as shown in the screenshot below. After doing that, choose your instance from the list of instances. As a last step, press the Yes, Associate button.
Deleting a VPC
If you want to delete a VPC along with the resources associated with it, just follow these steps.
4) You will come across a dialog box. After expanding the section titled Release
attached Elastic IPs, press on the checkbox near the Elastic IP address option. Now,
press on Yes, Terminate.
6) On the navigation bar, choose the VPC. After choosing Actions, press on Delete
VPC.
7) You will now come across a confirmation message. Press on Yes, Delete.
VPC Features
Multiple connectivity options − Amazon VPC offers a number of ways to connect. For instance, you can establish private connections with other VPCs for the purpose of sharing resources across more than one virtual network.
Back up data easily − On a regular basis, you can back up data from the datacenter to Amazon EC2 instances through the use of Amazon EBS volumes.
AWS - Route 53
This is a highly available and scalable Domain Name System (DNS) web service, intended for corporations and developers alike, who can leverage it to route end users to web applications.
2) On the navigation bar (top left corner), press on the Create Hosted Zone option.
3) As a form page shows up, enter the requisite details like comments and domain
name before pressing on Create.
4) You have now created a hosted zone for the domain. You now need to update the four DNS endpoints, also known as the delegation set, in the domain's name server (NS) settings, as shown in the below screenshot.
5) Return to the Route 53 console and choose the Go to Record Sets option. You will come across a list containing the record sets; it includes two specific record sets, NS and SOA, as shown in the screenshot below.
6) To create your own record set, choose the Create Record Set option. Enter the requisite details shown on the screen and press Save Record Set.
9) Now you may want to create another record set for a different region, so that there are at least two record sets with the same domain name pointing to different IP addresses under your selected routing policy.
Upon completion, requests placed by users will be routed on the basis of that policy.
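A boto3 sketch of creating such a record set is shown below; the hosted zone ID, record name and IP address are placeholders.

import boto3

route53 = boto3.client("route53")

# Placeholder hosted zone and record values for illustration only.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Comment": "Example A record",
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)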
Route 53 Features
Domain registration is greatly simplified. This means that as a user, you can buy domains at all levels, such as .com, .net, etc., directly from Route 53.
Route 53 is also compatible with other AWS offerings. For example, it can be utilized to map domain names to Amazon S3 buckets, Amazon EC2 instances, as well as a plethora of other AWS resources.
Ease of use, configuration and sign-up, along with speedy responses to DNS queries, is another key advantage.
Users only need to pay for the domain service along with the total number of queries answered by the service per domain, which makes it a viable and cost-effective option.
The fact that it integrates with IAM means that Route 53 is able to control all users who form part of the AWS account, which includes deciding which user can access which portion of the service.
This feature makes it possible to establish a private network connection from our own network to an AWS location. The fact that it uses 802.1Q VLANs means it is possible to partition the connection into several virtual interfaces for the purpose of accessing resources without having to change your connection. The end result is heightened bandwidth and lower network costs. It is also possible to reconfigure the virtual interfaces at any point in time.
The network must be available in the list of locations of AWS Direct Connect.
To know more about the list of locations that are available, please visit
https://aws.amazon.com/directconnect/ .
Now, visit the following URL to know more about the AWS Direct Connect
partners because it is necessary to collaborate with a member of AWS
Partner Network − https://aws.amazon.com/directconnect/
Your service provider must be able to connect your network to an AWS Direct Connect location.
In addition, it is also important for the network to be able to comply with the
following guidelines:
3) After you see the Welcome page, press on Get Started with Direct Connect as
shown in the below screenshot:
5) After a dialog box (Create a Connection) opens up, enter the details and press on
Create.
This network service is compatible with all AWS offerings that can be accessed over the web. Examples include Amazon EC2, Amazon S3 as well as Amazon VPC, among others.
Users can also take advantage of AWS Direct Connect to set up a private virtual interface from their home network directly to Amazon VPC.
Elasticity is another key feature of AWS Direct Connect, which offers 1 Gbps
as well as 10 Gbps connections. It is also possible to establish more than a
single connection, depending on the requirement.
Simplicity and ease of use is the hallmark of AWS Direct Connect. You can
manage all virtual networks and other connections via the AWS Management
Console.
AWS - Amazon S3
This high-speed, cost-effective and scalable internet service facilitates not only web backup but also application hosting and data archiving. Furthermore, using this service, you can store, download and upload any file type whose size is up to 5 TB. Another key aspect is that users get access to the same systems that Amazon uses to operate its own sites. Subscribers are also able to control whether their data is publicly or privately accessible.
As shown in the below screenshot, you will come across a prompt window.
On the bottom, press on Create Bucket.
After you see a dialog box opening up, enter the requisite details and press
on Create.
After the bucket has been created successfully, you will see a list of buckets along with their attributes.
Press on Static Website Hosting. After doing that, press the Enable Website Hosting button. Now enter all the necessary details in the fields.
Press on Upload.
Select the option titled Add files. Now, you must choose files that must be
uploaded before clicking on Open.
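The same bucket setup can also be scripted; the boto3 sketch below enables static website hosting on a bucket and uploads one file to it. The bucket name and file path are placeholders, and the bucket is assumed to exist.

import boto3

s3 = boto3.client("s3")
bucket = "example-website-bucket"  # hypothetical existing bucket

# Turn on static website hosting with index and error documents.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a local file as the index page.
s3.upload_file("index.html", bucket, "index.html",
               ExtraArgs={"ContentType": "text/html"})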
For downloading or opening an object − Focus your attention on the list of Objects & Folders and right-click the intended object. Next, choose whether to download or open it.
1) In the Amazon S3 console, choose the Files & Folders option. After right-clicking on the object you want to move, press Cut.
2) Open the location where you want the object to be. Right-click on the bucket or folder where you want to move the object and press Paste Into.
2) On the confirmation message that appears on your screen, read what is written
carefully before pressing on Empty bucket.
AWS - Elastic Block Store
Amazon EBS refers to a block storage system used to store persistent data. It is deemed suitable for EC2 instances, as it provides highly available storage volumes at the block level.
Provisioned IOPS
This is best suited for transactional workloads, demanding I/O-intensive workloads and big EMR/Hadoop workloads, among others. By default, Provisioned IOPS SSD supports 30 IOPS per GB, which in turn implies that a 10 GB volume yields 300 IOPS. The storage capacity ranges between 10 GB and 1 TB per volume. The price is $0.125 per GB per month for provisioned storage, plus $0.10 per month for each provisioned IOPS.
Magnetic Volumes
Previously called standard volumes, this type of volume is best suited for workloads such as log storage and backups for recovery, among others. The storage capacity ranges between 10 GB and 1 TB per volume. The price is $0.05 per GB per month for provisioned storage.
Enter the necessary information such as Size, Volume Type list, Availability
zone, and IOPS, among others before pressing on Create.
You can see the names of the volumes in the list, as shown in the screenshot below.
In the Snapshot ID field, type snapshot ID wherefrom you want to restore the
volume and choose it from the list of options suggested on the screen.
If you need more storage, alter the size of storage before pressing on the
3) Follow the steps outlined below for attaching an EBS volume to an instance.
On the navigation pane, press on Volumes. Then select a volume and choose the Attach Volume option to open a dialog box.
On this newly opened dialog box, fill in the instance name or ID in the Instance field to link the volume; alternatively, you can choose it from the list of suggestions.
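Programmatically, the same attach step looks roughly like the boto3 sketch below; the volume and instance IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs; the volume and instance must be in the same
# Availability Zone for the attachment to succeed.
ec2.attach_volume(
    VolumeId="vol-0123456789abcdef0",
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdh",   # device name the instance will see
)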
For unmounting the device, enter the command /dev/sdh in cmd. Now, launch
Choose the option of Volumes on the navigation pane. The next step entails
When a confirmation dialog box shows up on your screen, press on Yes, Detach.
This service paves the way for integration between the AWS storage infrastructure and an on-premises IT environment. Data can be stored in the AWS cloud in a way that is scalable, secure and cost-efficient.
AWS Storage Gateway offers two kinds of storage: volume-based and tape-based.
Volume-Based
This type of cloud storage can be mounted as Internet Small Computer System Interface (iSCSI) devices from on-premises application servers.
Gateway-cached
With gateway-cached volumes, AWS Storage Gateway stores the entire set of on-premises application data in a storage volume in Amazon S3. The storage volume size ranges between 1 GB and 32 TB, with up to 20 volumes and an overall 150 TB of storage per gateway. These volumes can be attached to application servers as iSCSI devices. This setup has two components, as follows −
Cache Storage
All applications need their data stored in storage volumes. The cache storage disk is generally used to hold data initially, before it is written to the AWS storage volumes; from the cache storage disk, data waiting in the upload buffer is then uploaded to Amazon S3. The cache storage disk also retains the most recently accessed data for low-latency access: when data is needed by the application, the cache storage disk is checked before Amazon S3. As a guideline, at least 20% of the existing file store should be allocated as cache storage, and it needs to be larger than the upload buffer.
Snapshots − There are times when storage volumes need to be backed up incrementally in Amazon S3. An incremental backup means that a new snapshot backs up only the data that has changed since the previous snapshot. These backups are called snapshots and are stored in Amazon S3 in the form of Amazon EBS snapshots. Snapshots can either be taken at fixed intervals or as and when required.
The upload buffer disk is used to store data before it is uploaded to Amazon S3. The storage gateway uploads the data from the upload buffer to AWS over an SSL connection.
Upon activation of the VM, gateway volumes are mapped to the attached storage disks. For this reason, when applications read and write data from gateway storage volumes, they read and write from the mapped on-premises disks.
A gateway-stored volume makes it possible to store primary data locally while providing on-premises applications with low-latency access to the entire dataset. These volumes can be mounted in the form of iSCSI devices on on-premises application servers, with their size ranging between 1 GB and 16 TB. Up to 12 volumes are supported per gateway, with 192 TB being the peak storage.
Virtual Tape − Resembling its physical counterpart, a virtual tape can be stored in the AWS cloud and created in one of two ways: through the AWS Storage Gateway API or the AWS Storage Gateway console. Each virtual tape's size ranges between 100 GB and 2.5 TB. One gateway's capacity can grow to 150 TB, and it is possible to have at most 1,500 tapes at the same time.
VTL − Each gateway-VTL comes with a single virtual tape library (VTL), which again bears resemblance to a physical tape library. After storing data locally, the gateway asynchronously uploads it to the VTL's virtual tapes.
Media Changer − This bears resemblance to the robot that moves tapes between the tape drives and storage slots of a physical tape library. Each VTL is associated with a single media changer, which is used by backup applications.
Tape Drive − This is capable of performing I/O operations on a tape. Every VTL comprises 10 tape drives, which are used by backup applications.
Virtual Tape Shelf (VTS) − This is utilized for archiving tapes from the gateway VTL into the VTS and retrieving them back again.
Retrieving Tapes − Since it is not possible to directly read tapes that have been archived into the VTS, such a tape must first be retrieved back to the gateway VTL by using either the AWS Storage Gateway API or the AWS Storage Gateway console.
AWS - CloudFront
1) After visiting a website, the user places a request for an object to be downloaded, such as an image file.
2) In order to serve this request, DNS routes it to the closest CloudFront edge location.
3) CloudFront checks its cache at that edge location for the requested files. If they are found, they are sent back to the user. Otherwise, the following happens −
The files are fetched from the origin, and the moment the first byte arrives, CloudFront starts forwarding it to the user while also adding the files to the cache at that edge location for the next time someone requests the same file.
CloudFront Attributes
High speed − The vast network of CloudFront edge locations caches content in close proximity to end users, which in turn leads to high rates of data transfer, reduced latency and reduced network traffic. These factors collectively increase the speed of CloudFront.
Cost-efficiency − Amazon CloudFront requires you to pay only for the content you deliver via the network, with no hidden fees or up-front charges.
Reliability − Since it is built on Amazon's highly dependable infrastructure, the edge locations automatically re-route end users to the next closest location if the need arises.
Elasticity − Users of Amazon CloudFront do not need to be concerned about its maintenance. This is attributable to the fact that the service responds automatically whenever any step needs to be initiated, for example if the demand grows or shrinks.
Global Network − It utilizes a worldwide cluster of edge locations that are situated in
many regions.
3) Complete the steps mentioned below for creating a CloudFront Web Distribution
After the Create Distribution page shows up, select the Amazon S3 bucket that was created earlier in the Origin Domain Name field. After doing that, keep the rest of the fields in default mode.
You will now see the page of Default Cache Behavior Settings opening up. Do
not change the values and proceed to the subsequent page.
After you see a Distribution settings page on your screen, enter all the details
based on your requirement before pressing on Create Distribution.
The Status column changes from In Progress to Deployed. The next step entails choosing the Enable option to enable your distribution. The domain name will then feature in the Distributions list within a time span of about 15 minutes.
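If you prefer to check the deployment programmatically, the distribution’s status and CloudFront domain name can also be read through the SDK. The following is a small sketch using boto3 (Python); it only lists the distributions in the account and assumes your credentials are already configured.

import boto3

cloudfront = boto3.client("cloudfront")

# List all distributions and print their id, CloudFront domain name, and status.
result = cloudfront.list_distributions()
for dist in result.get("DistributionList", {}).get("Items", []):
    print(dist["Id"], dist["DomainName"], dist["Status"])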
1) Copy the HTML code shown below into a new file, replacing domain-name with the domain name that CloudFront assigned to the distribution. In place of object-name, enter the name of the object in your Amazon S3 bucket.
<html>
<head>
<title>CloudFront Testing link</title>
</head>
<body>
<p>My CloudFront.</p>
<p><img src = "http://domain-name/object-name" alt = "test image"/></p>
</body>
</html>
2) Save the text in a file with a .html extension.
3) Now open this page in a web browser to see whether the links are working correctly. If they are not, cross-check the settings.
AWS - Relational Database Service
Amazon Relational Database Service (RDS) is a fully managed SQL database service that allows the creation of relational databases. Using RDS, you can access your databases and files cost-effectively and in a highly scalable manner.
Affordable − With Amazon RDS, you only pay for what you use; there are no up-front fees or long-term commitments.
Security − You have complete control over network access to your database and related services.
Automatic backups − One of the best aspects of Amazon RDS is its ability to back up everything in the database, including transaction logs captured as frequently as every five minutes, while also managing backup schedules automatically.
Software patching − Amazon RDS automatically applies the most recent patches to the database software. You can also specify when the software should be patched by using DB Engine Version Management.
2) Choose the region where you need to create the DB instance, from the console’s top right corner.
3) On the navigation pane, choose Instances and then press on Launch DB Instance.
4) After you see the Launch DB Instance Wizard opening up, choose the type of
instance based on what you require in order to launch before clicking on Select.
6) Upon reaching the page of Specify DB Details, enter all the details shown on the
screen and click on Continue
7) Select the options you want to on the page of Management Options and again
click on Continue.
8) After you reach the Review page, verify the details and then click the Launch DB Instance button.
The DB instance will now appear in the list of DB instances.
1) To connect to a database on the MySQL DB instance, type the command shown below at the command prompt on the client computer.
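A typical invocation looks like the following; the endpoint and master username are placeholders, so substitute the values shown on your DB instance’s details page.

mysql -h mydbinstance.abc123xyz.us-east-1.rds.amazonaws.com -P 3306 -u masteruser -p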
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 350
Server version: 5.2.33-log MySQL Community Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql>
Deleting a DB Instance
Upon completion of the task, the DB instance should be deleted so that you do not keep paying for it. Follow these steps:
1) Open the AWS Management Console and launch the Amazon RDS console via this
URL.
https://console.aws.amazon.com/rds/
3) Press on Instance Actions before choosing the option of Delete from the
dropdown menu.
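The same deletion can also be performed through the SDK. Here is a minimal boto3 (Python) sketch; the instance identifier is a placeholder, and skipping the final snapshot is only sensible if you no longer need the data.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Delete the DB instance without keeping a final snapshot (the data is lost permanently).
rds.delete_db_instance(
    DBInstanceIdentifier="mydbinstance",   # placeholder identifier
    SkipFinalSnapshot=True,
)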
Instance class − Charges are based on the class of the DB instance being used.
Backup storage − No extra charges are incurred for backup storage of up to 100% of your database storage; this free service is available only for active DB instances.
AWS - DynamoDB
This fully managed NoSQL database service allows the creation of database tables that are capable of storing and retrieving any volume of data. In addition to managing table traffic across multiple servers while maintaining performance, it also relieves customers of much of the operational burden.
It is for this reason that Amazon manages setup, hardware provisioning, software patching, replication, cluster scaling and configuration, among others.
1) Use this URL to download DynamoDB (.jar file). It provides support for several operating systems such as Linux, Windows, and Mac, among others.
.tar.gz format − http://dynamodb-local.s3-website-us-west-2.amazonaws.com/dynamodb_local_latest.tar.gz
After the download is complete, simply extract the contents and copy the extracted directory to a preferred location.
On the window shown below, enter the required details before clicking on
Continue.
You will now reach a review page where you can see all the details.
As seen in the above screenshot, the table name now appears in the list, which means that you can start using the DynamoDB table.
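Tables can also be created and used programmatically. The sketch below uses boto3 (Python) with an illustrative Music table; the table name, key names, and item values are assumptions chosen only for this example.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-west-2")

# Create a table with a composite primary key (partition key + sort key).
table = dynamodb.create_table(
    TableName="Music",
    KeySchema=[
        {"AttributeName": "Artist", "KeyType": "HASH"},
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
table.wait_until_exists()

# Write one item and read it back.
table.put_item(Item={"Artist": "No One You Know", "SongTitle": "Call Me Today"})
item = table.get_item(Key={"Artist": "No One You Know", "SongTitle": "Call Me Today"})
print(item.get("Item"))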
Since Amazon DynamoDB is scalable, you no longer need to be concerned about a possible limit to the amount of data that can be stored in or retrieved from the table. DynamoDB automatically spreads the data as the table grows, while maintaining reduced latency. Latencies remain stable despite the growth in datasets, owing to the way DynamoDB distributes data placement.
It also supports dynamic table creation, which basically means that a table can have a limitless number of attributes.
You only need to pay for what you use; moreover, the payment structure of Amazon DynamoDB is easy to understand.
AWS - Redshift
This fully managed data warehouse, made available on the cloud, is growing in popularity. Redshift datasets range from a few hundred gigabytes to a petabyte. As part of the initial process, a data warehouse is created by launching a set of compute resources referred to as nodes, which are organized into clusters. Thereafter, you can proceed to have your queries processed.
1) After signing in, open the Redshift Cluster by taking the steps listed below.
Choose the region where you want to create the cluster from the Region menu at the top right corner of the screen.
After the page for Cluster Details opens up, enter the necessary details
before pressing on Continue button until the review page is reached.
After you come across a confirmatory page, press on Close in order to view
the cluster in the list of Clusters.
After choosing the cluster, review the Cluster Status information. The page will display the current status of the cluster.
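The same cluster can be launched through the SDK instead of the console. Below is a minimal boto3 (Python) sketch for a single-node cluster; the identifier, node type, and credentials are placeholders chosen only for illustration.

import boto3

redshift = boto3.client("redshift", region_name="us-west-2")

# Launch a small single-node cluster; Redshift provisions it asynchronously.
redshift.create_cluster(
    ClusterIdentifier="examplecluster",     # placeholder name
    NodeType="dc2.large",
    ClusterType="single-node",
    DBName="dev",
    MasterUsername="masteruser",            # placeholder credentials
    MasterUserPassword="Example-Passw0rd",
)

# Poll until the cluster reports an "available" status.
waiter = redshift.get_waiter("cluster_available")
waiter.wait(ClusterIdentifier="examplecluster")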
2) Configure a security group to authorize client connections to the cluster. Note that how access is authorized depends on whether or not the client is an EC2 instance.
After pressing on Edit, choose the fields shown in the screenshot above
and click on Save.
There are two ways of connecting to a Redshift cluster: directly or via SSL. Connect to the cluster using a SQL client tool; Redshift supports SQL client tools that are compatible with PostgreSQL JDBC or ODBC drivers.
Use the following links to download the drivers −
JDBC − https://jdbc.postgresql.org/download/postgresql-8.4-703.jdbc4.jar
ODBC − https://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_08_04_0200.zip or http://ftp.postgresql.org/pub/odbc/versions/msi/psqlodbc_09_00_0101x64.zip for 64-bit machines
In the navigation pane of the Amazon Redshift console, select the cluster of choice and click the Configuration tab.
Click the folder icon and navigate to the driver location. Finally, click the
Open button.
Leave the Classname box and Sample URL box blank and click OK. Enter the username and password in their respective fields.
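Besides JDBC/ODBC-based SQL clients, the cluster can also be queried from Python over the PostgreSQL protocol. The sketch below uses the psycopg2 library; the endpoint, database name, and credentials are placeholders taken from the cluster’s Configuration tab.

import psycopg2

# Redshift listens on port 5439 by default; the host comes from the cluster endpoint.
conn = psycopg2.connect(
    host="examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="dev",
    user="masteruser",
    password="Example-Passw0rd",
)

cur = conn.cursor()
cur.execute("SELECT current_date;")
print(cur.fetchone())

cur.close()
conn.close()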
Following are the features of Amazon Redshift −
Supports VPC − The users can launch Redshift within VPC and control
access to the cluster through the virtual networking environment.
Scalable − With a few simple clicks, the number of nodes in your Redshift data warehouse can be easily scaled as per requirement. It also allows you to scale storage capacity without any loss in performance.
AWS - Kinesis
Amazon Kinesis is used to capture, store, and process data from large, distributed streams such as event logs and social media feeds. After processing the data, Kinesis distributes it to multiple consumers simultaneously.
Data log and data feed intake − We need not wait to batch up the data; we can push data to an Amazon Kinesis stream as soon as it is produced. This also protects against data loss in case the data producer fails. For example, system and application logs can be continuously added to a stream and be available within seconds when required.
Limits of Amazon Kinesis
Following are certain limits that should be kept in mind while using Amazon Kinesis
Streams −
The maximum size of a data blob (the data payload before Base64-encoding)
in one record is 1 megabyte (MB).
Step 1 − Sign in to the AWS account. Select Amazon Kinesis from the Amazon Management Console.
Click Create stream and fill in the required fields such as stream name and number of shards. Click the Create button. (A programmatic sketch of creating a stream and pushing records into it follows these steps.)
Step 2 − Set up users on the Kinesis stream. Create new users and assign a policy to each user. (We have discussed the procedure for creating users and assigning policies above.)
Select the Kinesis icon and fill the required details. Click the Next button.
On the Fields tab, create unique label names, as required and click the Next
button.
On the Charts Tab, enable the charts for data. Customize the settings as
required and then click the Finish button to save the setting.
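As mentioned above, the stream creation and record ingestion can also be done from code. Here is a short boto3 (Python) sketch; the stream name, shard count, and payload are assumptions made only for the example.

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Create a stream with a single shard and wait until it becomes ACTIVE.
kinesis.create_stream(StreamName="example-stream", ShardCount=1)
kinesis.get_waiter("stream_exists").wait(StreamName="example-stream")

# Push a record as soon as it is produced; the partition key decides which shard receives it.
kinesis.put_record(
    StreamName="example-stream",
    Data=b'{"event": "page_view", "user": "42"}',
    PartitionKey="user-42",
)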
Easy to use − Using Amazon Kinesis, we can create a new stream, set its
requirements, and start streaming data quickly.
Real-time processing − It allows information to be collected and analyzed in real time, like stock trade prices; otherwise, we would need to wait for a data-out report.
AWS - Elastic MapReduce (EMR)
Amazon Elastic MapReduce (EMR) is used for data analysis, web indexing, data warehousing, financial analysis, scientific simulation, etc.
Step 1 − Sign in to AWS account and select Amazon EMR on management console.
Step 2 − Create Amazon S3 bucket for cluster logs & output data. (Procedure is
explained in detail in Amazon S3 section)
Leave the Tags section options as default and proceed.
On the File System Configuration section, leave the options for EMRFS as set by default. EMRFS is an implementation of the Hadoop file system that allows Amazon EMR clusters to store data on Amazon S3.
On the Hardware Configuration section, select m3.xlarge in EC2 instance
type field and leave other settings as default. Click the Next button.
On the Security and Access section, for EC2 key pair, select the pair from the list
in EC2 key pair field and leave the other settings as default.
On Bootstrap Actions section, leave the fields as set by default and click the
Add button. Bootstrap actions are scripts that are executed during the setup
before Hadoop starts on every cluster node.
Click the Create Cluster button and the Cluster Details page opens. This is
where we should run the Hive script as a cluster step and use the Hue web
interface to query the data.
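For repeatable setups, the same cluster can be created through the SDK rather than the console wizard. The following boto3 (Python) sketch mirrors the choices above (Hive and Hue applications, m3.xlarge instances); the bucket, key pair, and release label are placeholders, and the default EMR IAM roles are assumed to exist.

import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a small cluster with Hive and Hue installed, logging to an S3 bucket.
response = emr.run_job_flow(
    Name="example-cluster",
    ReleaseLabel="emr-5.36.0",                      # placeholder release label
    Applications=[{"Name": "Hive"}, {"Name": "Hue"}],
    LogUri="s3://my-example-bucket/logs/",          # placeholder bucket
    Instances={
        "MasterInstanceType": "m3.xlarge",
        "SlaveInstanceType": "m3.xlarge",
        "InstanceCount": 3,
        "Ec2KeyName": "my-key-pair",                # placeholder EC2 key pair
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster id:", response["JobFlowId"])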
Open the Amazon EMR console and select the desired cluster.
Move to the Steps section and expand it. Then click the Add step button.
The Add Step dialog box opens. Fill the required fields, then click the Add
button.
Open the Amazon S3 console and select the S3 bucket used for the output. The query writes its results into a separate folder. Select os_requests.
Easy to use − Amazon EMR is easy to use, i.e. it is easy to set up cluster,
Hadoop configuration, node provisioning, etc.
Flexible − It allows complete control over the clusters and root access to every instance. It also allows installing additional applications and customizing your cluster as per requirement.
AWS - Data Pipeline
AWS Data Pipeline is a web service designed to make it easier for users to integrate data spread across multiple AWS services and analyze it from a single location.
Using AWS Data Pipeline, data can be accessed from the source, processed, and then the results can be efficiently transferred to the respective AWS services.
Select the region in the navigation bar. Click the Create New Pipeline button.
In the Source field, choose Build using a template and then select
this template − Getting Started using ShellCommandActivity.
In Schedule, leave the values as default and click the Activate button.
Reliable − Its infrastructure is designed for fault-tolerant execution of activities. If failures occur in the activity logic or data sources, AWS Data Pipeline automatically retries the activity. If the failure persists, it sends a failure notification. We can even configure these notification alerts for situations like successful runs, failures, delays in activities, etc.
Flexible − AWS Data Pipeline provides various features like scheduling, tracking, error handling, etc. It can be configured to take actions like running Amazon EMR jobs, executing SQL queries directly against databases, executing custom applications on Amazon EC2, etc.
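Pipelines can likewise be created from code before their definition is uploaded and activated. Below is a minimal boto3 (Python) sketch; the pipeline name and unique id are placeholders, and uploading an actual pipeline definition is omitted.

import boto3

datapipeline = boto3.client("datapipeline", region_name="us-east-1")

# Create an empty pipeline; a definition must still be added and the pipeline
# activated before anything runs.
pipeline = datapipeline.create_pipeline(
    name="example-pipeline",
    uniqueId="example-pipeline-001",   # idempotency token chosen by the caller
)
print("Pipeline id:", pipeline["pipelineId"])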
AWS - Machine Learning
Amazon Machine Learning is a service that enables the development of predictive applications by using algorithms and mathematical models based on the user’s data.
Amazon Machine Learning reads data from Amazon S3, Redshift, and RDS, then visualizes the data through the AWS Management Console and the Amazon Machine Learning API. This data can be imported into or exported from other AWS services via S3 buckets.
A binary classification model can predict one of the two possible results, i.e.
either yes or no.
Step 2 − Select Standard Setup and then click Launch.
Step 3 − In the Input data section, fill the required details and select the choice for data
storage, either S3 or Redshift. Click the Verify button.
Step 4 − After S3 location verification is completed, Schema section opens. Fill the
fields as per requirement and proceed to the next step.
Step 5 − In Target section, reselect the variables selected in Schema section and
proceed to the next step.
Step 6 − Leave the values as default in Row ID section and proceed to the Review
section. Verify the details and click the Continue button.
Exploring Performance Using Machine Learning
Cost-efficient − Pay only for what you use, with no setup charges and no upfront commitments.
AWS - Simple Workflow Service
The following services fall under the Application Services section −
Amazon CloudSearch
Amazon Simple Queue Service (SQS)
Amazon Simple Notification Service (SNS)
Amazon Simple Email Service (SES)
Amazon SWF
Amazon Simple Workflow Service (SWF) is a task based API that makes it easy to
coordinate work across distributed application components. It provides a
programming model and infrastructure for coordinating distributed components and
maintaining their execution state in a reliable way. Using Amazon SWF, we can focus on building the aspects of the application that differentiate it.
A workflow is a set of activities that carry out some objective, including logic that
coordinates the activities to achieve the desired output.
The workflow history consists of a complete and consistent record of each event that has occurred since the workflow execution started. It is maintained by SWF.
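The domain registration and workflow execution described here can also be driven through the SDK. The sketch below uses boto3 (Python); the domain, workflow type, and timeouts are illustrative placeholders, and re-registering an existing domain or type would raise an "already exists" error.

import boto3

swf = boto3.client("swf", region_name="us-east-1")

# Register a domain that groups related workflow types (history kept for 7 days).
swf.register_domain(
    name="example-domain",
    workflowExecutionRetentionPeriodInDays="7",
)

# Register a workflow type once, then start an execution of it.
swf.register_workflow_type(domain="example-domain", name="example-workflow", version="1.0")

swf.start_workflow_execution(
    domain="example-domain",
    workflowId="order-1234",
    workflowType={"name": "example-workflow", "version": "1.0"},
    taskList={"name": "default"},
    executionStartToCloseTimeout="3600",   # seconds, passed as strings in the SWF API
    taskStartToCloseTimeout="300",
    childPolicy="TERMINATE",
)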
Step 3 − Run a Sample Workflow window opens. Click the Get Started button.
Step 4 − In the Create Domain section, click the Create a new Domain radio button
and then click the Continue button.
Step 5 − In Registration section, read the instructions then click the Continue button.
Step 6 − In the Deployment section, choose the desired option and click the
Continue button.
Step 7 − In the Run an Execution section, choose the desired option and click the
Run this Execution button.
Finally, the workflow will be created and will be available in the list.
Benefits of Amazon SWF
It enables applications to be stateless,
because all information about a workflow
execution is stored in its workflow history.
Amazon Web Services - WorkMail
Amazon WorkMail works with Microsoft Outlook clients, including the prepackaged Click-to-Run versions. It also works with mobile clients that speak the Exchange ActiveSync protocol.
Its migration tool allows mailboxes to be moved from on-premises email servers to the service, and it works with any device that supports the Microsoft Exchange ActiveSync protocol, such as Apple’s iPad and iPhone, Google Android, and Windows Phone.
Step 3 − Select the desired option and choose the Region from the top right side of
the navigation bar.
Step 4 − Fill the required details and proceed to the next step to configure an account.
Follow the instructions. Finally, the mailbox will look like as shown in the following
screenshot.
Managed − Amazon WorkMail offers complete control over email, and there is no need to worry about installing software or maintaining and managing hardware. Amazon WorkMail automatically handles all these needs.
Availability − Users can synchronize emails, contacts and calendars with iOS, Android,
Windows Phone, etc. using the Microsoft Exchange ActiveSync protocol anywhere.