UCS531-Cloud Computing: Amazon Web Services
Uploaded by Shivam Aggarwal

Amazon Web Services (AWS) provides on-demand access to computing resources and services via the cloud. It offers infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). AWS data centers are located across multiple geographical regions and availability zones to provide redundancy and prevent outages. Key AWS services include Amazon Elastic Compute Cloud (EC2) for virtual servers, storage, databases, and other resources available on a pay-as-you-go basis without long-term commitments.

UCS531-Cloud Computing

Amazon Web Services


What is AWS?
Amazon Web Services (AWS) is a cloud platform provided by Amazon that uses distributed IT infrastructure to make IT resources available on demand.
It provides services across infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
Amazon launched AWS, a cloud computing platform, to allow organizations of any size to take advantage of reliable IT infrastructure.
AWS is organized into geographical 'Regions' (9 at the time this material was written).
Each Region is wholly contained within a single country, and all of its data and services stay within the designated Region.
Each Region has multiple 'Availability Zones', which are distinct data centers providing AWS services.
Availability Zones are isolated from each other to prevent outages from spreading between Zones.
However, several services operate across Availability Zones (e.g. S3, DynamoDB).
What AWS Offers
Low Ongoing Cost: pay-as-you-go pricing with no up-front expenses or long-term commitments.
Instant Elasticity & Flexible Capacity: scale up and down on demand, eliminating guesswork about infrastructure capacity needs.
Speed & Agility: develop and deploy applications faster instead of waiting weeks or months for hardware to arrive and be installed.
Apps, not Ops: focus on projects by shifting resources away from data center investment and operations toward innovative new projects.
Global Reach: take your apps global in minutes.
Open and Flexible: choose the development platform or programming model that makes the most sense for your business.
Secure: your application benefits from the multiple layers of operational and physical security in AWS data centers, ensuring the integrity and safety of your data.
Uses of AWS
A small manufacturing organization can focus its expertise on growing the business while leaving IT management to AWS.
A large enterprise spread across the globe can use AWS to deliver training to a distributed workforce.
An architecture consulting company can use AWS for high-compute rendering of construction prototypes.
A media company can use AWS to deliver content such as e-books or audio files to users worldwide.
Pay-As-You-Go
AWS provides services to customers on a pay-as-you-go basis: services are available when required, without any prior commitment or upfront investment.
Pay-as-you-go enables customers to procure services from AWS such as:
• Computing
• Programming models
• Database storage
• Networking
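The pay-as-you-go model is simply metered usage multiplied by unit rates. A minimal sketch, using hypothetical rates (these are not real AWS prices):

```python
# Pay-as-you-go: the bill is metered usage times a unit rate, with no
# upfront fee or minimum commitment. Rates are hypothetical examples.
def monthly_bill(instance_hours, hourly_rate, gb_stored, gb_month_rate):
    """Total charge for one month of metered compute and storage usage."""
    return instance_hours * hourly_rate + gb_stored * gb_month_rate

# 200 instance-hours at $0.10/hour plus 50 GB stored at $0.023/GB-month:
print(round(monthly_bill(200, 0.10, 50, 0.023), 2))  # 21.15
```

If usage drops to zero, so does the bill, which is the point of the model.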
Advantages of AWS
1) Flexibility
• We can spend more time on core business tasks thanks to the instant availability of new features and services in AWS.
• It provides effortless hosting of legacy applications.
• AWS does not require learning new technologies, and migrating applications to AWS provides advanced computing and efficient storage.
• AWS also offers the choice of whether to run applications and services together or not.
• We can also choose to run part of the IT infrastructure in AWS and the remaining part in on-premises data centres.
2) Cost-effectiveness
• Traditional IT infrastructure requires a huge upfront investment.
• AWS requires no upfront investment, no long-term commitment, and only minimal expense.
3) Scalability/Elasticity
• Through auto scaling and elastic load balancing, AWS capacity is automatically scaled up or down as demand increases or decreases.
• These techniques are ideal for handling unpredictable or very high loads.
• As a result, organizations enjoy the benefits of reduced cost and increased user satisfaction.
4) Security
• AWS provides end-to-end security and privacy to customers.
• AWS has a virtual infrastructure that offers optimum availability while maintaining full privacy and isolation of customer operations.
• Customers can expect a high level of physical security because of Amazon's many years of experience designing, developing, and maintaining large-scale IT operation centers.
• AWS addresses the three aspects of security: confidentiality, integrity, and availability of user data.
AWS Global Infrastructure
• The global infrastructure is the set of regions around the world in which AWS operates.
• As of December 2018, AWS was available in 19 regions and 57 availability zones, with 5 more regions and 15 more availability zones announced for 2019.
• The following components make up the AWS infrastructure:
• Availability Zones
• Regions
• Edge locations
• Regional Edge Caches
Availability Zone as a Data Center
• An availability zone is a facility that can be somewhere in a country or in a city.
• Inside this facility, i.e., a data center, there can be multiple servers, switches, load balancers, and firewalls.
• The things that interact with the cloud sit inside the data centers.
• An availability zone can span several data centers, but if they are close together, they are counted as one availability zone.
Region
• A region is a geographical area.
• Each region consists of two or more availability zones.
• A region is a collection of data centers which are completely isolated from other regions.
• The availability zones within a region are connected to each other through redundant and isolated metro fiber links.
Edge Locations
• Edge locations are AWS endpoints used for caching content.
• Edge locations host CloudFront, Amazon's Content Delivery Network (CDN).
• An edge location is not a region but a smaller site that AWS operates, located in most major cities.
• For example, if a user accesses your website from Singapore, the request is redirected to the edge location closest to Singapore, where cached data can be read.
Regional Edge Cache
• AWS announced a new type of edge location in November 2016, known as a Regional Edge Cache.
• A Regional Edge Cache lies between the CloudFront origin servers and the edge locations.
• A Regional Edge Cache has a larger cache than an individual edge location.
• When data is evicted from the cache at an edge location, it is retained at the Regional Edge Cache.
• An edge location then retrieves the cached data from the Regional Edge Cache instead of from the origin servers, which have higher latency.
Amazon Elastic Compute Cloud (EC2)
▪ A web service that provides resizable compute capacity in the cloud. EC2 allows creating Virtual Machines (VMs) on demand.
▪ Pre-configured, templated Amazon Machine Images (AMIs) can be used to get running immediately. Creating and sharing your own AMI is also possible via the AWS Marketplace.
▪ Auto Scaling automatically scales capacity up seamlessly during demand spikes to maintain performance, and scales it down during demand lulls to minimize costs.
▪ Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances.
▪ EC2 provides tools to build failure-resilient applications by launching application instances in separate Availability Zones.
▪ Pay only for resources actually consumed, in instance-hours.
▪ VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back at any time.
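Elastic Load Balancing spreads incoming traffic across instances. A toy round-robin distributor sketches the core idea (the instance names are made up; real ELB also performs health checks on its targets):

```python
from itertools import cycle

# Minimal sketch of round-robin traffic distribution across EC2 instances,
# the simplest load-balancing strategy. Illustrative only.
def distribute(requests, instances):
    """Assign each request to the next instance in rotation."""
    targets = cycle(instances)
    return [(req, next(targets)) for req in requests]

assignments = distribute(["r1", "r2", "r3", "r4"], ["i-a", "i-b"])
print(assignments)  # [('r1', 'i-a'), ('r2', 'i-b'), ('r3', 'i-a'), ('r4', 'i-b')]
```

With two instances, alternating assignment halves the load on each, which is why adding instances behind a balancer scales throughput.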
EC2 Instances
▪ Micro Instances:
– Micro Instance (t1.micro): 613 MiB of memory, up to 2 ECUs (for short periodic bursts), EBS storage only, 32-bit or 64-bit platform.
▪ Standard Instances provide customers with a balanced set of resources and a low cost
platform.
– M1 Small Instance (Default) 1.7 GiB of memory, 1 EC2 Compute Unit (1 virtual core
with 1 EC2 Compute Unit), 160 GB of local instance storage, 32-bit or 64-bit platform
– M1 Medium Instance 3.75 GiB of memory, 2 EC2 Compute Units (1 virtual core with 2
EC2 Compute Units each), 410 GB of local instance storage, 32-bit or 64-bit platform
– M1 Large Instance 7.5 GiB of memory, 4 EC2 Compute Units (2 virtual cores with 2 EC2
Compute Units each), 850 GB of local instance storage, 64-bit platform
– M1 Extra Large Instance 15 GiB of memory, 8 EC2 Compute Units (4 virtual cores with
2 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform
– M3 Extra Large Instance 15 GiB of memory, 13 EC2 Compute Units (4 virtual cores
with 3.25 EC2 Compute Units each), EBS storage only, 64-bit platform
– M3 Double Extra Large Instance 30 GiB of memory, 26 EC2 Compute Units (8 virtual
cores with 3.25 EC2 Compute Units each), EBS storage only, 64-bit platform

One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz
2007 Opteron or 2007 Xeon processor
EC2 High Performance Instances
▪ High-Memory Instances:
– High-Memory Extra Large Instance 17.1 GiB memory, 6.5 ECUs (2 virtual cores with 3.25 EC2 Compute Units each), 420 GB of local instance storage, 64-bit platform
– High-Memory Double Extra Large Instance 34.2 GiB of memory, 13 EC2 Compute Units (4 virtual cores with 3.25 EC2 Compute Units each), 850 GB of local instance storage, 64-bit platform
– High-Memory Quadruple Extra Large Instance 68.4 GiB of memory, 26 EC2 Compute Units (8 virtual cores with 3.25 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform
▪ High-CPU Instances:
– High-CPU Medium Instance 1.7 GiB of memory, 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each), 350 GB of local instance storage, 32-bit or 64-bit platform
– High-CPU Extra Large Instance 7 GiB of memory, 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each), 1690 GB of local instance storage, 64-bit platform
▪ High Storage Instances:
– High Storage Eight Extra Large 117 GiB memory, 35 EC2 Compute Units, 24 × 2 TB of hard disk drive local instance storage, 64-bit platform, 10 Gigabit Ethernet
▪ High I/O Instances:
– High I/O Quadruple Extra Large 60.5 GiB memory, 35 EC2 Compute Units, 2 × 1024 GB of SSD-based local instance storage, 64-bit platform, 10 Gigabit Ethernet
EC2 Cluster Instances
▪ Cluster Compute Instances provide proportionally high CPU resources with
increased network performance and are well suited for High Performance
Compute (HPC) applications and other demanding network-bound applications.
– Cluster Compute Eight Extra Large 60.5 GiB memory, 88 EC2 Compute
Units, 3370 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet
▪ High Memory Cluster Instances provide proportionally high CPU and
memory resources with increased network performance, and are well suited for
memory-intensive applications including in-memory analytics, graph analysis,
and scientific computing.
– High Memory Cluster Eight Extra Large 244 GiB memory, 88 EC2 Compute Units, 240 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet
▪ Cluster GPU Instances provide general-purpose graphics processing units
(GPUs) with proportionally high CPU and increased network performance for
applications benefitting from highly parallelized processing, including HPC,
rendering and media processing applications.
– Cluster GPU Quadruple Extra Large 22 GiB memory, 33.5 EC2 Compute
Units, 2 x NVIDIA Tesla “Fermi” M2050 GPUs, 1690 GB of local instance
storage, 64-bit platform, 10 Gigabit Ethernet.
EC2 Payment methods
▪ On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments.
▪ Reserved Instances give you the option to make a low, one-time payment for each instance you want to reserve and in turn receive a significant discount on the hourly charge for that instance.
▪ Spot Instances allow customers to bid on unused Amazon EC2 capacity and run those instances for as long as their bid exceeds the current Spot Price.
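The trade-off between On-Demand and Reserved Instances can be framed as a break-even calculation: the upfront payment is recovered through the lower hourly rate. A sketch with hypothetical prices (not actual AWS rates):

```python
# A Reserved Instance trades an upfront payment for a lower hourly rate;
# the break-even point is where the hourly savings repay the upfront cost.
# All prices here are hypothetical, not actual AWS rates.
def breakeven_hours(upfront, reserved_hourly, on_demand_hourly):
    """Hours of use after which the Reserved Instance becomes cheaper."""
    return upfront / (on_demand_hourly - reserved_hourly)

# $300 upfront at $0.04/hour reserved vs. $0.10/hour on demand:
print(round(breakeven_hours(300, 0.04, 0.10)))  # 5000 hours
```

Below the break-even utilization, On-Demand is cheaper; above it, the reservation pays for itself, which is why Reserved Instances suit steady workloads.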
Storage is another AWS core service.
There are three broad categories of storage: instance store ("ephemeral"), Amazon EBS, and Amazon S3.
Instance store, or ephemeral storage, is temporary storage that is attached to the Amazon EC2 instance.
Amazon EBS is persistent, mountable storage, which can be mounted as a device to an Amazon EC2 instance. Amazon EBS can only be mounted to an Amazon EC2 instance within the same Availability Zone.
Similar to Amazon EBS, Amazon S3 is persistent storage; however, it can be accessed from anywhere.
[Figure: core AWS services — Storage: Amazon S3, Amazon EBS, Amazon EFS, Amazon Glacier; Compute: Amazon EC2; Networking: Amazon VPC; Identity: AWS IAM; Database: Amazon RDS, Amazon DynamoDB]
Amazon Elastic Block Store (EBS)
▪ Provides block-level storage volumes (1 GB to 1 TB) for use with Amazon EC2 instances.
– Multiple volumes can be mounted to the same instance.
– EBS volumes are network-attached, and persist independently from the life of an instance.
– Storage volumes behave like raw, unformatted block devices, allowing users to create a file system on top of Amazon EBS volumes, or use them in any other way you would use a block device (like a hard drive).
▪ EBS volumes are placed in a specific Availability Zone, and can then be attached to instances in that same Availability Zone.
▪ Each storage volume is automatically replicated within the same Availability Zone.
▪ EBS provides the ability to create point-in-time snapshots of volumes, which are persisted to Amazon S3.
– These snapshots can be used as the starting point for new Amazon EBS volumes, and protect data for long-term durability.
– The same snapshot can be used to instantiate as many volumes as you wish.
– These snapshots can be copied across AWS regions.
EBS Volumes
▪ Standard volumes offer storage for applications with moderate or bursty I/O requirements.
– Standard volumes deliver approximately 100 IOPS on average.
– They are well suited for use as boot volumes, where the burst capability provides fast instance start-up times.
▪ Provisioned IOPS volumes are designed to deliver predictable, high performance for I/O-intensive workloads such as databases.
– You specify an IOPS rate when creating a volume, and EBS provisions that rate for the lifetime of the volume.
– Amazon EBS currently supports up to 4000 IOPS per Provisioned IOPS volume.
– You can stripe multiple volumes together to deliver thousands of IOPS per EC2 instance.
▪ To enable your EC2 instances to fully utilize the IOPS provisioned on an EBS volume:
– Launch selected Amazon EC2 instance types as "EBS-Optimized" instances.
– EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Mbps and 1000 Mbps depending on the instance type used.
▪ EBS charges based on per GB-month AND per 1 million I/O requests.
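The striping point above is simple arithmetic: with each Provisioned IOPS volume capped at 4000 IOPS, hitting a higher target means striping several volumes together. A sketch, assuming the 4000-IOPS per-volume cap stated here:

```python
import math

# How many Provisioned IOPS volumes must be striped to reach a target
# IOPS rate, given the per-volume cap quoted in this material (4000).
def volumes_needed(target_iops, iops_per_volume=4000):
    """Smallest number of striped volumes that reaches the target IOPS."""
    return math.ceil(target_iops / iops_per_volume)

print(volumes_needed(10000))  # 3 volumes of 4000 IOPS each
```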
Block Storage vs Object Storage

Block Storage:
• Suitable for transactional databases, random read/write, and structured database storage.
• Data is divided and stored in evenly sized blocks.
• Data blocks do not contain metadata.
• The system only keeps an index of data blocks and doesn't care about the data in them.

Object Storage:
• Stores files as a whole and doesn't divide them.
• An object has its data and metadata with a unique ID.
• It cannot be mounted as a drive.
• The object's ID is globally unique, and the object can be retrieved globally.
Amazon EBS Volume Types

                          Solid-State Drives (SSD)               Hard Disk Drives (HDD)
                          General Purpose    Provisioned IOPS    Throughput-Optimized    Cold
Max volume size           16 TiB             16 TiB              16 TiB                  16 TiB
Max IOPS/volume           16,000             64,000              500                     250
Max throughput/volume     250 MiB/s          1,000 MiB/s         500 MiB/s               250 MiB/s
Amazon EBS Volume Types: Use Cases

General Purpose (SSD):
• Recommended for most workloads
• System boot volumes
• Virtual desktops
• Low-latency interactive apps
• Development and test environments

Provisioned IOPS (SSD):
• Critical business applications that require sustained IOPS performance, or more than 16,000 IOPS or 250 MiB/s of throughput per volume
• Large database workloads

Throughput-Optimized (HDD):
• Streaming workloads requiring consistent, fast throughput at a low price
• Big data
• Data warehouses
• Log processing
• Cannot be a boot volume

Cold (HDD):
• Throughput-oriented storage for large volumes of data that is infrequently accessed
• Scenarios where the lowest storage cost is important
• Cannot be a boot volume
Amazon Simple Storage Service (S3)
▪ Amazon S3 provides a simple web services interface that can be used to
store and retrieve any amount of data, at any time, from anywhere on the
web.
▪ Write, read, and delete objects containing 1 byte to 5 terabytes of
data each. The number of objects you can store is unlimited.
▪ Each object is stored in a bucket and retrieved via a unique, developer-
assigned key.
– A bucket can be stored in one of several Regions.
– Choose a Region to optimize for latency, minimize costs, or address
regulatory requirements.
– Objects stored in a Region never leave the Region unless you transfer
them out.
▪ Authentication mechanisms are provided to ensure that data is kept
secure from unauthorized access.
– Objects can be made private or public, and rights can be granted to
specific users.
▪ S3 charges based on per GB-month AND per I/O requests AND per data
modification requests.
Amazon Simple Storage Service (Amazon S3)
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
Customers of all sizes and industries can use it to store and protect any amount of data.
S3 is used for a variety of purposes such as websites, mobile applications, backup and restore, archiving, enterprise applications, IoT devices, and big data analytics.
Amazon S3 provides easy-to-use management features.
Features of S3
An unlimited number of objects, with write, read, and delete facilities, each object containing 1 byte to 5 terabytes of data.
Each object is stored in a bucket and retrieved via a unique, developer-assigned key.
Objects stored in a specific Region never leave the Region unless transferred out.
Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access.
Uses standards-based REST and SOAP interfaces to work with any Internet-development toolkit.
Built to be flexible so that protocol or functional layers can easily be added.
Provides functionality to simplify manageability of data through its lifetime.
Starting with S3
Launching an Amazon S3 service for the first time needs the following steps:
Sign up for AWS
Create an IAM (Identity and Access Management) user
Sign in as an IAM user
Create a bucket
Upload an object to a bucket
View an object
Delete objects and buckets
Sign Up with S3

To create an AWS account

Open https://portal.aws.amazon.com/billing/signup.
Follow the online instructions.
Create an IAM user with administrative privileges.
Create a bucket
To create a bucket
Sign in to the AWS, open S3 console https://console.aws.amazon.com/s3/
Choose Create bucket.
In Bucket name, enter a name for your bucket, subject to the following
constraints:
Be unique across all of Amazon S3.
Be between 3 and 63 characters long.
Not contain uppercase characters.
Start with a lowercase letter or number.
After you create the bucket, you can't change its name.
In Region, choose the AWS Region where you want the bucket to
reside.
In Bucket settings for Block Public Access, keep the values set to
the defaults.
Choose Create bucket.
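The naming constraints above can be expressed as a small check. This sketch covers only the rules listed here (length, lowercase, leading character), not every rule Amazon S3 enforces:

```python
import re

# Validate a bucket name against the constraints listed in this section.
# Real S3 applies additional rules (allowed characters, no IP-style names).
def valid_bucket_name(name):
    if not (3 <= len(name) <= 63):       # between 3 and 63 characters
        return False
    if name != name.lower():             # no uppercase characters
        return False
    # must start with a lowercase letter or number
    return bool(re.match(r"^[a-z0-9]", name))

print(valid_bucket_name("my-training-bucket-2024"))  # True
print(valid_bucket_name("My_Bucket"))                # False (uppercase)
```

Uniqueness across all of Amazon S3 can only be checked at creation time, so it is not part of this local check.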
Upload an object to a bucket
To upload an object to a bucket
In the Bucket list, select the bucket that you want to upload
your object to.
On the Overview tab for your bucket, choose Upload or Get
Started.
In the Upload dialog box, choose Add files.
Choose a file to upload, and then choose Open.
Choose Upload.

To download an object from a bucket
In the Buckets list, choose the name of the bucket that
you created.
In the Name list, choose the name of the object that
you uploaded.
For your selected object, the object overview panel
opens.
On the Overview tab, review information about your
object.
To view the object in your browser, choose Open.
To download the object to your computer,
choose Download.
Delete an object and Bucket
Delete an object.
In the Buckets list, choose the bucket that you want to
delete an object from.
In the Name list, select the check box for the object that
you want to delete.
Choose Actions, and then choose Delete.
In the Delete objects dialog box, verify the name of
the object, and choose Delete.

Delete your bucket.


To delete a bucket, in the Buckets list, select the bucket.
Choose Delete.
Securing Amazon S3 buckets and objects

• Newly created S3 buckets and objects are private and protected by default

• When use cases must share Amazon S3 data –


• Manage and control the data access
• Follow the principle of least privilege

• Tools and options for controlling access to Amazon S3 data –


• Block Public Access feature: It is enabled on new buckets by default, simple to manage
• IAM policies: A good option when the user can authenticate using IAM
• Bucket policies: You can define access to a specific object or bucket
• Access control lists (ACLs): A legacy access control mechanism
• S3 Access Points: You can configure access with names and permissions specific to each application
• Presigned URLs: You can grant time-limited access to others with temporary URLs
• AWS Trusted Advisor bucket permission check: A free feature
By default, all S3 buckets are private and can be accessed only by users who are explicitly granted access. It is essential that
you manage and control access to Amazon S3 data. AWS provides many tools and options for controlling access to your S3
buckets or objects, such as:

• Using Amazon S3 Block Public Access. These settings override any other policies or object permissions. Enable Block
Public Access for all buckets that you don't want to be publicly accessible. This feature provides a straightforward
method for avoiding unintended exposure of Amazon S3 data.

• Writing AWS Identity and Access Management (IAM) policies that specify the users or roles that can access specific
buckets and objects.

• Writing bucket policies that define access to specific buckets or objects. This option is typically used when the user or
system cannot authenticate by using IAM. Bucket policies can be configured to grant access across AWS accounts or to
grant public or anonymous access to Amazon S3 data. If bucket policies are used, they should be written carefully and
tested fully. You can specify a deny statement in a bucket policy to restrict access. Access will be restricted even if the
users have permissions that are granted in an identity-based policy that is attached to the users.

• Creating S3 Access Points. Access points are unique hostnames that enforce distinct permissions and network controls
for requests that are made through them. Customers with shared datasets can scale access for many applications by creating
individualized access points with names and permissions that are customized for each application.

• Setting access control lists (ACLs) on your buckets and objects. ACLs are less commonly used (ACLs predate IAM). If
you use ACLs, do not set access that is too open or permissive.

• AWS Trusted Advisor provides a bucket permission check feature. It is a useful tool for discovering if any of the buckets
in your account have permissions that grant global access.
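A bucket policy is a JSON document attached to the bucket. The sketch below builds one common pattern, a statement denying all non-HTTPS access; the bucket name is a placeholder, and as noted above, any real policy should be written carefully and tested fully:

```python
import json

# Build a bucket policy (as a plain dict) that denies any request not
# made over HTTPS, using the aws:SecureTransport condition key.
# "example-bucket" is a placeholder name for illustration.
def deny_insecure_transport_policy(bucket):
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }],
    }

print(json.dumps(deny_insecure_transport_policy("example-bucket"), indent=2))
```

Note the explicit Deny: as the text explains, a deny statement restricts access even for users whose identity-based policies would otherwise grant it.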
Three general approaches to configuring access

Configure the appropriate security settings for your use case on the bucket and objects.

[Figure: three access configurations]
• Default security settings: the bucket and its objects are private; only the owner has access, and anyone else is denied.
• Public access security settings: the owner opens the bucket to public access, so anyone can reach it.
• Access policy security settings: the owner applies an access policy that grants controlled access to specific users (e.g., User A and User B) while denying anyone else.
Consider encrypting objects in Amazon S3

• Encryption encodes data with a secret key, which makes it unreadable


• Only users who have the secret key can decode the data
• Optionally, use AWS Key Management Service (AWS KMS) to manage secret keys

• Server-side encryption
• On the bucket, enable this feature by selecting the Default encryption option
• Amazon S3 encrypts objects before it saves the objects to disk, and decrypts the objects when
you download them

• Client-side encryption
• Encrypt data on the client side and upload the encrypted data to Amazon S3
• In this case, you manage the encryption process
Amazon S3 benefits

Durability: ensures data is not lost; S3 Standard storage provides 11 9s (99.999999999%) of durability.
Availability: you can access your data when needed; the S3 Standard storage class is designed for four 9s (99.99%) of availability.
Scalability: offers virtually unlimited capacity; any single object can be 5 TB or less.
Security: offers fine-grained access control.
Performance: supported by many design patterns.
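The durability figure can be read as an expectation. Assuming the stated 99.999999999% annual durability per object:

```python
# "Eleven 9s" durability in expectation: the annual probability of losing
# any given object is 1 - 0.99999999999 = 1e-11.
def expected_losses_per_year(num_objects, durability=0.99999999999):
    """Expected number of objects lost per year at the given durability."""
    return num_objects * (1 - durability)

# Even with ten billion stored objects, the expectation is ~0.1 objects/year.
print(round(expected_losses_per_year(10_000_000_000), 3))  # 0.1
```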
Amazon Virtual Private Cloud (VPC)
❑ Amazon VPC lets you provision a logically isolated section of the Amazon Web Services (AWS) Cloud.
❑ You have complete control over your virtual networking environment, including:
– selection of your own IP address range,
– creation of subnets, and
– configuration of route tables and network gateways.
❑ VPC allows bridging to an on-site IT infrastructure through an encrypted VPN connection, with an extra charge per VPN connection-hour.
❑ There is no additional charge for using Amazon Virtual Private Cloud, aside from the normal Amazon EC2 usage charges.
Amazon Elastic MapReduce (EMR)
▪ Amazon EMR is a web service that makes it easy to quickly and
cost-effectively process vast amounts of data using Hadoop.
▪ Amazon EMR distributes the data and processing across a resizable
cluster of Amazon EC2 instances.
▪ With Amazon EMR you can launch a persistent cluster that stays up
indefinitely or a temporary cluster that terminates after the analysis is
complete.
▪ Amazon EMR supports a variety of Amazon EC2 instance types and
Amazon EC2 pricing options (OnDemand, Reserved, and Spot).
▪ When launching an Amazon EMR cluster (also called a "job flow"),
you choose how many and what type of Amazon EC2 Instances to
provision.
▪ The Amazon EMR price is in addition to the Amazon EC2 price.
▪ Amazon EMR is used in a variety of applications, including log
analysis, web indexing, data warehousing, machine learning, financial
analysis, scientific simulation, and bioinformatics.
Amazon Relational Database Service (RDS)
▪ Amazon RDS is a web service that makes it easy to set up, operate, and
scale a relational database in the cloud.
▪ Amazon RDS gives access to the capabilities of a familiar MySQL, Oracle
or Microsoft SQL Server database engine.
– Code, applications, and tools already used with existing databases can be
used with RDS.
▪ Amazon RDS automatically patches the database software and backs up the
database, storing the backups for a user-defined retention period and
enabling point-in-time recovery.
▪ Amazon RDS provides scaling the compute resources or storage capacity
associated with the Database Instance.
▪ Pay only for the resources actually consumed, based on the DB Instance
hours consumed, database storage, backup storage, and data transfer.
– On-Demand DB Instances let you pay for compute capacity by the hour
with no long-term commitments.
– Reserved DB Instances give the option to make a low, one-time payment
for each DB Instance and in turn receive a significant discount on the hourly
usage charge for that DB Instance.
Amazon DynamoDB
▪ DynamoDB is a fast, fully managed NoSQL database service that makes it simple and cost-effective to store and retrieve any amount of data, and serve any level of request traffic.
▪ All data items are stored on Solid State Drives (SSDs), and are replicated across three Availability Zones for high availability and durability.
▪ DynamoDB tables do not have fixed schemas, and each item may have a different number of attributes.
▪ DynamoDB has no upfront costs and implements a pay-as-you-go plan with a flat hourly rate based on the capacity reserved.
Amazon Elastic Beanstalk
▪ AWS Elastic Beanstalk provides a solution to quickly deploy and
manage applications in the AWS cloud.
▪ You simply upload your application, and Elastic Beanstalk
automatically handles the deployment details of capacity
provisioning, load balancing, auto-scaling, and application health
monitoring.
▪ Elastic Beanstalk leverages AWS services such as Amazon EC2,
Amazon S3, ….
▪ To ensure easy portability of your application, Elastic Beanstalk is
built using familiar software stacks such as:
– Apache HTTP Server for Node.js, PHP and Python
– Passenger for Ruby,
– IIS 7.5 for .NET
– Apache Tomcat for Java.
▪ There is no additional charge for Elastic Beanstalk - you pay only
for the AWS resources needed to store and run your applications.
Amazon Simple Queue Service (SQS)
Amazon Simple Queue Service (SQS) is a message queuing service.
It enables you to decouple and scale microservices, distributed systems, and serverless applications.
SQS eliminates the complexity and overhead associated with managing and operating message-oriented middleware.
Using SQS, you can send, store, and receive messages between software components at any volume.
SQS offers two types of message queues:
Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery.
SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
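The difference between at-least-once and exactly-once delivery can be sketched with a toy consumer that deduplicates redeliveries by message ID, roughly what FIFO queues do on your behalf (the message IDs here are made up):

```python
# Toy consumer turning at-least-once deliveries (duplicates possible)
# into exactly-once processing by tracking seen message IDs.
def process_exactly_once(deliveries):
    """deliveries: list of (message_id, body), possibly with duplicates."""
    seen, processed = set(), []
    for msg_id, body in deliveries:
        if msg_id in seen:
            continue  # duplicate redelivery: skip it
        seen.add(msg_id)
        processed.append(body)
    return processed

# The duplicate delivery of m1 is dropped; first-delivery order is kept.
print(process_exactly_once([("m1", "a"), ("m2", "b"), ("m1", "a"), ("m3", "c")]))
# ['a', 'b', 'c']
```

With a standard queue, consumers must be idempotent like this; with a FIFO queue, the service performs the deduplication.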
Unlimited queues and messages.
Payload Size: Message payloads can contain up to
256KB of text in any format.
Batches: Send, receive, or delete messages in batches of
up to 10 messages or 256KB.
Batches cost the same amount as single messages, meaning SQS can be even more cost-effective for customers that use batching.
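The batching saving is straightforward arithmetic: grouping up to 10 messages per request divides the number of billable requests by up to 10. A sketch:

```python
import math

# Each SQS request can carry up to 10 messages; batching therefore cuts
# the billable request count by up to 10x. Message counts are illustrative.
def request_count(messages, batch_size=10):
    """Number of requests needed to send the given number of messages."""
    return math.ceil(messages / batch_size)

print(request_count(1_000_000))                # 100000 requests, batched
print(request_count(1_000_000, batch_size=1))  # 1000000 requests, unbatched
```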
Long polling: Reduce extraneous polling to minimize
cost while receiving new messages as quickly as possible.
Retain messages in queues for up to 14 days.
Send and receive messages simultaneously.
Message locking: When a message is received, it becomes
“locked” while being processed. This keeps other computers from
processing the message simultaneously.
Queue sharing: Securely share Amazon SQS queues
anonymously or with specific AWS accounts.
Server-side encryption (SSE): Protect the contents of messages
in Amazon SQS queues using keys managed in the AWS Key
Management Service (AWS KMS). SSE encrypts messages as
soon as Amazon SQS receives them. The messages are stored in
encrypted form and Amazon SQS decrypts messages only when
they are sent to an authorized consumer.
Dead Letter Queues (DLQ): Handle messages that have not
been successfully processed by a consumer with Dead Letter
Queues.
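Message locking (the visibility timeout) and dead-letter redrive can be sketched together. The class below is a simplified in-memory model, not the SQS API; `max_receives` plays the role of the real queue's `maxReceiveCount` redrive setting.

```python
class QueueWithDlqSim:
    """Toy model: a message received but not deleted becomes visible again
    when its visibility timeout expires; after max_receives failed attempts
    it is moved to the dead-letter queue instead."""
    def __init__(self, max_receives=3):
        self.messages = []
        self.dead_letter = []
        self.max_receives = max_receives
        self._in_flight = None

    def send(self, body):
        self.messages.append({"body": body, "receive_count": 0})

    def receive(self):
        if not self.messages:
            return None
        msg = self.messages[0]
        msg["receive_count"] += 1       # message is now "locked"
        self._in_flight = msg
        return msg["body"]

    def delete(self):
        # Consumer processed the message successfully.
        self.messages.remove(self._in_flight)
        self._in_flight = None

    def visibility_timeout_expired(self):
        # Consumer never deleted the message: it becomes visible again,
        # unless it has exhausted max_receives -> dead-letter queue.
        msg = self._in_flight
        self._in_flight = None
        if msg["receive_count"] >= self.max_receives:
            self.messages.remove(msg)
            self.dead_letter.append(msg["body"])

q = QueueWithDlqSim(max_receives=2)
q.send("poison-message")
for _ in range(2):                  # two failed processing attempts
    q.receive()
    q.visibility_timeout_expired()
print(q.dead_letter)                # ['poison-message']
```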
Using Amazon SQS for the first time requires the following steps:
Sign up for AWS
Create an IAM user
Get your access key ID and secret access key
Create a Queue
Send a Message
Receive a Message
Delete a Message
Delete a Queue
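The same lifecycle can be traced with a toy client. The method names below deliberately mirror those of boto3's SQS client (`create_queue`, `send_message`, `receive_message`, `delete_message`, `delete_queue`), but the class is a local stand-in that makes no network calls.

```python
class LocalSqsClient:
    """In-memory stand-in whose method names mirror boto3's SQS client;
    it only demonstrates the order of calls, not the real service."""
    def __init__(self):
        self._queues = {}

    def create_queue(self, QueueName):
        self._queues[QueueName] = []
        return {"QueueUrl": f"local://{QueueName}"}

    def send_message(self, QueueUrl, MessageBody):
        self._queues[QueueUrl.split("://")[1]].append(MessageBody)

    def receive_message(self, QueueUrl):
        msgs = self._queues[QueueUrl.split("://")[1]]
        return {"Messages": [{"Body": msgs[0]}]} if msgs else {}

    def delete_message(self, QueueUrl):
        self._queues[QueueUrl.split("://")[1]].pop(0)

    def delete_queue(self, QueueUrl):
        del self._queues[QueueUrl.split("://")[1]]

# The same create -> send -> receive -> delete message -> delete queue
# sequence as the walkthrough above:
sqs = LocalSqsClient()
url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]
sqs.send_message(QueueUrl=url, MessageBody="hello")
body = sqs.receive_message(QueueUrl=url)["Messages"][0]["Body"]
sqs.delete_message(QueueUrl=url)
sqs.delete_queue(QueueUrl=url)
print(body)  # hello
```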
To create an AWS account
Open https://portal.aws.amazon.com/billing/signup.
Follow the online instructions.
Create an IAM user with administrative privileges.
Sign in to the Amazon SQS console.
Choose Create New Queue.
On the Create New Queue page, ensure that you're in the correct
region and then type the Queue Name. The name of a FIFO
queue must end with the .fifo suffix.
Standard is selected by default.
To create your queue with the default parameters, choose Quick-
Create Queue.
Your new queue is created and selected in the queue list.
The Queue Type column helps you distinguish standard queues
from FIFO queues at a glance.
Your queue's Name, URL, and ARN are displayed on
the Details tab.
Send a Message
From the queue list, select the queue that you've created.
From Queue Actions, select Send a Message.
The Send a Message to QueueName dialog box is displayed.
View/Delete Messages
From the queue list, select the queue that you have created.
From Queue Actions, select View/Delete Messages.
Choose Start Polling for messages.
Amazon SQS begins to poll the messages in the queue. The dialog box
displays a message from the queue. A progress bar at the bottom of the
dialog box displays the status of the message's visibility timeout.
When the progress bar is filled, the visibility timeout has expired and
the message becomes visible to consumers again.
Before the visibility timeout expires, select the message that you want
to delete and then choose Delete Message.
In the Delete Messages dialog box, confirm that the message you want
to delete is checked and choose Yes, Delete Checked Messages.
The selected message is deleted.
Select Close.
Delete a Queue
From the queue list, select the queue that you have created.
From Queue Actions, select Delete Queue.
AWS with RESTful API management
Representational State Transfer (REST) is a software architecture
that imposes conditions on how an API should work. REST was
initially created as a guideline to manage communication on a
complex network like the internet.
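REST's uniform-interface condition — resources named by URIs and manipulated through standard HTTP methods — can be illustrated with a minimal dispatcher. The `users` resource and handler logic here are invented for illustration:

```python
users = {"1": {"name": "Alice"}}

def handle(method, path):
    """Dispatch (HTTP method, URI) pairs the way a RESTful API would:
    the URI names the resource, the method names the operation."""
    resource, _, user_id = path.lstrip("/").partition("/")
    if resource != "users":
        return 404, None
    if method == "GET" and user_id in users:
        return 200, users[user_id]
    if method == "DELETE" and user_id in users:
        users.pop(user_id)
        return 204, None
    return 404, None

print(handle("GET", "/users/1"))     # (200, {'name': 'Alice'})
print(handle("DELETE", "/users/1"))  # (204, None)
print(handle("GET", "/users/1"))     # (404, None) -- resource is gone
```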
Amazon API Gateway is a fully managed service that makes it easy for
developers to create, publish, maintain, monitor, and secure APIs at any
scale.
Using API Gateway, you can create RESTful APIs, as well as WebSocket
APIs that enable real-time two-way communication applications.
With API Gateway, you can:
•Provide users with high-speed performance for both API requests and
responses.
•Authorize access to your APIs with AWS Identity and Access
Management (IAM)
•Run multiple versions of the same API simultaneously with API
Gateway to quickly iterate, test, and release new versions.
•Monitor performance metrics and information about API calls, data
latency, and error rates from the API Gateway dashboard.
Getting started with API Gateway
You create a serverless API. Serverless APIs let you focus on your
applications, instead of spending time provisioning and managing servers.
Step 1: Create a Lambda function
Step 2: Create an HTTP API
Step 3: Test your API
(Optional) Step 4: Clean up
When you invoke your HTTP API, API Gateway routes the request to your
Lambda function. Lambda runs the function and returns a response to
API Gateway, which then returns the response to you.
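This round trip can be sketched locally. The handler below has the shape AWS Lambda expects for an HTTP API proxy integration (an event in, a dict with `statusCode` and `body` out), while `api_gateway_invoke` is a hypothetical stand-in for the gateway itself:

```python
import json

def lambda_handler(event, context):
    """Handler in the shape Lambda expects for an HTTP API (proxy
    integration): it receives the request event and returns a dict
    with a statusCode and a JSON-encoded body."""
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {"statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"})}

def api_gateway_invoke(handler, query):
    """Toy stand-in for API Gateway: build the event, invoke the
    function, and translate its return value into an HTTP response."""
    event = {"queryStringParameters": query}
    result = handler(event, context=None)
    return result["statusCode"], json.loads(result["body"])

status, body = api_gateway_invoke(lambda_handler, {"name": "student"})
print(status, body)  # 200 {'message': 'Hello, student!'}
```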
Amazon CloudWatch
▪ Amazon CloudWatch provides monitoring for AWS cloud
resources and the applications customers run on AWS.
▪ Amazon CloudWatch lets you programmatically retrieve your
monitoring data, view graphs, and set alarms to help you
troubleshoot, spot trends, and take automated action based on the
state of your cloud environment.
▪ Amazon CloudWatch enables you to monitor your AWS resources in
real time, including:
– Amazon EC2 instances,
– Amazon EBS volumes,
– Elastic Load Balancers,
– Amazon RDS DB instances.
▪ Metrics such as CPU utilization, latency, and request counts are
provided automatically for these AWS resources.
▪ Customers can also supply their own custom application and
system metrics, such as memory usage, transaction volumes, or
error rates.
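The alarm behavior can be illustrated with a toy evaluator: like a CloudWatch alarm, it enters the ALARM state when a metric breaches a threshold for a number of consecutive evaluation periods. The function and sample data are illustrative, not the CloudWatch API:

```python
def alarm_state(datapoints, threshold, periods):
    """Toy version of a CloudWatch alarm: return 'ALARM' once the metric
    exceeds the threshold for `periods` consecutive datapoints."""
    consecutive = 0
    for value in datapoints:
        consecutive = consecutive + 1 if value > threshold else 0
        if consecutive >= periods:
            return "ALARM"
    return "OK"

cpu = [35, 50, 85, 90, 92]   # hypothetical CPUUtilization samples (%)
print(alarm_state(cpu, threshold=80, periods=3))  # ALARM
print(alarm_state(cpu, threshold=95, periods=3))  # OK
```

An automated action (such as an Auto Scaling policy) would then be attached to the ALARM state transition.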
Amazon Simple Workflow Service (SWF)
▪ Amazon SWF is a task coordination and state management
service for cloud applications.
▪ Using Amazon SWF, you structure the various processing
steps in an application that runs across one or more machines
as a set of “tasks.”
▪ Amazon SWF manages dependencies between the tasks,
schedules the tasks for execution, and runs any logic that
needs to be executed in parallel.
▪ The service also tracks the tasks’ progress.
▪ As the business requirements change, Amazon SWF makes
it easy to change application logic without having to worry
about the underlying state machinery and flow control.
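The core idea — tasks with dependencies, scheduled for execution and run in parallel where possible — can be sketched with the standard library's topological sorter. The workflow tasks below (a small image-processing pipeline) are invented for illustration:

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (its predecessors).
deps = {
    "resize_image": {"download"},
    "extract_text": {"download"},
    "publish":      {"resize_image", "extract_text"},
}

ts = TopologicalSorter(deps)
ts.prepare()
schedule = []
while ts.is_active():
    ready = list(ts.get_ready())    # tasks whose dependencies are done
    schedule.append(sorted(ready))  # each batch could run in parallel
    ts.done(*ready)

print(schedule)
# [['download'], ['extract_text', 'resize_image'], ['publish']]
```

SWF plays the role of the scheduler here: it tracks which tasks are complete and dispatches the ones whose dependencies are satisfied.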
AWS Free Usage Tier
https://aws.amazon.com/free/
https://docs.aws.amazon.com/index.html