
Amazon Web Services

Amazon Web Services (AWS)

Amazon Web Services (AWS) is a secure cloud services
platform, offering compute power, database storage,
content delivery and other functionality to help
businesses scale and grow.
Amazon Web Services (AWS)

In the simplest terms, AWS allows you to do the following:
 Run web and application servers in the cloud to host
dynamic websites.
 Securely store all your files in the cloud so you can
access them from anywhere.
 Use managed databases like MySQL, PostgreSQL, Oracle, or
SQL Server to store information.
 Deliver static and dynamic files quickly around the world
using a Content Delivery Network (CDN).
 Send bulk email to your customers.
Amazon Web Services: EC2

Amazon Elastic Compute Cloud (EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a
web service that provides secure, resizable compute
capacity in the cloud. It is designed to make web-
scale cloud computing easier for developers.
Amazon EC2’s simple web service interface allows
you to obtain and configure capacity with minimal
friction. It provides you with complete control of
your computing resources and lets you run on
Amazon’s proven computing environment.
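As a concrete sketch of "obtaining capacity" through the EC2 API, the
parameters below describe a single small instance launch. The AMI ID,
instance type, and tag values are placeholder examples, not values from
this document.

```python
# Sketch of the parameters for launching an EC2 instance, the basic way
# compute capacity is obtained through the EC2 API. The AMI ID and
# instance type below are placeholder examples.

launch_params = {
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI ID
    "InstanceType": "t3.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "TagSpecifications": [
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "demo-web-server"}],
        }
    ],
}

# Launched with boto3 (assuming AWS credentials are configured):
# import boto3
# boto3.client("ec2").run_instances(**launch_params)

print(launch_params["InstanceType"])  # t3.micro
```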
Amazon Web Services: EC2

Amazon EC2 offers the broadest and deepest compute
platform, with a choice of processor, storage, networking,
operating system, and purchase model. AWS offers the
fastest processors in the cloud and is the only cloud with
400 Gbps Ethernet networking. Amazon has the most powerful
GPU instances for machine learning training and graphics
workloads. More SAP, HPC, machine learning, and Windows
workloads run on AWS than on any other cloud.
Amazon Web Services: EC2

Building Blocks of EC2


Amazon EC2 offers the broadest and deepest
choice of instances, built on the latest compute,
storage, and networking technologies and
engineered for high performance and security.
Amazon Web Services: EC2
Faster innovation and increased security with AWS Nitro
System
The AWS Nitro System is the underlying platform for
Amazon’s next generation of EC2 instances that offloads
many of the traditional virtualization functions to dedicated
hardware and software to deliver high performance, high
availability, and high security while also reducing
virtualization overhead. The Nitro System is a rich collection
of building blocks that can be assembled in many different
ways, giving us the flexibility to design and rapidly deliver
new EC2 instance types with an ever-broadening selection of
compute, storage, memory, and networking options.
Amazon Web Services: EC2

Choice of processors
A choice of latest generation Intel Xeon, AMD
EPYC, and AWS Graviton CPUs enables you to
find the best balance of performance and price
for your workloads. EC2 instances powered
by NVIDIA GPUs and AWS Inferentia are also
available for workloads that require accelerated
computing such as machine learning, gaming,
and graphic intensive applications.
Amazon Web Services: EC2

High performance storage


Amazon Elastic Block Store (EBS) provides easy
to use, high performance block storage for use
with Amazon EC2. Amazon EBS is available in a
range of volume types that allow you to optimize
storage performance and cost for your
workloads. Many EC2 instance types also come
with options for local NVMe SSD storage for
applications that require low latency.
Amazon Web Services: EC2

Enhanced networking
AWS is the first and only cloud to offer 400 Gbps
enhanced Ethernet networking for compute
instances. Enhanced networking enables you to get
significantly higher packet per second (PPS), lower
network jitter, and lower latency. For high performance
computing (HPC) applications, Elastic Fabric Adapter
is a network interface for Amazon EC2 instances that
offers low-latency, high-bandwidth interconnect
between compute nodes to help scale applications to
thousands of cores.
Amazon Web Services: EC2

Choice of purchasing model


Amazon offers a choice of multiple purchasing models:
On-Demand, Spot Instances, and Savings Plans. With Spot
Instances, you can save up to 90% for fault-tolerant
workloads. With Savings Plans, you can save up to 72%
with committed usage and flexibility across EC2, Fargate,
and Lambda. You can also optimize your costs with
instance recommendations built into EC2 with AWS
Compute Optimizer, or through tools such as Cost Explorer.
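As a rough illustration of the discounts above, the sketch below compares
monthly costs under the three models at their maximum quoted discounts.
The $0.10/hour On-Demand rate is a made-up example, not a real AWS price.

```python
# Illustrative comparison of EC2 purchasing models using the maximum
# discounts quoted above (up to 90% for Spot, up to 72% for Savings Plans).
# The $0.10/hour On-Demand rate is a hypothetical example.

ON_DEMAND_RATE = 0.10           # $/hour, hypothetical
SPOT_DISCOUNT = 0.90            # up to 90% off On-Demand
SAVINGS_PLAN_DISCOUNT = 0.72    # up to 72% off with committed usage

def monthly_cost(hourly_rate, hours=730):
    """Approximate monthly cost for an instance running continuously."""
    return round(hourly_rate * hours, 2)

spot_rate = ON_DEMAND_RATE * (1 - SPOT_DISCOUNT)
savings_plan_rate = ON_DEMAND_RATE * (1 - SAVINGS_PLAN_DISCOUNT)

print(monthly_cost(ON_DEMAND_RATE))      # 73.0
print(monthly_cost(spot_rate))           # 7.3
print(monthly_cost(savings_plan_rate))   # 20.44
```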
Amazon Web Services: EC2
Pricing Options:

 On-Demand Instances: per-hour as well as per-second billing
 Spot Instances: per-hour as well as per-second billing
 Reserved Instances: per-hour as well as per-second billing
 Dedicated Hosts: per-hour billing only
Amazon Web Services: EC2
 On-Demand Instances:

In this model, based on the instances you choose, you pay
for compute capacity per hour or per second (per-second
only for Linux instances), and no upfront payments are
needed. You can increase or decrease your compute capacity
to meet the demands of your application and pay only for
the instances you use. This model is suitable for
developing and testing applications with short-term or
unpredictable workloads. On-Demand Instances are
recommended for users who prefer low-cost, flexible EC2
instances without upfront payments or long-term
commitments.
Amazon Web Services: EC2
 Spot Instances:

Amazon EC2 Spot Instances are unused EC2 capacity in the
AWS cloud. Spot Instances are available at up to a 90%
discount compared to On-Demand prices. The Spot price of
Amazon EC2 Spot Instances fluctuates periodically based on
supply and demand. Spot Instances support both per-hour
and per-second (Linux only) billing. Applications that
have flexible start and end times, and users with urgent
computing needs for large-scale dynamic workloads, can
choose Amazon EC2 Spot Instances.
Amazon Web Services: EC2
 Reserved Instances:

Amazon EC2 Reserved Instances provide you with a discount
of up to 75% compared to On-Demand Instance pricing. They
also provide a capacity reservation when used in a
specific Availability Zone. For applications that have
predictable workloads, Reserved Instances can provide
significant savings compared to On-Demand Instances, and
the predictability of usage ensures compute capacity is
available when needed. Customers can commit to using EC2
over a 1- or 3-year term to reduce their total computing
costs.
Amazon Web Services: EC2
 Dedicated Hosts:

A Dedicated Host is a physical EC2 server dedicated to
your use. Dedicated Hosts can help you reduce costs by
allowing you to use your existing server-bound software
licenses, such as Windows Server and SQL Server, and also
help you meet compliance requirements. Customers who
choose Dedicated Hosts pay the On-Demand price for every
hour the host is active in the account. Dedicated Hosts
support only per-hour billing, not per-second billing.
Amazon Web Services: EC2
 Per-second billing scheme:

Today, many customers use Amazon EC2 to do a lot of work
in a short time, sometimes minutes or even seconds. In
2017, AWS announced per-second billing for usage of Linux
instances across On-Demand, Reserved, and Spot Instances.
The minimum unit of time that is charged is one minute
(60 seconds); after the first minute, usage is billed per
second. If you start and then stop an instance after 30
seconds, you are still charged for 60 seconds, not 30.
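The billing rule above can be sketched in a few lines: a 60-second minimum
charge, then per-second billing after the first minute. The hourly rate
used is a hypothetical example.

```python
# Sketch of EC2 per-second billing for Linux instances as described above:
# a one-minute (60-second) minimum, then per-second billing afterwards.
# The hourly rate is a hypothetical example, not a real AWS price.

def billed_seconds(run_seconds):
    """Per-second billing with a one-minute (60-second) minimum."""
    return max(60, run_seconds)

def charge(run_seconds, hourly_rate=0.10):
    """Dollar charge for a run, rounded to 6 decimal places."""
    return round(billed_seconds(run_seconds) * hourly_rate / 3600, 6)

print(billed_seconds(30))   # 60  -- stopped after 30s, still billed 60s
print(billed_seconds(90))   # 90  -- past the first minute, billed per second
```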
Amazon Web Services: S3

Amazon Simple Storage Service (S3)


Amazon Simple Storage Service (Amazon S3) is an object storage
service that offers industry-leading scalability, data availability,
security, and performance. This means customers of all sizes and
industries can use it to store and protect any amount of data for a
range of use cases, such as data lakes, websites, mobile applications,
backup and restore, archive, enterprise applications, IoT devices,
and big data analytics. Amazon S3 provides easy-to-use
management features so you can organize your data and configure
finely-tuned access controls to meet your specific business,
organizational, and compliance requirements. Amazon S3 is
designed for 99.999999999% (11 9's) of durability, and stores data
for millions of applications for companies all around the world.
Amazon Web Services: S3

Benefits
› Industry-leading performance, scalability, availability,
and durability
› Wide range of cost-effective storage classes
› Unmatched security, compliance, and audit
capabilities
› Easily manage data and access controls
› Query-in-place services for analytics
› Most supported cloud storage service
Amazon Web Services: S3
Industry-leading performance, scalability, availability,
and durability
Scale your storage resources up and down to meet
fluctuating demands, without upfront investments or
resource procurement cycles. Amazon S3 is designed for
99.999999999% (11 9’s) of data durability because it
automatically creates and stores copies of all S3 objects
across multiple systems. This means your data is available
when needed and protected against failures, errors, and
threats. Amazon S3 also delivers strong read-after-write
consistency automatically, at no cost, and without changes
to performance or availability.
Amazon Web Services: S3
Wide range of cost-effective storage classes
Save costs without sacrificing performance by storing
data across the S3 Storage Classes, which support
different data access levels at corresponding rates. You
can use S3 Storage Class Analysis to discover data that
should move to a lower-cost storage class based on
access patterns, and configure an S3 Lifecycle policy to
execute the transfer. You can also store data with
changing or unknown access patterns in S3 Intelligent-
Tiering, which tiers objects based on changing access
patterns and automatically delivers cost savings.
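The lifecycle transfer described above can be expressed as a lifecycle
configuration. The sketch below is the request payload for boto3's
`put_bucket_lifecycle_configuration`; the prefix, day counts, and bucket
name are hypothetical examples.

```python
# Sketch of an S3 Lifecycle configuration that moves objects to lower-cost
# storage classes as they age, as described above. Prefix, day counts, and
# bucket name are hypothetical examples.

lifecycle = {
    "Rules": [
        {
            "ID": "archive-old-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},      # archive
            ],
            "Expiration": {"Days": 365},  # expire at end of lifecycle
        }
    ]
}

# Applied with boto3 (assuming AWS credentials are configured):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle)

print([t["StorageClass"] for t in lifecycle["Rules"][0]["Transitions"]])
# ['STANDARD_IA', 'GLACIER']
```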
Amazon Web Services: S3
Unmatched security, compliance, and audit
capabilities
Store your data in Amazon S3 and secure it from unauthorized
access with encryption features and access management tools.
S3 is the only object storage service that allows you to block
public access to all of your objects at the bucket or the account
level with S3 Block Public Access. S3 maintains compliance
programs, such as PCI-DSS, HIPAA/HITECH, FedRAMP, EU
Data Protection Directive, and FISMA, to help you meet
regulatory requirements. S3 integrates with Amazon Macie to
discover and protect your sensitive data. AWS also supports
numerous auditing capabilities to monitor access requests to
your S3 resources.
Amazon Web Services: S3

Easily manage data and access controls


S3 gives you robust capabilities to manage access, cost,
replication, and data protection. S3 Access Points make it
easy to manage data access with specific permissions for
your applications using a shared data set. S3 Replication
manages data replication within the region or to other
regions. S3 Batch Operations helps manage large scale
changes across billions of objects. S3 Storage Lens
delivers organization-wide visibility into object storage
usage and activity trends.
Amazon Web Services: S3

Query-in-place services for analytics


Run big data analytics across your S3 objects (and
other data sets in AWS) with query-in-place
services. Use Amazon Athena to query S3 data
with standard SQL expressions and Amazon
Redshift Spectrum to analyze data that is stored
across your AWS data warehouses and S3
resources. You can also use S3 Select to retrieve
subsets of object data, instead of the entire object,
and improve query performance by up to 400%.
Amazon Web Services: S3

Most supported cloud storage service


Store and protect your data in Amazon S3 by working
with a partner from the AWS Partner Network (APN) —
the largest community of technology and consulting
cloud services providers. The APN recognizes migration
partners that transfer data to Amazon S3 and storage
partners that offer S3-integrated solutions for primary
storage, backup and restore, archive, and disaster
recovery. You can also purchase an AWS-integrated
solution directly from the AWS Marketplace, which lists
over 250 storage-specific offerings.
Amazon Web Services: S3
S3 Features
Amazon S3 has various features you can use to organize and
manage your data in ways that support specific use cases,
enable cost efficiencies, enforce security, and meet
compliance requirements. Data is stored as objects within
resources called “buckets”, and a single object can be up
to 5 terabytes in size. S3 features include capabilities
to append metadata tags to objects, move and store data
across the S3 Storage Classes, configure and enforce data
access controls, secure data against unauthorized users,
run big data analytics, monitor data at the object and
bucket levels, and view storage usage and activity trends
across your organization. Objects can be accessed through
S3 Access Points or directly through the bucket hostname.
Amazon Web Services: S3
Storage management and monitoring
Amazon S3’s flat, non-hierarchical structure and various
management features help customers of all sizes and industries
organize their data in ways that are valuable to their businesses
and teams. All objects are stored in S3 buckets and can be
organized with shared names called prefixes. You can also append
up to 10 key-value pairs called S3 object tags to each object,
which can be created, updated, and deleted throughout an object’s
lifecycle. To keep track of objects and their respective tags,
buckets, and prefixes, you can use an S3 Inventory report that lists
your stored objects within an S3 bucket or with a specific prefix,
and their respective metadata and encryption status. S3 Inventory
can be configured to generate reports on a daily or a weekly basis.
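The tagging described above can be sketched as a tag set of up to 10
key-value pairs attached to an object. The tag names and boto3 call target
below are hypothetical examples.

```python
# Sketch of an S3 object tag set. As noted above, up to 10 key-value
# pairs can be attached to each object; the names below are examples.

MAX_OBJECT_TAGS = 10

tag_set = [
    {"Key": "project", "Value": "analytics"},
    {"Key": "owner", "Value": "data-team"},
    {"Key": "classification", "Value": "internal"},
]
assert len(tag_set) <= MAX_OBJECT_TAGS  # S3 enforces this limit

# Applied with boto3 (assuming AWS credentials are configured):
# import boto3
# boto3.client("s3").put_object_tagging(
#     Bucket="example-bucket", Key="logs/2024/app.log",
#     Tagging={"TagSet": tag_set})

print(len(tag_set))  # 3
```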
Amazon Web Services: S3
 Storage management
With S3 bucket names, prefixes, object tags, and S3 Inventory, you have a
range of ways to categorize and report on your data, and subsequently can
configure other S3 features to take action. S3 Batch Operations makes it
simple, whether you store thousands of objects or a billion, to manage your
data in Amazon S3 at any scale. With S3 Batch Operations, you can copy
objects between buckets, replace object tag sets, modify access controls, and
restore archived objects from Amazon S3 Glacier, with a single S3 API
request or a few clicks in the Amazon S3 Management Console. You can also
use S3 Batch Operations to run AWS Lambda functions across your objects to
execute custom business logic, such as processing data or transcoding image
files. To get started, specify a list of target objects by using an S3 Inventory
report or by providing a custom list, and then select the desired operation from
a pre-populated menu. When an S3 Batch Operation request is done, you will
receive a notification and a completion report of all changes made.
Amazon Web Services: S3
Amazon S3 also supports features that help maintain data
version control, prevent accidental deletions, and replicate data
to the same or different AWS Region. With S3 Versioning, you
can easily preserve, retrieve, and restore every version of an
object stored in Amazon S3, which allows you to recover from
unintended user actions and application failures. To prevent
accidental deletions, enable Multi-Factor Authentication
(MFA) Delete on an S3 bucket. If you try to delete an object
stored in an MFA Delete-enabled bucket, it will require two
forms of authentication: your AWS account credentials and the
concatenation of a valid serial number, a space, and the six-
digit code displayed on an approved authentication device.
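The second authentication factor above is passed as a single string: the
device serial number, a space, and the six-digit code. The serial number
and code below are placeholder examples.

```python
# Sketch of the MFA parameter format described above: a valid serial
# number, a space, and the six-digit code from the authentication device.
# Serial number and code are placeholder examples.

serial_number = "arn:aws:iam::123456789012:mfa/admin"  # example serial
token_code = "123456"                                  # example 6-digit code
mfa = f"{serial_number} {token_code}"

# Used with boto3 when deleting from an MFA Delete-enabled bucket
# (assuming AWS credentials are configured):
# import boto3
# boto3.client("s3").delete_object(
#     Bucket="example-bucket", Key="important.txt",
#     VersionId="example-version-id", MFA=mfa)

print(mfa.split(" ")[-1])  # 123456
```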
Amazon Web Services: S3
With S3 Replication, you can replicate objects (and their
respective metadata and object tags) to one or more
destination buckets into the same or different AWS Regions
for reduced latency, compliance, security, disaster recovery,
and other use cases. S3 Cross-Region Replication (CRR) can
be configured to replicate from a source S3 bucket to one or
more destination buckets in different AWS
Regions. Amazon S3 Same-Region Replication (SRR)
replicates objects between buckets in the same AWS
Region. Amazon S3 Replication Time Control (S3 RTC) helps
you meet compliance requirements for data replication by
providing an SLA and visibility into replication times.
Amazon Web Services: S3
You can also enforce write-once-read-many (WORM) policies with S3 Object
Lock. This S3 management feature blocks object version deletion during a
customer-defined retention period so that you can enforce retention policies
as an added layer of data protection or to meet compliance obligations. You
can migrate workloads from existing WORM systems into Amazon S3, and
configure S3 Object Lock at the object- and bucket-levels to prevent object
version deletions prior to a pre-defined Retain Until Date or Legal Hold Date.
Objects with S3 Object Lock retain WORM protection, even if they are
moved to different storage classes with an S3 Lifecycle policy. To track what
objects have S3 Object Lock, you can refer to an S3 Inventory report that
includes the WORM status of objects. S3 Object Lock can be configured in
one of two modes. When deployed in Governance mode, AWS accounts with
specific IAM permissions are able to remove S3 Object Lock from objects. If
you require stronger immutability in order to comply with regulations, you
can use Compliance Mode. In Compliance Mode, the protection cannot be
removed by any user, including the root account.
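A default retention rule in Compliance mode, as described above, can be
sketched as the payload for boto3's `put_object_lock_configuration`. The
retention period and bucket name are example values.

```python
# Sketch of an S3 Object Lock default retention rule in Compliance mode,
# as described above. The retention period is an example value.

object_lock = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # cannot be removed by any user, incl. root
            "Days": 365,           # retention period, example value
        }
    },
}

# Applied with boto3 (assuming AWS credentials are configured):
# import boto3
# boto3.client("s3").put_object_lock_configuration(
#     Bucket="example-bucket", ObjectLockConfiguration=object_lock)

print(object_lock["Rule"]["DefaultRetention"]["Mode"])  # COMPLIANCE
```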
Amazon Web Services: S3
 Storage monitoring
In addition to these management capabilities, you can use S3 features and other
AWS services to monitor and control how your S3 resources are being used.
You can apply tags to S3 buckets in order to allocate costs across multiple
business dimensions (such as cost centers, application names, or owners), and
then use AWS Cost Allocation Reports to view usage and costs aggregated by
the bucket tags. You can also use Amazon CloudWatch to track the operational
health of your AWS resources and configure billing alerts that are sent to you
when estimated charges reach a user-defined threshold. Another AWS
monitoring service is AWS CloudTrail, which tracks and reports on bucket-
level and object-level activities. You can configure S3 Event Notifications to
trigger workflows, alerts, and invoke AWS Lambda when a specific change is
made to your S3 resources. S3 Event Notifications can be used to automatically
transcode media files as they are uploaded to Amazon S3, process data files as
they become available, or synchronize objects with other data stores.
Amazon Web Services: S3
Storage analytics and insights
S3 Storage Lens
S3 Storage Lens delivers organization-wide visibility into object
storage usage, activity trends, and makes actionable recommendations
to improve cost-efficiency and apply data protection best practices. S3
Storage Lens is the first cloud storage analytics solution to provide a
single view of object storage usage and activity across hundreds, or
even thousands, of accounts in an organization, with drill-downs to
generate insights at the account, bucket, or even prefix level. Drawing
from more than 14 years of experience helping customers optimize
their storage, S3 Storage Lens analyzes organization-wide metrics to
deliver contextual recommendations to find ways to reduce storage
costs and apply best practices on data protection.
Amazon Web Services: S3
S3 Storage Class Analysis
Amazon S3 Storage Class Analysis analyzes storage access
patterns to help you decide when to transition the right data
to the right storage class. This Amazon S3 feature observes
data access patterns to help you determine when to
transition less frequently accessed storage to a lower-cost
storage class. You can use the results to help improve your
S3 Lifecycle policies. You can configure storage class
analysis to analyze all the objects in a bucket. Or, you can
configure filters to group objects together for analysis by
common prefix, by object tags, or by both prefix and tags.
Amazon Web Services: S3

Storage classes
With Amazon S3, you can store data across a
range of different S3 Storage Classes: S3
Standard, S3 Intelligent-Tiering, S3 Standard-
Infrequent Access (S3 Standard-IA), S3 One
Zone-Infrequent Access (S3 One Zone-
IA), Amazon S3 Glacier (S3 Glacier), Amazon
S3 Glacier Deep Archive (S3 Glacier Deep
Archive), and S3 Outposts.
Amazon Web Services: S3
Every S3 Storage Class supports a specific data access level at a
corresponding cost or geographic location. This means you can store mission-critical
production data in S3 Standard for frequent access, save costs by storing
infrequently accessed data in S3 Standard-IA or S3 One Zone-IA, and archive
data at the lowest costs in the archival storage classes — S3 Glacier and S3
Glacier Deep Archive. If you have data residency requirements that can’t be
met by an existing AWS Region, you can use the S3 Outposts storage class to
store your S3 data on-premises using S3 on Outposts. You can use S3 Storage
Class Analysis to monitor access patterns across objects to discover data that
should be moved to lower-cost storage classes. Then you can use this
information to configure an S3 Lifecycle policy that makes the data transfer. S3
Lifecycle policies can also be used to expire objects at the end of their
lifecycles. You can store data with changing or unknown access patterns in S3
Intelligent-Tiering, which automatically moves your data based on changing
access patterns between two low latency access tiers optimized for frequent and
infrequent access, and when subsets of objects become rarely accessed over
long periods of time, you can activate two archive access tiers designed for
asynchronous access that are optimized for archive access.
Amazon Web Services: S3
Access management and security
Access management
To protect your data in Amazon S3, by default, users only have access to
the S3 resources they create. You can grant access to other users by using
one or a combination of the following access management
features: AWS Identity and Access Management (IAM) to create users
and manage their respective access; Access Control Lists (ACLs) to
make individual objects accessible to authorized users; bucket policies to
configure permissions for all objects within a single S3 bucket; S3
Access Points to simplify managing data access to shared data sets by
creating access points with names and permissions specific to each
application or sets of applications; and Query String Authentication to
grant time-limited access to others with temporary URLs. Amazon S3
also supports Audit Logs that list the requests made against your S3
resources for complete visibility into who is accessing what data.
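One of the mechanisms listed above, a bucket policy, is simply a JSON
document. The sketch below grants read-only access to a single IAM user;
the bucket name, account ID, and user name are hypothetical.

```python
# Sketch of a bucket policy granting read-only access to all objects in a
# single bucket, one of the access management features listed above. The
# bucket name, account ID, and user name are hypothetical.

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyForOneUser",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/analyst"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

# Applied with boto3 (assuming AWS credentials are configured):
# import boto3, json
# boto3.client("s3").put_bucket_policy(
#     Bucket="example-bucket", Policy=json.dumps(bucket_policy))

print(bucket_policy["Statement"][0]["Effect"])  # Allow
```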
Amazon Web Services: S3

Security
Amazon S3 offers flexible security features to block
unauthorized users from accessing your data. Use
VPC endpoints to connect to S3 resources from
your Amazon Virtual Private Cloud (Amazon VPC).
Amazon S3 supports both server-side encryption (with
three key management options) and client-side
encryption for data uploads. Use S3 Inventory to
check the encryption status of your S3 objects
(see storage management for more information on S3
Inventory).
Amazon Web Services: S3

S3 Block Public Access is a set of security controls that ensures S3 buckets
and objects do not have public access. With a few clicks in the Amazon S3
Management Console, you can apply the S3 Block Public Access settings to
all buckets within your AWS account or to specific S3 buckets. Once the
settings are applied to an AWS account, any existing or new buckets and
objects associated with that account inherit the settings that prevent public
access. S3 Block Public Access settings override other S3 access
permissions, making it easy for the account administrator to enforce a “no
public access” policy regardless of how an object is added, how a bucket is
created, or if there are existing access permissions. S3 Block Public Access
controls are auditable, provide a further layer of control, and use AWS
Trusted Advisor bucket permission checks, AWS CloudTrail logs, and
Amazon CloudWatch alarms. You should enable Block Public Access for
all accounts and buckets that you do not want publicly accessible.
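The Block Public Access feature above consists of four settings, all of
which should be enabled for buckets that must stay private. The sketch
below is the payload for boto3's `put_public_access_block`; the bucket
name is a hypothetical example.

```python
# Sketch of the four S3 Block Public Access settings, all enabled, as the
# text above recommends for accounts and buckets that should stay private.

public_access_block = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # restrict access to public buckets
}

# Applied account-wide or per bucket with boto3 (assuming credentials):
# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="example-bucket",
#     PublicAccessBlockConfiguration=public_access_block)

print(all(public_access_block.values()))  # True
```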
Amazon Web Services: S3

Using S3 Access Points that are restricted to a
Virtual Private Cloud (VPC), you can easily
firewall your S3 data within your private
network. Additionally, you can use AWS Service
Control Policies to require that any new S3
Access Point in your organization is restricted to
VPC-only access.
Amazon Web Services: S3

Access Analyzer for S3 is a feature that monitors your
bucket access policies, ensuring that the policies
provide only the intended access to your S3 resources.
Access Analyzer for S3 evaluates your bucket access
policies and enables you to discover and swiftly
remediate buckets with potentially unintended access.
When reviewing results that show potentially shared
access to a bucket, you can Block All Public Access to
the bucket with a single click in the S3 Management
console. For auditing purposes, Access Analyzer for S3
findings can be downloaded as a CSV report.
Amazon Web Services: S3
You can use Amazon Macie to discover and protect sensitive data
stored in Amazon S3. Macie automatically gathers a complete S3
inventory and continually evaluates every bucket to alert on any
publicly accessible buckets, unencrypted buckets, or buckets
shared or replicated with AWS accounts outside of your
organization. Then, Macie applies machine learning and pattern
matching techniques to the buckets you select to identify and
alert you to sensitive data, such as personally identifiable
information (PII). As security findings are generated, they are
pushed out to Amazon CloudWatch Events, making it easy to
integrate with existing workflow systems and to trigger
automated remediation with services like AWS Step Functions to
take action like closing a public bucket or adding resource tags.
Amazon Web Services: S3

Query in place
Amazon S3 has a built-in feature and complementary
services that query data without needing to copy and
load it into a separate analytics platform or data
warehouse. This means you can run big data analytics
directly on your data stored in Amazon S3. S3 Select is
an S3 feature designed to increase query performance by
up to 400%, and reduce querying costs as much as 80%.
It works by retrieving a subset of an object’s data (using
simple SQL expressions) instead of the entire object,
which can be up to 5 terabytes in size.
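An S3 Select request as described above pairs a simple SQL expression with
input and output formats. The sketch below shows the parameters for
boto3's `select_object_content`; the bucket, key, and column names are
hypothetical.

```python
# Sketch of an S3 Select request that retrieves a subset of a CSV object
# with a simple SQL expression, instead of downloading the whole object.
# Bucket, key, and column names are hypothetical examples.

select_params = {
    "Bucket": "example-bucket",
    "Key": "data/orders.csv",
    "ExpressionType": "SQL",
    "Expression": "SELECT s.order_id, s.total FROM s3object s "
                  "WHERE s.total > '100'",
    "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
    "OutputSerialization": {"JSON": {}},
}

# Executed with boto3 (assuming AWS credentials are configured):
# import boto3
# response = boto3.client("s3").select_object_content(**select_params)

print(select_params["Expression"].startswith("SELECT"))  # True
```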
Amazon Web Services: S3
Amazon S3 is also compatible with AWS analytics services
Amazon Athena and Amazon Redshift Spectrum. Amazon
Athena queries your data in Amazon S3 without needing to
extract and load it into a separate service or platform. It uses
standard SQL expressions to analyze your data, delivers
results within seconds, and is commonly used for ad hoc
data discovery. Amazon Redshift Spectrum also runs SQL
queries directly against data at rest in Amazon S3, and is
more appropriate for complex queries and large data sets (up
to exabytes). Because Amazon Athena and Amazon Redshift
share a common data catalog and data formats, you can use
them both against the same data sets in Amazon S3.
Amazon Web Services: S3

Performance
Amazon S3 provides industry leading performance for cloud
object storage. Amazon S3 supports parallel requests, which
means you can scale your S3 performance by the factor of
your compute cluster, without making any customizations to
your application. Performance scales per prefix, so you can
use as many prefixes as you need in parallel to achieve the
required throughput. There are no limits to the number of
prefixes. Amazon S3 performance supports at least 3,500
requests per second to add data and 5,500 requests per second
to retrieve data. Each S3 prefix can support these request
rates, making it simple to increase performance significantly.
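Because the per-prefix request rates above apply independently to each
prefix, aggregate throughput scales linearly with the number of prefixes
read in parallel, as this sketch shows.

```python
# Sketch of how S3 request rates scale per prefix, using the figures above:
# at least 3,500 write and 5,500 read requests per second per prefix.

READS_PER_PREFIX = 5500   # GET/HEAD requests per second per prefix
WRITES_PER_PREFIX = 3500  # requests per second per prefix to add data

def aggregate_read_rate(num_prefixes):
    """Parallel reads scale linearly with the number of prefixes used."""
    return num_prefixes * READS_PER_PREFIX

print(aggregate_read_rate(1))   # 5500
print(aggregate_read_rate(10))  # 55000
```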
Amazon Web Services: EBS

Amazon Elastic Block Store


Amazon Elastic Block Store (EBS) is an easy to use,
high-performance, block-storage service designed
for use with Amazon Elastic Compute Cloud (EC2)
for both throughput and transaction intensive
workloads at any scale. A broad range of workloads,
such as relational and non-relational databases,
enterprise applications, containerized applications,
big data analytics engines, file systems, and media
workflows are widely deployed on Amazon EBS.
Amazon Web Services: EBS

You can choose from six different volume types
to balance optimal price and performance. You
can achieve single-digit-millisecond latency for
high-performance database workloads such as
SAP HANA or gigabyte per second throughput
for large, sequential workloads such as Hadoop.
You can change volume types, tune performance,
or increase volume size without disrupting your
critical applications, so you have cost-effective
storage when you need it.
Amazon Web Services: EBS

Designed for mission-critical systems, EBS
volumes are replicated within an Availability
Zone (AZ) and can easily scale to petabytes of
data. Also, you can use EBS Snapshots with
automated lifecycle policies to back up your
volumes in Amazon S3, while ensuring
geographic protection of your data and business
continuity.
Amazon Web Services: EBS

Benefits
Performance for any workload
EBS volumes are performant for your most
demanding workloads, including mission-critical
applications such as SAP, Oracle, and Microsoft
products. SSD-backed options include a volume
designed for high performance applications and a
general-purpose volume that offers strong
price/performance ratio for most workloads.
Amazon Web Services: EBS

Customers who want to drive higher performance can
attach their EBS volumes to Amazon EC2 R5b instances to
get up to 60 Gbps bandwidth and 260K IOPS (input/output
operations per second) of performance, the fastest block
storage performance on EC2. For large, sequential
workloads such as big data analytics engines, log
processing, and data warehousing, customers can use
HDD-backed volumes. Use Fast Snapshot Restore (FSR) to
instantly receive full performance when creating an EBS
volume from a snapshot.
Amazon Web Services: EBS

Highly available and durable

Amazon EBS architecture offers reliability for mission-critical
applications. EBS volumes are designed to protect against
failures by replicating within the Availability Zone (AZ),
offering 99.999% availability. EBS offers a high-durability
volume (io2) for customers that need 99.999% durability,
especially for their business-critical applications. All other
EBS volumes are designed to deliver 99.8% - 99.9%
durability. For simple and robust backup, use EBS
Snapshots with Amazon Data Lifecycle Manager (DLM)
policies to automate snapshot management.
Amazon Web Services: EBS
Cost-effective
EBS offers six different volumes at various price
points and performance benchmarks, enabling you to
optimize costs and invest in a precise level of storage
for your application needs. Options range from highly
cost-effective dollar-per-gigabyte volumes to high-
performance volumes with high IOPS and high
throughput designed for mission-critical workloads.
Additionally, EBS offers backups using EBS Snapshots
that are incremental and save on storage costs by not
duplicating data.
Amazon Web Services: EBS
Easy to Use
Amazon EBS volumes are easy to create, use, encrypt,
and protect. Elastic Volumes capability allows you to
increase storage, tune performance up and down, and
change volume types without any disruption to your
workloads. EBS Snapshots allow you to easily take
backups of your volumes for geographic protection of
your data. Data Lifecycle Manager (DLM) is an easy-
to-use tool for automating snapshot management
without any additional overhead or cost.
Amazon Web Services: EBS
Virtually unlimited scale
Amazon EBS enables you to increase storage
without any disruption to your critical workloads,
build applications that require as little as a single
GB of storage, or scale up to petabytes of data —
all in just a few clicks. Snapshots can be used to
quickly restore new volumes across a region's
Availability Zones, enabling rapid scale.
Amazon Web Services: EBS
Secure
EBS is built to be secure for data compliance.
Newly-created EBS volumes can be encrypted
by default with a single setting in your account.
EBS volumes support encryption of data at rest,
data in transit, and all volume backups. EBS
encryption is supported by all volume types,
includes built-in key management infrastructure,
and has zero impact on performance.
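As a sketch of what an encrypted-volume request looks like, the parameters below set `Encrypted` explicitly; when encryption by default is turned on for the account (boto3 exposes `ec2.enable_ebs_encryption_by_default()`), new volumes are encrypted even without it. The KMS key alias is a placeholder assumption.

```python
# Sketch: request parameters for an encrypted EBS volume. With boto3 this
# dict would be passed to ec2.create_volume(**params).
def encrypted_volume_params(az, size_gib, kms_key_id=None):
    params = {
        "AvailabilityZone": az,
        "Size": size_gib,       # GiB
        "VolumeType": "gp3",
        "Encrypted": True,      # covers data at rest, data in transit, and snapshots
    }
    if kms_key_id:              # omit to use the account's default aws/ebs key
        params["KmsKeyId"] = kms_key_id
    return params

params = encrypted_volume_params("us-east-1a", 100)
```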
Amazon Web Services: EBS
Amazon EBS features
Amazon EBS allows you to create storage volumes and
attach them to Amazon EC2 instances. Once attached,
you can create a file system on top of these volumes,
run a database, or use them in any other way you
would use block storage. Amazon EBS volumes are
placed in a specific Availability Zone where they are
automatically replicated to protect you from the failure
of a single component. All EBS volume types offer
durable snapshot capabilities and are designed for
99.999% availability.
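The create-and-attach workflow described above comes down to two API calls. The instance and volume IDs below are placeholders; with boto3 you would call `ec2.create_volume(**create)` and `ec2.attach_volume(**attach)`, then format and mount the device on the host (for example `mkfs -t xfs /dev/sdf && mount /dev/sdf /data`).

```python
# Sketch: create a volume in an Availability Zone, then attach it to an
# EC2 instance in that same AZ (volumes cannot cross AZs).
create = {
    "AvailabilityZone": "us-east-1a",         # the volume lives in one AZ
    "Size": 50,                               # GiB
    "VolumeType": "gp3",
}

attach = {
    "InstanceId": "i-0123456789abcdef0",      # placeholder instance ID
    "VolumeId": "vol-0123456789abcdef0",      # placeholder: ID returned by create_volume
    "Device": "/dev/sdf",                     # device name exposed to the instance
}
```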
Amazon Web Services: EBS
Amazon EBS features
Amazon EBS provides a range of options that allow
you to optimize storage performance and cost for
your workload. These options are divided into two
major categories: SSD-backed storage for
transactional workloads, such as databases and boot
volumes (performance depends primarily on IOPS),
and HDD-backed storage for throughput intensive
workloads, such as MapReduce and log processing
(performance depends primarily on MB/s).
Amazon Web Services: EBS
Amazon EBS features
SSD-backed volumes include the highest performance
Provisioned IOPS SSD (io2 and io1) for latency-
sensitive transactional workloads and General Purpose
SSD (gp3 and gp2) that balance price and
performance for a wide variety of transactional data.
HDD-backed volumes include Throughput Optimized
HDD (st1) for frequently accessed, throughput
intensive workloads and the lowest cost Cold HDD
(sc1) for less frequently accessed data.
Amazon Web Services: EBS
Amazon EBS features
Elastic Volumes is a feature of Amazon EBS that
allows you to dynamically increase capacity, tune
performance, and change the type of live
volumes with no downtime or performance
impact. This allows you to easily right-size your
deployment and adapt to performance changes.
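An Elastic Volumes change is a single request against a live volume. The volume ID and the performance numbers below are illustrative; with boto3 this dict would be passed to `ec2.modify_volume(**req)` while the volume stays attached and in use.

```python
# Sketch: grow a live volume, switch it to gp3, and tune its performance,
# all without detaching it or stopping the instance.
def modify_volume_request(volume_id, size=None, volume_type=None,
                          iops=None, throughput=None):
    request = {"VolumeId": volume_id}
    if size is not None:
        request["Size"] = size               # GiB; volumes can grow, never shrink
    if volume_type is not None:
        request["VolumeType"] = volume_type
    if iops is not None:
        request["Iops"] = iops
    if throughput is not None:
        request["Throughput"] = throughput   # MiB/s, gp3 volumes only
    return request

req = modify_volume_request("vol-0123456789abcdef0",
                            size=500, volume_type="gp3",
                            iops=6000, throughput=250)
```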
Amazon SimpleDB
Amazon SimpleDB is a highly available NoSQL data
store that offloads the work of database
administration. Developers simply store and query
data items via web services requests and Amazon
SimpleDB does the rest. Unbound by the strict
requirements of a relational database, Amazon
SimpleDB is optimized to provide high availability
and flexibility, with little or no administrative burden.
Behind the scenes, Amazon SimpleDB creates and
manages multiple geographically distributed replicas
of your data automatically to enable high availability
and data durability.
Amazon SimpleDB
The service charges you only for the resources
actually consumed in storing your data and
serving your requests. You can change your data
model on the fly, and data is automatically
indexed for you. With Amazon SimpleDB, you
can focus on application development without
worrying about infrastructure provisioning, high
availability, software maintenance, schema and
index management, or performance tuning.
Amazon SimpleDB
NoSQL Databases
NoSQL databases are purpose built for specific
data models and have flexible schemas for
building modern applications. NoSQL databases
are widely recognized for their ease of
development, functionality, and performance at
scale.
Amazon SimpleDB
NoSQL (nonrelational) Database
NoSQL databases use a variety of data models
for accessing and managing data. These types of
databases are optimized specifically for
applications that require large data volume, low
latency, and flexible data models, which are
achieved by relaxing some of the data
consistency restrictions of other databases.
Amazon SimpleDB
Consider the example of modeling the schema for a simple book database:
 In a relational database, a book record is often disassembled (or
“normalized”) and stored in separate tables, and relationships are defined
by primary and foreign key constraints. In this example, the Books table
has columns for ISBN, Book Title, and Edition Number, the Authors table
has columns for AuthorID and Author Name, and finally the Author-
ISBN table has columns for AuthorID and ISBN. The relational model is
designed to enable the database to enforce referential integrity between
tables in the database, normalized to reduce the redundancy, and generally
optimized for storage.
 In a NoSQL database, a book record is usually stored as a JSON document.
For each book, the ISBN, Book Title, Edition Number, Author Name,
and AuthorID are stored as attributes in a single document. In this model,
data is optimized for intuitive development and horizontal scalability.
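The contrast described in the two bullets above can be sketched in a few lines of Python. The ISBN, titles, and names are made up for illustration: the relational model splits one book across three linked tables, while the document model keeps everything in a single JSON record.

```python
import json

# Relational modeling: three normalized tables linked by primary/foreign keys.
books       = [{"ISBN": "978-0000000000", "Title": "Example Book", "Edition": 2}]
authors     = [{"AuthorID": 1, "AuthorName": "A. Writer"}]
author_isbn = [{"AuthorID": 1, "ISBN": "978-0000000000"}]

# NoSQL modeling: one self-contained, denormalized document per book.
book_document = {
    "ISBN": "978-0000000000",
    "Title": "Example Book",
    "Edition": 2,
    "Authors": [{"AuthorID": 1, "AuthorName": "A. Writer"}],
}

doc = json.dumps(book_document)  # ready to store as a single item
```

Reading a book back from the document model is one lookup; the relational model needs a join across the three tables.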
Amazon SimpleDB
NoSQL Database Utility
NoSQL databases are a great fit for many modern applications such
as mobile, web, and gaming that require flexible, scalable, high-
performance, and highly functional databases to provide great user
experiences.
 Flexibility: NoSQL databases generally provide flexible schemas
that enable faster and more iterative development. The flexible data
model makes NoSQL databases ideal for semi-structured and
unstructured data.
 Scalability: NoSQL databases are generally designed to scale out by
using distributed clusters of hardware instead of scaling up by
adding expensive and robust servers. Some cloud providers handle
these operations behind-the-scenes as a fully managed service.
Amazon SimpleDB
NoSQL Database Utility
 High-performance: NoSQL databases are optimized
for specific data models and access patterns that
enable higher performance than trying to accomplish
similar functionality with relational databases.
 Highly functional: NoSQL databases provide highly
functional APIs and data types that are purpose built
for each of their respective data models.
Amazon SimpleDB
Benefits
Low touch
The service allows you to focus fully on value-
added application development, rather than
arduous and time-consuming database
administration. Amazon SimpleDB automatically
manages infrastructure provisioning, hardware
and software maintenance, replication and
indexing of data items, and performance tuning.
Amazon SimpleDB
Highly available
Amazon SimpleDB automatically creates multiple geographically
distributed copies of each data item you store. This provides high
availability and durability – in the unlikely event that one replica fails,
Amazon SimpleDB can failover to another replica in the system.
Flexible
As your business changes or application evolves, you can easily reflect these
changes in Amazon SimpleDB without worrying about breaking a rigid
schema or needing to refactor code – simply add another attribute to your
Amazon SimpleDB data set when needed. You can also choose between
consistent or eventually consistent read requests, gaining the flexibility to
match read performance (latency and throughput) and consistency
requirements to the demands of your application, or even disparate parts
within your application.
Amazon SimpleDB
Simple to use
Amazon SimpleDB provides streamlined access
to the store and query functions that traditionally
are achieved using a relational database cluster –
while leaving out other complex, often-unused
database operations. The service allows you to
quickly add data and easily retrieve or edit that
data through a simple set of API calls.
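That "simple set of API calls" is essentially a write and a query. The shapes below follow boto3's `sdb` client; the domain, item, and attribute names are illustrative. You would call `sdb.put_attributes(**put)` to write and `sdb.select(**query)` to read.

```python
# Sketch: SimpleDB's two core operations as request-parameter dicts.
put = {
    "DomainName": "books",                 # a domain is roughly a table
    "ItemName": "978-0000000000",          # item key (placeholder ISBN)
    "Attributes": [
        {"Name": "Title",  "Value": "Example Book", "Replace": True},
        {"Name": "Author", "Value": "A. Writer",    "Replace": True},
    ],
}

query = {
    # Every attribute is automatically indexed, so no schema or index
    # management is needed before querying.
    "SelectExpression": "select * from `books` where Author = 'A. Writer'",
    "ConsistentRead": True,                # or False for eventually consistent reads
}
```

The `ConsistentRead` flag is the per-request switch between consistent and eventually consistent reads mentioned earlier.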
Amazon SimpleDB
Designed for use with other Amazon Web Services
Amazon SimpleDB is designed to integrate easily with other
AWS services such as Amazon S3 and EC2, providing the
infrastructure for creating web-scale applications. For example,
developers can run their applications in Amazon EC2 and store
their data objects in Amazon S3. Amazon SimpleDB can then be
used to query the object metadata from within the application in
Amazon EC2 and return pointers to the objects stored in
Amazon S3. Developers can also use Amazon SimpleDB with
Amazon RDS for applications that have relational and non-
relational database needs. Data transferred between Amazon
SimpleDB and other Amazon Web Services within the same
Region is free of charge.
Amazon SimpleDB
Secure
Amazon SimpleDB provides an HTTPS endpoint
to ensure secure, encrypted communication
between your application or client and your
domain. In addition, through integration with
AWS Identity and Access Management, you can
establish user or group-level control over access
to specific SimpleDB domains and operations.
Amazon SimpleDB
Inexpensive
Amazon SimpleDB passes on to you the financial
benefits of Amazon’s scale. You pay only for
resources you actually consume. For Amazon
SimpleDB, this means data store reads and writes
are charged by compute resources consumed by
each operation, and you aren’t billed for compute
resources when you aren’t actively using them
(i.e. making requests).
Amazon Relational Database Service (RDS)
Amazon Relational Database Service (Amazon
RDS) makes it easy to set up, operate, and scale
a relational database in the cloud. It provides
cost-efficient and resizable capacity while
automating time-consuming administration tasks
such as hardware provisioning, database setup,
patching and backups. It frees you to focus on
your applications so you can give them the fast
performance, high availability, security and
compatibility they need.
Amazon Relational Database
Service (RDS)
Amazon RDS is available on several database
instance types, optimized for memory,
performance, or I/O, and provides you with six
familiar database engines to choose from,
including Amazon Aurora, PostgreSQL, MySQL,
MariaDB, Oracle Database, and SQL Server.
You can use the AWS Database Migration
Service to easily migrate or replicate your
existing databases to Amazon RDS.
Amazon Relational Database
Service (RDS)
Benefits
Easy to administer
Amazon RDS makes it easy to go from project
conception to deployment. Use the Amazon RDS
Management Console, the AWS RDS Command-
Line Interface, or simple API calls to access the
capabilities of a production-ready relational
database in minutes. No need for infrastructure
provisioning, and no need for installing and
maintaining database software.
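Those "simple API calls" look roughly like the request below. The identifier, credentials, and sizes are placeholders; with boto3 you would pass this dict to `rds.create_db_instance(**params)` and RDS handles the provisioning, installation, and patching.

```python
# Sketch: minimal parameters for launching a managed MySQL instance on RDS.
params = {
    "DBInstanceIdentifier": "example-db",      # placeholder name
    "Engine": "mysql",                         # or postgres, mariadb, oracle-*, sqlserver-*
    "DBInstanceClass": "db.t3.micro",
    "AllocatedStorage": 20,                    # GiB
    "MasterUsername": "admin",
    "MasterUserPassword": "change-me-please",  # placeholder; keep real secrets out of code
    "MultiAZ": True,                           # synchronous standby in another AZ
}
```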
Amazon Relational Database
Service (RDS)
Highly scalable
You can scale your database's compute and
storage resources with only a few mouse clicks or
an API call, often with no downtime. Many
Amazon RDS engine types allow you to launch
one or more Read Replicas to offload read traffic
from your primary database instance.
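Launching a Read Replica is a single call against an existing primary. The identifiers below are placeholders; with boto3 this dict would be passed to `rds.create_db_instance_read_replica(**replica)`.

```python
# Sketch: create a Read Replica to offload read traffic from the primary.
replica = {
    "DBInstanceIdentifier": "example-db-replica-1",  # the new replica's name
    "SourceDBInstanceIdentifier": "example-db",      # the primary to replicate from
    "DBInstanceClass": "db.t3.micro",                # may differ from the primary's class
}
```

Applications then point read-only queries at the replica's endpoint while writes continue to go to the primary.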
Amazon Relational Database
Service (RDS)
Available and durable
Amazon RDS runs on the same highly reliable
infrastructure used by other Amazon Web Services. When
you provision a Multi-AZ DB Instance, Amazon RDS
synchronously replicates the data to a standby instance in a
different Availability Zone (AZ). Amazon RDS has many
other features that enhance reliability for critical
production databases, including automated backups,
database snapshots, and automatic host replacement,
which automatically replaces the compute instance
powering your deployment in the event of a
hardware failure.
Amazon Relational Database
Service (RDS)
Fast
Amazon RDS supports the most demanding
database applications. You can choose between
two SSD-backed storage options: one optimized
for high-performance OLTP applications, and the
other for cost-effective general-purpose use. In
addition, Amazon Aurora provides performance
on par with commercial databases at 1/10th the
cost.
Amazon Relational Database
Service (RDS)
Secure
Amazon RDS makes it easy to control network
access to your database. Amazon RDS also lets
you run your database instances in Amazon
Virtual Private Cloud (Amazon VPC), which
enables you to isolate your database instances
and to connect to your existing IT infrastructure
through an industry-standard encrypted IPsec
VPN. Many Amazon RDS engine types offer
encryption at rest and encryption in transit.
Amazon Relational Database
Service (RDS)
Inexpensive
You pay very low rates and only for the resources
you actually consume. In addition, you benefit
from the option of On-Demand pricing with no
up-front or long-term commitments, or even
lower hourly rates via Amazon’s Reserved
Instance pricing.
Amazon Relational Database
Service (RDS)
Amazon RDS Features
Amazon RDS is a managed relational database service
that provides you six familiar database engines to
choose from, including Amazon Aurora, MySQL,
MariaDB, Oracle, Microsoft SQL Server, and
PostgreSQL. This means that the code, applications,
and tools you already use today with your existing
databases can be used with Amazon RDS. Amazon
RDS handles routine database tasks such as
provisioning, patching, backup, recovery, failure
detection, and repair.
Amazon Relational Database
Service (RDS)
 Amazon RDS makes it easy to use replication to enhance
availability and reliability for production workloads. Using
the Multi-AZ deployment option, you can run mission-critical
workloads with high availability and built-in automated fail-
over from your primary database to a synchronously
replicated secondary database. Using Read Replicas, you can
scale out beyond the capacity of a single database deployment
for read-heavy database workloads.
 As with all Amazon Web Services, there are no up-front
investments required, and you pay only for the resources you
use.
