AWS SOC 3
• Identifying the Amazon Web Services System (System) and describing the boundaries of the
System, which are presented in Attachment A
• Identifying our principal service commitments and system requirements
• Identifying the risks that would threaten the achievement of our principal service commitments
and system requirements that are the objectives of our System, which are presented in
Attachment A
• Identifying, designing, implementing, operating, and monitoring effective controls over the
System to mitigate risks that threaten the achievement of the principal service commitments and
system requirements
• Selecting the trust services categories and associated criteria that are the basis of our assertion
We confirm to the best of our knowledge and belief that the controls over the System were effective
throughout the period October 1, 2023 to September 30, 2024, to provide reasonable assurance that the
service commitments and system requirements were achieved based on the criteria relevant to security,
availability, confidentiality, and privacy set forth in the AICPA’s TSP section 100, 2017 Trust Services
Criteria for Security, Availability, Processing Integrity, Confidentiality, and Privacy.
Since 2006, Amazon Web Services (AWS) has provided flexible, scalable and secure IT infrastructure to
businesses of all sizes around the world. With AWS, customers can deploy solutions in a cloud computing
environment that provides compute power, storage, and other application services over the Internet as
their business needs demand. AWS affords businesses the flexibility to employ the operating systems,
application programs, and databases of their choice.
More information about the in-scope services can be found at the following web address:
https://aws.amazon.com/compliance/services-in-scope/
The scope of locations covered in this report includes the supporting data centers located in the following
regions:
* This location is a Dedicated Local Zone and may not be available to all customers.
Infrastructure
AWS operates the cloud infrastructure that customers may use to provision computing resources such as
processing and storage. The AWS infrastructure includes the facilities, network, and hardware as well as
some operational software (e.g., host operating system, virtualization software, etc.) that support the
provisioning and use of these resources. The AWS infrastructure is designed and managed in accordance
with security compliance standards and AWS best practices.
Amazon AppFlow
Amazon AppFlow is an integration service that enables customers to securely transfer data between
Software-as-a-Service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS
services like Amazon S3 and Amazon Redshift. With AppFlow, customers can run data flows at enterprise
scale at the frequency they choose - on a schedule, in response to a business event, or on demand.
Customers are able to configure data transformation capabilities like filtering and validation to generate
rich, ready-to-use data as part of the flow itself, without additional steps.
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using
standard SQL. Athena is serverless, so there is no infrastructure for customers to manage. Athena is highly
available and executes queries using compute resources across multiple facilities and multiple devices in
each facility. Amazon Athena uses Amazon S3 as its underlying data store, making customers’ data highly
available and durable.
Amazon Augmented AI (excludes Public Workforce and Vendor Workforce for all features)
Amazon Augmented AI (A2I) is a machine learning service which makes it easy to build the workflows
required for human review. Amazon A2I brings human review to all developers, removing the
undifferentiated heavy lifting associated with building human review systems or managing large numbers
of human reviewers, whether the application runs on AWS or not. The public and vendor workforce options of this
service are not in scope for purposes of this report.
Amazon Bedrock
Amazon Bedrock is a fully managed service that makes foundation models (FMs) from Amazon and leading
Artificial Intelligence (AI) companies available through an API, so customers can choose from various FMs
to find the model that's best suited for their use case. With the Amazon Bedrock serverless experience,
customers can quickly get started, easily experiment with FMs, privately customize FMs with their own
data, and seamlessly integrate and deploy them into customer applications using AWS tools and
capabilities. Agents for Amazon Bedrock are fully managed and make it easier for developers to create
generative-AI applications that can deliver up-to-date answers based on proprietary knowledge sources
and complete tasks for a wide range of use cases. The foundation models from Amazon and leading AI
companies that Amazon Bedrock makes available are not included in the design of the controls described
in this SOC report.
Amazon Braket
Amazon Braket, the quantum computing service of AWS, is designed to help accelerate scientific research
and software development for quantum computing. Amazon Braket provides everything customers need
to build, test, and run quantum programs on AWS, including access to different types of quantum
computers and classical circuit simulators and a unified development environment for building and
executing quantum circuits. Amazon Braket also manages the classical infrastructure required for the
execution of hybrid quantum-classical algorithms. When customers choose to interact with quantum
computers provided by third-parties, Amazon Braket anonymizes the content, so that only content
necessary to process the quantum task is sent to the quantum hardware provider. No AWS account
information is shared and customer data is not stored outside of AWS.
Amazon CloudFront (excludes content delivery through Amazon CloudFront Embedded Points of
Presence)
Amazon CloudFront is a fast content delivery network (CDN) web service that securely delivers data,
videos, applications, and APIs to customers globally with low latency and high transfer speeds. CloudFront
offers the most advanced security capabilities, including field level encryption and HTTPS support,
seamlessly integrated with AWS Shield, AWS Web Application Firewall and Route 53 to protect against
multiple types of attacks including network and application layer DDoS attacks. These services co-reside
at edge networking locations – globally scaled and connected via the AWS network backbone – providing
a more secure, performant, and available experience for the users.
CloudFront delivers customers' content through a worldwide network of Edge locations. When an end
user requests content that customers serve with CloudFront, the user is routed to the Edge location that
provides the lowest latency, so content is delivered with the best possible performance. If the content is
already in that Edge location, CloudFront delivers it immediately.
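The routing behavior described above can be sketched as follows. This is an illustration of the idea only, not CloudFront's implementation; the edge-location codes and latency figures are hypothetical.

```python
# Conceptual sketch: route each request to the edge location with the
# lowest measured latency for the requesting user, as described above.

def choose_edge(latencies_ms: dict) -> str:
    """Return the edge location with the lowest latency."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical per-edge latencies measured for one end user:
latencies = {"IAD": 12.0, "FRA": 85.0, "NRT": 140.0}
print(choose_edge(latencies))  # -> IAD
```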
Amazon CloudWatch
Amazon CloudWatch is a monitoring and management service built for developers, system operators,
site reliability engineers (SRE), and IT managers. CloudWatch provides the customers with data and
actionable insights to monitor their applications, understand and respond to system-wide performance
changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects
monitoring and operational data in the form of logs, metrics, and events, providing the customers with a
unified view of AWS resources, applications and services that run on AWS, and on-premises servers.
Amazon Cognito
Amazon Cognito lets customers add user sign-up and sign-in and manage permissions for mobile and web
applications. Customers can create their own user directory within Amazon Cognito. Customers can also
choose to authenticate users through social identity providers such as Facebook, Twitter, or Amazon; with
SAML identity solutions; or by using customers' own identity system. In addition, Amazon Cognito enables
customers to save data locally on users' devices, allowing customers' applications to work even when the
devices are offline. Customers can then synchronize data across users' devices so that their app
experience remains consistent regardless of the device they use.
Amazon Comprehend
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find
insights and relationships in text. Amazon Comprehend uses machine learning to help the customers
uncover insights and relationships in their unstructured data without machine learning experience. The
service identifies the language of the text; extracts key phrases, places, people, brands, or events;
understands how positive or negative the text is; analyzes text using tokenization and parts of speech;
and automatically organizes a collection of text files by topic.
Amazon Connect
Amazon Connect is a unified omnichannel solution built to empower personalized, efficient and proactive
experiences across end customers’ preferred channels. Customers can ensure issues are quickly resolved
and, if multiple contacts are needed, seamlessly maintain context as needs change.
Amazon Detective
Amazon Detective allows customers to easily analyze, investigate, and quickly identify the root cause of
potential security issues or suspicious activity. Amazon Detective collects log data from customer’s AWS
resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data
that enables customers to conduct faster and more efficient security investigations. AWS Security services
can be used to identify potential security issues or findings.
Amazon Detective can analyze trillions of events from multiple data sources and automatically creates a
unified, interactive view of the resources, users, and the interactions between them over time. With this
unified view, customers can visualize all the details and context in one place to identify the underlying
reasons for the findings, drill down into relevant historical activities, and quickly determine the root cause.
Amazon DevOps Guru
DevOps Guru uses ML models informed by years of Amazon.com and AWS operational excellence to
identify anomalous application behavior (for example, increased latency, error rates, resource constraints,
and others) and helps surface critical issues that could cause potential outages or service disruptions.
When DevOps Guru identifies a critical issue, it automatically sends an alert and provides a summary of
related anomalies, the likely root cause, and context for when and where the issue occurred. When
possible, DevOps Guru also helps provide recommendations on how to remediate the issue.
Amazon DynamoDB
Amazon DynamoDB is a managed NoSQL database service. Amazon DynamoDB enables customers to
offload to AWS the administrative burdens of operating and scaling distributed databases such as
hardware provisioning, setup and configuration, replication, software patching, and cluster scaling.
Customers can create a database table that can store and retrieve data and serve any requested traffic.
Amazon DynamoDB automatically spreads the data and traffic for the table over a sufficient number of
servers to handle the request capacity specified and the amount of data stored, while maintaining
consistent, fast performance. All data items are stored on Solid State Drives (SSDs) and are automatically
replicated across multiple AZs in a region.
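The automatic spreading of data across servers described above can be illustrated with a simple hash-partitioning sketch. This is a conceptual model only, not DynamoDB's internal scheme; the key names and partition count are hypothetical.

```python
import hashlib

# Conceptual sketch: items are assigned to partitions by hashing the
# partition key, so data and traffic spread across servers.

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministically map a partition key to a partition index."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

items = ["user#1", "user#2", "user#3"]
placement = {k: partition_for(k, 4) for k in items}
```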
Amazon Elastic Block Store (EBS)
Amazon EBS volumes are presented as raw, unformatted block devices that have been wiped prior to being
made available for use. Wiping occurs before reuse. If customers have procedures requiring that all data
be wiped via a specific method, customers can conduct a wipe procedure prior to deleting the volume for
compliance with those procedures.
Amazon Elastic Compute Cloud (EC2)
Security within Amazon EC2 is provided on multiple levels: the operating system (OS) of the host layer,
the virtual instance OS or guest OS, a firewall, and signed API calls. Each of these items builds on the
capabilities of the others. This helps prevent data contained within Amazon EC2 from being intercepted
by unauthorized systems or users and to provide Amazon EC2 instances themselves security without
sacrificing flexibility of configuration. The Amazon EC2 service utilizes a hypervisor to provide memory
and CPU isolation between virtual machines and controls access to network, storage, and other devices,
and maintains strong isolation between guest virtual machines. Independent auditors regularly assess the
security of Amazon EC2 and penetration teams regularly search for new and existing vulnerabilities and
attack vectors.
AWS prevents customers from accessing physical hosts or instances not assigned to them by filtering
through the virtualization software.
Amazon EC2 provides a complete firewall solution, referred to as a Security Group. This mandatory
inbound firewall is configured in a default deny-all mode and Amazon EC2 customers must explicitly open
the ports needed to allow inbound traffic.
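The default deny-all behavior of a Security Group can be modeled as below. This is an illustration of the posture described above, not the EC2 implementation; the class and method names are hypothetical.

```python
# Conceptual model of a Security Group: inbound traffic is denied by
# default and allowed only when a rule explicitly opens the port.

class SecurityGroup:
    def __init__(self):
        self.inbound_rules = []  # empty by default: deny all inbound

    def allow_inbound(self, protocol: str, port: int, cidr: str):
        self.inbound_rules.append((protocol, port, cidr))

    def permits_inbound(self, protocol: str, port: int) -> bool:
        return any(p == protocol and r == port
                   for p, r, _ in self.inbound_rules)

sg = SecurityGroup()
assert not sg.permits_inbound("tcp", 443)   # denied by default
sg.allow_inbound("tcp", 443, "0.0.0.0/0")   # explicitly open HTTPS
assert sg.permits_inbound("tcp", 443)
```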
Amazon provides a Time Sync function for time synchronization in EC2 Linux instances with the
Coordinated Universal Time (UTC). It is delivered over the Network Time Protocol (NTP) and uses a fleet
of redundant satellite-connected and atomic clocks in each region to provide a highly accurate reference
clock via the local 169.254.169.123 IPv4 address or fd00:ec2::123 IPv6 address. Irregularities in the Earth’s
rate of rotation that cause UTC to drift with respect to the International Celestial Reference Frame (ICRF),
by an extra second, are called leap second. Time Sync addresses this clock drift by smoothing out leap
seconds over a period of time (commonly called leap smearing) which makes it easy for customer
applications to deal with leap seconds. Amazon EC2 clock synchronization in the US East (Northern
Virginia) and Asia Pacific (Tokyo) regions has been improved to achieve accuracy within 100 microseconds,
versus 1 millisecond for the other regions, on supported EC2 instances. Instance types that do not support
this will still have 1 millisecond accuracy.
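The leap-smearing arithmetic described above can be sketched as follows, assuming a simple linear smear over a 24-hour window (real smear implementations may use a different window or curve).

```python
# Illustrative leap-smear arithmetic: the extra leap second is spread
# linearly across a smear window, so clocks never step by a full second.

SMEAR_WINDOW_S = 24 * 3600  # assumed 24-hour window
LEAP_S = 1.0                # one extra second to absorb

def smear_offset(seconds_into_window: float) -> float:
    """Fraction of the leap second applied at a point in the window."""
    clamped = min(max(seconds_into_window, 0.0), SMEAR_WINDOW_S)
    return LEAP_S * clamped / SMEAR_WINDOW_S

assert smear_offset(0) == 0.0
assert smear_offset(12 * 3600) == 0.5   # halfway: half the second absorbed
assert smear_offset(24 * 3600) == 1.0   # window end: fully absorbed
```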
Amazon Elastic Container Service [both Fargate and EC2 launch types]
Amazon Elastic Container Service is a highly scalable, high performance container management service
that supports Docker containers and allows customers to easily run applications on a managed cluster of
Amazon EC2 instances.
Amazon Elastic Kubernetes Service (EKS) [both Fargate and EC2 launch types]
Amazon Elastic Kubernetes Service (EKS) makes it easy to deploy, manage, and scale containerized
applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for
the customer across multiple AWS AZs to eliminate a single point of failure. Amazon EKS is certified
Kubernetes conformant so the customers can use existing tooling and plugins from partners and the
Kubernetes community. Applications running on any standard Kubernetes environment are fully
compatible and can be easily migrated to Amazon EKS.
Amazon ElastiCache
Amazon ElastiCache automates management tasks for in-memory cache environments, such as patch
management, failure detection, and recovery. It works in conjunction with other AWS services to provide
a managed in-memory cache. For example, an application running in Amazon EC2 can securely access an
Amazon ElastiCache Cluster in the same region with very low latency.
Using the Amazon ElastiCache service, customers create a Cache Cluster, which is a collection of one or
more Cache Nodes, each running an instance of the Memcached, Redis Engine, or DAX Engine. A Cache
Node is a self-contained environment which provides a fixed-size chunk of secure, network-attached RAM.
Each Cache Node runs an instance of the Memcached, Redis Engine, or DAX Engine, and has its own DNS
name and port.
Amazon EventBridge
Amazon EventBridge delivers a near real-time stream of events that describe changes in AWS resources.
Customers can configure routing rules to determine where to send collected data to build application
architectures that react in real time to the data sources. Amazon EventBridge becomes aware of
operational changes as they occur and responds to these changes by taking corrective action as necessary
by sending messages to respond to the environment, activating functions, making changes, and capturing
state information.
Amazon FinSpace
Amazon FinSpace is a data management and analytics service that makes it easy to store, catalog, and
prepare financial industry data at scale. Amazon FinSpace reduces the time it takes for financial services
industry (FSI) customers to find and access all types of financial data for analysis.
Amazon Forecast
Amazon Forecast uses machine learning to combine time series data with additional variables to build
forecasts. With Amazon Forecast, customers can import time series data and associated data into Amazon
Forecast from Amazon S3. From there, Amazon Forecast automatically loads the data,
inspects it, and identifies the key attributes needed for forecasting. Amazon Forecast then trains and
optimizes a custom model for the customer and hosts it in a highly available environment where it can be
used to generate business forecasts.
Amazon Forecast is protected by encryption. Any content processed by Amazon Forecast is encrypted
with customer keys through AWS Key Management Service (AWS KMS) and encrypted at rest in the AWS Region
where a customer is using the service. Administrators can also control access to Amazon Forecast through
an AWS Identity and Access Management (IAM) permissions policy ensuring that sensitive information is
kept secure and confidential.
Amazon FSx
Amazon FSx provides third-party file systems. Amazon FSx provides the customers with the native
compatibility of third-party file systems with feature sets for workloads such as Windows-based storage,
high-performance computing (HPC), machine learning, and electronic design automation (EDA). The
customers don’t have to worry about managing file servers and storage, as Amazon FSx automates the
time-consuming administration tasks such as hardware provisioning, software configuration, patching,
and backups. Amazon FSx integrates the file systems with cloud-native AWS services, making them even
more useful for a broader set of workloads.
Amazon Inspector
Amazon Inspector is an automated vulnerability management service that continually scans AWS
workloads for software vulnerabilities and unintended network exposure. Amazon Inspector removes the
operational overhead associated with deploying and configuring a vulnerability management solution by
allowing customers to deploy Amazon Inspector across all accounts with a single step.
Amazon Kendra
Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines
enterprise search for customer websites and applications so employees and customers can easily find
content, even when it's scattered across multiple locations and content repositories.
Amazon Lex
Amazon Lex is a service for building conversational interfaces into any application using voice and text.
Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR)
for converting speech to text, and natural language understanding (NLU) to recognize the intent of the
text, to enable customers to build applications with highly engaging user experiences and lifelike
conversational interactions. Amazon Lex scales automatically, so customers do not need to worry about
managing infrastructure.
Amazon Macie
Amazon Macie is a data security and data privacy service that uses machine learning and pattern matching
to help customers discover, monitor, and protect their sensitive data in AWS.
Macie automates the discovery of sensitive data, such as personally identifiable information (PII) and
financial data, to provide customers with a better understanding of the data that organization stores in
Amazon Simple Storage Service (Amazon S3). Macie also provides customers with an inventory of the S3
buckets, and it automatically evaluates and monitors those buckets for security and access control. Within
minutes, Macie can identify and report overly permissive or unencrypted buckets for the organization.
If Macie detects sensitive data or potential issues with the security or privacy of customer content, it
creates detailed findings for customers to review and remediate as necessary. Customers can review and
analyze these findings directly in Macie, or monitor and process them by using other services, applications,
and systems.
Amazon MemoryDB for Redis
Amazon MemoryDB is compatible with Redis, an open-source data store, enabling customers to quickly
build applications using the same flexible Redis data structures, APIs, and commands that they already
use today. With Amazon MemoryDB, all of the customer’s data is stored in memory, which enables the
customer to achieve microsecond read and single-digit millisecond write latency and high throughput.
Amazon MQ
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that sets up and
operates message brokers in the cloud. Message brokers allow different software systems – often using
different programming languages, and on different platforms – to communicate and exchange
information. Messaging is the communications backbone that connects and integrates the components
of distributed applications, such as order processing, inventory management, and order fulfillment for e-
commerce. Amazon MQ manages the administration and maintenance of two open-source message
brokers, ActiveMQ and RabbitMQ.
Amazon Neptune
Amazon Neptune is a fast and reliable graph database service that makes it easy to build and run
applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built,
high-performance graph database engine optimized for storing billions of relationships and querying the
graph with millisecond latency. Amazon Neptune supports the popular graph models Property Graph and
W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing
customers to easily build queries that efficiently navigate highly connected datasets. Neptune powers
graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery,
and network security.
Amazon Personalize
Amazon Personalize is a machine learning service that makes it easy for developers to create
individualized recommendations for customers using their applications. Amazon Personalize makes it easy
for developers to build applications capable of delivering a wide array of personalization experiences,
including specific product recommendations, personalized product re-ranking and customized direct
marketing. Amazon Personalize goes beyond rigid, static rule-based recommendation systems and trains,
tunes, and deploys custom machine learning models to deliver highly customized recommendations to
customers across industries such as retail, media and entertainment.
Amazon QuickSight
Amazon QuickSight is a fast, cloud-powered business analytics service that makes it easy to build
visualizations, perform ad-hoc analysis, and quickly get business insights from customers’ data. Using this
cloud-based service customers can connect to their data, perform advanced analysis, and create
visualizations and dashboards that can be accessed from any browser or mobile device.
Amazon Redshift
Amazon Redshift is a data warehouse service to analyze data using a customer’s existing Business
Intelligence (BI) tools. Amazon Redshift also includes Redshift Spectrum, allowing customers to directly
run SQL queries against exabytes of unstructured data in Amazon S3.
Amazon Rekognition
The easy-to-use Amazon Rekognition API allows customers to automatically identify objects, people, text, scenes,
and activities, as well as detect any inappropriate content. Developers can quickly build a searchable
content library to optimize media workflows, enrich recommendation engines by extracting text in
images, or integrate secondary authentication into existing applications to enhance end-user security.
With a wide variety of use cases, Amazon Rekognition enables the customers to easily add the benefits of
computer vision to the business.
Amazon Route 53
Amazon Route 53 provides managed Domain Name System (DNS) web service. Amazon Route 53 connects
user requests to infrastructure running both inside and outside of AWS. Customers can use Amazon Route
53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the
health of their application and its endpoints. Amazon Route 53 enables customers to manage traffic
globally through a variety of routing types, including Latency Based Routing, Geo DNS, and Weighted
Round Robin, all of which can be combined with DNS Failover. Amazon Route 53 also offers
Domain Name Registration; customers can purchase and manage domain names such as example.com
and Amazon Route 53 will automatically configure DNS settings for their domains. Amazon Route 53 sends
automated requests over the internet to a resource, such as a web server, to verify that it is reachable,
available, and functional. Customers also can choose to receive notifications when a resource becomes
unavailable and choose to route internet traffic away from unhealthy resources.
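The Weighted Round Robin routing described above can be sketched as a weighted random selection: each record is returned with probability proportional to its weight. This is an illustration only; the record names and weights are hypothetical.

```python
import random

# Conceptual sketch of Weighted Round Robin DNS routing: pick an
# endpoint with probability weight / total weight.

def pick_endpoint(records, rng=random):
    """records: list of (endpoint, weight) pairs with positive weights."""
    total = sum(w for _, w in records)
    r = rng.uniform(0, total)
    upto = 0.0
    for endpoint, weight in records:
        upto += weight
        if r <= upto:
            return endpoint
    return records[-1][0]  # guard against float rounding at the edge

# Hypothetical records: send ~75% of traffic to "blue", ~25% to "green".
records = [("blue.example.com", 3), ("green.example.com", 1)]
```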
Amazon S3 Glacier
Amazon S3 Glacier is an archival storage solution for data that is infrequently accessed for which retrieval
times of several hours are suitable. Data in Amazon S3 Glacier is stored as an archive. Archives in Amazon
S3 Glacier can be created or deleted, but archives cannot be modified. Amazon S3 Glacier archives are
organized in vaults. All vaults created have a default permission policy that only permits access by the
account creator or users that have been explicitly granted permission. Amazon S3 Glacier enables
customers to set access policies on their vaults for users within their AWS Account. User policies can
express access criteria for Amazon S3 Glacier on a per vault basis. Customers can enforce Write Once Read
Many (WORM) semantics for users through user policies that forbid archive deletion.
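A vault policy that forbids archive deletion, as described above, might look like the following sketch. The account ID, vault name, and region are placeholders; the structure follows the standard IAM JSON policy grammar, and `glacier:DeleteArchive` is the action being denied.

```python
import json

# Sketch of a WORM-style access policy: deny archive deletion on a
# vault. Account ID and vault name below are hypothetical placeholders.

worm_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyArchiveDeletion",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/example-vault"
    }]
}
print(json.dumps(worm_policy, indent=2))
```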
Amazon SageMaker (excludes Studio Lab, Public Workforce and Vendor Workforce for all features)
Amazon SageMaker is a platform that enables developers and data scientists to quickly and easily build,
train, and deploy machine learning models at any scale. Amazon SageMaker removes the barriers that
typically “slow down” developers who want to use machine learning.
Amazon SageMaker removes the complexity that holds back developer success with the process of
building, training, and deploying machine learning models at scale. Amazon SageMaker includes modules
that can be used together or independently to build, train, and deploy a customer’s machine learning
models.
Amazon Simple Queue Service (SQS)
Amazon SQS’ main components consist of a frontend request-router fleet, a backend data-storage fleet,
a metadata cache fleet, and a dynamic workload management fleet. User queues are mapped to one or
more backend clusters. Requests to read, write, or delete messages come into the frontends. The
frontends contact the metadata cache to find out which backend cluster hosts that queue and then
connect to nodes in that cluster to service the request.
For authorization, Amazon SQS has its own resource-based permissions system that uses policies written
in the same language used for AWS IAM policies. User permissions for any Amazon SQS resource can be
given either through the Amazon SQS policy system or the AWS Identity and Access Management (IAM)
policy system. Policies attached to a queue specify which AWS accounts have access to the queue, as well
as the type of access and the conditions.
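A queue policy of the kind described above might look like the following sketch, which grants a second account permission to send messages to a queue. The account IDs, queue name, and region are hypothetical placeholders; the structure follows the IAM JSON policy grammar, with `sqs:SendMessage` as the granted action.

```python
# Sketch of an Amazon SQS queue policy allowing a specific other AWS
# account to send messages. All identifiers below are placeholders.

queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPartnerSend",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:123456789012:example-queue"
    }]
}
```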
Amazon Simple Storage Service (S3)
Network devices supporting Amazon S3 are configured to only allow access to specific ports on other
Amazon S3 server systems. External access to data stored in Amazon S3 is logged and the logs are retained
for at least 90 days, including relevant access request information, such as the data accessor IP address,
object, and operation.
Amazon Simple Workflow Service (SWF)
Amazon SWF enables applications to be built by orchestrating tasks coordinated by a decider process.
Tasks represent logical units of work and are performed by application components that can take any
form, including executable code, scripts, web service calls, and human actions.
Developers implement workers to perform tasks. They run their workers either on cloud infrastructure,
such as Amazon EC2, or off-cloud. Tasks can be long-running, may fail, may timeout and may complete
with varying throughputs and latencies. Amazon SWF stores tasks for workers, assigns them when workers
are ready, tracks their progress, and keeps their latest state, including details on their completion. To
orchestrate tasks, developers write programs that get the latest state of tasks from Amazon SWF and use
it to initiate subsequent tasks in an ongoing manner. Amazon SWF maintains an application’s execution
state durably so that the application can be resilient to failures in individual application components.
Amazon SWF provides auditability by giving customers visibility into the execution of each step in the
application. The Management Console and APIs let customers monitor all running executions of the
application. The customer can zoom in on any execution to see the status of each task and its input and
output data. To facilitate troubleshooting and historical analysis, Amazon SWF retains the history of
executions for any number of days that the customer can specify, up to a maximum of 90 days.
The actual processing of tasks happens on compute resources owned by the end customer. Customers
are responsible for securing these compute resources, for example if a customer uses Amazon EC2 for
workers then they can restrict access to their instances in Amazon EC2 to specific AWS IAM users. In
addition, customers are responsible for encrypting sensitive data before it is passed to their workflows
and decrypting it in their workers.
Amazon SimpleDB
Data in Amazon SimpleDB is stored in domains, which are similar to database tables except that functions
cannot be performed across multiple domains. Amazon SimpleDB APIs provide domain-level controls that
only permit authenticated access by the domain creator.
Data stored in Amazon SimpleDB is redundantly stored in multiple physical locations as part of normal
operation of those services. Amazon SimpleDB provides object durability by protecting data across
multiple AZs on the initial write and then actively doing further replication in the event of device
unavailability or detected bit-rot.
Amazon Textract
Amazon Textract automatically extracts text and data from scanned documents. With Textract customers
can quickly automate document workflows, enabling customers to process large volumes of document
pages in a short period of time. Once the information is captured, customers can take action on it within
their business applications to initiate next steps for a loan application or medical claims processing.
Additionally, customers can create search indexes, build automated approval workflows, and better
maintain compliance with document archival rules by flagging data that may require redaction.
Amazon Timestream
Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational
applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster
and at as little as 1/10th the cost of relational databases. Amazon Timestream saves customers time and
cost in managing the lifecycle of time series data by keeping recent data in memory and moving historical
data to a cost optimized storage tier based upon user defined policies. Amazon Timestream's purpose-
built query engine lets customers access and analyze recent and historical data together, without needing
to specify explicitly in the query whether the data resides in the in-memory or cost-optimized tier. Amazon
Timestream has built-in time series analytics functions, helping customers identify trends and patterns in
data in real-time.
Amazon Transcribe
Amazon Transcribe makes it easy for customers to add speech-to-text capability to their applications.
Audio data is virtually impossible for computers to search and analyze. Therefore, recorded speech needs
to be converted to text before it can be used in applications.
Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert
speech to text quickly. Amazon Transcribe can be used to transcribe customer service calls, to automate
closed captioning and subtitling, and to generate metadata for media assets to create a fully searchable
archive.
Amazon Transcribe automatically adds punctuation and formatting so that the output closely matches the
quality of manual transcription at a fraction of the time and expense.
Customers can optionally connect their VPC to the Internet by adding an Internet Gateway (IGW) or a NAT
Gateway. An IGW allows bi-directional access to and from the internet for some instances in the VPC
based on the routes a customer defines (which specify which IP address traffic should be routable from
the internet), Security Groups, and Network ACLs (NACLs), which limit which instances can accept or send
this traffic. Customers can also optionally configure a NAT Gateway, which allows egress-only traffic
initiated from a VPC instance to reach the internet but does not allow traffic initiated from the internet to
reach VPC instances. This is accomplished by mapping the private IP addresses to a public address on the
way out, and then mapping the public IP address back to the private address on the return trip.
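The outbound mapping and return-trip lookup described above can be sketched in miniature. This is a toy, pure-Python model of a NAT translation table, not AWS code; the class, port-allocation scheme, and addresses are all invented for illustration:

```python
# Toy model of NAT behavior: private addresses are mapped to one public
# address on the way out, and only traffic matching an existing outbound
# flow is forwarded back in. Everything here is illustrative.
class NatGateway:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self._flows = {}        # public port -> (private ip, private port)
        self._next_port = 1024

    def translate_out(self, private_ip, private_port):
        """Rewrite an outbound flow to the gateway's public address."""
        public_port = self._next_port
        self._next_port += 1
        self._flows[public_port] = (private_ip, private_port)
        return (self.public_ip, public_port)

    def translate_in(self, public_port):
        """Forward return traffic only for flows initiated from inside."""
        # Unsolicited inbound traffic has no flow entry and is dropped.
        return self._flows.get(public_port)
```

A flow initiated from a VPC instance receives a public mapping and its replies are delivered; a connection attempt initiated from the internet finds no flow entry and is dropped, which is the egress-only property described above.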
The objective of this architecture is to isolate AWS resources and data in one Amazon VPC from another
Amazon VPC, and to help prevent data from being transferred outside the Amazon network except where
the customer has specifically configured internet connectivity options or an IPsec VPN connection to their
off-cloud network.
• Virtual Private Cloud (VPC): An Amazon VPC is an isolated portion of the AWS cloud within which
customers can deploy Amazon EC2 instances into subnets that segment the VPC’s IP address
range (as designated by the customer) and isolate Amazon EC2 instances in one subnet from
another. Amazon EC2 instances within an Amazon VPC are accessible to customers via Internet
Gateway (IGW), Virtual Gateway (VGW), Transit Gateway (TGW) or VPC Peerings established to
the Amazon VPC.
• IPsec VPN: An IPsec VPN connection connects a customer’s Amazon VPC to another network
designated by the customer. IPsec is a protocol suite for securing Internet Protocol (IP)
communications by authenticating and encrypting each IP packet of a data stream. Amazon VPC
customers can create an IPsec VPN connection to their Amazon VPC by first establishing an
Internet Key Exchange (IKE) security association between their Amazon VPC VPN gateway and
another network gateway using a pre-shared key as the authenticator. Upon establishment, IKE
Amazon WorkDocs
Amazon WorkDocs is a secure content creation, storage and collaboration service. Users can share files,
provide rich feedback, and access their files on WorkDocs from any device. WorkDocs encrypts data in
transit and at rest, and offers powerful management controls, active directory integration, and near real-
time visibility into file and user actions. The WorkDocs SDK allows users to use the same AWS tools they
are already familiar with to integrate WorkDocs with AWS products and services, their existing solutions,
third-party applications, or build their own.
Amazon WorkMail
Amazon WorkMail is a managed business email and calendaring service with support for existing desktop
and mobile email clients. It allows access to email, contacts, and calendars using Microsoft Outlook, a
browser, or native iOS and Android email applications. Amazon WorkMail can be integrated with a
customer’s existing corporate directory and the customer controls both the keys that encrypt the data
and the location (AWS Region) under which the data is stored.
Customers can create an organization in Amazon WorkMail, select the Active Directory they wish to
integrate with, and choose their encryption key to apply to all customer content. After setup and
validation of their mail domain, users from the Active Directory are selected or added, enabled for Amazon
WorkMail, and given an email address identity inside the customer owned mail domain.
Amazon WorkSpaces
Amazon WorkSpaces is a managed desktop computing service in the cloud. Amazon WorkSpaces enables
customers to deliver a high-quality desktop experience to end-users as well as help meet compliance and
security policy requirements. When using Amazon WorkSpaces, an organization’s data is neither sent to
nor stored on end-user devices. The PCoIP and WSP protocols used by Amazon WorkSpaces utilize
interactive video streaming to provide a desktop experience to the user while the data remains in the
AWS cloud or in the organization’s off-cloud environment.
When Amazon WorkSpaces is integrated with a corporate Active Directory, each WorkSpace joins the
Active Directory domain, and can be managed like any other desktop in the organization. This means that
customers can use Active Directory Group Policies to manage their Amazon WorkSpaces and can specify
configuration options that control the desktop, including those that restrict users’ abilities to use local
storage on their devices. Amazon WorkSpaces also integrates with customers’ existing RADIUS server to
enable multi-factor authentication (MFA).
AWS Amplify
AWS Amplify is a set of tools and services that can be used together or on their own to help front-end
web and mobile developers build scalable full-stack applications powered by AWS. With Amplify,
customers can configure an app backend and connect applications in minutes, deploy static web apps in a
few clicks, and easily manage app content outside of the AWS console. Amplify supports popular web
frameworks and languages, including JavaScript, React, Angular, Vue, and Next.js, and mobile platforms
including Android, iOS, React Native, Ionic, and Flutter.
AWS AppFabric
AWS AppFabric is a no-code service that connects multiple software as a service (SaaS) applications for
better security, management, and productivity. AppFabric aggregates and normalizes SaaS data (e.g., user
event logs, user access) across SaaS applications without the need to write custom data integrations.
AWS AppSync
AWS AppSync is a service that allows customers to easily develop and manage GraphQL APIs. Once
deployed, AWS AppSync automatically scales the API execution engine up and down to meet API request
volumes. AWS AppSync offers GraphQL setup, administration, and maintenance, with high availability
serverless infrastructure built in.
AWS Artifact
AWS Artifact is a self-service audit artifact retrieval portal that provides customers with on-demand access
to AWS’ compliance documentation and AWS agreements. Customers can use AWS Artifact Reports to
download AWS security and compliance documents, such as AWS ISO certifications, Payment Card
Industry (PCI), and System and Organization Control (SOC) reports. Customers can use AWS Artifact
Agreements to review, accept, and track the status of AWS agreements.
AWS Backup
AWS Backup is a backup service that makes it easy to centralize and automate the backup of data across
AWS services in the cloud as well as on premises using the AWS Storage Gateway. Using AWS Backup,
customers can centrally configure backup policies and monitor backup activity for AWS resources, such as
Amazon EBS volumes, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, and
AWS Storage Gateway volumes. AWS Backup automates and consolidates backup tasks previously
performed service-by-service, removing the need to create custom scripts and manual processes.
AWS Batch
AWS Batch enables developers, scientists, and engineers to run batch computing jobs on AWS. AWS Batch
dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory
optimized instances) based on the volume and specific resource requirements of the batch jobs
submitted. AWS Batch plans, schedules, and executes customers’ batch computing workloads across the
full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.
AWS Chatbot
AWS Chatbot is an AWS service that enables DevOps and software development teams to use Slack or
Amazon Chime chat rooms to monitor and respond to operational events in their AWS Cloud. AWS
Chatbot processes AWS service notifications from Amazon Simple Notification Service (Amazon SNS), and
forwards them to Slack or Amazon Chime chat rooms so teams can analyze and act on them. Teams can
respond to AWS service events from a chat room where the entire team can collaborate, regardless of
location.
AWS Cloud Map
Customers can register any application resource, such as databases, queues, microservices, and other
cloud resources, with custom names. Cloud Map then constantly checks the health of resources to make
sure the location is up-to-date. The application can then query the registry for the location of the
resources needed based on the application version and deployment environment.
AWS Cloud9
AWS Cloud9 is an integrated development environment, or IDE. The AWS Cloud9 IDE offers a rich code-
editing experience with support for several programming languages and runtime debuggers, and a built-
in terminal. It contains a collection of tools that customers use to code, build, run, test, and debug
software, and helps customers release software to the cloud. Customers access the AWS Cloud9 IDE
through a web browser. Customers can configure the IDE to their preferences. Customers can switch color
themes, bind shortcut keys, enable programming language-specific syntax coloring and code formatting,
and more.
AWS CloudFormation
AWS CloudFormation is a service to simplify provisioning of AWS resources such as Auto Scaling groups,
ELBs, Amazon EC2, Amazon VPC, Amazon Route 53, and others. Customers author templates of the
infrastructure and applications they want to run on AWS, and the AWS CloudFormation service
automatically provisions the required AWS resources and their relationships as defined in these
templates.
AWS CloudHSM
AWS CloudHSM allows customers to store and use encryption keys within HSMs in AWS data centers.
With AWS CloudHSM, customers maintain full ownership, control, and access to keys and sensitive data
while Amazon manages the HSMs in close proximity to customer applications and data.
AWS acquires these production HSM devices securely using tamper-evident authenticable (TEA) bags
from the vendors. The TEA bag serial numbers and production HSM serial numbers are verified against
data provided out-of-band by the manufacturer and logged by approved individuals in tracking systems.
All HSM media is securely decommissioned and physically destroyed, verified by two personnel, prior to
leaving AWS control.
AWS CloudShell
AWS CloudShell is a browser-based shell that customers can use to securely manage, explore, and interact
with their AWS resources. CloudShell is pre-authenticated with the customer's console credentials.
Common development and
operations tools are pre-installed, so no local installation or configuration is required. With CloudShell,
customers can run scripts with the AWS Command Line Interface (AWS CLI), experiment with AWS service
APIs using the AWS SDKs, or use a range of other tools to be productive. Customers can use CloudShell
right from their browser.
AWS CloudTrail
AWS CloudTrail is a web service that records AWS activity for customers and delivers log files to a specified
Amazon S3 bucket. The recorded information includes the identity of the API caller, the time of the API
call, the source IP address of the API caller, the request parameters, and the response elements returned
by the AWS service.
AWS CloudTrail provides a history of AWS API calls for customer accounts, including API calls made via the
AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS
CloudFormation). The AWS API call history produced by AWS CloudTrail enables security analysis, resource
change tracking, and compliance auditing.
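As a sketch of what such analysis works with, a recorded event can be treated as a JSON document. The field names below follow the CloudTrail event schema (eventTime, eventSource, sourceIPAddress, and so on), while the values and the summary step are invented for illustration:

```python
import json

# Hypothetical CloudTrail record; field names follow the CloudTrail
# event schema, values are invented for illustration.
record = json.loads("""
{
  "eventTime": "2024-06-01T12:00:00Z",
  "eventSource": "s3.amazonaws.com",
  "eventName": "PutObject",
  "sourceIPAddress": "198.51.100.7",
  "userIdentity": {"type": "IAMUser", "userName": "example-user"},
  "requestParameters": {"bucketName": "example-bucket"},
  "responseElements": null
}
""")

# A minimal security-analysis pass: who called what, from where.
summary = (record["userIdentity"]["userName"],
           record["eventName"],
           record["sourceIPAddress"])
```

Each record thus answers the questions the text lists: the identity of the caller, the time and source IP of the call, the request parameters, and the response elements returned by the service.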
AWS CodeBuild
AWS CodeBuild is a build service that compiles source code, runs tests, and produces software packages
that are ready to deploy. CodeBuild scales continuously and processes multiple builds concurrently, so
that customers’ builds are not left waiting in a queue. Customers can use prepackaged build environments
or can create custom build environments that use their own build tools. AWS CodeBuild eliminates the
need to set up, patch, update, and manage customers’ build servers and software.
AWS CodeCommit
AWS CodeCommit is a source control service that hosts secure Git-based repositories. It allows teams to
collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need for
customers to operate their own source control system or worry about scaling their infrastructure.
CodeCommit can be used to securely store anything from source code to binaries, and it works seamlessly
with the existing Git tools.
AWS CodePipeline
AWS CodePipeline is a continuous delivery service that helps customers automate release pipelines for
fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and
deploy phases of customers' release process every time there is a code change, based on the release model
defined by the customer. This enables customers to rapidly and reliably deliver features and updates.
Customers can easily integrate AWS CodePipeline with third-party services such as GitHub or with their
own custom plugin.
AWS Config
AWS Config enables customers to assess, audit, and evaluate the configurations of their AWS resources.
AWS Config continuously monitors and records AWS resource configurations and allows customers to
automate the evaluation of recorded configurations against desired configurations. With AWS Config,
customers can review changes in configurations and relationships between AWS resources, dive into
detailed resource configuration histories, and determine overall compliance against the configurations
specified within the customers’ internal guidelines. This enables customers to simplify compliance
auditing, security analysis, change management, and operational troubleshooting.
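The evaluation model can be sketched as follows. The rule and resource records here are hypothetical, not the AWS Config API; real custom Config rules are typically Lambda functions that return similar COMPLIANT/NON_COMPLIANT verdicts against recorded configuration items:

```python
# Illustrative sketch of Config-style evaluation: recorded resource
# configurations are checked against a desired configuration.
def evaluate(resource):
    """Return a verdict in the spirit of an AWS Config rule."""
    encrypted = resource.get("configuration", {}).get("encrypted", False)
    return "COMPLIANT" if encrypted else "NON_COMPLIANT"

# Hypothetical recorded configuration items for two EBS-like volumes.
recorded = [
    {"resourceId": "vol-1", "configuration": {"encrypted": True}},
    {"resourceId": "vol-2", "configuration": {"encrypted": False}},
]
results = {r["resourceId"]: evaluate(r) for r in recorded}
```

Continuously re-running such an evaluation as configurations change is what lets customers determine overall compliance against their internal guidelines.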
AWS Glue
AWS Glue is an extract, transform, and load (ETL) service that makes it easy for customers to prepare and
load their data for analytics. Customers can create and run an ETL job with a few clicks in the AWS
Management Console.
AWS Health Dashboard
The dashboard displays relevant and timely information to help customers manage events in progress and
provides proactive notification to help customers plan for scheduled activities. With AWS Health
Dashboard, alerts are triggered by changes in the health of AWS resources, giving event visibility, and
guidance to help quickly diagnose and resolve issues.
AWS HealthImaging
AWS HealthImaging is a service that helps healthcare and life science organizations and their software
partners to store, analyze, and share medical imaging data at petabyte scale. With HealthImaging,
customers can reduce the total cost of ownership (TCO) of their medical imaging applications up to 40%
by running their medical imaging applications from a single copy of patient imaging data in the cloud. With
sub-second image retrieval latencies for active and archive data, customers can realize the cost savings of
the cloud without sacrificing performance at the point-of-care. HealthImaging removes the burden of
managing infrastructure for customer imaging workflows so that they can focus on delivering quality
patient care.
AWS HealthLake
AWS HealthLake is a service offering healthcare and life sciences companies a complete view of individual
or patient population health data for query and analytics at scale. Using the HealthLake APIs, health
organizations can easily copy health data, such as imaging medical reports or patient notes, from on-
premises systems to a secure data lake in the cloud. HealthLake uses machine learning (ML) models to
automatically understand and extract meaningful medical information from the raw data, such as
medications, procedures, and diagnoses. HealthLake organizes and indexes information and stores it in
AWS HealthOmics
AWS HealthOmics helps Healthcare and Life Sciences organizations process, store, and analyze genomics
and other omics data at scale. The service supports a wide range of use cases, including DNA and RNA
sequencing (genomics and transcriptomics), protein structure prediction (proteomics), and more. By
simplifying infrastructure management for customers and removing the undifferentiated heavy lifting,
HealthOmics allows customers to generate deeper insights from their omics data, improve healthcare
outcomes, and advance scientific discoveries.
HealthOmics comprises three service components. Omics Storage efficiently ingests raw genomic
data into the Cloud, and it uses domain-specific compression to offer attractive storage prices to
customers. It also offers customers the ability to seamlessly access their data from various compute
environments. Omics Workflows runs bioinformatics workflows at scale in a fully-managed compute
environment. It supports three common bioinformatics domain-specific workflow languages. Omics
Analytics stores genomic variant and annotation data and allows customers to efficiently query and
analyze at scale.
AWS IoT Device Management
Customers can also organize their devices, monitor and troubleshoot device functionality, query the state
of any IoT device in the fleet, and send firmware updates over-the-air (OTA). AWS IoT Device Management
is agnostic to device type and OS, so customers can manage devices from constrained microcontrollers to
connected cars all with the same service. AWS IoT Device Management allows customers to scale their
fleets and reduce the cost and effort of managing large and diverse IoT device deployments.
AWS Key Management Service (AWS KMS)
When a customer requests AWS KMS to create a KMS key, the service creates a key ID for the KMS key
and key material, referred to as a backing key, which is tied to the key ID of the KMS key. The 256-bit
backing key can only be used for encrypt or decrypt operations by the service. KMS will generate an
associated key ID if a customer chooses to import their own key. If the customer chooses to enable key
rotation for a KMS key with a backing key that the service generated, AWS KMS will create a new version
of the backing key for each rotation event, but the key ID remains the same. All future encrypt operations
under the key ID will use the newest backing key, while all previous versions of backing keys are retained
to decrypt ciphertexts created under the previous version of the key. Backing keys and customer-imported
keys are encrypted under AWS-controlled keys when created/imported and they are only ever stored on
disk in encrypted form.
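The versioning behavior described above can be sketched conceptually. This toy class is not the AWS KMS implementation; the key ID, the use of `os.urandom` for backing keys, and the stand-in "encryption" are illustrative only:

```python
import os

# Conceptual sketch of rotation: one stable key ID, versioned backing
# keys, the newest version used to encrypt, older versions retained to
# decrypt earlier ciphertexts. Not the real AWS KMS implementation.
class KmsKey:
    def __init__(self):
        self.key_id = "key-1234"              # stays the same forever
        self._versions = [os.urandom(32)]     # 256-bit backing keys

    def rotate(self):
        self._versions.append(os.urandom(32)) # key_id does not change

    def encrypt(self, plaintext):
        version = len(self._versions) - 1     # always the newest backing key
        # Stand-in for real encryption: tag the data with the version used.
        return (version, plaintext[::-1])

    def decrypt(self, ciphertext):
        version, data = ciphertext
        assert version < len(self._versions)  # retained versions still work
        return data[::-1]
```

After a rotation event, new encrypt calls use the latest backing-key version while ciphertexts created under earlier versions remain decryptable, exactly the property the paragraph describes.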
All requests to AWS KMS APIs are logged and available in the AWS CloudTrail of the requester and the
owner of the key. The logged requests provide information about who made the request, under which
KMS key, and describes information about the AWS resource that was protected through the use of the
KMS key. These log events are visible to the customer after turning on AWS CloudTrail in their account.
AWS KMS creates and manages multiple distributed replicas of KMS keys and key metadata automatically
to enable high availability and data durability. KMS keys themselves are regional objects; KMS keys can
only be used in the AWS region in which they were created. KMS keys are only stored on persistent disk
in encrypted form and in two separate storage systems to ensure durability. When a KMS key is needed
to fulfill an authorized customer request, it is retrieved from storage, decrypted on one of many AWS KMS
hardened security modules (HSM) in the region, then used only in memory to execute the cryptographic
operation (e.g., encrypt or decrypt). Future requests to use the KMS key each require the decryption of
the KMS key in memory for another one-time use.
AWS KMS endpoints are only accessible via TLS using the following cipher suites that support forward
secrecy:
• TLS_AES_128_GCM_SHA256
• TLS_AES_256_GCM_SHA384
• TLS_CHACHA20_POLY1305_SHA256
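The three suites listed are the TLS 1.3 cipher suites, each of which provides forward secrecy. A minimal allow-list check of a negotiated suite name might look like the following (the function is illustrative, not part of any AWS SDK):

```python
# The TLS 1.3 suites named in the text, used as an allow-list.
ALLOWED_SUITES = {
    "TLS_AES_128_GCM_SHA256",
    "TLS_AES_256_GCM_SHA384",
    "TLS_CHACHA20_POLY1305_SHA256",
}

def connection_permitted(negotiated_suite):
    """Accept a connection only if it negotiated an allowed suite."""
    return negotiated_suite in ALLOWED_SUITES
```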
By design, no one can gain access to KMS key material. KMS keys are only ever present on hardened
security modules for the amount of time needed to perform cryptographic operations under them. AWS
employees have no tools to retrieve KMS keys from these hardened security modules. In addition, multi-
party access controls are enforced for operations on these hardened security modules that involve
changing the software configuration or introducing new hardened security modules into the service.
These multi-party access controls minimize the possibility of an unauthorized change to the hardened
security modules, exposing key material outside the service, or allowing unauthorized use of customer
keys. Additionally, key material used for disaster recovery processes by KMS are physically secured such
that no AWS employee can gain access. Access attempts to recovery key materials are reviewed by
authorized operators on a periodic basis. Roles and responsibilities for those cryptographic custodians
with access to systems that store or use key material are formally documented and acknowledged.
AWS Lambda
AWS Lambda lets customers run code without provisioning or managing servers on their own. AWS
Lambda uses a compute fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances across multiple
AZs in a region, which provides the high availability, security, performance, and scalability of the AWS
infrastructure.
AWS License Manager
AWS License Manager integrates with AWS services to simplify the management of licenses across
multiple AWS accounts, IT catalogs, and on-premises, through a single AWS account.
AWS OpsWorks for Puppet Enterprise
AWS OpsWorks for Puppet Enterprise is a configuration management service that hosts Puppet
Enterprise, a set of automation tools from Puppet for infrastructure and application management.
OpsWorks also maintains customers’ Puppet master server by automatically patching, updating, and
backing up customers’ servers. OpsWorks eliminates the need for customers to operate their own
configuration management systems or worry about maintaining their infrastructure. OpsWorks gives
customers access to all of the Puppet Enterprise features, which customers manage through the Puppet
console. It also works seamlessly with customers’ existing Puppet code.
AWS Organizations
AWS Organizations helps customers centrally govern their environment as customers grow and scale their
workloads on AWS. Whether customers are a growing startup or a large enterprise, Organizations helps
customers to centrally manage billing; control access, compliance, and security; and share resources
across customer AWS accounts.
Using AWS Organizations, customers can automate account creation, create groups of accounts to reflect
their business needs, and apply policies for these groups for governance. Customers can also simplify
billing by setting up a single payment method for all of their AWS accounts. Through integrations with
other AWS services, customers can use Organizations to define central configurations and resource
sharing across accounts in their organization.
AWS Outposts
AWS Outposts is a service that extends AWS infrastructure, AWS services, APIs and tools to any data
center, co-location space, or an on-premises facility for a consistent hybrid experience. AWS Outposts is
ideal for workloads that require low latency access to on-premises systems, local data processing or local
data storage. Outposts offer the same AWS hardware infrastructure, services, APIs and tools to build and
run applications on premises and in the cloud. AWS compute, storage, database and other services run
locally on Outposts and customers can access the full range of AWS services available in the Region to
build, manage and scale on-premises applications. Service Link is established between Outposts and the
AWS region by use of a secured VPN connection over the public internet or AWS Direct Connect.
AWS Outposts are configured with a Nitro Security Key (NSK) which is designed to encrypt customer
content and give customers the ability to mechanically remove content from the device. Customer
content is cryptographically shredded if a customer removes the NSK from an Outpost device.
Additional information about Security in AWS Outposts, including the shared responsibility model, can be
found in the AWS Outposts User Guide.
AWS RoboMaker
AWS RoboMaker is a service that makes it easy to develop, test, and deploy intelligent robotics
applications at scale. RoboMaker extends the most widely used open-source robotics software
framework, Robot Operating System (ROS), with connectivity to cloud services. This includes AWS
machine learning services, monitoring services, and analytics services that enable a robot to stream data,
navigate, communicate, comprehend, and learn. RoboMaker provides a robotics development
environment for application development, a robotics simulation service to accelerate application testing,
and a robotics fleet management service for remote application deployment, update, and management.
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web
applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations
that minimize application downtime and latency, so there is no need to engage AWS Support to benefit
from DDoS protection.
AWS Snowball
Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts
of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data
transfers including high network costs, long transfer times, and security concerns. Transferring data with
Snowball is simple and secure.
AWS Storage Gateway
The File Gateway allows customers to copy data to S3 and have those files appear as individual objects in
S3. Volume gateways store data directly in Amazon S3 and allow customers to snapshot their data so that
they can access previous versions of their data. These snapshots are captured as Amazon EBS Snapshots,
which are also stored in Amazon S3. Both Amazon S3 and Amazon Glacier redundantly store these
snapshots on multiple devices across multiple facilities, detecting and repairing any lost redundancy. The
Amazon EBS snapshot provides a point-in-time backup that can be restored off-cloud or on a gateway
running in Amazon EC2 or used to instantiate new Amazon EBS volumes. Data is stored within a single
region that customers specify.
AWS Systems Manager
With AWS Systems Manager, customers can group resources, like Amazon EC2 instances, Amazon S3
buckets, or Amazon RDS instances, by application, view operational data for monitoring and
troubleshooting, and take action on groups of resources.
AWS WAF
Customers can use AWS WAF to create custom rules that block common attack patterns, such as SQL
injection or cross-site scripting, and rules that are designed for their specific application. New rules can be
deployed within minutes, letting customers respond quickly to changing traffic patterns. Also, AWS WAF
includes a full-featured API that customers can use to automate the creation, deployment, and
maintenance of web security rules.
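In the spirit of such a rule, a toy pattern match might look like the following. Real AWS WAF rules are declarative rule statements evaluated by the service, not application-side regexes, and the pattern here is deliberately simplistic and far from complete:

```python
import re

# Toy SQL-injection signature in the spirit of a WAF custom rule.
# Illustrative only; a production rule set is far more extensive.
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

def inspect(query_string):
    """Return 'BLOCK' if the request matches the rule, else 'ALLOW'."""
    return "BLOCK" if SQLI_PATTERN.search(query_string) else "ALLOW"
```

Because such rules are just data to the service, they can be created, updated, and deployed within minutes through the AWS WAF API, which is the responsiveness the paragraph describes.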
AWS Wickr
AWS Wickr is an end-to-end encrypted service that helps organizations collaborate securely through one-
to-one and group messaging, voice and video calling, file sharing, screen sharing, and more. AWS Wickr
encrypts messages, calls, and files with a 256-bit end-to-end encryption protocol. Only the intended
recipients and the customer organization can decrypt these communications, reducing the risk of
adversary-in-the-middle attacks.
AWS X-Ray
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built
using a microservices architecture. With X-Ray, customers or developers can understand how their
application and its underlying services are performing to identify and troubleshoot the root cause of
performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through the
customers’ application and shows a map of the application’s underlying components. Customers or
developers can use X-Ray to analyze both applications in development and in production.
FreeRTOS
FreeRTOS is an operating system for microcontrollers that makes small, low-power edge devices easy to
program, deploy, secure, connect, and manage. FreeRTOS extends the FreeRTOS kernel, a popular open-
source operating system for microcontrollers, with software libraries that make it easy to securely connect
the small, low-power devices to AWS cloud services like AWS IoT Core or to more powerful edge devices
running AWS IoT Greengrass.
VM Import/Export
VM Import/Export is a service that enables customers to import virtual machine images from their existing
environment to Amazon EC2 instances and export them back to their on premises environment. This
offering allows customers to leverage their existing investments in the virtual machines that customers
Overview
Amazon Web Services (AWS) designs its processes and procedures to meet its objectives for the AWS
System. Those objectives are based on the service commitments that AWS makes to user entities
(customers), the laws and regulations that govern the provision of the AWS System, and the financial,
operational and compliance requirements that AWS has established for the services.
The AWS services are subject to relevant regulations, as well as state privacy and security laws and
regulations in the jurisdictions in which AWS operates.
Security, Availability and Confidentiality commitments to customers are documented and communicated
in Service Level Agreements (SLAs) and other customer agreements, as well as in the description of the
service offering provided on the AWS website. Security, Availability and Confidentiality commitments are
standardized and include, but are not limited to, the following:
• Security and confidentiality principles inherent to the fundamental design of the AWS System are
designed to appropriately restrict unauthorized internal and external access to data and customer
data is appropriately segregated from other customers.
• Security and confidentiality principles inherent to the fundamental design of the AWS System are
designed to safeguard data from within and outside of the boundaries of environments which
store a customer’s content to meet the service commitments.
• Availability principles inherent to the fundamental design of the AWS System are designed to
replicate critical system components across multiple Availability Zones and authoritative backups
are maintained and monitored to ensure successful replication to meet the service commitments.
• Privacy principles inherent to the fundamental design of the AWS System are designed to protect
the security and confidentiality of AWS customer content to meet the service commitments.
Amazon Web Services establishes operational requirements that support the achievement of security,
availability and confidentiality commitments, relevant laws and regulations, and other system
requirements. Such requirements are communicated in AWS’ system policies and procedures, system
design documentation, and contracts with customers. Information security policies define an
organization-wide approach to how systems and data are protected. These include policies around how
the service is designed and developed, how the system is operated, how the internal business systems
and networks are managed, and how employees are hired and trained. In addition to these policies,
standard operating procedures have been documented on how to carry out specific manual and
automated processes required in the operation and development of the Amazon Web Services System.
People
Amazon Web Services’ organizational structure provides a framework for planning, executing and
controlling business operations. Executive and senior leadership play important roles in establishing the
Company’s tone and core values. The organizational structure assigns roles and responsibilities to provide
for adequate staffing, security, efficiency of operations, and segregation of duties. Management has also
established points of authority and appropriate lines of reporting for key personnel.
The Company follows a structured on-boarding process to familiarize new employees with Amazon tools,
processes, systems, security practices, policies and procedures. Employees are provided with the
Company’s Code of Business Conduct and Ethics and additionally complete annual Security & Awareness
training to educate them as to their responsibilities concerning information security. Compliance audits
are performed so that employees understand and follow established policies.
Data
AWS customers retain control and ownership of their own data. Customers are responsible for the development, operation, maintenance, and use of their content. AWS prevents customers from accessing physical hosts or instances not assigned to them through filtering performed by the virtualization software.
When a storage device has reached the end of its useful life, AWS procedures include a decommissioning
process that is designed to prevent unauthorized access to assets. AWS uses techniques detailed in NIST
800-88 (“Guidelines for Media Sanitization”) as part of the decommissioning process. All production media
is securely decommissioned in accordance with industry-standard practices. Production media is not
removed from AWS control until it has been securely decommissioned.
Availability
The AWS Resiliency Program encompasses the processes and procedures by which AWS identifies,
responds to, and recovers from a major availability event or incident within the AWS services
environment. This program builds upon the traditional approach of addressing contingency management
which incorporates elements of business continuity and disaster recovery plans and expands this to
consider critical elements of proactive risk mitigation strategies, such as engineering physically separate
Availability Zones (AZs) and continuous infrastructure capacity planning.
AWS contingency plans and incident response playbooks are maintained and updated to reflect emerging
risks and lessons learned from past incidents. Service team response plans are tested and updated
through the due course of business, and the AWS Resiliency Plan is tested, reviewed, and approved by
senior leadership annually.
AWS has identified critical system components required to maintain the availability of the system and recover service in the event of an outage. Critical system components (e.g., code bases) are backed up.
The AWS team responsible for capacity management continuously monitors service usage to project
infrastructure needs for availability commitments and requirements. AWS maintains a capacity planning
model to assess infrastructure usage and demands at least monthly, and usually more frequently (e.g.,
weekly). In addition, the AWS capacity planning model supports the planning of future demands to acquire
and implement additional resources based upon current resources and forecasted requirements.
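The capacity-planning approach described above, projecting future infrastructure demand from current usage, can be illustrated with a minimal linear-trend sketch. This is an illustrative assumption only; the function name and the simple least-squares model are not AWS's actual capacity model.

```python
def forecast_usage(history, horizon):
    """Project future usage from a least-squares linear trend.

    Illustrative sketch only: a real capacity-planning model would
    account for seasonality, growth curves, and per-service demand.
    """
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    # Fit slope and intercept of the trend line y = intercept + slope * x.
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
    slope /= sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    # Extrapolate the fitted line over the forecast horizon.
    return [intercept + slope * (n + i) for i in range(horizon)]
```

For example, `forecast_usage([1, 2, 3, 4], 2)` extrapolates the unit-slope trend to the next two periods.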
Confidentiality
AWS is committed to protecting the security and confidentiality of its customers’ content, defined as
“Your Content” at https://aws.amazon.com/agreement/. AWS’ systems and services are designed to
enable authenticated AWS customers to access and manage their content. AWS notifies customers of
third-party access to a customer’s content on the third-party access page located at
https://aws.amazon.com/compliance/third-party-access. AWS may remove a customer’s content when
compelled to do so by a legal order, or where there is evidence of fraud or abuse as described in the
Customer Agreement (https://aws.amazon.com/agreement/) and Acceptable Use Policy
(https://aws.amazon.com/aup/). In executing the removal of a customer’s content due to the reasons
stated above, employees may render it inaccessible as the situation requires. For clarity, this capability to
render customer content inaccessible extends to encrypted content as well.
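Rendering encrypted content inaccessible is commonly achieved by destroying the encryption key (often called crypto-shredding): once the key is gone, the ciphertext cannot be recovered. The toy sketch below illustrates only the general idea; the in-memory key store and XOR cipher are hypothetical simplifications, not AWS's implementation.

```python
import secrets

class KeyStore:
    """Toy in-memory key store; destroying a key makes its ciphertext unrecoverable."""

    def __init__(self):
        self._keys = {}

    def create(self, key_id):
        self._keys[key_id] = secrets.token_bytes(32)

    def get(self, key_id):
        return self._keys[key_id]

    def destroy(self, key_id):
        # Crypto-shredding: without the key, the encrypted data is
        # effectively deleted even if the ciphertext still exists.
        del self._keys[key_id]

def toy_cipher(data, key):
    # Repeating-key XOR, for illustration only; not real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

store = KeyStore()
store.create("cust-1")
ciphertext = toy_cipher(b"customer content", store.get("cust-1"))
store.destroy("cust-1")  # the ciphertext is now permanently inaccessible
```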
During the design, build, and test of AWS systems, software, and product features, a customer's content is not used and remains in the production environment. A customer's content is not required for the AWS software development life cycle. When content is required for the development or test of a service's software, AWS service teams have tools to generate mock, random data.
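The practice of substituting generated data for customer content in development and testing can be sketched as follows. The field names and generator below are hypothetical illustrations, not an actual AWS tool.

```python
import random
import string

def mock_record(seed=None):
    """Generate a synthetic customer-like record so no real content is needed.

    Hypothetical field layout; pass a seed for reproducible test fixtures.
    """
    rng = random.Random(seed)

    def rand_str(n):
        return "".join(rng.choices(string.ascii_lowercase, k=n))

    return {
        "user_id": rng.randrange(10**8),
        "name": rand_str(8),
        "email": f"{rand_str(6)}@example.com",  # reserved example domain
        "payload": rng.randbytes(16).hex(),     # random stand-in content
    }
```

Seeding the generator makes test fixtures deterministic while keeping every value synthetic.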
AWS knows customers care about privacy and data security. That is why AWS gives customers ownership
and control over their content by design through tools that allow customers to determine where their
content is stored, secure their content in transit or at rest, and manage access to AWS services and
resources. AWS also implements technical and physical controls designed to prevent unauthorized access
to or disclosure of a customer’s content. As described in the Physical Security and Change Management
areas in Section III of this report, AWS employs a number of controls to safeguard data from within and
outside of the boundaries of environments which store a customer’s content. As a result of these
measures, access to a customer’s content is restricted to authorized parties.
AWS contingency plans and incident response playbooks have defined and tested tools and processes to
detect, mitigate, investigate, and assess security incidents. These plans and playbooks include guidelines
for responding to potential data breaches in accordance with contractual and regulatory requirements.
AWS security engineers follow a documented protocol when responding to potential data security incidents. The protocol involves steps that include validating the presence of customer content within the affected environment.
During the course of their response, the security engineers document relevant findings in internal tools
used to track the security issue. AWS Security Leadership is regularly apprised of all data security issue
investigations. In the event there are positive indicators that customer content was potentially accessed
by an unintended party, a security engineer engages AWS Security Leadership and the AWS Legal team to
review the findings. AWS Security Leadership and the Legal team review the findings and determine if a
notifiable data breach has occurred pursuant to contractual or regulatory obligations. If confirmed,
affected customers are notified in accordance with the applicable reporting requirement.
Vendors and third parties with restricted access that engage in business with Amazon are subject to confidentiality commitments as part of their agreements with Amazon. Confidentiality commitments are
included in agreements with vendors and third parties with restricted access and are reviewed by AWS
and the third-party at time of contract creation or execution. AWS monitors the performance of third
parties through periodic reviews on a risk-based approach, which evaluate performance against
contractual obligations.
Internally, confidentiality requirements are communicated to employees through training and policies.
Employees are required to attend Amazon Security Awareness (ASA) training, which includes policies and
procedures related to protecting a customer’s content. Confidentiality requirements are included in the
Data Handling and Classification Policy. Policies are reviewed and updated at least annually.
AWS implements policies and controls to monitor access to resources that process or store customer
content. In addition, a Master Service Agreement (MSA) or Non-Disclosure Agreement (NDA) binds a subcontractor to confidentiality in the unlikely event they are exposed to a customer's content. The MSA references an NDA and, for subcontractors without a separate NDA, itself includes a requirement to protect a customer's content. AWS Legal maintains the most current MSA in a legal document portal. The portal serves as the
repository for contracts with the most current commitments, document owner, and date modified. A legal
review is also performed when the MSA is executed with a vendor.
Services and systems hosted by AWS are designed to retain and protect a customer’s content for the
duration of the customer agreement period, and in some cases, up to 30 days beyond termination. The
customer agreement, https://aws.amazon.com/agreement/, specifies the terms and conditions. AWS
services are designed to retain a customer’s content until the contractual obligation to retain a customer’s
content ends, or upon a customer-initiated action to remove or delete their content.
Once the contractual obligation to retain a customer’s content ends, or upon a customer-initiated action
to remove or delete their content, AWS services have processes and procedures to detect a deletion and
make the content inaccessible. After a delete event, automated processes render the deleted content inaccessible.
Privacy
AWS classifies customer data into two categories: customer content and account information. AWS defines customer content as software (including machine images), data, text, audio, video, or images that a customer or any end user transfers to AWS for processing, storage, or hosting by AWS services in connection with that customer's account.
Account information is information about a customer that a customer provides to AWS in connection with
the creation or administration of a customer account. For example, account information includes names,
usernames, phone numbers, email addresses, and billing information associated with a customer account.
Any information submitted by the customer that AWS needs in order to provide services to the customer
or in connection with the administration of customer accounts, is not in-scope for this report.
The AWS Privacy Notice is available from the AWS website at https://aws.amazon.com/privacy/. The AWS
Privacy Notice is reviewed by the AWS Legal team and is updated as required to reflect Amazon’s current
business practices and global regulatory requirements. The Privacy Notice describes how AWS collects
and uses a customer’s personal information in relation to AWS websites, applications, products, services,
events, and experiences. The Privacy Notice does not apply to customer content.
As part of the AWS account creation and activation process, AWS customers are informed of the AWS
Privacy Notice and are required to accept the Customer Agreement, including the terms and conditions
related to the collection, use, retention, disclosure, and disposal of their data. Customers are responsible
for determining what content to store within AWS, which may include personal information. Without the
acceptance of the Customer Agreement, customers cannot sign up to use the AWS services.
The AWS Customer Agreement informs customers of the AWS data security and privacy commitments
prior to activating an AWS account and is made available to customers to review at any time on the AWS
website.
The customer determines what data is entered into AWS services and has the ability to configure the
appropriate security and privacy settings for the data, including who can access and use the data. Further,
the customer is able to choose not to provide certain data. Additionally, the customer manages
notification or consent requirements, and maintains the accuracy of the data.
Additionally, the AWS Customer Agreement notes how AWS shares, secures, and retains customer
content. AWS also informs customers of updates to the Customer Agreement by making it available on its
website and providing the last updated date. Customers should check the Customer Agreement website
frequently for any changes to the Customer Agreement.
AWS does not store any customer cardholder data obtained from customers. Rather, AWS immediately passes the customer cardholder data to the Amazon Payments Platform, the PCI-certified
platform that Amazon uses for all payment processing. This platform returns a unique identifier that AWS
stores and uses for all future processing. The Amazon Payments Platform sits completely outside of the
AWS boundary and is run by the larger Amazon entity. It is not an AWS service, but it is utilized by the
larger Amazon entity for payment processing. As such, the Amazon payment platform is not in-scope for
this report.
AWS provides authenticated customers the ability to access, update, and confirm their data. Denial of access will be communicated using the AWS console. Customers can sign in to their AWS accounts through the AWS console to view and update their data.
AWS (or Amazon) does not disclose customer information in response to government demands unless
required to do so to comply with a legally valid and binding order. AWS Legal reviews and maintains records of all information requests, which list the types and volume of information requested. Unless AWS is prohibited from doing so or there is clear indication of illegal conduct in
connection with the use of Amazon products or services, AWS notifies customers before disclosing
customer content so they can seek protection from disclosure. AWS shares customer content only as
described in the AWS Customer Agreement.
AWS may produce non-content and/or content information in response to valid and binding law
enforcement and governmental requests, such as subpoenas, court orders, and search warrants. “Non-
content information” means customer information such as name, address, email address, billing
information, date of account creation, and service usage information. “Content information” includes the
content that a customer transfers for processing, storage, or hosting in connection with AWS services and
any computational results. AWS records customer information requests to maintain a complete, accurate,
and timely record of such requests.
If required, customers are responsible for providing notice to the individuals whose data the customer
collects and uses within AWS. AWS is not responsible for providing such notice to or obtaining consent
from these individuals and is only responsible for communicating its privacy commitments to AWS
customers, which is provided during the account creation and activation process.
AWS has documented an incident response policy and plan which outlines an organized approach for
responding to security breaches and incidents. The AWS Security team is responsible for monitoring
systems, tracking issues, and documenting findings of security-related events. Records are maintained for
security breaches and incidents, which include status information required for supporting forensic
activities, trend analysis, and evaluation of incident details.
As part of the process, potential breaches of customer content are investigated and escalated to AWS
Security and AWS Legal. Customers can subscribe to the AWS Security Bulletins page, which provides
information regarding identified security issues. AWS notifies affected customers and regulators of
breaches and incidents as legally required in accordance with team processes.
AWS retains and disposes of customer content in accordance with the Customer Agreement and the AWS
Data Classification and Handling Policy. When a customer terminates their account or contract with AWS,
the account is placed in isolation for 90 days, during which customers can restore their accounts and related content. AWS services hosting customer content are designed to retain customer content until the contractual obligation to retain it ends, or until the customer removes or deletes it.
AWS maintains an externally posted list, at https://aws.amazon.com/compliance/sub-processors/, of third-party sub-processors currently engaged by AWS to process customer data, depending on the AWS region and AWS service the customer selects. Before AWS authorizes and permits any new
third-party sub-processor to access any customer content, AWS will update the website to inform
customers. AWS maintains contracts with third-party sub-processors that define how access to customer
content is limited to the minimum levels necessary to provide the service described on the page and also
contain data protection, confidentiality commitments, and security requirements. AWS performs
application security reviews for each third-party sub-processor prior to integration with AWS to
ascertain and mitigate security risks. A typical security review considers privacy components, such as
retention period, use, and collection of data as applicable. The review starts with a system owner initiating
a review request to the dedicated AWS Vendor Security (AVS) team, and submitting detailed information
required for the review.
During this process, the AVS team determines the granularity of review required based on the type of
customer content that will be shared, design, threat model, and impact to AWS’ risk profile. They provide
security guidance, validate security assurance material, and meet with external parties to discuss their
penetration tests, Software Development Life Cycle, change management processes, and other operating
security controls. They work with the system owner to identify, prioritize, and remediate security findings.
The AVS team collaborates with AWS Legal as needed to validate that the content of the AVS reviews is in line with AWS privacy policies. The AVS team provides their final approval for the third-party system
after they have adequately assessed the risks and worked with the requester to implement security
controls to mitigate identified risks. These application security reviews are performed for new third-party sub-processors and renewed annually for every existing third-party sub-processor.