AWS SOC 3
We, as management of Amazon Web Services, Inc., are responsible for:
• Identifying the Amazon Web Services System (System) and describing the boundaries of the
System, which are presented in Attachment A
• Identifying our principal service commitments and system requirements
• Identifying the risks that would threaten the achievement of our principal service commitments
and system requirements that are the objectives of our system, which are presented in
Attachment A
• Identifying, designing, implementing, operating, and monitoring effective controls over the
System to mitigate risks that threaten the achievement of the principal service commitments
and system requirements
• Selecting the trust services categories and associated criteria that are the basis of our assertion
We confirm to the best of our knowledge and belief that the controls over the System were effective
throughout the period October 1, 2022 to September 30, 2023, to provide reasonable assurance that the
service commitments and system requirements were achieved based on the criteria relevant to security,
availability, confidentiality, and privacy set forth in the AICPA’s TSP section 100, 2017 Trust Services
Criteria for Security, Availability, Processing Integrity, Confidentiality, and Privacy (With Revised Points of
Focus – 2022).
Since 2006, Amazon Web Services (AWS) has provided flexible, scalable and secure IT infrastructure to
businesses of all sizes around the world. With AWS, customers can deploy solutions in a cloud computing
environment that provides compute power, storage, and other application services over the Internet as
their business needs demand. AWS affords businesses the flexibility to employ the operating systems,
application programs, and databases of their choice.
The scope of locations covered in this report includes the supporting data centers located in the following
regions:
Infrastructure
AWS operates the cloud infrastructure that customers may use to provision computing resources such as
processing and storage. The AWS infrastructure includes the facilities, network, and hardware as well as
some operational software (e.g., host operating system, virtualization software, etc.) that support the
provisioning and use of these resources. The AWS infrastructure is designed and managed in accordance
with security compliance standards and AWS best practices.
AWS Amplify
AWS Amplify is a set of tools and services that can be used together or on their own, to help front-end
web and mobile developers build scalable full stack applications, powered by AWS. With Amplify,
customers can configure an app backend and connect applications in minutes, deploy static web apps in a
few clicks, and easily manage app content outside of the AWS console. Amplify supports popular web
frameworks including JavaScript, React, Angular, Vue, Next.js, and mobile platforms including Android,
iOS, React Native, Ionic, and Flutter.
AWS AppSync
AWS AppSync is a service that allows customers to easily develop and manage GraphQL APIs. Once
deployed, AWS AppSync automatically scales the API execution engine up and down to meet API request
volumes. AWS AppSync offers GraphQL setup, administration, and maintenance, with high availability
serverless infrastructure built in.
Amazon Augmented AI (excludes Public Workforce and Vendor Workforce for all features)
Amazon Augmented AI (A2I) is a machine learning service which makes it easy to build the workflows
required for human review. Amazon A2I brings human review to all developers, removing the
undifferentiated heavy lifting associated with building human review systems or managing large numbers
of human reviewers whether it runs on AWS or not. The public and vendor workforce options of this
service are not in scope for purposes of this report.
AWS Backup
AWS Backup is a backup service that makes it easy to centralize and automate the backup of data across
AWS services in the cloud as well as on premises using the AWS Storage Gateway. Using AWS Backup,
customers can centrally configure backup policies and monitor backup activity for AWS resources, such as
Amazon EBS volumes, Amazon RDS databases, Amazon DynamoDB tables, Amazon EFS file systems, and
AWS Storage Gateway volumes. AWS Backup automates and consolidates backup tasks previously
performed service-by-service, removing the need to create custom scripts and manual processes.
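To illustrate the backup-policy model described above, the following sketch (not part of the report) shows how a customer might define a simple daily backup plan with the AWS SDK for Python (boto3); the plan name, rule name, vault name, and schedule are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal AWS Backup plan defined through boto3.
import boto3

backup = boto3.client("backup", region_name="us-east-1")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "DailyBackups",              # hypothetical plan name
        "Rules": [
            {
                "RuleName": "DailyRule",
                "TargetBackupVaultName": "Default",    # assumes the default vault exists
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)
print(plan["BackupPlanId"])
```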
AWS Batch
AWS Batch enables developers, scientists, and engineers to run batch computing jobs on AWS. AWS Batch
dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory
optimized instances) based on the volume and specific resource requirements of the batch jobs
submitted. AWS Batch plans, schedules, and executes customers’ batch computing workloads across the
full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.
Amazon Braket
Amazon Braket, the quantum computing service of AWS, is designed to help accelerate scientific research
and software development for quantum computing. Amazon Braket provides everything customers need
to build, test, and run quantum programs on AWS, including access to different types of quantum
computers and classical circuit simulators and a unified development environment for building and
executing quantum circuits. Amazon Braket also manages the classical infrastructure required for the
execution of hybrid quantum-classical algorithms. When customers choose to interact with quantum
computers provided by third parties, Amazon Braket anonymizes the content, so that only the content
necessary to process the quantum task is sent to the quantum hardware provider. No AWS account
information is shared and customer data is not stored outside of AWS.
AWS Chatbot
AWS Chatbot is an AWS service that enables DevOps and software development teams to use Slack or
Amazon Chime chat rooms to monitor and respond to operational events in their AWS Cloud. AWS
Chatbot processes AWS service notifications from Amazon Simple Notification Service (Amazon SNS), and
forwards them to Slack or Amazon Chime chat rooms so teams can analyze and act on them. Teams can
respond to AWS service events from a chat room where the entire team can collaborate, regardless of
location.
Amazon Chime
Amazon Chime is a communications service that lets customers meet, chat, and place business calls inside
and outside organizations, all using a single application. With Amazon Chime, customers can conduct and
attend online meetings with HD video, audio, screen sharing, meeting chat, dial-in numbers, and in-room
video conference support. Customers can use chat and chat rooms for persistent communications across
desktop and mobile devices. Customers are also able to administer enterprise users, manage policies, and
set up SSO or other advanced features in minutes using the Amazon Chime management console.
AWS Cloud9
AWS Cloud9 is an integrated development environment, or IDE. The AWS Cloud9 IDE offers a rich code-
editing experience with support for several programming languages and runtime debuggers, and a built-
in terminal. It contains a collection of tools that customers use to code, build, run, test, and debug
software, and helps customers release software to the cloud. Customers access the AWS Cloud9 IDE
through a web browser. Customers can configure the IDE to their preferences. Customers can switch color
themes, bind shortcut keys, enable programming language-specific syntax coloring and code formatting,
and more.
AWS Cloud Map
Customers can register any application resource, such as databases, queues, microservices, and other
cloud resources, with custom names. Cloud Map then constantly checks the health of resources to make
sure the location is up-to-date. The application can then query the registry for the location of the
resources needed based on the application version and deployment environment.
AWS CloudFormation
AWS CloudFormation is a service to simplify provisioning of AWS resources such as Auto Scaling groups,
ELBs, Amazon EC2, Amazon VPC, Amazon Route 53, and others. Customers author templates of the
infrastructure and applications they want to run on AWS, and the AWS CloudFormation service
automatically provisions the required AWS resources and their relationships as defined in these
templates.
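As an illustrative sketch (not part of the report), the following boto3 code launches a stack from a minimal template; the template body (a single S3 bucket) and the stack name are hypothetical.

```python
# Illustrative sketch only: provisioning resources from a CloudFormation template.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ExampleBucket": {"Type": "AWS::S3::Bucket"}  # hypothetical resource
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="example-stack",
    TemplateBody=json.dumps(template),
)
# CloudFormation then provisions the resources declared in the template.
```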
Amazon CloudFront [excludes content delivery through Amazon CloudFront Embedded Points of
Presence]
Amazon CloudFront is a fast content delivery network (CDN) web service that securely delivers data,
videos, applications and APIs to customers globally with low latency and high transfer speeds. CloudFront
offers the most advanced security capabilities, including field level encryption and HTTPS support,
seamlessly integrated with AWS Shield, AWS Web Application Firewall and Route 53 to protect against
multiple types of attacks including network and application layer DDoS attacks. These services co-reside
at edge networking locations – globally scaled and connected via the AWS network backbone – providing
a more secure, performant, and available experience for the users.
CloudFront delivers customers' content through a worldwide network of Edge locations. When an end
user requests content that customers serve with CloudFront, the user is routed to the Edge location that
provides the lowest latency, so content is delivered with the best possible performance. If the content is
already in that Edge location, CloudFront delivers it immediately.
In addition to Edge locations, CloudFront also uses Amazon Cloud Extension (ACE). ACE is a CloudFront
infrastructure (single-rack version) deployed to a non-Amazon controlled facility, namely an internet
service provider (ISP) or partner network. Qualifying Network Operators can deliver CloudFront content
efficiently and cost effectively from within their network by deploying ACE in their data centers.
AWS CloudHSM
AWS CloudHSM is a service that allows customers to use dedicated hardware security module (HSM)
appliances within the AWS cloud. AWS CloudHSM is designed for applications where the use of HSM
appliances for encryption and key storage is mandatory.
AWS acquires these production HSM devices securely from the vendors in tamper-evident, authenticable
bags. The serial numbers of these bags and of the production HSMs are verified against data provided
out-of-band by the manufacturer and logged by approved individuals in tracking systems.
AWS CloudHSM allows customers to store and use encryption keys within HSM appliances in AWS data
centers. With AWS CloudHSM, customers maintain full ownership, control, and access to keys and
sensitive data while Amazon manages the HSM appliances in close proximity to customer applications and
data. All HSM media is securely decommissioned and physically destroyed, verified by two personnel,
prior to leaving AWS Secure Zones.
AWS CloudShell
AWS CloudShell is a browser-based shell used to securely manage, explore, and interact with your AWS
resources. CloudShell is pre-authenticated with customer console credentials. Common development and
operations tools are pre-installed, so no local installation or configuration is required. With CloudShell,
customers can run scripts with the AWS Command Line Interface (AWS CLI) and experiment with AWS
service APIs.
AWS CloudTrail
AWS CloudTrail is a web service that records AWS activity for customers and delivers log files to a specified
Amazon S3 bucket. The recorded information includes the identity of the API caller, the time of the API
call, the source IP address of the API caller, the request parameters, and the response elements returned
by the AWS service.
AWS CloudTrail provides a history of AWS API calls for customer accounts, including API calls made via the
AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services (such as AWS
CloudFormation). The AWS API call history produced by AWS CloudTrail enables security analysis, resource
change tracking, and compliance auditing.
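For illustration only (not part of the report), the following boto3 sketch queries recent API activity recorded by CloudTrail; the event name used as a filter is a hypothetical example.

```python
# Illustrative sketch only: looking up recent CloudTrail events.
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "CreateBucket"}
    ],
    MaxResults=10,
)
for event in events["Events"]:
    # Each record includes the caller identity, time, and source of the API call.
    print(event["EventTime"], event["EventName"], event.get("Username"))
```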
Amazon CloudWatch
Amazon CloudWatch is a monitoring and management service built for developers, system operators, site
reliability engineers (SREs), and IT managers. CloudWatch provides customers with data and actionable
insights to monitor their applications, understand and respond to system-wide performance changes,
optimize resource utilization, and get a unified view of operational health. CloudWatch collects
monitoring and operational data in the form of logs, metrics, and events, providing customers with a
unified view of AWS resources, applications and services that run on AWS, and on-premises servers.
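As an illustrative sketch (not part of the report), the code below publishes a custom application metric with boto3; the namespace and metric name are hypothetical.

```python
# Illustrative sketch only: publishing a custom metric to CloudWatch.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="ExampleApp",                 # hypothetical namespace
    MetricData=[
        {
            "MetricName": "ProcessedOrders",
            "Timestamp": datetime.datetime.utcnow(),
            "Value": 42.0,
            "Unit": "Count",
        }
    ],
)
```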
AWS CodeBuild
AWS CodeBuild is a build service that compiles source code, runs tests, and produces software packages
that are ready to deploy. CodeBuild scales continuously and processes multiple builds concurrently, so
that customers’ builds are not left waiting in a queue. Customers can use prepackaged build environments
or can create custom build environments that use their own build tools. AWS CodeBuild eliminates the
need to set up, patch, update, and manage customers’ build servers and software.
AWS CodeCommit
AWS CodeCommit is a source control service that hosts secure Git-based repositories. It allows teams to
collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need for
customers to operate their own source control system or worry about scaling their infrastructure.
CodeCommit can be used to securely store anything from source code to binaries, and it works seamlessly
with the existing Git tools.
AWS CodePipeline
AWS CodePipeline is a continuous delivery service that helps customers automate release pipelines for
fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and
deploy phases of customers’ release process every time there is a code change, based on the release model
defined by the customer. This enables customers to rapidly and reliably deliver features and updates.
Customers can easily integrate AWS CodePipeline with third-party services such as GitHub or with their
own custom plugin.
Amazon Cognito
Amazon Cognito lets customers add user sign-up, sign-in, and permissions management to mobile and web
applications. Customers can create their own user directory within Amazon Cognito. Customers can also
choose to authenticate users through social identity providers such as Facebook, Twitter, or Amazon; with
SAML identity solutions; or by using customers' own identity system. In addition, Amazon Cognito enables
customers to save data locally on users' devices, allowing customers' applications to work even when the
devices are offline. Customers can then synchronize data across users' devices so that their app
experience remains consistent regardless of the device they use.
Amazon Comprehend
Amazon Comprehend is a natural language processing (NLP) service that uses machine learning to find
insights and relationships in text. Amazon Comprehend uses machine learning to help the customers
uncover insights and relationships in their unstructured data without machine learning experience. The
service identifies the language of the text; extracts key phrases, places, people, brands, or events;
understands how positive or negative the text is; analyzes text using tokenization and parts of speech;
and automatically organizes a collection of text files by topic.
AWS Config
AWS Config enables customers to assess, audit, and evaluate the configurations of their AWS resources.
AWS Config continuously monitors and records AWS resource configurations and allows customers to
automate the evaluation of recorded configurations against desired configurations. With AWS Config,
customers can review changes in configurations and relationships between AWS resources, dive into
detailed resource configuration histories, and determine overall compliance against the configurations
specified in their internal guidelines.
Amazon Connect
Amazon Connect is an easy-to-use omnichannel cloud contact center that helps customers provide
superior customer service across voice, chat, and tasks at lower cost than traditional contact center
systems. Amazon Connect simplifies contact center operations, improves agent efficiency and lowers
costs. Customers can set up a contact center in minutes that can scale to support millions of customers
from the office or as a virtual contact center.
AWS DataSync
AWS DataSync is an online data transfer service that simplifies, automates and accelerates moving data
between on-premises storage and AWS Storage services, as well as between AWS Storage services.
DataSync can copy data between Network File System (NFS), Server Message Block (SMB) file servers, self-
managed object storage, AWS Snowcone, Amazon Simple Storage Service (Amazon S3) buckets, Amazon
EFS file systems and Amazon FSx for Windows File Server file systems. DataSync automatically handles
many of the tasks related to data transfers that can slow down migrations or burden customers’ IT
operations, including running customers’ own instances, handling encryption, managing scripts, network
optimization, and data integrity validation.
Amazon Detective
Amazon Detective allows customers to easily analyze, investigate, and quickly identify the root cause of
potential security issues or suspicious activity. Amazon Detective collects log data from customer’s AWS
resources and uses machine learning, statistical analysis, and graph theory to build a linked set of data
that enables customers to conduct faster and more efficient security investigations. AWS Security services
can be used to identify potential security issues or findings.
Amazon Detective can analyze trillions of events from multiple data sources and automatically create a
unified, interactive view of the resources, users, and the interactions between them over time. With this
unified view, customers can visualize all the details and context in one place to identify the underlying
reasons for the findings, drill down into relevant historical activities, and quickly determine the root cause.
Amazon DevOps Guru
DevOps Guru uses ML models informed by years of Amazon.com and AWS operational excellence to
identify anomalous application behavior (for example, increased latency, error rates, resource constraints,
and others) and helps surface critical issues that could cause potential outages or service disruptions.
When DevOps Guru identifies a critical issue, it automatically sends an alert and provides a summary of
related anomalies, the likely root cause, and context for when and where the issue occurred. When
possible, DevOps Guru also helps provide recommendations on how to remediate the issue.
Amazon DynamoDB
Customers can create a database table that can store and retrieve data and serve any requested traffic.
Amazon DynamoDB automatically spreads the data and traffic for the table over a sufficient number of
servers to handle the request capacity specified and the amount of data stored, while maintaining
consistent, fast performance. All data items are stored on Solid State Drives (SSDs) and are automatically
replicated across multiple availability zones in a region.
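As an illustrative sketch (not part of the report), the following boto3 code creates an on-demand table and writes an item; the table and attribute names are hypothetical.

```python
# Illustrative sketch only: creating a DynamoDB table and writing an item.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",                     # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "OrderId", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "OrderId", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",          # DynamoDB handles capacity automatically
)

dynamodb.get_waiter("table_exists").wait(TableName="Orders")
dynamodb.put_item(
    TableName="Orders",
    Item={"OrderId": {"S": "1001"}, "Status": {"S": "NEW"}},
)
```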
Amazon Elastic Block Store (Amazon EBS)
Amazon EBS volumes are presented as raw unformatted block devices that have been wiped prior to being
made available for use. Wiping occurs before reuse. If customers have procedures requiring that all data
be wiped via a specific method, customers can conduct a wipe procedure prior to deleting the volume for
compliance with customer requirements. Amazon EBS includes Data Lifecycle Manager, which provides a
simple, automated way to back up data stored on Amazon EBS volumes.
Amazon Elastic Compute Cloud (Amazon EC2)
AWS prevents customers from accessing physical hosts or instances not assigned to them by filtering
through the virtualization software.
Amazon EC2 provides a complete firewall solution, referred to as a Security Group; this mandatory
inbound firewall is configured in a default deny-all mode and Amazon EC2 customers must explicitly open
the ports needed to allow inbound traffic.
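For illustration only (not part of the report), the sketch below shows how a customer might open a single inbound port on a Security Group with boto3, consistent with the default deny-all model; the group name, description, and CIDR range are hypothetical.

```python
# Illustrative sketch only: explicitly allowing inbound HTTPS on a Security Group.
import boto3

ec2 = boto3.client("ec2")

# Created in the account's default VPC; pass VpcId=... to target a specific VPC.
sg = ec2.create_security_group(
    GroupName="web-servers",
    Description="Example group: allow HTTPS only",
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```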
Amazon provides a Time Sync function for time synchronization in EC2 Linux instances with the
Coordinated Universal Time (UTC). It is delivered over the Network Time Protocol (NTP) and uses a fleet
of redundant satellite-connected and atomic clocks in each region to provide a highly accurate reference
clock via the local 169.254.169.123 IP address. Irregularities in the Earth’s rate of rotation that cause UTC
to drift with respect to the International Celestial Reference Frame (ICRF) by an extra second are called
leap seconds. Time Sync addresses this clock drift by smoothing out leap seconds over a period of time
(commonly called leap smearing) which makes it easy for customer applications to deal with leap seconds.
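As an illustrative sketch (not part of the report), the code below queries the Time Sync Service at its link-local address; it assumes the third-party "ntplib" package and that the code runs on an EC2 Linux instance.

```python
# Illustrative sketch only: querying the Amazon Time Sync Service over NTP.
import ntplib  # third-party package, assumed to be installed

client = ntplib.NTPClient()
response = client.request("169.254.169.123", version=3)

# The offset is the estimated difference between local time and the reference clock.
print("clock offset (seconds):", response.offset)
```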
Amazon Elastic Container Service [both Fargate and EC2 launch types]
Amazon Elastic Container Service is a highly scalable, high performance container management service
that supports Docker containers and allows customers to easily run applications on a managed cluster of
Amazon EC2 instances. Amazon Elastic Container Service eliminates the need for customers to install,
operate, and scale customers' own cluster management infrastructure. With simple API calls, customers
can launch and stop Docker-enabled applications, query the complete state of customers' clusters, and
access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles.
Customers can use Amazon Elastic Container Service to schedule the placement of containers across
customers' clusters based on customers' resource needs and availability requirements.
Amazon Elastic Kubernetes Service (EKS) [both Fargate and EC2 launch types]
Amazon Elastic Kubernetes Service (EKS) makes it easy to deploy, manage, and scale containerized
applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for
the customer across multiple AWS availability zones to eliminate a single point of failure. Amazon EKS is
certified Kubernetes conformant so the customers can use existing tooling and plugins from partners and
the Kubernetes community. Applications running on any standard Kubernetes environment are fully
compatible and can be easily migrated to Amazon EKS.
Amazon Elastic File System (Amazon EFS)
The customer is responsible for choosing which of their Virtual Private Clouds (VPCs) they want a file
system to be accessed from by creating resources called mount targets. One mount target exists for each
availability zone, which exposes an IP address and DNS name for mounting the customer’s file system
onto their EC2 instances. Customers then log into their EC2 instance and issue a ‘mount’ command,
pointing at their mount target’s IP address or DNS name. A mount target is assigned one or more VPC
security groups to which it belongs. The VPC security groups define rules for what VPC traffic can reach
the mount targets and in turn can reach the file system.
Amazon ElastiCache
Amazon ElastiCache automates management tasks for in-memory cache environments, such as patch
management, failure detection, and recovery. It works in conjunction with other AWS services to provide
a managed in-memory cache. For example, an application running in Amazon EC2 can securely access an
Amazon ElastiCache Cluster in the same region with very low latency.
Using the Amazon ElastiCache service, customers create a Cache Cluster, which is a collection of one or
more Cache Nodes, each running an instance of the Memcached, Redis Engine, or DAX Engine. A Cache
Node is a self-contained environment which provides a fixed-size chunk of secure, network-attached RAM.
Each Cache Node runs an instance of the Memcached, Redis Engine, or DAX Engine, and has its own DNS
name and port. Multiple types of Cache Nodes are supported, each with varying amounts of associated
memory.
Amazon EventBridge
Amazon EventBridge delivers a near real-time stream of events that describe changes in AWS resources.
Customers can configure routing rules to determine where to send collected data to build application
architectures that react in real time to the data sources. Amazon EventBridge becomes aware of
operational changes as they occur and responds to these changes by taking corrective action as necessary
by sending messages to respond to the environment, activating functions, making changes, and capturing
state information.
Amazon Forecast
Amazon Forecast uses machine learning to combine time series data with additional variables to build
forecasts. With Amazon Forecast, customers can import time series data and associated data into Amazon
Forecast from their Amazon S3 database. From there, Amazon Forecast automatically loads the data,
inspects it, and identifies the key attributes needed for forecasting. Amazon Forecast then trains and
optimizes a customer’s custom model and hosts it in a highly available environment where it can be
used to generate business forecasts.
Amazon Forecast is protected by encryption. Any content processed by Amazon Forecast is encrypted
with customer keys through AWS Key Management Service and encrypted at rest in the AWS Region
where a customer is using the service. Administrators can also control access to Amazon Forecast through
an AWS Identity and Access Management (IAM) permissions policy – ensuring that sensitive information
is kept secure and confidential.
FreeRTOS
FreeRTOS is an operating system for microcontrollers that makes small, low-power edge devices easy to
program, deploy, secure, connect, and manage. FreeRTOS extends the FreeRTOS kernel, a popular open
source operating system for microcontrollers, with software libraries that make it easy to securely connect
the small, low-power devices to AWS cloud services like AWS IoT Core or to more powerful edge devices
running AWS IoT Greengrass.
Amazon FSx
Amazon FSx provides fully managed third-party file systems. Amazon FSx provides the customers with the native
compatibility of third-party file systems with feature sets for workloads such as Windows-based storage,
high-performance computing (HPC), machine learning, and electronic design automation (EDA). The
customers don’t have to worry about managing file servers and storage, as Amazon FSx automates the
time-consuming administration tasks such as hardware provisioning, software configuration, patching,
and backups. Amazon FSx integrates the file systems with cloud-native AWS services, making them even
more useful for a broader set of workloads.
Amazon S3 Glacier
Amazon S3 Glacier is an archival storage solution for data that is infrequently accessed for which retrieval
times of several hours are suitable. Data in Amazon S3 Glacier is stored as an archive. Archives in Amazon
S3 Glacier can be created or deleted, but archives cannot be modified. Amazon S3 Glacier archives are
organized in vaults. All vaults created have a default permission policy that only permits access by the
account creator or users that have been explicitly granted permission. Amazon S3 Glacier enables
customers to set access policies on their vaults for users within their AWS Account. User policies can
express access criteria for Amazon S3 Glacier on a per vault basis. Customers can enforce Write Once Read
Many (WORM) semantics for users through user policies that forbid archive deletion.
AWS Glue
AWS Glue is an extract, transform, and load (ETL) service that makes it easy for customers to prepare and
load their data for analytics. The customers can create and run an ETL job with a few clicks in the AWS
Management Console.
Amazon GuardDuty
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and
unauthorized behavior to protect the customers’ AWS accounts and workloads. With the cloud, the
collection and aggregation of account and network activities is simplified, but it can be time consuming
for security teams to continuously analyze event log data for potential threats. With GuardDuty, the
customers now have an intelligent and cost-effective option for continuous threat detection in the AWS
Cloud.
AWS HealthLake
AWS HealthLake is a service offering healthcare and life sciences companies a complete view of individual
or patient population health data for query and analytics at scale. Using the HealthLake APIs, health
organizations can easily copy health data, such as imaging medical reports or patient notes, from on-
premises systems to a secure data lake in the cloud. HealthLake uses machine learning (ML) models to
automatically understand and extract meaningful medical information from the raw data, such as
medications, procedures, and diagnoses. HealthLake organizes and indexes information and stores it in
the Fast Healthcare Interoperability Resources (FHIR) industry standard format to provide a complete view
of each patient's medical history.
AWS HealthOmics
HealthOmics comprises three service components. Omics Storage efficiently ingests raw genomic
data into the Cloud, and it uses domain-specific compression to offer attractive storage prices to
customers. It also offers customers the ability to seamlessly access their data from various compute
environments. Omics Workflows runs bioinformatics workflows at scale in a fully-managed compute
environment. It supports three common bioinformatics domain-specific workflow languages. Omics
Analytics stores genomic variant and annotation data and allows customers to efficiently query and
analyze at scale.
VM Import/Export
VM Import/Export is a service that enables customers to import virtual machine images from their existing
environment to Amazon EC2 instances and export them back to their on premises environment. This
offering allows customers to leverage their existing investments in the virtual machines that customers
have built to meet their IT security, configuration management, and compliance requirements by bringing
those virtual machines into Amazon EC2 as ready-to-use instances. Customers can also export imported
instances back to their off-cloud virtualization infrastructure, allowing them to deploy workloads across
their IT infrastructure.
Amazon Inspector (Effective August 15, 2023)
Amazon Inspector is an automated vulnerability management service that continually scans AWS
workloads for software vulnerabilities and unintended network exposure. Amazon Inspector removes the
operational overhead associated with deploying and configuring a vulnerability management solution by
allowing customers to deploy Amazon Inspector across all accounts with a single step.
AWS IoT Device Management
Customers can also organize their devices, monitor and troubleshoot device functionality, query the state
of any IoT device in the fleet, and send firmware updates over-the-air (OTA). AWS IoT Device Management
is agnostic to device type and OS, so customers can manage devices from constrained microcontrollers to
connected cars all with the same service. AWS IoT Device Management allows customers to scale their
fleets and reduce the cost and effort of managing large and diverse IoT device deployments.
Amazon Kendra
Amazon Kendra is an intelligent search service powered by machine learning. Kendra reimagines
enterprise search for customer websites and applications so employees and customers can easily find
content, even when it's scattered across multiple locations and content repositories.
AWS Key Management Service (AWS KMS)
AWS KMS is integrated with several AWS services so that users can request that resources in those
services are encrypted with unique data encryption keys (DEKs) provisioned by KMS that are protected by
a KMS key the user chooses at the time the resource is created. See in-scope services integrated with KMS
at https://aws.amazon.com/kms/. Integrated services use the plaintext DEK from AWS KMS in volatile
memory of service-controlled endpoints; they do not store the plaintext DEK to persistent disk. An
encrypted copy of the DEK is stored to persistent disk by the service and passed back to AWS KMS for
decryption each time the DEK is needed to decrypt content the customer requests. DEKs provisioned by
AWS KMS are encrypted with a 256-bit key unique to the customer’s account under a defined mode of
AES – Advanced Encryption Standard.
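For illustration only (not part of the report), the sketch below shows the envelope-encryption pattern described above using boto3; the KMS key alias is hypothetical, and only the encrypted copy of the data key would be persisted alongside the data.

```python
# Illustrative sketch only: requesting and later decrypting a data encryption key (DEK).
import boto3

kms = boto3.client("kms")

# Ask KMS for a 256-bit DEK protected by the chosen KMS key.
dek = kms.generate_data_key(KeyId="alias/example-key", KeySpec="AES_256")
plaintext_key = dek["Plaintext"]        # used only in memory to encrypt content
encrypted_key = dek["CiphertextBlob"]   # safe to persist next to the ciphertext

# When the content needs to be read, the encrypted DEK is passed back to KMS.
restored = kms.decrypt(CiphertextBlob=encrypted_key)
assert restored["Plaintext"] == plaintext_key
```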
When a customer requests AWS KMS to create a KMS key, the service creates a key ID for the KMS key
and (optionally) key material, referred to as a backing key, which is tied to the key ID of the KMS key. The
256-bit backing key can only be used for encrypt or decrypt operations by the service. Customers can
choose to have a KMS key ID created and then securely import their own key material to associate with
the key ID. If the customer chooses to enable key rotation for a KMS key with a backing key that the service
generated, AWS KMS will create a new version of the backing key for each rotation event, but the key ID
remains the same. All future encrypt operations under the key ID will use the newest backing key, while
all previous versions of backing keys are retained to decrypt ciphertexts created under the previous
version of the key. Backing keys and customer-imported keys are encrypted under AWS-controlled keys
when created/imported and they are only ever stored on disk in encrypted form.
All requests to AWS KMS APIs are logged and available in the AWS CloudTrail of the requester and the
owner of the key. The logged requests provide information about who made the request, under which
KMS key, and describes information about the AWS resource that was protected through the use of the
KMS key. These log events are visible to the customer after turning on AWS CloudTrail in their account.
AWS KMS creates and manages multiple distributed replicas of KMS keys and key metadata automatically
to enable high availability and data durability. KMS keys themselves are regional objects; plaintext
versions of the KMS keys can only be used in the AWS region in which they were created. KMS keys are
only stored on persistent disk in encrypted form and in two separate storage systems to ensure durability.
When a plaintext KMS key is needed to fulfill an authorized customer request, it is retrieved from storage,
decrypted on one of many AWS KMS hardened security appliances in the region, then used only in
memory to execute the cryptographic operation (e.g., encrypt or decrypt). The plaintext key is then
marked for deletion so that it cannot be re-used. Future requests to use the KMS key each require the
decryption of the KMS key in memory for another one-time use.
AWS KMS endpoints are only accessible via TLS using the following cipher suites that support forward
secrecy:
• ECDHE-RSA-AES256-GCM-SHA384
• ECDHE-RSA-AES128-GCM-SHA256
• ECDHE-RSA-AES256-SHA384
• ECDHE-RSA-AES256-SHA
• ECDHE-RSA-AES128-SHA256
• ECDHE-RSA-3DES-CBC3-SHA
• DHE-RSA-AES256-SHA256 (ParamSize: 2048)
• DHE-RSA-AES128-SHA256 (ParamSize: 2048)
• DHE-RSA-AES256-SHA (ParamSize: 2048)
• DHE-RSA-AES128-SHA (ParamSize: 2048)
By design, no one can gain access to the plaintext KMS key material. Plaintext KMS keys are only ever
present on hardened security appliances for the amount of time needed to perform cryptographic
operations under them. AWS employees have no tools to retrieve plaintext keys from these hardened
security appliances. In addition, multi-party access controls are enforced for operations on these
hardened security appliances that involve changing the software configuration or introducing new
hardened security appliances into the service. These multi-party access controls minimize the possibility
of an unauthorized change to the hardened security appliances, exposing plaintext key material outside
the service, or allowing unauthorized use of customer keys. Additionally, key material used for disaster
recovery processes by KMS are physically secured such that no AWS employee can gain access. Access
attempts to recovery key materials are reviewed by authorized operators on a periodic basis. Roles and
responsibilities for those cryptographic custodians with access to systems that store or use key material
are formally documented and acknowledged.
AWS Lambda
AWS Lambda lets customers run code without provisioning or managing servers on their own. AWS
Lambda uses a compute fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances across multiple
Availability Zones in a region, which provides the high availability, security, performance, and scalability
of the AWS infrastructure.
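As an illustrative sketch (not part of the report), the following boto3 code invokes an existing Lambda function synchronously; the function name and payload are hypothetical.

```python
# Illustrative sketch only: synchronously invoking a Lambda function.
import json
import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="example-function",      # hypothetical function name
    InvocationType="RequestResponse",     # synchronous invocation
    Payload=json.dumps({"orderId": "1001"}),
)
print(json.loads(response["Payload"].read()))
```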
Amazon Lex
Amazon Lex is a service for building conversational interfaces into any application using voice and text.
Amazon Lex provides the advanced deep learning functionalities of automatic speech recognition (ASR)
for converting speech to text, and natural language understanding (NLU) to recognize the intent of the
text, to enable customers to build applications with highly engaging user experiences and lifelike
conversational interactions. Amazon Lex scales automatically, so customers do not need to worry about
managing infrastructure.
AWS License Manager
AWS License Manager integrates with AWS services to simplify the management of licenses across
multiple AWS accounts, IT catalogs, and on-premises, through a single AWS account.
Amazon Macie
Amazon Macie is a data security and data privacy service that uses machine learning and pattern matching
to help customers discover, monitor, and protect their sensitive data in AWS.
Macie automates the discovery of sensitive data, such as personally identifiable information (PII) and
financial data, to provide customers with a better understanding of the data that their organization stores
in Amazon Simple Storage Service (Amazon S3). Macie also provides customers with an inventory of their S3
buckets, and it automatically evaluates and monitors those buckets for security and access control. Within
minutes, Macie can identify and report overly permissive or unencrypted buckets for the organization.
If Macie detects sensitive data or potential issues with the security or privacy of customer content, it
creates detailed findings for customers to review and remediate as necessary. Customers can review and
analyze these findings directly in Macie, or monitor and process them by using other services, applications,
and systems.
Amazon MemoryDB for Redis
Amazon MemoryDB for Redis is compatible with Redis, an open source data store, enabling customers to
quickly build applications using the same flexible Redis data structures, APIs, and commands that they
already use today. With Amazon MemoryDB for Redis, all of the customer’s data is stored in memory,
which enables the customer to achieve microsecond read and single-digit millisecond write latency and
high throughput. Amazon MemoryDB for Redis also stores data durably across multiple Availability Zones
(AZs) using a distributed transactional log to enable fast failover, database recovery, and node restarts.
Delivering both in-memory performance and Multi-AZ durability, Amazon MemoryDB for Redis can be
used as a high-performance primary database for microservices applications eliminating the need to
separately manage both a cache and durable database.
Amazon MQ
Amazon MQ is a managed message broker service for Apache ActiveMQ that sets up and operates
message brokers in the cloud. Message brokers allow different software systems – often using different
programming languages, and on different platforms – to communicate and exchange information.
Messaging is the communications backbone that connects and integrates the components of distributed
applications, such as order processing, inventory management, and order fulfillment for e-commerce.
Amazon MQ manages the administration and maintenance of ActiveMQ, a popular open-source message
broker.
AWS Health Dashboard
The dashboard displays relevant and timely information to help customers manage events in progress and
provides proactive notification to help customers plan for scheduled activities. With AWS Health
Dashboard, alerts are triggered by changes in the health of AWS resources, giving event visibility, and
guidance to help quickly diagnose and resolve issues.
AWS OpsWorks for Puppet Enterprise is a configuration management service that hosts Puppet
Enterprise, a set of automation tools from Puppet for infrastructure and application management.
OpsWorks also maintains customers’ Puppet master server by automatically patching, updating, and
backing up customers’ servers. OpsWorks eliminates the need for customers to operate their own
configuration management systems or worry about maintaining its infrastructure. OpsWorks gives
customers access to all of the Puppet Enterprise features, which customers manage through the Puppet
console. It also works seamlessly with customers’ existing Puppet code.
AWS Organizations
AWS Organizations helps customers centrally govern their environment as customers grow and scale their
workloads on AWS. Whether customers are a growing startup or a large enterprise, Organizations helps
customers to centrally manage billing; control access, compliance, and security; and share resources
across customer AWS accounts.
Using AWS Organizations, customers can automate account creation, create groups of accounts to reflect
their business needs, and apply policies for these groups for governance. Customers can also simplify
billing by setting up a single payment method for all of their AWS accounts. Through integrations with
other AWS services, customers can use Organizations to define central configurations and resource
sharing across accounts in their organization.
AWS Outposts
AWS Outposts is a service that extends AWS infrastructure, AWS services, APIs and tools to any data
center, co-location space, or an on-premises facility for a consistent hybrid experience. AWS Outposts is
ideal for workloads that require low latency access to on-premises systems, local data processing or local
data storage. Outposts offer the same AWS hardware infrastructure, services, APIs and tools to build and
run applications on premises and in the cloud. AWS compute, storage, database and other services run
locally on Outposts and customers can access the full range of AWS services available in the Region to
build, manage and scale on-premises applications. Service Link is established between Outposts and the
AWS region by use of a secured VPN connection over the public internet or AWS Direct Connect.
AWS Outposts are configured with a Nitro Security Key (NSK) which is designed to encrypt customer
content and give customers the ability to mechanically remove content from the device. Customer
content is cryptographically shredded if a customer removes the NSK from an Outpost device.
Additional information about Security in AWS Outposts, including the shared responsibility model, can be
found in the AWS Outposts User Guide.
Amazon Personalize
Amazon Personalize is a machine learning service that makes it easy for developers to create
individualized recommendations for customers using their applications. Amazon Personalize makes it easy
for developers to build applications capable of delivering a wide array of personalization experiences,
including specific product recommendations, personalized product re-ranking and customized direct
marketing. Amazon Personalize goes beyond rigid static rule-based recommendation systems and trains,
tunes, and deploys custom machine learning models to deliver highly customized recommendations to
customers across industries such as retail, media and entertainment.
Amazon Pinpoint
Amazon Pinpoint helps customers engage with their customers by sending email, SMS, and mobile push
messages. The customers can use Amazon Pinpoint to send targeted messages (such as promotional alerts
and customer retention campaigns), as well as direct messages (such as order confirmations and password
reset messages) to their customers.
Amazon Polly
Amazon Polly is a service that turns text into lifelike speech, allowing customers to create applications
that talk, and build entirely new categories of speech-enabled products. Amazon Polly is a Text-to-
Speech service that uses advanced deep learning technologies to synthesize speech that sounds like a
human voice.
Amazon QuickSight
Amazon QuickSight is a fast, cloud-powered business analytics service that makes it easy to build
visualizations, perform ad-hoc analysis, and quickly get business insights from customers’ data. Using this
cloud-based service customers can connect to their data, perform advanced analysis, and create
visualizations and dashboards that can be accessed from any browser or mobile device.
Amazon Redshift
Amazon Redshift is a data warehouse service to analyze data using a customer’s existing Business
Intelligence (BI) tools. Amazon Redshift also includes Redshift Spectrum, allowing customers to directly
run SQL queries against Exabytes of unstructured data in Amazon S3.
Amazon Rekognition
The easy-to-use Rekognition API allows customers to automatically identify objects, people, text, scenes,
and activities, as well as detect any inappropriate content. Developers can quickly build a searchable
content library to optimize media workflows, enrich recommendation engines by extracting text in
images, or integrate secondary authentication into existing applications to enhance end-user security.
With a wide variety of use cases, Amazon Rekognition enables the customers to easily add the benefits of
computer vision to the business.
AWS RoboMaker
AWS RoboMaker is a service that makes it easy to develop, test, and deploy intelligent robotics
applications at scale. RoboMaker extends the most widely used open-source robotics software
framework, Robot Operating System (ROS), with connectivity to cloud services. This includes AWS
machine learning services, monitoring services, and analytics services that enable a robot to stream data,
navigate, communicate, comprehend, and learn. RoboMaker provides a robotics development
environment for application development, a robotics simulation service to accelerate application testing,
and a robotics fleet management service for remote application deployment, update, and management.
Amazon Route 53
Amazon Route 53 provides a managed Domain Name System (DNS) web service. Amazon Route 53 connects
user requests to infrastructure running both inside and outside of AWS. Customers can use Amazon Route
53 to configure DNS health checks to route traffic to healthy endpoints or to independently monitor the
health of their application and its endpoints. Amazon Route 53 enables customers to manage traffic
globally through a variety of routing types, including Latency Based Routing, Geo DNS, and Weighted
Round Robin, all of which can be combined with DNS Failover. Amazon Route 53 also offers
Domain Name Registration; customers can purchase and manage domain names such as example.com
and Amazon Route 53 will automatically configure DNS settings for their domains. Amazon Route 53 sends
automated requests over the internet to a resource, such as a web server, to verify that it is reachable,
available, and functional. Customers also can choose to receive notifications when a resource becomes
unavailable and choose to route internet traffic away from unhealthy resources.
Amazon SageMaker (excludes Studio Lab, Public Workforce and Vendor Workforce for all features)
Amazon SageMaker is a platform that enables developers and data scientists to quickly and easily build,
train, and deploy machine learning models at any scale. Amazon SageMaker removes the barriers that
typically “slow down” developers who want to use machine learning.
Amazon SageMaker removes the complexity that holds back developer success with the process of
building, training, and deploying machine learning models at scale. Amazon SageMaker includes modules
that can be used together or independently to build, train, and deploy a customer’s machine learning
models.
AWS Shield
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards web
applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations
that minimize application downtime and latency, so there is no need to engage AWS Support to benefit
from DDoS protection.
Amazon Simple Queue Service (Amazon SQS)
Amazon SQS’ main components consist of a frontend request-router fleet, a backend data-storage fleet,
a metadata cache fleet, and a dynamic workload management fleet. User queues are mapped to one or
more backend clusters. Requests to read, write, or delete messages come into the frontends. The
frontends contact the metadata cache to find out which backend cluster hosts that queue and then
connect to nodes in that cluster to service the request.
For authorization, Amazon SQS has its own resource-based permissions system that uses policies written
in the same language used for AWS IAM policies. User permissions for any Amazon SQS resource can be
given either through the Amazon SQS policy system or through AWS Identity and Access Management
(IAM) policies. Policies associated with a queue are used to specify which AWS accounts have access to
the queue, as well as the type of access and conditions.
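For illustration only (not part of the report), the sketch below grants another AWS account permission to send messages to a queue through the Amazon SQS permission system; the queue URL, label, and account ID are hypothetical.

```python
# Illustrative sketch only: adding a queue permission for another AWS account.
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/example-queue"

sqs.add_permission(
    QueueUrl=queue_url,
    Label="AllowPartnerSend",           # hypothetical permission label
    AWSAccountIds=["444455556666"],     # hypothetical account ID
    Actions=["SendMessage"],
)
```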
Amazon Simple Storage Service (Amazon S3)
An authenticated user can read an object only if the user has been granted read permissions in an Access
Control List (ACL) at the object level. An authenticated user can list the keys and create or overwrite
objects in a bucket only if the user has been granted read and write permissions in an ACL at the bucket
level. Bucket and object-level ACLs are independent; an object does not inherit ACLs from its bucket.
Permissions to read or modify the bucket or object ACLs are themselves controlled by ACLs that default
to creator-only access. Therefore, the customer maintains full control over who has access to its data.
Customers can grant access to their Amazon S3 data to other AWS users by AWS Account ID or email, or
DevPay Product ID. Customers can also grant access to their Amazon S3 data to all AWS users or to
everyone (enabling anonymous access).
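As an illustrative sketch (not part of the report), the code below grants public read access on a single object while leaving the bucket ACL unchanged, reflecting the independence of object- and bucket-level ACLs described above; the bucket name and key are hypothetical, and the bucket's public-access settings must allow the grant.

```python
# Illustrative sketch only: object ACLs are independent of the bucket ACL.
import boto3

s3 = boto3.client("s3")

s3.put_object(Bucket="example-bucket", Key="report.txt", Body=b"hello")

# Grant anonymous read access to this one object only (a canned ACL).
s3.put_object_acl(Bucket="example-bucket", Key="report.txt", ACL="public-read")

# The bucket ACL still defaults to creator-only access; compare the two grant lists.
print(s3.get_object_acl(Bucket="example-bucket", Key="report.txt")["Grants"])
print(s3.get_bucket_acl(Bucket="example-bucket")["Grants"])
```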
Network devices supporting Amazon S3 are configured to only allow access to specific ports on other
Amazon S3 server systems. External access to data stored in Amazon S3 is logged and the logs are retained
for at least 90 days, including relevant access request information, such as the data accessor IP address,
object, and operation.
Amazon Simple Workflow Service (Amazon SWF)
Developers implement workers to perform tasks. They run their workers either on cloud infrastructure,
such as Amazon EC2, or off-cloud. Tasks can be long-running, may fail, may timeout and may complete
with varying throughputs and latencies. Amazon SWF stores tasks for workers, assigns them when workers
are ready, tracks their progress, and keeps their latest state, including details on their completion. To
orchestrate tasks, developers write programs that get the latest state of tasks from Amazon SWF and use
it to initiate subsequent tasks in an ongoing manner. Amazon SWF maintains an application’s execution
state durably so that the application can be resilient to failures in individual application components.
Amazon SWF provides auditability by giving customers visibility into the execution of each step in the
application. The Management Console and APIs let customers monitor all running executions of the
application. The customer can zoom in on any execution to see the status of each task and its input and
output data. To facilitate troubleshooting and historical analysis, Amazon SWF retains the history of
executions for any number of days that the customer can specify, up to a maximum of 90 days.
The actual processing of tasks happens on compute resources owned by the end customer. Customers
are responsible for securing these compute resources; for example, if a customer uses Amazon EC2 for
workers, they can restrict access to their Amazon EC2 instances to specific AWS IAM users. In
addition, customers are responsible for encrypting sensitive data before it is passed to their workflows
and decrypting it in their workers.
Amazon SimpleDB
Amazon SimpleDB is a non-relational data store that allows customers to store and query data items via
web services requests. Amazon SimpleDB then creates and manages multiple geographically distributed
replicas of data automatically to enable high availability and data durability.
Data in Amazon SimpleDB is stored in domains, which are similar to database tables except that functions
cannot be performed across multiple domains. Amazon SimpleDB APIs provide domain-level controls that
only permit authenticated access by the domain creator.
Data stored in Amazon SimpleDB is redundantly stored in multiple physical locations as part of normal
operation of those services. Amazon SimpleDB provides object durability by protecting data across
multiple availability zones on the initial write and then actively doing further replication in the event of
device unavailability or detected bit-rot.
AWS Snowball
Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts
of data into and out of the AWS cloud. Using Snowball addresses common challenges with large-scale data
transfers including high network costs, long transfer times, and security concerns. Transferring data with
Snowball is simple and secure.
AWS Snowmobile
AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data
to AWS. Customers can transfer exabytes of data via a 45-foot long ruggedized shipping container, pulled
by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including
video libraries, image repositories, or even a complete data center migration. After a customer’s data is
loaded, Snowmobile is driven back to AWS where their data is imported into Amazon S3 or Amazon
Glacier.
AWS Storage Gateway
The File Gateway allows customers to copy data to S3 and have those files appear as individual objects in
S3. Volume gateways store data directly in Amazon S3 and allow customers to snapshot their data so that
they can access previous versions of their data. These snapshots are captured as Amazon EBS Snapshots,
which are also stored in Amazon S3. Both Amazon S3 and Amazon Glacier redundantly store these
snapshots on multiple devices across multiple facilities, detecting and repairing any lost redundancy. The
Amazon EBS snapshot provides a point-in-time backup that can be restored off-cloud or on a gateway
running in Amazon EC2, or used to instantiate new Amazon EBS volumes. Data is stored within a single
region that customers specify.
AWS Systems Manager
With AWS Systems Manager, customers can group resources, like Amazon EC2 instances, Amazon S3
buckets, or Amazon RDS instances, by application, view operational data for monitoring and
troubleshooting, and take action on groups of resources.
Amazon Textract
Amazon Textract automatically extracts text and data from scanned documents. With Textract customers
can quickly automate document workflows, enabling customers to process large volumes of document
pages in a short period of time. Once the information is captured, customers can take action on it within
their business applications to initiate next steps for a loan application or medical claims processing.
Additionally, customers can create search indexes, build automated approval workflows, and better
maintain compliance with document archival rules by flagging data that may require redaction.
Amazon Timestream
Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational
applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster
and at as little as 1/10th the cost of relational databases. Amazon Timestream saves customers time and
cost in managing the lifecycle of time series data by keeping recent data in memory and moving historical
data to a cost optimized storage tier based upon user defined policies. Amazon Timestream's purpose-
built query engine lets customers access and analyze recent and historical data together, without needing
to specify explicitly in the query whether the data resides in the in-memory or cost-optimized tier. Amazon
Timestream has built-in time series analytics functions, helping customers identify trends and patterns in
data in real-time.
Amazon Transcribe
Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert
speech to text quickly. Amazon Transcribe can be used to transcribe customer service calls, to automate
closed captioning and subtitling, and to generate metadata for media assets to create a fully searchable
archive.
Amazon Transcribe automatically adds punctuation and formatting so that the output closely matches the
quality of manual transcription at a fraction of the time and expense.
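For illustration, a minimal sketch (assuming a placeholder job name and media location) shows how a customer might start a transcription job with the AWS SDK for Python (boto3):

    import boto3

    transcribe = boto3.client("transcribe", region_name="us-east-1")

    # Illustrative only: job name and media URI are placeholders.
    transcribe.start_transcription_job(
        TranscriptionJobName="example-support-call-001",
        LanguageCode="en-US",
        MediaFormat="mp3",
        Media={"MediaFileUri": "s3://example-bucket/support-call.mp3"},
    )

    # Poll for completion; the finished transcript (with punctuation) is returned via a URI.
    job = transcribe.get_transcription_job(TranscriptionJobName="example-support-call-001")
    print(job["TranscriptionJob"]["TranscriptionJobStatus"])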
Amazon Translate
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable
language translation. Neural machine translation is a form of language translation automation that uses
deep learning models to deliver more accurate and more natural sounding translation than traditional
statistical and rule-based translation algorithms. Amazon Translate allows customers to localize content
- such as websites and applications - for international users, and to easily translate large volumes of text
efficiently.
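For illustration, a minimal sketch shows how a customer might translate a short string from English to Spanish with the AWS SDK for Python (boto3); the text and language codes are examples only:

    import boto3

    translate = boto3.client("translate", region_name="us-east-1")

    # Illustrative only: translate a short string from English to Spanish.
    result = translate.translate_text(
        Text="Welcome to our website.",
        SourceLanguageCode="en",
        TargetLanguageCode="es",
    )
    print(result["TranslatedText"])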
Amazon Virtual Private Cloud (VPC)
The objective of this architecture is to isolate AWS resources and data in one Amazon VPC from another Amazon VPC, and to help prevent data from being transferred outside of the Amazon network except where the customer has specifically configured internet connectivity options or an IPsec VPN connection to their off-cloud network.
• Virtual Private Cloud (VPC): An Amazon VPC is an isolated portion of the AWS cloud within which customers can deploy Amazon EC2 instances into subnets that segment the VPC’s IP address range (as designated by the customer) and isolate Amazon EC2 instances in one subnet from another. Amazon EC2 instances within an Amazon VPC are accessible to customers via an Internet Gateway (IGW), a Virtual Private Gateway (VGW), or VPC peering connections established to the Amazon VPC.
• IPsec VPN: An IPsec VPN connection connects a customer’s Amazon VPC to another network
designated by the customer. IPsec is a protocol suite for securing Internet Protocol (IP)
communications by authenticating and encrypting each IP packet of a data stream. Amazon VPC
customers can create an IPsec VPN connection to their Amazon VPC by first establishing an
Internet Key Exchange (IKE) security association between their Amazon VPC VPN gateway and
another network gateway using a pre-shared key as the authenticator. Upon establishment, IKE
negotiates an ephemeral key to secure future IKE messages. An IKE security association cannot
be established unless there is complete agreement among the parameters. Next, using the IKE
ephemeral key, two keys in total are established between the VPN gateway and customer
gateway to form an IPsec security association. Traffic between gateways is encrypted and
decrypted using this security association. IKE automatically rotates the ephemeral keys used to
encrypt traffic within the IPsec security association on a regular basis to ensure confidentiality of
communications. (An illustrative configuration sketch follows this list.)
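As an illustration of the connectivity options described above, the following minimal sketch (assuming placeholder gateway identifiers created beforehand) uses the AWS SDK for Python (boto3) to request a Site-to-Site IPsec VPN connection between a customer gateway and a VPC’s virtual private gateway:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Illustrative only: gateway IDs are placeholders. The customer gateway represents the
    # off-cloud device; the virtual private gateway is attached to the Amazon VPC.
    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId="cgw-0123456789abcdef0",
        VpnGatewayId="vgw-0123456789abcdef0",
        Options={"StaticRoutesOnly": True},
    )

    # AWS returns two tunnel configurations (endpoints and pre-shared keys) used for the IKE/IPsec setup.
    print(vpn["VpnConnection"]["VpnConnectionId"])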
AWS WAF
AWS WAF is a web application firewall that helps protect customer web applications from common web
exploits that could affect application availability, compromise security, or consume excessive resources.
Customers can use AWS WAF to create custom rules that block common attack patterns, such as SQL
injection or cross-site scripting, and rules that are designed for their specific application. New rules can be
deployed within minutes, letting customers respond quickly to changing traffic patterns. Also, AWS WAF
includes a full-featured API that customers can use to automate the creation, deployment, and
maintenance of web security rules.
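As a minimal sketch of the rule automation described above (assuming placeholder web ACL and metric names), a customer might create a regional web ACL with a rule that blocks SQL injection patterns using the AWS SDK for Python (boto3):

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    # Illustrative only: names and scope are placeholders. The rule blocks requests whose
    # body matches a SQL injection pattern.
    wafv2.create_web_acl(
        Name="example-web-acl",
        Scope="REGIONAL",
        DefaultAction={"Allow": {}},
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "exampleWebAcl",
        },
        Rules=[
            {
                "Name": "block-sql-injection",
                "Priority": 0,
                "Action": {"Block": {}},
                "VisibilityConfig": {
                    "SampledRequestsEnabled": True,
                    "CloudWatchMetricsEnabled": True,
                    "MetricName": "blockSqlInjection",
                },
                "Statement": {
                    "SqliMatchStatement": {
                        "FieldToMatch": {"Body": {}},
                        "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
                    }
                },
            }
        ],
    )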
Amazon WorkDocs
Amazon WorkDocs is a secure content creation, storage and collaboration service. Users can share files,
provide rich feedback, and access their files on WorkDocs from any device. WorkDocs encrypts data in
transit and at rest, and offers powerful management controls, Active Directory integration, and near real-
time visibility into file and user actions. The WorkDocs SDK allows users to use the same AWS tools they
are already familiar with to integrate WorkDocs with AWS products and services, their existing solutions,
third-party applications, or build their own.
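For illustration, a minimal sketch (assuming a placeholder organization/directory ID) uses the WorkDocs API via the AWS SDK for Python (boto3) to list users in a WorkDocs site:

    import boto3

    workdocs = boto3.client("workdocs", region_name="us-east-1")

    # Illustrative only: the organization (directory) ID is a placeholder.
    users = workdocs.describe_users(OrganizationId="d-1234567890")
    for user in users["Users"]:
        print(user["Username"], user["Status"])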
Amazon WorkMail
Amazon WorkMail is a managed business email and calendaring service with support for existing desktop
and mobile email clients. It allows access to email, contacts, and calendars using Microsoft Outlook, a
browser, or native iOS and Android email applications. Amazon WorkMail can be integrated with a
customer’s existing corporate directory and the customer controls both the keys that encrypt the data
and the location (AWS Region) under which the data is stored.
Customers can create an organization in Amazon WorkMail, select the Active Directory they wish to
integrate with, and choose their encryption key to apply to all customer content. After setup and
validation of their mail domain, users from the Active Directory are selected or added, enabled for Amazon
WorkMail, and given an email address identity inside the customer owned mail domain.
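As a minimal sketch of the setup flow described above (assuming placeholder directory, KMS key, and email values), a customer might create a WorkMail organization with a customer-managed key and enable a directory user using the AWS SDK for Python (boto3):

    import boto3

    workmail = boto3.client("workmail", region_name="us-east-1")

    # Illustrative only: directory ID, alias, and KMS key ARN are placeholders.
    org = workmail.create_organization(
        DirectoryId="d-1234567890",
        Alias="example-corp",
        KmsKeyArn="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    )

    # Look up directory users available to the organization, then enable one for WorkMail
    # with an address in the customer-owned mail domain.
    users = workmail.list_users(OrganizationId=org["OrganizationId"])
    workmail.register_to_work_mail(
        OrganizationId=org["OrganizationId"],
        EntityId=users["Users"][0]["Id"],
        Email="jane.doe@example.com",
    )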
Amazon WorkSpaces
Amazon WorkSpaces is a managed desktop computing service in the cloud. Amazon WorkSpaces enables
customers to deliver a high-quality desktop experience to end-users as well as help meet compliance and
security policy requirements. When using Amazon WorkSpaces, an organization’s data is neither sent to
nor stored on end-user devices. The PCoIP protocol used by Amazon WorkSpaces uses an interactive video
stream to provide the desktop experience to the user while the data remains in the AWS cloud or in the
organization’s off-cloud environment.
When Amazon WorkSpaces is integrated with a corporate Active Directory, each WorkSpace joins the
Active Directory domain, and can be managed like any other desktop in the organization. This means that
customers can use Active Directory Group Policies to manage their Amazon WorkSpaces and can specify
configuration options that control the desktop, including those that restrict users’ abilities to use local
storage on their devices. Amazon WorkSpaces also integrates with customers’ existing RADIUS server to
enable multi-factor authentication (MFA).
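For illustration, a minimal sketch (assuming placeholder directory, bundle, and user identifiers) shows how a customer might provision a WorkSpace for a directory user with the AWS SDK for Python (boto3):

    import boto3

    workspaces = boto3.client("workspaces", region_name="us-east-1")

    # Illustrative only: directory, bundle, and user name are placeholders. The directory
    # corresponds to the corporate Active Directory the WorkSpace joins.
    workspaces.create_workspaces(
        Workspaces=[
            {
                "DirectoryId": "d-1234567890",
                "UserName": "jane.doe",
                "BundleId": "wsb-EXAMPLEID",
                "WorkspaceProperties": {"RunningMode": "AUTO_STOP"},
            }
        ]
    )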
Overview
Amazon Web Services (AWS) designs its processes and procedures to meet its objectives for the AWS
System. Those objectives are based on the service commitments that AWS makes to user entities
(customers), the laws and regulations that govern the provision of the AWS System, and the financial,
operational and compliance requirements that AWS has established for the services.
The AWS services are subject to relevant regulations, as well as state privacy and security laws and regulations
in the jurisdictions in which AWS operates.
Security, Availability and Confidentiality commitments to customers are documented and communicated
in Service Level Agreements (SLAs) and other customer agreements, as well as in the description of the
service offering provided on the AWS website. Security, Availability and Confidentiality commitments are
standardized and include, but are not limited to, the following:
• Security and confidentiality principles inherent to the fundamental design of the AWS System are
designed to appropriately restrict unauthorized internal and external access to data and customer
data is appropriately segregated from other customers.
• Security and confidentiality principles inherent to the fundamental design of the AWS System are
designed to safeguard data from within and outside of the boundaries of environments which
store a customer’s content to meet the service commitments.
• Availability principles inherent to the fundamental design of the AWS System are designed to
replicate critical system components across multiple Availability Zones and authoritative backups
are maintained and monitored to ensure successful replication to meet the service commitments.
• Privacy principles inherent to the fundamental design of the AWS System are designed to protect
the security and confidentiality of AWS customer content to meet the service commitments.
Amazon Web Services establishes operational requirements that support the achievement of security,
availability and confidentiality commitments, relevant laws and regulations, and other system
requirements. Such requirements are communicated in AWS’ system policies and procedures, system
design documentation, and contracts with customers. Information security policies define an
organization-wide approach to how systems and data are protected. These include policies around how
the service is designed and developed, how the system is operated, how the internal business systems
and networks are managed, and how employees are hired and trained. In addition to these policies,
standard operating procedures have been documented on how to carry out specific manual and
automated processes required in the operation and development of the Amazon Web Services System.
As an Infrastructure as a Service (IaaS) System, the AWS System is designed based on a shared
responsibility model where both AWS and the customers are responsible for aspects of security,
availability and confidentiality. Details of the responsibilities of customers can be found on the AWS
website and in the Customer Agreement.
People
Amazon Web Services’ organizational structure provides a framework for planning, executing and
controlling business operations. Executive and senior leadership play important roles in establishing the
Company’s tone and core values. The organizational structure assigns roles and responsibilities to provide
for adequate staffing, security, efficiency of operations, and segregation of duties. Management has also
established points of authority and appropriate lines of reporting for key personnel.
The Company follows a structured on-boarding process to familiarize new employees with Amazon tools,
processes, systems, security practices, policies and procedures. Employees are provided with the
Company’s Code of Business Conduct and Ethics and additionally complete annual Security & Awareness
training to educate them as to their responsibilities concerning information security. Compliance audits
are performed so that employees understand and follow established policies.
Data
AWS customers retain control and ownership of their own data. Customers are responsible for the
development, operation, maintenance, and use of their content. AWS prevents customers from accessing
physical hosts or instances not assigned to them by filtering through the virtualization software.
When a storage device has reached the end of its useful life, AWS procedures include a decommissioning
process that is designed to prevent unauthorized access to assets. AWS uses techniques detailed in NIST
800-88 (“Guidelines for Media Sanitization”) as part of the decommissioning process. All production media
is securely decommissioned in accordance with industry-standard practices. Production media is not
removed from AWS control until it has been securely decommissioned.
Availability
The AWS Resiliency Program encompasses the processes and procedures by which AWS identifies,
responds to and recovers from a major availability event or incident within the AWS services environment.
This program builds upon the traditional approach of addressing contingency management which
incorporates elements of business continuity and disaster recovery plans and expands this to consider
critical elements of proactive risk mitigation strategies such as engineering physically separate Availability
Zones (AZs) and continuous infrastructure capacity planning.
AWS contingency plans and incident response playbooks are maintained and updated to reflect emerging
risks and lessons learned from past incidents. Service team response plans are tested and updated
through the due course of business, and the AWS Resiliency plan is tested, reviewed, and approved by
senior leadership annually.
AWS has identified critical system components required to maintain the availability of the system and
recover service in the event of outage. Critical system components (example: code bases) are backed up
across multiple, isolated locations known as Availability Zones. Each Availability Zone runs on its own
physically distinct, independent infrastructure, and is engineered to be highly reliable. Common points of
failure like generators and cooling equipment are not shared across Availability Zones. Additionally,
Availability Zones are physically separate, and designed such that even extremely uncommon disasters
such as fires, tornados or flooding should only affect a single Availability Zone. AWS replicates critical
system components across multiple Availability Zones and authoritative backups are maintained and
monitored to ensure successful replication.
AWS continuously monitors service usage to project infrastructure needs to support availability
commitments and requirements. AWS maintains a capacity planning model to assess infrastructure usage
and demands at least monthly, and usually more frequently (e.g., weekly). In addition, the AWS capacity
planning model supports the planning of future demands to acquire and implement additional resources
based upon current resources and forecasted requirements.
Confidentiality
AWS is committed to protecting the security and confidentiality of its customers’ content, defined as
“Your Content” at https://aws.amazon.com/agreement/. AWS’ systems and services are designed to
enable authenticated AWS customers to access and manage their content. AWS notifies customers of
third-party access to a customer’s content on the third-party access page located at
https://aws.amazon.com/compliance/third-party-access. AWS may remove a customer’s content when
compelled to do so by a legal order, or where there is evidence of fraud or abuse as described in the
Customer Agreement (https://aws.amazon.com/agreement/) and Acceptable Use Policy
(https://aws.amazon.com/aup/). In executing the removal of a customer’s content due to the reasons
stated above, employees may render it inaccessible as the situation requires. For clarity, this capability to
render customer content inaccessible extends to encrypted content as well.
In the course of AWS system and software design, build, and test of product features, a customer’s
content is not used and remains in the production environment. A customer’s content is not required for
the AWS software development life cycle. When content is required for the development or test of a
service’s software, AWS service teams have tools to generate mock, random data.
AWS knows customers care about privacy and data security. That is why AWS gives customers ownership
and control over their content by design through tools that allow customers to determine where their
content is stored, secure their content in transit or at rest, and manage access to AWS services and
resources. AWS also implements technical and physical controls designed to prevent unauthorized access
to or disclosure of a customer’s content. As described in the Physical Security and Change Management
areas in Section III of this report, AWS employs a number of controls to safeguard data from within and
outside of the boundaries of environments which store a customer’s content. As a result of these
measures, access to a customer’s content is restricted to authorized parties.
AWS contingency plans and incident response playbooks have defined and tested tools and processes to
detect, mitigate, investigate, and assess security incidents. These plans and playbooks include guidelines
for responding to potential data breaches in accordance with contractual and regulatory requirements.
AWS security engineers follow a protocol when responding to potential data security incidents. The
protocol involves steps, which include validating the presence of customer content within the AWS service
(without actually viewing the data), determining the encryption status of a customer’s content, and
determining improper access to a customer’s content to the extent possible.
During the course of their response, the security engineers document relevant findings in internal tools
used to track the security issue. AWS Security Leadership is regularly apprised of all data security issue
investigations. In the event there are positive indicators that customer content was potentially accessed
by an unintended party, a security engineer engages AWS Security Leadership and the AWS Legal team to
review the findings. AWS Security Leadership and the Legal team review the findings and determine if a
notifiable data breach has occurred pursuant to contractual or regulatory obligations. If confirmed,
affected customers are notified in accordance with the applicable reporting requirement.
Vendors and third parties with restricted access, that engage in business with Amazon, are subject to
confidentiality commitments as part of their agreements with Amazon. Confidentiality commitments included in agreements with vendors and third parties with restricted access are reviewed by AWS and the third party at the time of contract creation or renewal. AWS monitors the performance of third parties
through periodic reviews on a risk-based approach, which evaluate performance against contractual
obligations.
AWS communicates its confidentiality commitments to customers on its public website located at
https://aws.amazon.com/compliance/third-party-access/ for contractors and
https://aws.amazon.com/compliance/sub-processors/ for sub-processors. The effective date of the
policy is communicated there and updated periodically. Before AWS authorizes and permits any new
subcontractor to access any customer content, AWS will update this website to inform customers. Vendor
confidentiality commitments are governed by the terms of the contract between AWS and the vendor.
Internally, confidentiality requirements are communicated to employees through training and policies.
Employees are required to attend Amazon Security Awareness (ASA) training, which includes policies and
procedures related to protecting a customer’s content. Confidentiality requirements are included in the
Data Handling and Classification Policy. Policies are reviewed and updated at least annually.
Privacy
AWS classifies customer data into two categories: customer content and account information. AWS
defines customer content as software (including machine images), data, text, audio, video, or images that
a customer or any end user transfers to AWS for processing, storage, or hosting by AWS services in
connection with that customer's account, and any computational results that a customer or any end user
derives from the foregoing through their use of AWS services. For example, customer content includes
content that a customer or any end user stores in Amazon Simple Storage Service (S3). The terms of the
AWS Customer Agreement (https://aws.amazon.com/agreement/) and AWS Service Terms
(https://aws.amazon.com/service-terms/) apply to customer content.
Account information is information about a customer that a customer provides to AWS in connection with
the creation or administration of a customer account. For example, account information includes names,
user names, phone numbers, email addresses, and billing information associated with a customer.
The AWS Privacy Notice is available from the AWS website at https://aws.amazon.com/privacy/. The AWS
Privacy Notice is reviewed by the AWS Legal team, and is updated as required to reflect Amazon’s current
business practices and global regulatory requirements. The Privacy Notice describes how AWS collects
and uses a customer’s personal information in relation to AWS websites, applications, products, services,
events, and experiences. The Privacy Notice does not apply to customer content.
As part of the AWS account creation and activation process, AWS customers are informed of the AWS
Privacy Notice and are required to accept the Customer Agreement, including the terms and conditions
related to the collection, use, retention, disclosure, and disposal of their data. Customers are responsible
for determining what content to store within AWS, which may include personal information. Without the
acceptance of the Customer Agreement, customers cannot sign up to use the AWS services.
The AWS Customer Agreement informs customers of the AWS data security and privacy commitments
prior to activating an AWS account and is made available to customers to review at any time on the AWS
website.
The customer determines what data is entered into AWS services and has the ability to configure the
appropriate security and privacy settings for the data, including who can access and use the data. Further,
the customer is able to choose not to provide certain data. Additionally, the customer manages
notification or consent requirements, and maintains the accuracy of the data.
Additionally, the AWS Customer Agreement notes how AWS shares, secures, and retains customer
content. AWS also informs customers of updates to the Customer Agreement by making it available on its
website and providing the last updated date. Customers should check the Customer Agreement website
frequently for any changes to the Customer Agreement.
AWS does not store any customer cardholder data obtained from customers. Rather, AWS passes the customer cardholder data immediately to the Amazon Payments Platform, the PCI-certified
platform that Amazon uses for all payment processing. This platform returns a unique identifier that AWS
stores and uses for all future processing. The Amazon Payments Platform sits completely outside of the AWS boundary; it is not an AWS service, but is operated by the larger Amazon entity for payment processing. As such, the Amazon Payments Platform is not in scope for this report.
AWS offers customers the ability to update their communication preferences through the AWS console
or via the AWS Email Preference Center. When customers update their communication preferences using
their email, their updated preferences are saved. Customers can unsubscribe from AWS marketing emails
within the AWS console. AWS customers will still receive important account-related notifications from AWS, such as monthly billing statements or notices of significant changes to a service that they use.
AWS provides authenticated customers the ability to access, update, and confirm their data. Denial of
access will be communicated using the AWS console. Customers can sign in to their AWS accounts
through the AWS console to view and update their data.
AWS (or Amazon) does not disclose customer information in response to government demands unless
we're required to do so to comply with a legally valid and binding order. AWS Legal reviews and maintains
records of all the information requests, which lists information on the types and volume of information
requested. Unless AWS is prohibited from doing so or there is clear indication of illegal conduct in
connection with the use of Amazon products or services, AWS notifies customers before disclosing
customer content so they can seek protection from disclosure. AWS shares customer content only as
described in the AWS Customer Agreement.
AWS may produce non-content and/or content information in response to valid and binding law
enforcement and governmental requests, such as subpoenas, court orders, and search warrants. “Non-
content information” means customer information such as name, address, email address, billing
information, date of account creation, and service usage information. “Content information” includes the
content that a customer transfers for processing, storage, or hosting in connection with AWS services and
any computational results. AWS records customer information requests to maintain a complete, accurate,
and timely record of such requests.
If required, customers are responsible for providing notice to the individuals whose data the customer
collects and uses within AWS. AWS is not responsible for providing such notice to or obtaining consent
from these individuals and is only responsible for communicating its privacy commitments to AWS
customers, which is provided during the account creation and activation process.
AWS has documented an incident response policy and plan which outlines an organized approach for
responding to security breaches and incidents. The AWS Security team is responsible for monitoring
systems, tracking issues, and documenting findings of security-related events. Records are maintained for
security breaches and incidents, which includes status information required for supporting forensic
activities, trend analysis, and evaluation of incident details.
As part of the process, potential breaches of customer content are investigated and escalated to AWS
Security and AWS Legal. Affected customers and regulators are notified of breaches and incidents where legally required, in accordance with team processes. Customers can subscribe to the AWS Security Bulletins page, which provides information regarding identified security issues.
AWS retains and disposes of customer content in accordance with the Customer Agreement and the AWS
Data Classification and Handling Policy. When a customer terminates their account or contract with AWS,
the account is put under isolation, and customers can restore their accounts and related content within 90 days. AWS services hosting customer content are designed to retain customer content until the contractual obligation to retain a customer’s content ends or a customer-initiated action to remove
or delete the content is taken. When a customer requests data to be deleted, AWS utilizes automated
processes to detect that request and make the content inaccessible. After the deletion is complete,
automated actions are taken on deleted content to render the content unreadable.
AWS performs application security reviews for third-party systems that integrate with AWS to ascertain that security risks are identified and mitigated. A typical security review considers privacy components such as
retention period, use, and collection of data as applicable. The review starts with a system owner initiating
a review request to the dedicated AWS Vendor Security (AVS) team, and submitting detailed information
about the artifacts being reviewed. A security review is required if an AWS team engages with a new external party to collect data or modifies existing systems.
During this process, the AVS team determines the granularity of review required based on the artifact’s
design, threat model, and impact to AWS’ risk profile. They provide security guidance, validate security
assurance material, and meet with external parties to discuss their penetration tests, Software
Development Life Cycle, change management processes, and other operating security controls. They work
with the system owner to identify, prioritize, and remediate security findings. The AVS team collaborates
with AWS Legal as needed to validate that changes are in line with AWS privacy policies. The AVS team
provides their final approval after they have adequately assessed the risks and worked with the requester
to implement security controls to mitigate identified risks.