Exam Question AWS (Test)
Question #: 500
A company has multiple Windows file servers on premises. The company wants to migrate and consolidate its files
into an Amazon FSx for Windows File Server file system. File permissions must be preserved to ensure that access
rights do not change.
Suggested Answer: AD

Question #: 501
A company wants to ingest customer payment data into the company's data lake in Amazon S3. The company
receives payment data every minute on average. The company wants to analyze the payment data in real time.
Then the company wants to ingest the data into the data lake.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Use Amazon Kinesis Data Streams to ingest data. Use AWS Lambda to analyze the data in real time.
• B. Use AWS Glue to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
• C. Use Amazon Kinesis Data Firehose to ingest data. Use Amazon Kinesis Data Analytics to analyze the
data in real time.
• D. Use Amazon API Gateway to ingest data. Use AWS Lambda to analyze the data in real time.
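For option A, the consumer can be a Lambda function subscribed to the stream. A minimal sketch of such a consumer, with the analysis step left as a placeholder; the event shape is what Lambda delivers for Kinesis triggers, and record payloads arrive base64-encoded:

```python
import base64
import json

def handler(event, context):
    """Sketch of a Lambda consumer for a Kinesis data stream (option A)."""
    payments = []
    for record in event.get("Records", []):
        # Kinesis record data is base64-encoded by the Lambda event source.
        payload = base64.b64decode(record["kinesis"]["data"])
        payments.append(json.loads(payload))
    # Real-time analysis would happen here, before the data lands in the
    # S3 data lake.
    return {"processed": len(payments)}
```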
Suggested Answer: A

Question #: 502
A company runs a website that uses a content management system (CMS) on Amazon EC2. The CMS runs on a
single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images
are stored on an Amazon Elastic Block Store (Amazon EBS) volume that is mounted inside the EC2 instance.
Which combination of actions should a solutions architect take to improve the performance and resilience of the
website? (Choose two.)
• A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance
• B. Share the website images by using an NFS share from the primary EC2 instance. Mount this share on
the other EC2 instances.
• C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is
mounted on every EC2 instance.
• D. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision
new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling
group to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the
website
• E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision
new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling
group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.
Suggested Answer: DE

Question #: 503
A company runs an infrastructure monitoring service. The company is building a new feature that will enable the
service to monitor data in customer AWS accounts. The new feature will call AWS APIs in customer accounts to
describe Amazon EC2 instances and read Amazon CloudWatch metrics.
What should the company do to obtain access to customer accounts in the MOST secure way?
• A. Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch
permissions and a trust policy to the company’s account.
• B. Create a serverless API that implements a token vending machine to provide temporary AWS
credentials for a role with read-only EC2 and CloudWatch permissions.
• C. Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch
permissions. Encrypt and store customer access and secret keys in a secrets management system.
• D. Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with
read-only EC2 and CloudWatch permissions. Encrypt and store the Amazon Cognito user and password in a
secrets management system.
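For option A, each customer attaches a trust policy like the following to the read-only role. The account ID and ExternalId are hypothetical; the ExternalId condition is a common extra guard against the confused-deputy problem rather than part of the stated answer:

```python
import json

# Hypothetical monitoring-company account ID, for illustration only.
COMPANY_ACCOUNT_ID = "111122223333"

# Trust policy the customer would attach to the read-only IAM role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{COMPANY_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        # Optional hardening: a per-customer ExternalId.
        "Condition": {"StringEquals": {"sts:ExternalId": "customer-1234"}},
    }],
}
print(json.dumps(trust_policy, indent=2))
```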
Suggested Answer: A

Question #: 504
A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The
company's networking team has its own AWS account to manage the cloud network.
Suggested Answer: C

Question #: 505
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an
Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance will reprocess
the job. The batch jobs run between 12:00 AM and 06:00 AM local time every day.
Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
• A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling
group that the batch job uses.
• B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the
instances in the Auto Scaling group that the batch job uses.
• C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a
policy to scale out based on CPU usage.
• D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to
scale out based on CPU usage.
Suggested Answer: C

Question #: 506
A social media company is building a feature for its website. The feature will give users the ability to upload photos.
The company expects significant increases in demand during large events and must ensure that the website can
handle the upload traffic from users.
Suggested Answer: C

Question #: 507
A company has a web application for travel ticketing. The application is based on a database that runs in a single
data center in North America. The company wants to expand the application to serve a global user base. The
company needs to deploy the application to multiple AWS Regions. Average latency must be less than 1 second
on updates to the reservation database.
The company wants to have separate deployments of its web platform across multiple Regions. However, the
company must maintain a single primary reservation database that is globally consistent.
Suggested Answer: B

Question #: 508
A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2 instances that run in the
us-west-1 Region. The company manually backs up the workloads to create an image as needed.
In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the
us-west-2 Region. The company wants no more than 24 hours of data loss on the EC2 instances. The company
also wants to automate any backups of the EC2 instances.
Which solutions will meet these requirements with the LEAST administrative effort? (Choose two.)
• A. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup
based on tags. Schedule the backup to run twice daily. Copy the image on demand.
• B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup
based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2 Region.
• C. Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for
the EC2 instances based on tag values. Create an AWS Lambda function to run as a scheduled job to copy the
backup data to us-west-2.
• D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2
instances based on tag values. Define the destination for the copy as us-west-2. Specify the backup schedule to
run twice daily.
• E. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2
instances based on tag values. Specify the backup schedule to run twice daily. Copy on demand to us-west-2.
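Option D maps to a single AWS Backup plan whose rule carries both the twice-daily schedule and a cross-Region copy action, so no Lambda glue code is needed. A sketch of the CreateBackupPlan request shape as a plain dict, with hypothetical vault names, account ID, and ARNs:

```python
# Sketch of the AWS Backup CreateBackupPlan request body (option D).
# Vault names and ARNs are hypothetical.
backup_plan = {
    "BackupPlanName": "ec2-dr-plan",
    "Rules": [{
        "RuleName": "twice-daily",
        "TargetBackupVaultName": "primary-vault-us-west-1",
        "ScheduleExpression": "cron(0 0/12 * * ? *)",  # every 12 hours
        # The copy action sends each recovery point to us-west-2
        # automatically, meeting the 24-hour data-loss requirement.
        "CopyActions": [{
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
        }],
    }],
}
```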
Suggested Answer: BC

Question #: 509
A company operates a two-tier application for image processing. The application uses two Availability Zones, each
with one public subnet and one private subnet. An Application Load Balancer (ALB) for the web tier uses the
public subnets. Amazon EC2 instances for the application tier use the private subnets.
Users report that the application is running more slowly than expected. A security audit of the web server log files
shows that the application is receiving millions of illegitimate requests from a small number of IP addresses. A
solutions architect needs to resolve the immediate performance problem while the company investigates a more
permanent solution.
Question #: 510
A global marketing company has applications that run in the ap-southeast-2 Region and the eu-west-1 Region.
Applications that run in a VPC in eu-west-1 need to communicate securely with databases that run in a VPC in
ap-southeast-2.
Suggested Answer: B

Question #: 511
A company is developing software that uses a PostgreSQL database schema. The company needs to configure
multiple development environments and databases for the company's developers. On average, each development
environment is used for half of the 8-hour workday.
Suggested Answer: B

Question #: 512
A company uses AWS Organizations with resources tagged by account. The company also uses AWS Backup to
back up its AWS infrastructure resources. The company needs to back up all AWS resources.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use AWS Config to identify all untagged resources. Tag the identified resources programmatically.
Use tags in the backup plan.
• B. Use AWS Config to identify all resources that are not running. Add those resources to the backup
vault.
• C. Require all AWS account owners to review their resources to identify the resources that need to be
backed up.
• D. Use Amazon Inspector to identify all noncompliant resources.
Suggested Answer: A

Question #: 513
A social media company wants to allow its users to upload images in an application that is hosted in the AWS
Cloud. The company needs a solution that automatically resizes the images so that the images can be displayed
on multiple device types. The application experiences unpredictable traffic patterns throughout the day. The
company is seeking a highly available solution that maximizes scalability.
Suggested Answer: A

Question #: 514
A company is running a microservices application on Amazon EC2 instances. The company wants to migrate the
application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for scalability. The company must
configure the Amazon EKS control plane with endpoint private access set to true and endpoint public access set
to false to maintain security compliance. The company must also put the data plane in private subnets. However,
the company has received error notifications because the node cannot join the cluster.
Suggested Answer: B

Question #: 515
A company is migrating an on-premises application to AWS. The company wants to use Amazon Redshift as a
solution.
Which use cases are suitable for Amazon Redshift in this scenario? (Choose three.)
• A. Supporting data APIs to access data with traditional, containerized, and event-driven applications
• B. Supporting client-side and server-side encryption
• C. Building analytics workloads during specified hours and when the application is not active
• D. Caching data to reduce the pressure on the backend database
• E. Scaling globally to support petabytes of data and tens of millions of requests per minute
• F. Creating a secondary replica of the cluster by using the AWS Management Console
Suggested Answer: BCE

Question #: 516
A company provides an API interface to customers so the customers can retrieve their financial information. The
company expects a larger number of requests during peak usage times of the year.
The company requires the API to respond consistently with low latency to ensure customer satisfaction. The
company needs to provide a compute host for the API.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use an Application Load Balancer and Amazon Elastic Container Service (Amazon ECS).
• B. Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.
• C. Use an Application Load Balancer and an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
• D. Use Amazon API Gateway and AWS Lambda functions with reserved concurrency.
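Option B's provisioned concurrency keeps a fixed number of execution environments initialized, which is what gives the consistent low latency; reserved concurrency (option D) only caps how many invocations can run and does not avoid cold starts. A sketch of the PutProvisionedConcurrencyConfig request shape, with hypothetical values:

```python
# Sketch of Lambda's PutProvisionedConcurrencyConfig request (option B).
# Function name, alias, and count are hypothetical.
provisioned_concurrency_request = {
    "FunctionName": "financial-info-api",
    "Qualifier": "prod",  # targets a published version or alias
    "ProvisionedConcurrentExecutions": 100,  # environments kept warm
}
```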
Suggested Answer: B

Question #: 517
A company wants to send all AWS Systems Manager Session Manager logs to an Amazon S3 bucket for archival
purposes.
Which solution will meet this requirement with the MOST operational efficiency?
• A. Enable S3 logging in the Systems Manager console. Choose an S3 bucket to send the session data to.
• B. Install the Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Export the logs to an
S3 bucket from the group for archival purposes.
• C. Create a Systems Manager document to upload all server logs to a central S3 bucket. Use Amazon
EventBridge to run the Systems Manager document against all servers that are in the account daily.
• D. Install an Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Create a CloudWatch
logs subscription that pushes any incoming log events to an Amazon Kinesis Data Firehose delivery stream. Set
Amazon S3 as the destination.
Suggested Answer: D

Question #: 518
An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A
solutions architect wants to increase the disk space without downtime.
Which solution meets these requirements with the LEAST amount of effort?
• A. Enable storage autoscaling in RDS
• B. Increase the RDS database instance size
• C. Change the RDS database instance storage type to Provisioned IOPS
• D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous
instance
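Option A is enabled by setting MaxAllocatedStorage above the instance's current allocated storage; RDS then grows the volume automatically, with no downtime, whenever free space runs low. A sketch of the ModifyDBInstance parameters, with hypothetical values:

```python
# Sketch of RDS ModifyDBInstance parameters to turn on storage
# autoscaling (option A). Identifier and ceiling are hypothetical.
modify_request = {
    "DBInstanceIdentifier": "app-mysql-db",
    "MaxAllocatedStorage": 1000,  # GiB ceiling the storage may grow to
    "ApplyImmediately": True,
}
```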
Suggested Answer: A

Question #: 519
A consulting company provides professional services to customers worldwide. The company provides solutions
and tools for customers to expedite gathering and analyzing data on AWS. The company needs to centrally manage
and deploy a common set of solutions and tools for customers to use for self-service purposes.
Suggested Answer: B

Question #: 520
A company is designing a new web application that will run on Amazon EC2 Instances. The application will use
Amazon DynamoDB for backend data storage. The application traffic will be unpredictable. The company expects
that the application read and write throughput to the database will be moderate to high. The company needs to
scale in response to application traffic.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
• A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class.
Set DynamoDB auto scaling to a maximum defined capacity.
• B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
• C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard Infrequent
Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity.
• D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access
(DynamoDB Standard-IA) table class.
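Option B needs no capacity planning: on-demand mode bills per request and absorbs unpredictable traffic, while the Standard table class suits moderate-to-high throughput (Standard-IA trades cheaper storage for costlier reads and writes). A sketch of the CreateTable parameters, with a hypothetical table and key name:

```python
# Sketch of DynamoDB CreateTable parameters for option B:
# on-demand capacity on the default Standard table class.
create_table_request = {
    "TableName": "app-data",
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand mode
    "TableClass": "STANDARD",
}
```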
Suggested Answer: B

Question #: 521
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team
account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an
Amazon DynamoDB table in the team's own AWS account.
The company is deploying a central inventory reporting application into a shared AWS account. The application
must be able to read items from all the teams' DynamoDB tables.
Suggested Answer: C

Question #: 522
A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS). The
company's workload is not consistent throughout the day. The company wants Amazon EKS to scale in and out
according to the workload.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
• A. Use an AWS Lambda function to resize the EKS cluster.
• B. Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.
• C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
• D. Use Amazon API Gateway and connect it to Amazon EKS.
• E. Use AWS App Mesh to observe network activity.
Suggested Answer: BC

Question #: 523
A company runs a microservice-based serverless web application. The application must be able to retrieve data
from multiple Amazon DynamoDB tables. A solutions architect needs to give the application the ability to retrieve
the data with no impact on the baseline performance of the application.
Which solution will meet these requirements in the MOST operationally efficient way?
• A. AWS AppSync pipeline resolvers
• B. Amazon CloudFront with Lambda@Edge functions
• C. Edge-optimized Amazon API Gateway with AWS Lambda functions
• D. Amazon Athena Federated Query with a DynamoDB connector
Suggested Answer: A

Question #: 524
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to
IAM permissions. The company has AWS CloudTrail turned on.
Which solution will meet these requirements with the LEAST effort?
• A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
• B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
• C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
• D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
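For option C, a single Athena query over the CloudTrail table surfaces both error types. A sketch, assuming a CloudTrail table has already been defined in the Glue Data Catalog under the hypothetical name cloudtrail_logs:

```python
# Athena SQL for option C, held as a Python string. The table name
# cloudtrail_logs is hypothetical; errorcode/errormessage are standard
# columns in the CloudTrail table schema for Athena.
query = """
SELECT eventtime, useridentity.arn AS principal, eventname,
       errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100;
"""
```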
Suggested Answer: C

Question #: 525
A company wants to add its existing AWS usage cost to its operation cost dashboard. A solutions architect needs
to recommend a solution that will give the company access to its usage cost programmatically. The company must
be able to access cost data for the current year and forecast costs for the next 12 months.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.
• B. Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
• C. Configure AWS Budgets actions to send usage cost data to the company through FTP.
• D. Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.
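Option A pairs GetCostAndUsage for the current year with GetCostForecast for the next 12 months, both of which the dashboard can call programmatically. A sketch of the two request parameter sets, with hypothetical dates:

```python
# Sketch of Cost Explorer API requests (option A); dates are hypothetical.
# GetCostAndUsage covers the current year to date.
cost_and_usage_request = {
    "TimePeriod": {"Start": "2023-01-01", "End": "2023-06-01"},
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
}
# GetCostForecast projects the next 12 months.
cost_forecast_request = {
    "TimePeriod": {"Start": "2023-06-01", "End": "2024-06-01"},
    "Granularity": "MONTHLY",
    "Metric": "UNBLENDED_COST",
}
```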
Suggested Answer: D

Question #: 526
A solutions architect is reviewing the resilience of an application. The solutions architect notices that a database
administrator recently failed over the application's Amazon Aurora PostgreSQL database writer instance as part
of a scaling exercise. The failover resulted in 3 minutes of downtime for the application.
Which solution will reduce the downtime for scaling exercises with the LEAST operational overhead?
• A. Create more Aurora PostgreSQL read replicas in the cluster to handle the load during failover.
• B. Set up a secondary Aurora PostgreSQL cluster in the same AWS Region. During failover, update the
application to use the secondary cluster's writer endpoint.
• C. Create an Amazon ElastiCache for Memcached cluster to handle the load during failover.
• D. Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.
Suggested Answer: D

Question #: 527
A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture
consists of web servers and application servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling
groups behind Elastic Load Balancers. The architecture includes an Amazon Aurora global database cluster that
extends across multiple Availability Zones.
The company wants to expand globally and to ensure that its application has minimal downtime.
Suggested Answer: B

Question #: 528
A data analytics company wants to migrate its batch processing system to AWS. The company receives thousands
of small data files periodically during the day through FTP. An on-premises batch job processes the data files
overnight. However, the batch job takes hours to finish running.
The company wants the AWS solution to process incoming data files as soon as possible with minimal changes to
the FTP clients that send the files. The solution must delete the incoming data files after the files have been
processed successfully. Processing for each file needs to take 3-8 minutes.
Which solution will meet these requirements in the MOST operationally efficient way?
• A. Use an Amazon EC2 instance that runs an FTP server to store incoming files as objects in Amazon S3
Glacier Flexible Retrieval. Configure a job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job
to process the objects nightly from S3 Glacier Flexible Retrieval. Delete the objects after the job has processed the
objects.
• B. Use an Amazon EC2 instance that runs an FTP server to store incoming files on an Amazon Elastic
Block Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use Amazon EventBridge rules to
invoke the job to process the files nightly from the EBS volume. Delete the files after the job has processed the
files.
• C. Use AWS Transfer Family to create an FTP server to store incoming files on an Amazon Elastic Block
Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use an Amazon S3 event notification when
each file arrives to invoke the job in AWS Batch. Delete the files after the job has processed the files.
• D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard.
Create an AWS Lambda function to process the files and to delete the files after they are processed. Use an S3
event notification to invoke the Lambda function when the files arrive.
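For option D, the Lambda function receives the standard S3 event notification payload for each arriving file. A minimal sketch; the processing step is a placeholder, and the delete call is shown as a comment because it needs a live S3 client:

```python
import urllib.parse

def handler(event, context):
    """Sketch of the option D Lambda: invoked per S3 event notification,
    process each file, then delete it on success."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Keys in S3 events are URL-encoded.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # ... process the file contents here (3-8 minutes per file,
        #     within Lambda's 15-minute limit) ...
        # boto3.client("s3").delete_object(Bucket=bucket, Key=key)
        processed.append((bucket, key))
    return {"processed": processed}
```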
Suggested Answer: B

Question #: 529
A company is migrating its workloads to AWS. The company has transactional and sensitive data in its databases.
The company wants to use AWS Cloud solutions to increase security and reduce operational overhead for the
databases.
Suggested Answer: A

Question #: 530
A company has an online gaming application that has TCP and UDP multiplayer gaming capabilities. The
company uses Amazon Route 53 to point the application traffic to multiple Network Load Balancers (NLBs) in
different AWS Regions. The company needs to improve application performance and decrease latency for the
online game in preparation for user growth.
Suggested Answer: D

Question #: 531
A company needs to integrate with a third-party data feed. The data feed sends a webhook to notify an external
service when new data is ready for consumption. A developer wrote an AWS Lambda function to retrieve data
when the company receives a webhook callback. The developer must make the Lambda function available for the
third party to call.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third party
for the webhook.
• B. Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL
to the third party for the webhook.
• C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the Lambda
function. Provide the public hostname of the SNS topic to the third party for the webhook.
• D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the Lambda
function. Provide the public hostname of the SQS queue to the third party for the webhook.
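Option A needs only one API call to expose the function. A sketch of the CreateFunctionUrlConfig parameters, with a hypothetical function name; AuthType NONE makes the URL publicly callable, which a third-party webhook sender typically requires, so the handler itself should then validate a shared secret or signature:

```python
# Sketch of Lambda's CreateFunctionUrlConfig request (option A).
# The function name is hypothetical.
function_url_request = {
    "FunctionName": "webhook-receiver",
    "AuthType": "NONE",  # public URL; validate the caller in the handler
}
```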
Suggested Answer: B

Question #: 532
A company has a workload in an AWS Region. Customers connect to and access the workload by using an Amazon
API Gateway REST API. The company uses Amazon Route 53 as its DNS provider. The company wants to provide
individual and secure URLs for all customers.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)
• A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53
hosted zone and record in the zone that points to the API Gateway endpoint.
• B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a
different Region.
• C. Create hosted zones for each customer as required in Route 53. Create zone records that point to the
API Gateway endpoint.
• D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager
(ACM) in the same Region.
• E. Create multiple API endpoints for each customer in API Gateway.
• F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS
Certificate Manager (ACM).
Suggested Answer: CFD

Question #: 533
A company stores data in Amazon S3. According to regulations, the data must not contain personally identifiable
information (PII). The company recently discovered that S3 buckets have some objects that contain PII. The
company needs to automatically detect PII in S3 buckets and to notify the company’s security team.
Suggested Answer: C

Question #: 534
A company wants to build a logging solution for its multiple AWS accounts. The company currently stores the
logs from all accounts in a centralized account. The company has created an Amazon S3 bucket in the centralized
account to store the VPC flow logs and AWS CloudTrail logs. All logs must be highly available for 30 days for
frequent analysis, retained for an additional 60 days for backup purposes, and deleted 90 days after creation.
Suggested Answer: B

Question #: 535
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets
that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.
Suggested Answer: D

Question #: 536
A company wants to provide data scientists with near real-time read-only access to the company's production
Amazon RDS for PostgreSQL database. The database is currently configured as a Single-AZ database. The data
scientists use complex queries that will not affect the production database. The company needs a solution that is
highly available.
Suggested Answer: C

Question #: 537
A company runs a three-tier web application in the AWS Cloud that operates across three Availability Zones. The
application architecture has an Application Load Balancer, an Amazon EC2 web server that hosts user session
states, and a MySQL database that runs on an EC2 instance. The company expects sudden increases in application
traffic. The company wants to be able to scale to meet future application capacity demands and to ensure high
availability across all three Availability Zones.
Suggested Answer: B

Question #: 538
A global video streaming company uses Amazon CloudFront as a content distribution network (CDN). The
company wants to roll out content in a phased manner across multiple countries. The company needs to ensure
that viewers who are outside the countries to which the company rolls out content are not able to view the content.
Which solution will meet these requirements?
• A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom error
message.
• B. Set up a new URL for restricted content. Authorize access by using a signed URL and cookies. Set up
a custom error message.
• C. Encrypt the data for the content that the company distributes. Set up a custom error message.
• D. Create a new URL for restricted content. Set up a time-restricted access policy for signed URLs.
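Option A is configured in the distribution's Restrictions block. A sketch, with a hypothetical country list; "whitelist" is CloudFront's API term for an allow list:

```python
# Sketch of the Restrictions block in a CloudFront distribution config
# (option A). The country codes are hypothetical rollout countries.
restrictions = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",  # only listed countries may view
        "Quantity": 2,
        "Items": ["US", "CA"],
    }
}
```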
Suggested Answer: A

Question #: 539
A company wants to use the AWS Cloud to improve its on-premises disaster recovery (DR) configuration. The
company's core production business application uses Microsoft SQL Server Standard, which runs on a virtual
machine (VM). The application has a recovery point objective (RPO) of 30 seconds or fewer and a recovery time
objective (RTO) of 60 minutes. The DR solution needs to minimize costs wherever possible.
Suggested Answer: D

Question #: 540
A company has an on-premises server that uses an Oracle database to process and store customer information.
The company wants to use an AWS database service to achieve higher availability and to improve application
performance. The company also wants to offload reporting from its primary database system.
Which solution will meet these requirements in the MOST operationally efficient way?
• A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple
AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance.
• B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in
the same zone as the primary DB instance. Direct the reporting functions to the read replica.
• C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct
the reporting functions to use the reader instance in the cluster deployment.
• D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database.
Direct the reporting functions to the reader instances.
Suggested Answer: D

Question #: 541
A company wants to build a web application on AWS. Client access requests to the website are not predictable
and can be idle for a long time. Only customers who have paid a subscription fee can have the ability to sign in
and use the web application.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
• A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create an
Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
• B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load
Balancer to retrieve user information from Amazon RDS. Create an Amazon API Gateway endpoint to accept
RESTful APIs. Send the API calls to the Amazon ECS service.
• C. Create an Amazon Cognito user pool to authenticate users.
• D. Create an Amazon Cognito identity pool to authenticate users.
• E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated
Amazon CloudFront configuration.
• F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the
frontend web content.
Suggested Answer: ACE

Question #: 542
A media company uses an Amazon CloudFront distribution to deliver content over the internet. The company
wants only premium customers to have access to the media streams and file content. The company stores all
content in an Amazon S3 bucket. The company also delivers content on demand to customers for a specific
purpose, such as movie rentals or music downloads.
Suggested Answer: B

Question #: 543
A company runs Amazon EC2 instances in multiple AWS accounts that are individually billed. The company
recently purchased a Savings Plan. Because of changes in the company's business requirements, the company has
decommissioned a large number of EC2 instances. The company wants to use its Savings Plan discounts on its
other AWS accounts.
Suggested Answer: AE

Question #: 544
A retail company uses a regional Amazon API Gateway API for its public REST APIs. The API Gateway endpoint
is a custom domain name that points to an Amazon Route 53 alias record. A solutions architect needs to create a
solution that has minimal effects on customers and minimal data loss to release the new version of APIs.
Hide Answer
Suggested Answer: A
Question #: : 545
A company wants to direct its users to a backup static error page if the company's primary website is unavailable.
The primary website's DNS records are hosted in Amazon Route 53. The domain is pointing to an Application
Load Balancer (ALB). The company needs a solution that minimizes changes and infrastructure overhead.
Hide Answer
Suggested Answer: B
Question #: : 546
A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief
information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the
use of physical backup tapes. The company must preserve the existing investment in the on-premises backup
applications and workflows.
Hide Answer
Suggested Answer: D
Question #: : 547
A company has data collection sensors at different locations. The data collection sensors stream a high volume of
data to the company. The company wants to design a platform on AWS to ingest and process high-volume
streaming data. The solution must be scalable and support data collection in near real time. The company must
store the data in Amazon S3 for future reporting.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
• B. Use AWS Glue to deliver streaming data to Amazon S3.
• C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
• D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.
Hide Answer
Suggested Answer: A
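As a sketch of option A's pattern, the delivery stream below shows how Kinesis Data Firehose buffers streaming records and delivers them to S3 with no servers to manage. The stream name, bucket, and buffering values are hypothetical examples, not part of the question.

```python
import json

# Hypothetical Kinesis Data Firehose delivery stream configuration (sketch).
# Sensors write records directly to the stream ("DirectPut"); Firehose buffers
# them and writes compressed batches to S3, so there is nothing to operate.
delivery_stream = {
    "DeliveryStreamName": "sensor-ingest",      # hypothetical name
    "DeliveryStreamType": "DirectPut",          # producers write straight to Firehose
    "ExtendedS3DestinationConfiguration": {
        "BucketARN": "arn:aws:s3:::example-sensor-data",
        "BufferingHints": {"SizeInMBs": 5, "IntervalInSeconds": 60},
        "CompressionFormat": "GZIP",
    },
}

print(json.dumps(delivery_stream, indent=2))
```

This near-real-time buffering (seconds to a minute) is what makes Firehose the lowest-overhead fit here; Glue, Lambda, and DMS would all need custom plumbing to achieve the same delivery.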
Question #: : 548
A company has separate AWS accounts for its finance, data analytics, and development departments. Because of
costs and security concerns, the company wants to control which services each AWS account can use.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use AWS Systems Manager templates to control which AWS services each department can use.
• B. Create organizational units (OUs) for each department in AWS Organizations. Attach service control
policies (SCPs) to the OUs.
• C. Use AWS CloudFormation to automatically provision only the AWS services that each department
can use.
• D. Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the
usage of specific AWS services.
Hide Answer
Suggested Answer: B
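A minimal sketch of the SCP pattern behind option B: deny every action except the services a department's OU is allowed to use. The allowed service names here are hypothetical examples, not from the question.

```python
import json

# Sketch of a deny-by-default service control policy (SCP) for one OU.
# NotAction lists the allowed services; everything else is denied.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyAllOutsideAllowedServices",
            "Effect": "Deny",
            "NotAction": ["s3:*", "athena:*", "glue:*"],  # hypothetical allowlist
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attaching one such SCP per OU gives account-level guardrails with no per-account provisioning work, which is why it is the least-overhead choice.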
Question #: : 549
A company has created a multi-tier application for its ecommerce website. The website uses an Application Load
Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on
Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing
information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy
that maximizes security without increasing operational overhead.
Hide Answer
Suggested Answer: B
Question #: : 550
A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment
variables. A solutions architect needs to ensure that the required permissions are in place to decrypt and use the
environment variables.
Which steps must the solutions architect take to implement the correct permissions? (Choose two.)
• A. Add AWS KMS permissions in the Lambda resource policy.
• B. Add AWS KMS permissions in the Lambda execution role.
• C. Add AWS KMS permissions in the Lambda function policy.
• D. Allow the Lambda execution role in the AWS KMS key policy.
• E. Allow the Lambda resource policy in the AWS KMS key policy.
Hide Answer
Suggested Answer: BD
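The two pieces in the suggested answer (B and D) can be sketched as follows. Both sides must allow the call: the Lambda execution role needs `kms:Decrypt` on the key, and the key policy must allow that role. The account ID, role name, and key ARN are hypothetical.

```python
# Side B: IAM policy attached to the Lambda execution role, granting kms:Decrypt
# on the specific key used for the environment variables.
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "kms:Decrypt",
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    }],
}

# Side D: statement added to the KMS key policy, allowing the execution role
# as a principal. Without this, the role's own policy is not sufficient.
key_policy_statement = {
    "Sid": "AllowLambdaExecutionRole",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/lambda-exec-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}
```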
Question #: : 551
A company has a financial application that produces reports. The reports average 50 KB in size and are stored in
Amazon S3. The reports are frequently accessed during the first week after production and must be stored for
several years. The reports must be retrievable within 6 hours.
Hide Answer
Suggested Answer: B
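The answer options are not reproduced in this dump, but the constraints (frequent access for the first week, multi-year retention, retrieval within 6 hours) point to an S3 Lifecycle transition to S3 Glacier Flexible Retrieval, whose standard retrievals complete within hours, whereas Deep Archive can take around 12 hours. A hypothetical sketch of such a rule:

```python
# Hypothetical S3 Lifecycle rule (sketch): reports stay in S3 Standard while
# frequently accessed, then transition to Glacier Flexible Retrieval, which
# still meets the 6-hour retrieval requirement.
lifecycle_rule = {
    "ID": "archive-reports",
    "Filter": {"Prefix": "reports/"},  # hypothetical key prefix
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "GLACIER"}  # Glacier Flexible Retrieval
    ],
}
```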
Question #: : 552
A company needs to optimize the cost of its Amazon EC2 instances. The company also needs to change the type
and family of its EC2 instances every 2-3 months.
Hide Answer
Suggested Answer: D
Question #: : 553
A solutions architect needs to review a company's Amazon S3 buckets to discover personally identifiable
information (PII). The company stores the PII data in the us-east-1 Region and us-west-2 Region.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.
• B. Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that is
in Amazon S3.
• C. Configure Amazon Inspector to analyze the data that is in Amazon S3.
• D. Configure Amazon GuardDuty to analyze the data that is in Amazon S3.
Hide Answer
Suggested Answer: A
Question #: : 554
A company's SAP application has a backend SQL Server database in an on-premises environment. The company
wants to migrate its on-premises application and database server to AWS. The company needs an instance type
that meets the high demands of its SAP database. On-premises performance data shows that both the SAP
application and the database have high memory utilization.
Hide Answer
Suggested Answer: C
Question #: : 555
A company runs an application in a VPC with public and private subnets. The VPC extends across multiple
Availability Zones. The application runs on Amazon EC2 instances in private subnets. The application uses an
Amazon Simple Queue Service (Amazon SQS) queue.
A solutions architect needs to design a secure solution to establish a connection between the EC2 instances and
the SQS queue.
Hide Answer
Suggested Answer: A
Question #: : 556
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web
application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB
tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly
accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials
in the template.
Hide Answer
Suggested Answer: B
Question #: : 557
A solutions architect manages an analytics application. The application stores large amounts of semistructured
data in an Amazon S3 bucket. The solutions architect wants to use parallel data processing to process the data
more quickly. The solutions architect also wants to use information that is stored in an Amazon Redshift database
to enrich the data.
Hide Answer
Suggested Answer: D
Question #: : 558
A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company
needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the
VPCs each month.
Hide Answer
Suggested Answer: C
Question #: : 559
A company hosts multiple applications on AWS for different product lines. The applications use different compute
resources, including Amazon EC2 instances and Application Load Balancers. The applications run in different
AWS accounts under the same organization in AWS Organizations across multiple AWS Regions. Teams for each
product line have tagged each compute resource in the individual accounts.
The company wants more details about the cost for each product line from the consolidated billing feature in
Organizations.
Hide Answer
Suggested Answer: BE
Question #: : 560
A company's solutions architect is designing an AWS multi-account solution that uses AWS Organizations. The
solutions architect has organized the company's accounts into organizational units (OUs).
The solutions architect needs a solution that will identify any changes to the OU hierarchy. The solution also
needs to notify the company's operations team of any changes.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify
the changes to the OU hierarchy.
• B. Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to
identify the changes to the OU hierarchy.
• C. Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail organization
trail to identify the changes to the OU hierarchy.
• D. Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection
operation on a stack to identify the changes to the OU hierarchy.
Hide Answer
Suggested Answer: A
Question #: : 561
A company's website handles millions of requests each day, and the number of requests continues to increase. A
solutions architect needs to improve the response time of the web application. The solutions architect determines
that the application needs to decrease latency when retrieving product details from the Amazon DynamoDB table.
Which solution will meet these requirements with the LEAST amount of operational overhead?
• A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
• B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route
all read requests through Redis.
• C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application.
Route all read requests through Memcached.
• D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and
populate Amazon ElastiCache. Route all read requests through ElastiCache.
Hide Answer
Suggested Answer: A
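DAX is a read-through/write-through cache in front of DynamoDB: the application points its client at the DAX cluster endpoint and otherwise changes nothing, which is why it is the least-overhead option here. A toy in-memory sketch of the read-through behavior (not the DAX client itself):

```python
# Toy read-through cache illustrating what DAX does transparently for GetItem:
# serve from cache on a hit, fetch from the table and populate the cache on a miss.
class ReadThroughCache:
    def __init__(self, table):
        self.table = table   # dict standing in for the DynamoDB table
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get_item(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]          # microsecond-class path
        self.misses += 1
        value = self.table.get(key)         # the "DynamoDB" round trip
        self.cache[key] = value
        return value

table = {"product-1": {"name": "widget", "price": 19}}
dax_like = ReadThroughCache(table)
dax_like.get_item("product-1")   # miss: fetched from the table
dax_like.get_item("product-1")   # hit: served from cache
```

Options B-D would each require application code to manage the cache explicitly, which is exactly the operational overhead DAX avoids.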
Question #: : 562
A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC
do not travel across the internet.
Which combination of steps should the solutions architect take to meet this requirement? (Choose two.)
• A. Create a route table entry for the endpoint.
• B. Create a gateway endpoint for DynamoDB.
• C. Create an interface endpoint for Amazon EC2.
• D. Create an elastic network interface for the endpoint in each of the subnets of the VPC.
• E. Create a security group entry in the endpoint's security group to provide access.
Hide Answer
Suggested Answer: AB
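A CloudFormation-style sketch of options A and B together (resource names and Region are hypothetical): a gateway endpoint for DynamoDB keeps the API calls on the AWS network, and associating the route tables with the endpoint creates the prefix-list route entries.

```python
# Minimal CloudFormation-style sketch as a Python dict. Passing RouteTableIds
# to the gateway endpoint is what creates the route table entries whose
# destination is the DynamoDB managed prefix list.
resources = {
    "DynamoDBEndpoint": {
        "Type": "AWS::EC2::VPCEndpoint",
        "Properties": {
            "VpcEndpointType": "Gateway",
            "ServiceName": "com.amazonaws.us-east-1.dynamodb",
            "VpcId": {"Ref": "MyVpc"},                        # hypothetical VPC
            "RouteTableIds": [{"Ref": "PrivateRouteTable"}],  # adds the routes
        },
    }
}
```

Note that gateway endpoints (unlike interface endpoints) have no elastic network interfaces and no security groups, which is why options D and E do not apply.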
Question #: : 563
A company runs its applications on both Amazon Elastic Kubernetes Service (Amazon EKS) clusters and on-
premises Kubernetes clusters. The company wants to view all clusters and workloads from a central location.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon CloudWatch Container Insights to collect and group the cluster information.
• B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.
• C. Use AWS Systems Manager to collect and view the cluster information.
• D. Use Amazon EKS Anywhere as the primary cluster to view the other clusters with native Kubernetes
commands.
Hide Answer
Suggested Answer: B
Question #: : 564
A company is building an ecommerce application and needs to store sensitive customer information. The company
needs to give customers the ability to complete purchase transactions on the website. The company also needs to
ensure that sensitive customer data is protected, even from database administrators.
Hide Answer
Suggested Answer: B
Question #: : 565
A company has an on-premises MySQL database that handles transactional data. The company is migrating the
database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications
that use the database. The migrated database also must scale automatically during periods of increased demand.
Hide Answer
Suggested Answer: C
Question #: : 566
A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances
host applications that use a hierarchical directory structure. The applications need to read and write rapidly and
concurrently to shared storage.
Hide Answer
Suggested Answer: A
Question #: : 567
A solutions architect is designing a workload that will store hourly energy consumption by business tenants in a
building. The sensors will feed a database through HTTP requests that will add up usage for each tenant. The
solutions architect must use managed services when possible. The workload will receive more features in the future
as the solutions architect adds independent components.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process
the data, and store the data in an Amazon DynamoDB table.
• B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to
receive and process the data from the sensors. Use an Amazon S3 bucket to store the processed data.
• C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process
the data, and store the data in a Microsoft SQL Server Express database on an Amazon EC2 instance.
• D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances
to receive and process the data from the sensors. Use an Amazon Elastic File System (Amazon EFS) shared file
system to store the processed data.
Hide Answer
Suggested Answer: A
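Option A's ingest path can be sketched as a Lambda handler behind API Gateway that adds each reading to a per-tenant running total. The field names are hypothetical, and a plain dict stands in for the DynamoDB table (in practice this would be an `UpdateItem` with an `ADD` expression) to keep the sketch self-contained.

```python
import json

# Hypothetical Lambda handler (sketch) invoked by API Gateway for each sensor
# reading. It parses the request body and accumulates usage per tenant.
table = {}  # tenant_id -> accumulated kWh; a DynamoDB table in practice

def handler(event, context=None):
    reading = json.loads(event["body"])
    tenant = reading["tenant_id"]
    table[tenant] = table.get(tenant, 0) + reading["kwh"]
    return {"statusCode": 200, "body": json.dumps({"total_kwh": table[tenant]})}

resp = handler({"body": json.dumps({"tenant_id": "t-1", "kwh": 3})})
resp = handler({"body": json.dumps({"tenant_id": "t-1", "kwh": 2})})
```

Because every piece (API Gateway, Lambda, DynamoDB) is managed and independently replaceable, new components can be added later without touching this path.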
Question #: : 568
A solutions architect is designing the storage architecture for a new web application used for storing and viewing
engineering drawings. All application components will be deployed on the AWS infrastructure.
The application design must support caching to minimize the amount of time that users wait for the engineering
drawings to load. The application must be able to store petabytes of data.
Which combination of storage and caching should the solutions architect use?
• A. Amazon S3 with Amazon CloudFront
• B. Amazon S3 Glacier with Amazon ElastiCache
• C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
• D. AWS Storage Gateway with Amazon ElastiCache
Hide Answer
Suggested Answer: A
Question #: : 569
An Amazon EventBridge rule targets a third-party API. The third-party API has not received any incoming traffic.
A solutions architect needs to determine whether the rule conditions are being met and if the rule's target is being
invoked.
Hide Answer
Suggested Answer: A
Question #: : 570
A company has a large workload that runs every Friday evening. The workload runs on Amazon EC2 instances
that are in two Availability Zones in the us-east-1 Region. Normally, the company must run no more than two
instances at all times. However, the company wants to scale up to six instances each Friday to handle a regularly
repeating increased workload.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create a reminder in Amazon EventBridge to scale the instances.
• B. Create an Auto Scaling group that has a scheduled action.
• C. Create an Auto Scaling group that uses manual scaling.
• D. Create an Auto Scaling group that uses automatic scaling.
Hide Answer
Suggested Answer: A
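For reference, the Auto Scaling scheduled action described in option B would look like the pair of actions below: one scales the group to six instances on Friday evening, the other returns it to two afterwards. The times and action names are hypothetical; cron recurrences are evaluated in UTC by default.

```python
# Hypothetical scheduled scaling actions (sketch) for a regularly repeating
# Friday-evening workload. Each action sets the group's size at a cron time.
scale_up = {
    "ScheduledActionName": "friday-scale-up",
    "Recurrence": "0 18 * * FRI",   # 18:00 every Friday (UTC)
    "MinSize": 6,
    "MaxSize": 6,
    "DesiredCapacity": 6,
}
scale_down = {
    "ScheduledActionName": "friday-scale-down",
    "Recurrence": "0 2 * * SAT",    # back to the normal two instances
    "MinSize": 2,
    "MaxSize": 2,
    "DesiredCapacity": 2,
}
```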
Question #: : 571
A company is creating a REST API. The company has strict requirements for the use of TLS. The company
requires TLSv1.3 on the API endpoints. The company also requires a specific public third-party certificate
authority (CA) to sign the TLS certificate.
Hide Answer
Suggested Answer: A
Question #: : 572
A company runs an application on AWS. The application receives inconsistent amounts of usage. The application
uses AWS Direct Connect to connect to an on-premises MySQL-compatible database. The on-premises database
consistently uses a minimum of 2 GiB of memory.
The company wants to migrate the on-premises database to a managed AWS service. The company wants to use
auto scaling capabilities to manage unexpected workload increases.
Which solution will meet these requirements with the LEAST administrative overhead?
• A. Provision an Amazon DynamoDB database with default read and write capacity settings.
• B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
• C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity
unit (ACU).
• D. Provision an Amazon RDS for MySQL database with 2 GiB of memory.
Hide Answer
Suggested Answer: C
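A sketch of option C's scaling configuration (values hypothetical): one Aurora capacity unit (ACU) corresponds to roughly 2 GiB of memory, matching the on-premises database's baseline, and Aurora Serverless v2 scales ACUs up and down automatically as the workload changes.

```python
# Hypothetical Aurora Serverless v2 scaling configuration (sketch).
# MinCapacity of 1 ACU (~2 GiB) covers the steady baseline; MaxCapacity
# gives headroom for unexpected workload increases.
serverless_v2_scaling = {
    "MinCapacity": 1.0,   # ~2 GiB memory baseline
    "MaxCapacity": 16.0,  # hypothetical upper bound for spikes
}
```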
Question #: : 573
A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce
startup latency for Lambda functions that run on Java 11. The company does not have strict latency requirements
for the applications. The company wants to reduce cold starts and outlier latencies when a function scales up.
Hide Answer
Suggested Answer: C
Question #: : 574
A financial services company launched a new application that uses an Amazon RDS for MySQL database. The
company uses the application to track stock market trends. The company needs to operate the application for only
2 hours at the end of each week. The company needs to optimize the cost of running the database.
Hide Answer
Suggested Answer: A
Question #: : 575
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application
Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL database engine. The
company wants the data in the database to be highly available. The company also needs increased capacity for
read workloads.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Create an Amazon DynamoDB database table configured with global tables.
• B. Create an Amazon RDS database with Multi-AZ deployments.
• C. Create an Amazon RDS database with Multi-AZ DB cluster deployment.
• D. Create an Amazon RDS database configured with cross-Region read replicas.
Hide Answer
Suggested Answer: B
Question #: : 576
A company is building a RESTful serverless web application on AWS by using Amazon API Gateway and AWS
Lambda. The users of this web application will be geographically distributed, and the company wants to reduce
the latency of API requests to these users.
Which type of endpoint should a solutions architect use to meet these requirements?
• A. Private endpoint
• B. Regional endpoint
• C. Interface VPC endpoint
• D. Edge-optimized endpoint
Hide Answer
Suggested Answer: D
Question #: : 577
A company uses an Amazon CloudFront distribution to serve content pages for its website. The company needs
to ensure that clients use a TLS certificate when accessing the company's website. The company wants to automate
the creation and renewal of the TLS certificates.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Use a CloudFront security policy to create a certificate.
• B. Use a CloudFront origin access control (OAC) to create a certificate.
• C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
• D. Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.
Hide Answer
Suggested Answer: D
Question #: : 578
A company deployed a serverless application that uses Amazon DynamoDB as a database layer. The application
has experienced a large increase in users. The company wants to improve database response time from
milliseconds to microseconds and to cache requests to the database.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use DynamoDB Accelerator (DAX).
• B. Migrate the database to Amazon Redshift.
• C. Migrate the database to Amazon RDS.
• D. Use Amazon ElastiCache for Redis.
Hide Answer
Suggested Answer: A
Question #: : 579
A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on
weekdays during business hours. The company wants to optimize costs and reduce operational overhead based on
this usage.
Hide Answer
Suggested Answer: C
Question #: : 580
A company uses locally attached storage to run a latency-sensitive application on premises. The company is using
a lift and shift method to move the application to the AWS Cloud. The company does not want to change the
application architecture.
Hide Answer
Suggested Answer: B
Question #: : 581
A company runs a stateful production application on Amazon EC2 instances. The application requires at least two
EC2 instances to always be running.
A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The
solutions architect creates an Auto Scaling group of EC2 instances.
Which set of additional steps should the solutions architect take to meet these requirements?
• A. Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one
Availability Zone and one On-Demand Instance in a second Availability Zone.
• B. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two On-Demand Instances in a second Availability Zone.
• C. Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one Availability
Zone.
• D. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two Spot Instances in a second Availability Zone.
Hide Answer
Suggested Answer: D
Question #: : 582
An ecommerce company uses Amazon Route 53 as its DNS provider. The company hosts its website on premises
and in the AWS Cloud. The company's on-premises data center is near the us-west-1 Region. The company uses
the eu-central-1 Region to host the website. The company wants to minimize load time for the website as much
as possible.
Hide Answer
Suggested Answer: A
Question #: : 583
A company has 5 PB of archived data on physical tapes. The company needs to preserve the data on the tapes for
another 10 years for compliance purposes. The company wants to migrate to AWS in the next 6 months. The data
center that stores the tapes has a 1 Gbps uplink internet connectivity.
Hide Answer
Suggested Answer: C
Question #: : 584
A company is deploying an application that processes large quantities of data in parallel. The company plans to
use Amazon EC2 instances for the workload. The network architecture must be configurable to prevent groups of
nodes from sharing the same underlying hardware.
Hide Answer
Suggested Answer: A
Question #: : 585
A solutions architect is designing a disaster recovery (DR) strategy to provide Amazon EC2 capacity in a failover
AWS Region. Business requirements state that the DR strategy must guarantee capacity in the failover Region.
Hide Answer
Suggested Answer: C
Question #: : 586
A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU
correlates to the five businesses that the company owns. The company's research and development (R&D)
business is separating from the company and will need its own organization. A solutions architect creates a
separate new management account for this purpose.
What should the solutions architect do next in the new management account?
• A. Have the R&D AWS account be part of both organizations during the transition.
• B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left
the prior organization.
• C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D
AWS account to the new R&D AWS account.
• D. Have the R&D AWS account join the new organization. Make the new management account a
member of the prior organization.
Hide Answer
Suggested Answer: C
Question #: : 587
A company is designing a solution to capture customer activity in different web applications to process analytics
and make predictions. Customer activity in the web applications is unpredictable and can increase suddenly. The
company requires a solution that integrates with other web applications. The solution must include an
authorization step for security purposes.
Hide Answer
Suggested Answer: D
Question #: : 588
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft
SQL Server Enterprise Edition. The company's current recovery point objective (RPO) and recovery time
objective (RTO) are 24 hours.
Hide Answer
Suggested Answer: B
Question #: : 589
A company runs a web application on Amazon EC2 instances in an Auto Scaling group behind an Application
Load Balancer that has sticky sessions enabled. The web server currently hosts the user session state. The company
wants to ensure high availability and avoid user session state loss in the event of a web server outage.
Hide Answer
Suggested Answer: D
Question #: : 590
A company migrated a MySQL database from the company's on-premises data center to an Amazon RDS for
MySQL DB instance. The company sized the RDS DB instance to meet the company's average daily workload.
Once a month, the database performs slowly when the company runs queries for a report. The company wants to
have the ability to run reports and maintain the performance of the daily workloads.
Hide Answer
Suggested Answer: A
Question #: : 591
A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon EKS). The
application includes microservices that manage customers and place orders. The company needs to route
incoming requests to the appropriate microservices.
Hide Answer
Suggested Answer: C
Question #: : 592
A company uses AWS and sells access to copyrighted images. The company’s global customer base needs to be
able to access these images quickly. The company must deny access to users from specific countries. The company
wants to minimize costs as much as possible.
Hide Answer
Suggested Answer: C
Question #: : 593
A solutions architect is designing a highly available Amazon ElastiCache for Redis based solution. The solutions
architect needs to ensure that failures do not result in performance degradation or loss of data locally and within
an AWS Region. The solution needs to provide high availability at the node level and at the Region level.
Hide Answer
Suggested Answer: A
Question #: : 594
A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its application. During the
migration testing phase, a technical team observes that the application takes a long time to launch and load
memory to become fully productive.
Which solution will reduce the launch time of the application during the next testing phase?
• A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2
On-Demand Instances available during the next testing phase.
• B. Launch EC2 Spot Instances to support the application and to scale the application so it is available
during the next testing phase.
• C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling
warm pools during the next testing phase.
• D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances
during the next testing phase.
Hide Answer
Suggested Answer: C
Question #: : 595
A company's applications run on Amazon EC2 instances in Auto Scaling groups. The company notices that its
applications experience sudden traffic increases on random days of the week. The company wants to maintain
application performance during sudden traffic increases.
Hide Answer
Suggested Answer: C
Question #: : 596
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly
sales event, database usage increases and causes database connection issues for the application. The traffic is
unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to
maintain performance when there is an unpredictable increase in traffic.
Hide Answer
Suggested Answer: C
Question #: : 597
A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda.
The company’s employees report issues with high latency when they begin using the application each day. The
company wants to reduce latency.
Hide Answer
Suggested Answer: B
Question #: : 598
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS
Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share.
Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically
throughout the day.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
• A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
• B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
• C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
• D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon
S3. Provide access to analysts.
• E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
• F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
Hide Answer
Suggested Answer: CEF
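(Note that the widely accepted combination for this question is A, C, and F: S3 File Gateway for the SMB share, a Glue crawler to catalog the files, and Athena for serverless SQL.) Once the crawler has cataloged the .csv files, analysts query them with plain SQL; the table and column names below are made up for illustration.

```python
# Hypothetical Athena query (sketch) against a Glue-cataloged CSV table.
# Athena is pay-per-query, so periodic ad hoc queries incur no idle cluster cost.
query = """
SELECT device_id, AVG(reading) AS avg_reading
FROM sensor_csv_data                    -- hypothetical Glue table name
WHERE reading_date = DATE '2023-08-01'
GROUP BY device_id
"""
```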
Question #: : 599
A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB
instances to build and run a payment processing application. The company will run the application in its on-
premises data center for compliance purposes.
A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with
the company's operational team to build the application.
Which activities are the responsibility of the company's operational team? (Choose three.)
• A. Providing resilient power and network connectivity to the Outposts racks
• B. Managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts
• C. Physical security and access controls of the data center environment
• D. Availability of the Outposts infrastructure including the power supplies, servers, and networking
equipment within the Outposts racks
• E. Physical maintenance of Outposts components
• F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events
Hide Answer
Suggested Answer: ACE
Question #: : 600
A company is planning to migrate a TCP-based application into the company's VPC. The application is publicly
accessible on a nonstandard TCP port through a hardware appliance in the company's data center. This public
endpoint can process up to 3 million requests per second with low latency. The company requires the same level
of performance for the new public endpoint in AWS.
Hide Answer
Suggested Answer: A
Question #: : 601
A company runs its critical database on an Amazon RDS for PostgreSQL DB instance. The company wants to
migrate to Amazon Aurora PostgreSQL with minimal downtime and data loss.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create a DB snapshot of the RDS for PostgreSQL DB instance to populate a new Aurora PostgreSQL
DB cluster.
• B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read
replica to a new Aurora PostgreSQL DB cluster.
• C. Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL DB cluster.
• D. Use the pg_dump utility to back up the RDS for PostgreSQL database. Restore the backup to a new
Aurora PostgreSQL DB cluster.
Hide Answer
Suggested Answer: B
Question #: : 602
A company's infrastructure consists of hundreds of Amazon EC2 instances that use Amazon Elastic Block Store
(Amazon EBS) storage. A solutions architect must ensure that every EC2 instance can be recovered after a disaster.
What should the solutions architect do to meet this requirement with the LEAST amount of effort?
• A. Take a snapshot of the EBS storage that is attached to each EC2 instance. Create an AWS
CloudFormation template to launch new EC2 instances from the EBS storage.
• B. Take a snapshot of the EBS storage that is attached to each EC2 instance. Use AWS Elastic Beanstalk
to set the environment based on the EC2 template and attach the EBS storage.
• C. Use AWS Backup to set up a backup plan for the entire group of EC2 instances. Use the AWS Backup
API or the AWS CLI to speed up the restore process for multiple EC2 instances.
• D. Create an AWS Lambda function to take a snapshot of the EBS storage that is attached to each EC2
instance and copy the Amazon Machine Images (AMIs). Create another Lambda function to perform the restores
with the copied AMIs and attach the EBS storage.
Hide Answer
Suggested Answer: C
Question #: : 603
A company recently migrated to the AWS Cloud. The company wants a serverless solution for large-scale parallel
on-demand processing of a semistructured dataset. The data consists of logs, media files, sales transactions, and
IoT sensor data that is stored in Amazon S3. The company wants the solution to process thousands of items in the
dataset in parallel.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Use the AWS Step Functions Map state in Inline mode to process the data in parallel.
• B. Use the AWS Step Functions Map state in Distributed mode to process the data in parallel.
• C. Use AWS Glue to process the data in parallel.
• D. Use several AWS Lambda functions to process the data in parallel.
Hide Answer
Suggested Answer: B
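As a concrete illustration of answer B, below is a minimal sketch of a Step Functions Map state in Distributed mode that fans out over objects listed in an S3 bucket. The state, function, and bucket names are hypothetical; the dict mirrors the Amazon States Language fields for a Distributed Map.

```python
import json

# Hypothetical state machine fragment: a Map state in Distributed mode.
# Each S3 object becomes one child workflow execution, enabling large-scale
# parallel processing of the dataset.
distributed_map_state = {
    "ProcessDataset": {
        "Type": "Map",
        "ItemReader": {
            # List the dataset objects directly from S3 (bucket name assumed).
            "Resource": "arn:aws:states:::s3:listObjectsV2",
            "Parameters": {"Bucket": "example-dataset-bucket"},
        },
        "ItemProcessor": {
            "ProcessorConfig": {
                "Mode": "DISTRIBUTED",      # run items as separate child executions
                "ExecutionType": "STANDARD",
            },
            "StartAt": "ProcessItem",
            "States": {
                "ProcessItem": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::lambda:invoke",
                    "Parameters": {"FunctionName": "process-item"},  # assumed name
                    "End": True,
                },
            },
        },
        "MaxConcurrency": 1000,
        "End": True,
    }
}

print(json.dumps(distributed_map_state, indent=2))
```

Inline mode (answer A) caps out at 40 concurrent iterations inside one execution, which is why Distributed mode fits "thousands of items in parallel."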
Question #: : 604
A company will migrate 10 PB of data to Amazon S3 in 6 weeks. The current data center has a 500 Mbps uplink
to the internet. Other on-premises applications share the uplink. The company can use 80% of the internet
bandwidth for this one-time migration task.
Hide Answer
Suggested Answer: A
Question #: : 605
A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network storage servers.
The company wants to reduce the number of these servers by moving to the AWS Cloud. A solutions architect
must provide low-latency access to frequently used data and reduce the dependency on on-premises servers with
a minimal number of infrastructure changes.
Which solution will meet these requirements?
• A. Deploy an Amazon S3 File Gateway.
• B. Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3.
• C. Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes.
• D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.
Hide Answer
Suggested Answer: C
Question #: : 606
A solutions architect is designing an application that will allow business users to upload objects to Amazon S3.
The solution needs to maximize object durability. Objects also must be readily available at any time and for any
length of time. Users will access objects frequently within the first 30 days after the objects are uploaded, but users
are much less likely to access objects that are older than 30 days.
Hide Answer
Suggested Answer: B
Question #: : 607
A company has migrated a two-tier application from its on-premises data center to the AWS Cloud. The data tier
is a Multi-AZ deployment of Amazon RDS for Oracle with 12 TB of General Purpose SSD Amazon Elastic Block
Store (Amazon EBS) storage. The application is designed to process and store documents in the database as binary
large objects (blobs) with an average document size of 6 MB.
The database size has grown over time, reducing the performance and increasing the cost of storage. The company
must improve the database performance and needs a solution that is highly available and resilient.
Hide Answer
Suggested Answer: C
Question #: : 608
A company has an application that serves clients that are deployed in more than 20,000 retail storefront locations
around the world. The application consists of backend web services that are exposed over HTTPS on port 443.
The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The retail
locations communicate with the web application over the public internet. The company allows each retail location
to register the IP address that the retail location has been allocated by its local ISP.
The company's security team recommends increasing the security of the application endpoint by restricting access
to only the IP addresses registered by the retail locations.
What should a solutions architect do to meet these requirements?
• A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update
the IP addresses in the rule to include the registered IP addresses.
• B. Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the
ALB. Modify the firewall rules to include the registered IP addresses.
• C. Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization
function on the ALB to validate that incoming requests are from the registered IP addresses.
• D. Configure the network ACL on the subnet that contains the public interface of the ALB. Update the
ingress rules on the network ACL with entries for each of the registered IP addresses.
Hide Answer
Suggested Answer: A
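For answer A, the building blocks are an IP set holding the registered storefront addresses and a web ACL rule that references it, with the web ACL associated with the ALB. A minimal sketch follows; the names, CIDRs, and ARN are hypothetical, and the dicts match the shapes that boto3's `wafv2` `create_ip_set` and `create_web_acl` accept.

```python
# IP set of registered storefront addresses (example CIDRs).
ip_set_request = {
    "Name": "registered-storefront-ips",
    "Scope": "REGIONAL",              # REGIONAL scope is required for an ALB
    "IPAddressVersion": "IPV4",
    "Addresses": ["203.0.113.10/32", "198.51.100.24/32"],
}

# Web ACL rule that allows only traffic matching the IP set; the web ACL's
# default action would be set to Block so that unregistered IPs are rejected.
web_acl_rule = {
    "Name": "allow-registered-ips",
    "Priority": 0,
    "Statement": {
        "IPSetReferenceStatement": {
            # Placeholder ARN of the IP set created above.
            "ARN": "arn:aws:wafv2:us-east-1:111122223333:regional/ipset/registered-storefront-ips/example-id"
        }
    },
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "allow-registered-ips",
    },
}
```

Updating the registered addresses later is a single `update_ip_set` call, which is what makes this more operable than network ACL entries (answer D, which is also capped at a small number of rules).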
Question #: : 609
A company is building a data analysis platform on AWS by using AWS Lake Formation. The platform will ingest
data from different sources such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent
access to portions of the data that contain sensitive information.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an IAM role that includes permissions to access Lake Formation tables.
• B. Create data filters to implement row-level security and cell-level security.
• C. Create an AWS Lambda function that removes sensitive information before Lake Formation ingests
the data.
• D. Create an AWS Lambda function that periodically queries and removes sensitive information from
Lake Formation tables.
Hide Answer
Suggested Answer: C
Question #: : 610
A company deploys Amazon EC2 instances that run in a VPC. The EC2 instances load source data into Amazon
S3 buckets so that the data can be processed in the future. According to compliance laws, the data must not be
transmitted over the public internet. Servers in the company's on-premises data center will consume the output
from an application that runs on the EC2 instances.
Hide Answer
Suggested Answer: B
Question #: : 611
A company has an application with a REST-based interface that allows data to be received in near-real time from
a third-party vendor. Once received, the application processes and stores the data for further analysis. The
application is running on Amazon EC2 instances.
The third-party vendor has received many 503 Service Unavailable errors when sending data to the application.
When the data volume spikes, the compute capacity reaches its maximum limit and the application is unable to
process all requests.
Which design should a solutions architect recommend to provide a more scalable solution?
• A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
• B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit
for the third-party vendor.
• C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in
an Auto Scaling group behind an Application Load Balancer.
• D. Repackage the application as a container. Deploy the application using Amazon Elastic Container
Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group.
Hide Answer
Suggested Answer: A
Question #: : 612
A company has an application that runs on Amazon EC2 instances in a private subnet. The application needs to
process sensitive information from an Amazon S3 bucket. The application must not use the internet to connect to
the S3 bucket.
Hide Answer
Suggested Answer: A
Question #: : 613
A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to run a container application. The EKS
cluster stores sensitive information in the Kubernetes secrets object. The company wants to ensure that the
information is encrypted.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use the container application to encrypt the information by using AWS Key Management Service
(AWS KMS).
• B. Enable secrets encryption in the EKS cluster by using AWS Key Management Service (AWS KMS).
• C. Implement an AWS Lambda function to encrypt the information by using AWS Key Management
Service (AWS KMS).
• D. Use AWS Systems Manager Parameter Store to encrypt the information by using AWS Key
Management Service (AWS KMS).
Hide Answer
Suggested Answer: B
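Answer B enables envelope encryption of the Kubernetes secrets object at cluster creation with a single setting. A minimal sketch of that setting is below; the dict mirrors the `encryptionConfig` parameter of boto3's `eks` `create_cluster`, and the KMS key ARN is a placeholder.

```python
# Envelope-encrypt only the Kubernetes secrets objects with a customer
# managed KMS key (key ARN is a placeholder).
encryption_config = [
    {
        "resources": ["secrets"],
        "provider": {
            "keyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
        },
    }
]
```

Once enabled, EKS handles the encryption transparently, which is why this is lower overhead than encrypting in application code (A) or via Lambda (C).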
Question #: : 614
A company is designing a new multi-tier web application that consists of the following components:
• Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
• An Amazon RDS DB instance for data storage
A solutions architect needs to limit access to the application servers so that only the web servers can access them.
Question #: : 615
A company runs a critical, customer-facing application on Amazon Elastic Kubernetes Service (Amazon EKS).
The application has a microservices architecture. The company needs to implement a solution that collects,
aggregates, and summarizes metrics and logs from the application in a centralized location.
Hide Answer
Suggested Answer: C
Question #: : 616
A company has deployed its newest product on AWS. The product runs in an Auto Scaling group behind a Network
Load Balancer. The company stores the product’s objects in an Amazon S3 bucket.
The company recently experienced malicious attacks against its systems. The company needs a solution that
continuously monitors for malicious activity in the AWS account, workloads, and access patterns to the S3 bucket.
The solution must also report suspicious activity and display the information on a dashboard.
Hide Answer
Suggested Answer: A
Question #: : 617
A company wants to migrate an on-premises data center to AWS. The data center hosts a storage server that stores
data in an NFS-based file system. The storage server holds 200 GB of data. The company needs to migrate the
data without interruption to existing services. Multiple resources in AWS must be able to access the data by using
the NFS protocol.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
• A. Create an Amazon FSx for Lustre file system.
• B. Create an Amazon Elastic File System (Amazon EFS) file system.
• C. Create an Amazon S3 bucket to receive the data.
• D. Manually use an operating system copy command to push the data into the AWS destination.
• E. Install an AWS DataSync agent in the on-premises data center. Use a DataSync task between the on-
premises location and AWS.
Hide Answer
Suggested Answer: AB
Question #: : 618
A company wants to use Amazon FSx for Windows File Server for its Amazon EC2 instances that have an SMB
file share mounted as a volume in the us-east-1 Region. The company has a recovery point objective (RPO) of 5
minutes for planned system maintenance or unplanned service disruptions. The company needs to replicate the
file system to the us-west-2 Region. The replicated data must not be deleted by any user for 5 years.
Hide Answer
Suggested Answer: C
Question #: : 619
A solutions architect is designing a security solution for a company that wants to provide developers with
individual AWS accounts through AWS Organizations, while also maintaining standard security controls. Because
the individual developers will have AWS account root user-level access to their own accounts, the solutions
architect wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer
accounts is not modified.
Which action meets these requirements?
• A. Create an IAM policy that prohibits changes to CloudTrail, and attach it to the root user.
• B. Create a new trail in CloudTrail from within the developer accounts with the organization trails option
enabled.
• C. Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the
developer accounts.
• D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an
Amazon Resource Name (ARN) in the management account.
Hide Answer
Suggested Answer: C
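A minimal sketch of the SCP from answer C is below. Unlike an IAM policy (answer A), an SCP applies even to the account root user, which is the key requirement here. The action list shown is an assumption; a real policy might deny additional CloudTrail actions.

```python
import json

# Service control policy denying CloudTrail modification actions in the
# developer accounts. SCPs bound what any principal in the account can do,
# including the root user.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "cloudtrail:DeleteTrail",
                "cloudtrail:StopLogging",
                "cloudtrail:UpdateTrail",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```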
Question #: : 620
A company is planning to deploy a business-critical application in the AWS Cloud. The application requires
durable storage with consistent, low-latency performance.
Which type of storage should a solutions architect recommend to meet these requirements?
• A. Instance store volume
• B. Amazon ElastiCache for Memcached cluster
• C. Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume
• D. Throughput Optimized HDD Amazon Elastic Block Store (Amazon EBS) volume
Hide Answer
Suggested Answer: C
621-640
Question #: : 621
An online photo-sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 Region.
The company needs to store a copy of all new photos in the us-east-1 Region.
Which solution will meet this requirement with the LEAST operational effort?
• A. Create a second S3 bucket in us-east-1. Use S3 Cross-Region Replication to copy photos from the
existing S3 bucket to the second S3 bucket.
• B. Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-
east-1 in the CORS rule's AllowedOrigin element.
• C. Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle rule
to save photos into the second S3 bucket.
• D. Create a second S3 bucket in us-east-1. Configure S3 event notifications on object creation and update
events to invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.
Hide Answer
Suggested Answer: A
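Answer A requires versioning on both buckets and a replication configuration on the source bucket. A minimal sketch of that configuration is below; the bucket names and role ARN are placeholders, and the dict matches the shape that boto3's `s3` `put_bucket_replication` expects.

```python
# Replication configuration: copy every new object from the us-west-1 bucket
# to a replica bucket in us-east-1. Role and bucket ARNs are placeholders.
replication_configuration = {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-new-photos",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {},                               # empty filter = all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::photos-replica-us-east-1"},
        }
    ],
}
```

After this is applied, S3 replicates new uploads automatically with no Lambda code to maintain, which is the operational-effort advantage over answer D.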
Question #: : 622
A company is creating a new web application for its subscribers. The application will consist of a static single page
and a persistent database layer. The application will have millions of users for 4 hours in the morning, but the
application will have only a few thousand users during the rest of the day. The company's data architects have
requested the ability to rapidly evolve their schema.
Which solutions will meet these requirements and provide the MOST scalability? (Choose two.)
• A. Deploy Amazon DynamoDB as the database solution. Provision on-demand capacity.
• B. Deploy Amazon Aurora as the database solution. Choose the serverless DB engine mode.
• C. Deploy Amazon DynamoDB as the database solution. Ensure that DynamoDB auto scaling is enabled.
• D. Deploy the static content into an Amazon S3 bucket. Provision an Amazon CloudFront distribution
with the S3 bucket as the origin.
• E. Deploy the web servers for static content across a fleet of Amazon EC2 instances in Auto Scaling
groups. Configure the instances to periodically refresh the content from an Amazon Elastic File System (Amazon
EFS) volume.
Hide Answer
Suggested Answer: CD
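For the database half of this question, DynamoDB in on-demand capacity mode absorbs the 4-hour morning spike without capacity planning and, being schemaless beyond the key, lets the data architects evolve attributes freely. A minimal sketch of the table definition is below; the table and key names are hypothetical, and the kwargs match boto3's `dynamodb` `create_table`.

```python
# DynamoDB table in on-demand (pay-per-request) capacity mode: no provisioned
# throughput to tune, scales with the traffic spike automatically.
create_table_kwargs = {
    "TableName": "subscribers",
    "AttributeDefinitions": [
        {"AttributeName": "user_id", "AttributeType": "S"}
    ],
    "KeySchema": [{"AttributeName": "user_id", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",   # on-demand capacity mode
}
```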
Question #: : 623
A company uses Amazon API Gateway to manage its REST APIs that third-party service providers access. The
company must protect the REST APIs from SQL injection and cross-site scripting attacks.
What is the MOST operationally efficient solution that meets these requirements?
• A. Configure AWS Shield.
• B. Configure AWS WAF.
• C. Set up API Gateway with an Amazon CloudFront distribution. Configure AWS Shield in CloudFront.
• D. Set up API Gateway with an Amazon CloudFront distribution. Configure AWS WAF in CloudFront.
Hide Answer
Suggested Answer: A
Question #: : 624
A company wants to provide users with access to AWS resources. The company has 1,500 users and manages their
access to on-premises resources through Active Directory user groups on the corporate network. However, the
company does not want users to have to maintain another identity to access the resources. A solutions architect
must manage user access to the AWS resources while preserving access to the on-premises resources.
Hide Answer
Suggested Answer: D
Question #: : 625
A company is hosting a website behind multiple Application Load Balancers. The company has different
distribution rights for its content around the world. A solutions architect needs to ensure that users are served the
correct content without violating distribution rights.
Which configuration should the solutions architect choose to meet these requirements?
• A. Configure Amazon CloudFront with AWS WAF.
• B. Configure Application Load Balancers with AWS WAF
• C. Configure Amazon Route 53 with a geolocation policy
• D. Configure Amazon Route 53 with a geoproximity routing policy
Hide Answer
Suggested Answer: A
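The community-favored answer here is usually C (Route 53 geolocation routing), since geolocation deterministically maps a user's country to the ALB serving the content licensed for that country. A minimal sketch of such records is below; the domain names are hypothetical, and the change batch matches what boto3's `route53` `change_resource_record_sets` accepts.

```python
# Geolocation records: European users resolve to the EU ALB, everyone else
# to a default ALB. A catch-all default record is required so that users in
# unmatched locations still get an answer.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "CNAME",
                "SetIdentifier": "europe-users",
                "GeoLocation": {"ContinentCode": "EU"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "alb-eu.example.com"}],
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "CNAME",
                "SetIdentifier": "default-users",
                "GeoLocation": {"CountryCode": "*"},   # catch-all default
                "TTL": 60,
                "ResourceRecords": [{"Value": "alb-default.example.com"}],
            },
        },
    ]
}
```

Geoproximity (answer D) routes by distance to resources rather than by the user's legal jurisdiction, so it does not enforce distribution rights.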
Question #: : 626
A company stores its data on premises. The amount of data is growing beyond the company's available capacity.
The company wants to migrate its data from the on-premises location to an Amazon S3 bucket. The company
needs a solution that will automatically validate the integrity of the data after the transfer.
Hide Answer
Suggested Answer: B
Question #: : 627
A company wants to migrate two DNS servers to AWS. The servers host a total of approximately 200 zones and
receive 1 million requests each day on average. The company wants to maximize availability while minimizing the
operational overhead that is related to the management of the two servers.
Hide Answer
Suggested Answer: A
Question #: : 628
A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's
applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The
company wants to report on incomplete multipart uploads for cost compliance purposes.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Configure AWS Config with a rule to report the incomplete multipart upload object count.
• B. Create a service control policy (SCP) to report the incomplete multipart upload object count.
• C. Configure S3 Storage Lens to report the incomplete multipart upload object count.
• D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.
Hide Answer
Suggested Answer: C
Question #: : 629
A company runs a production database on Amazon RDS for MySQL. The company wants to upgrade the database
version for security compliance reasons. Because the database contains critical data, the company wants a quick
solution to upgrade and test functionality without losing any data.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an RDS manual snapshot. Upgrade to the new version of Amazon RDS for MySQL.
• B. Use native backup and restore. Restore the data to the upgraded new version of Amazon RDS for
MySQL.
• C. Use AWS Database Migration Service (AWS DMS) to replicate the data to the upgraded new version
of Amazon RDS for MySQL.
• D. Use Amazon RDS Blue/Green Deployments to deploy and test production changes.
Hide Answer
Suggested Answer: D
Question #: : 630
A solutions architect is creating a data processing job that runs once daily and can take up to 2 hours to complete.
If the job is interrupted, it has to restart from the beginning.
How should the solutions architect address this issue in the MOST cost-effective manner?
• A. Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.
• B. Create an AWS Lambda function triggered by an Amazon EventBridge scheduled event.
• C. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon
EventBridge scheduled event.
• D. Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by
an Amazon EventBridge scheduled event.
Hide Answer
Suggested Answer: C
Question #: : 631
A social media company wants to store its database of user profiles, relationships, and interactions in the AWS
Cloud. The company needs an application to monitor any changes in the database. The application needs to
analyze the relationships between the data entities and to provide recommendations to users.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes
in the database.
• B. Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the
database.
• C. Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Amazon
Kinesis Data Streams to process changes in the database.
• D. Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Neptune
Streams to process changes in the database.
Hide Answer
Suggested Answer: B
Question #: : 632
A company is creating a new application that will store a large amount of data. The data will be analyzed hourly
and will be modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones.
The needed amount of storage space will continue to grow for the next 6 months.
Which storage solution should a solutions architect recommend to meet these requirements?
• A. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the
application instances.
• B. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on
the application instances.
• C. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on
the application instances.
• D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared
between the application instances.
Hide Answer
Suggested Answer: C
Community vote distribution
C (100%)
Question #: : 633
A company manages an application that stores data on an Amazon RDS for PostgreSQL Multi-AZ DB instance.
Increases in traffic are causing performance problems. The company determines that database queries are the
primary reason for the slow performance.
Hide Answer
Suggested Answer: C
Question #: : 634
A company collects 10 GB of telemetry data daily from various machines. The company stores the data in an
Amazon S3 bucket in a source data account.
The company has hired several consulting agencies to use this data for analysis. Each agency needs read access to
the data for its analysts. The company must share the data from the source data account by choosing a solution
that maximizes security and operational efficiency.
Hide Answer
Suggested Answer: C
Question #: : 635
A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares.
Applications that run on Amazon EC2 instances access the file shares. The company needs a storage disaster
recovery (DR) solution in a secondary Region. The data that is replicated in the secondary Region needs to be
accessed by using the same protocols as the primary Region.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket
to the secondary Region.
• B. Create a backup of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the
secondary Region. Create a new FSx for ONTAP instance from the backup.
• C. Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate
data from the primary Region to the secondary Region.
• D. Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume.
Replicate the volume to the secondary Region.
Hide Answer
Suggested Answer: C
Question #: : 636
A development team is creating an event-based application that uses AWS Lambda functions. Events will be
generated when files are added to an Amazon S3 bucket. The development team currently has Amazon Simple
Notification Service (Amazon SNS) configured as the event target from Amazon S3.
What should a solutions architect do to process the events from Amazon S3 in a scalable way?
• A. Create an SNS subscription that processes the event in Amazon Elastic Container Service (Amazon
ECS) before the event runs in Lambda.
• B. Create an SNS subscription that processes the event in Amazon Elastic Kubernetes Service (Amazon
EKS) before the event runs in Lambda
• C. Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS).
Configure the SQS queue to trigger a Lambda function.
• D. Create an SNS subscription that sends the event to AWS Server Migration Service (AWS SMS).
Configure the Lambda function to poll from the SMS event.
Hide Answer
Suggested Answer: C
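The piece of answer C that is easy to miss is the SQS queue policy that lets the SNS topic deliver messages into the queue; Lambda then polls the queue via an event source mapping. A minimal sketch of that policy is below, with placeholder ARNs.

```python
# SQS queue policy allowing the SNS topic (carrying the S3 event
# notifications) to send messages into the queue. ARNs are placeholders.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:us-east-1:111122223333:s3-events-queue",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "arn:aws:sns:us-east-1:111122223333:s3-events-topic"
                }
            },
        }
    ],
}
```

The queue buffers bursts of S3 events so Lambda can scale its concurrency against the backlog instead of dropping notifications.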
Question #: : 637
A solutions architect is designing a new service behind Amazon API Gateway. The request patterns for the service
will be unpredictable and can change suddenly from 0 requests to over 500 per second. The total size of the data
that needs to be persisted in a backend database is currently less than 1 GB with unpredictable future growth.
Data can be queried using simple key-value requests.
Which combination of AWS services would meet these requirements? (Choose two.)
• A. AWS Fargate
• B. AWS Lambda
• C. Amazon DynamoDB
• D. Amazon EC2 Auto Scaling
• E. MySQL-compatible Amazon Aurora
Hide Answer
Suggested Answer: BC
Question #: : 638
A company collects and shares research data with the company's employees all over the world. The company wants
to collect and store the data in an Amazon S3 bucket and process the data in the AWS Cloud. The company will
share the data with the company's employees. The company needs a secure solution in the AWS Cloud that
minimizes operational overhead.
Hide Answer
Suggested Answer: D
Question #: : 639
A company is building a new furniture inventory application. The company has deployed the application on a fleet
of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load
Balancer (ALB) in their VPC.
A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for
some requests.
What should the solutions architect do to resolve this issue?
• A. Disable session affinity (sticky sessions) on the ALB
• B. Replace the ALB with a Network Load Balancer
• C. Increase the number of EC2 instances in each Availability Zone
• D. Adjust the frequency of the health checks on the ALB's target group
Hide Answer
Suggested Answer: A
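On an ALB, stickiness is configured on the target group, so answer A comes down to one attribute change. A minimal sketch is below; the attribute list matches boto3's `elbv2` `modify_target_group_attributes`, and the target group ARN would be a placeholder in a real call.

```python
# Disable session stickiness on the ALB's target group so the load balancer
# distributes requests evenly instead of pinning clients to one instance.
attributes = [
    {"Key": "stickiness.enabled", "Value": "false"},
]
```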
Question #: : 640
A company has an application workflow that uses an AWS Lambda function to download and decrypt files from
Amazon S3. These files are encrypted using AWS Key Management Service (AWS KMS) keys. A solutions
architect needs to design a solution that will ensure the required permissions are set correctly.
Hide Answer
Suggested Answer: BE
641-660
Question #: : 641
A company wants to monitor its AWS costs for financial review. The cloud operations team is designing an
architecture in the AWS Organizations management account to query AWS Cost and Usage Reports for all
member accounts. The team must run this query once a month and provide a detailed analysis of the bill.
Which solution is the MOST scalable and cost-effective way to meet these requirements?
• A. Enable Cost and Usage Reports in the management account. Deliver reports to Amazon Kinesis. Use
Amazon EMR for analysis.
• B. Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use
Amazon Athena for analysis.
• C. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon S3. Use Amazon
Redshift for analysis.
• D. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon Kinesis. Use
Amazon QuickSight for analysis.
Hide Answer
Suggested Answer: B
Question #: : 642
A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in
the AWS Cloud. The application will transmit data by using UDP packets. The company wants to ensure that the
application can scale out and in as traffic increases and decreases.
Hide Answer
Suggested Answer: A
Question #: : 643
A company runs several websites on AWS for its different brands. Each website generates tens of gigabytes of web
traffic logs each day. A solutions architect needs to design a scalable solution to give the company's developers the
ability to analyze traffic patterns across all the company's websites. This analysis by the developers will occur on
demand once a week over the course of several months. The solution must support queries with standard SQL.
Hide Answer
Suggested Answer: A
Question #: : 644
An international company has a subdomain for each country that the company operates in. The subdomains are
formatted as example.com, country1.example.com, and country2.example.com. The company's workloads are
behind an Application Load Balancer. The company wants to encrypt the website data that is in transit.
Hide Answer
Suggested Answer: AE
Question #: : 645
A company is required to use cryptographic keys in its on-premises key manager. The key manager is outside of
the AWS Cloud because of regulatory and compliance requirements. The company wants to manage encryption
and decryption by using cryptographic keys that are retained outside of the AWS Cloud and that support a variety
of external key managers from different vendors.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use AWS CloudHSM key store backed by a CloudHSM cluster.
• B. Use an AWS Key Management Service (AWS KMS) external key store backed by an external key
manager.
• C. Use the default AWS Key Management Service (AWS KMS) managed key store.
• D. Use a custom key store backed by an AWS CloudHSM cluster.
Hide Answer
Suggested Answer: B
Question #: : 646
A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The
workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to
enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously.
The workload requires access latency within 1 ms. After processing has completed, engineers will need access to
the dataset for manual postprocessing.
Hide Answer
Suggested Answer: C
Question #: : 647
A gaming company is building an application with Voice over IP capabilities. The application will serve traffic to
users across the world. The application needs to be highly available with an automated failover across AWS
Regions. The company wants to minimize the latency of users without relying on IP address caching on user
devices.
Hide Answer
Suggested Answer: A
Question #: : 648
A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The
company has a high performance computing (HPC) environment in its data center and wants to expand its
forecasting capabilities.
A solutions architect must identify a highly available cloud storage solution that can handle large amounts of
sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances
that will simultaneously access and process the entire dataset.
Hide Answer
Suggested Answer: B
Question #: : 649
An ecommerce company runs a PostgreSQL database on premises. The database stores data by using high IOPS
Amazon Elastic Block Store (Amazon EBS) block storage. The daily peak I/O transactions per second do not
exceed 15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and provision
disk IOPS performance independent of disk storage capacity.
Question #: : 650
A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The
company's online application uses the database to process transactions. The data analysis team uses the same
production database to run reports for analytical processing. The company wants to reduce operational overhead
by moving to managed services wherever possible.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes
• B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes
• C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes
• D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes
Hide Answer
Suggested Answer: A
Question #: : 651
A company stores a large volume of image files in an Amazon S3 bucket. The images need to be readily available
for the first 180 days. The images are infrequently accessed for the next 180 days. After 360 days, the images need
to be archived but must be available instantly upon request. After 5 years, only auditors can access the images.
The auditors must be able to retrieve the images within 12 hours. The images cannot be lost during this process.
A developer will use S3 Standard storage for the first 180 days. The developer needs to configure an S3 Lifecycle
rule.
Hide Answer
Suggested Answer: C
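The timeline in the question maps to a single S3 Lifecycle rule. A minimal sketch as a boto3-style dict, assuming Glacier Instant Retrieval for the instant-access archive tier and Glacier Flexible Retrieval (standard retrievals complete within 12 hours) for the audit tier; the rule ID is a placeholder:

```python
# Hypothetical lifecycle rule: S3 Standard for 180 days, Standard-IA for
# the next 180, Glacier Instant Retrieval after 360 days, and Glacier
# Flexible Retrieval (12-hour standard retrieval) after ~5 years.
lifecycle_rule = {
    "ID": "image-archive-rule",  # placeholder ID
    "Filter": {"Prefix": ""},    # apply to all objects in the bucket
    "Status": "Enabled",
    "Transitions": [
        {"Days": 180, "StorageClass": "STANDARD_IA"},
        {"Days": 360, "StorageClass": "GLACIER_IR"},
        {"Days": 1825, "StorageClass": "GLACIER"},  # ~5 years
    ],
}

# Sanity check: each transition must occur later than the previous one.
days = [t["Days"] for t in lifecycle_rule["Transitions"]]
assert days == sorted(days)
print(days)
```

Because every transition moves data to a cheaper tier without deleting it, no objects are lost during the process.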
Question #: : 652
A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the
process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical
data workload.
Hide Answer
Suggested Answer: B
Question #: : 653
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an
organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a
specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user
who created the resource.
Hide Answer
Suggested Answer: B
Question #: : 654
A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance
to run multiple processes to host the application. The processes include an Apache web server that serves static
content. The Apache web server makes requests to a PHP application that uses a local Redis server for user
sessions.
The company wants to redesign the architecture to be highly available and to use AWS managed solutions.
Hide Answer
Suggested Answer: D
Question #: : 655
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group.
The company designed the application to work with session affinity (sticky sessions) for a better user experience.
The application must be available publicly over the internet as an endpoint. A WAF must be applied to the
endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.
Hide Answer
Suggested Answer: CE
Question #: : 656
A company runs a website that stores images of historical events. Website users need the ability to search and view
images based on the year that the event in the image occurred. On average, users request each image only once
or twice a year. The company wants a highly available solution to store and deliver the images to users.
Hide Answer
Suggested Answer: C
Question #: : 657
A company has multiple AWS accounts in an organization in AWS Organizations that different business units use.
The company has multiple offices around the world. The company needs to update security group rules to allow
new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize
the management of security group rules to minimize the administrative overhead that updating CIDR ranges
requires.
Which solution will meet these requirements MOST cost-effectively?
• A. Create VPC security groups in the organization's management account. Update the security groups
when a CIDR range update is necessary.
• B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access
Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups
across the organization.
• C. Create an AWS managed prefix list. Use an AWS Security Hub policy to enforce the security group
update across the organization. Use an AWS Lambda function to update the prefix list automatically when the
CIDR ranges change.
• D. Create security groups in a central administrative AWS account. Create an AWS Firewall Manager
common security group policy for the whole organization. Select the previously created security groups as primary
groups in the policy.
Hide Answer
Suggested Answer: B
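Option B's customer managed prefix list can be expressed as a small helper that builds the create request; updating the prefix list then propagates to every security group that references it. The list name and office CIDRs below are hypothetical:

```python
# Sketch of the kwargs a boto3 ec2.create_managed_prefix_list call takes.
def prefix_list_request(name, cidrs, max_entries=20):
    """Build a customer managed prefix list request from office CIDR ranges."""
    return {
        "PrefixListName": name,
        "AddressFamily": "IPv4",
        "MaxEntries": max_entries,  # leave headroom for future offices
        "Entries": [
            {"Cidr": cidr, "Description": f"office-{i}"}
            for i, cidr in enumerate(cidrs)
        ],
    }

req = prefix_list_request("office-cidrs", ["203.0.113.0/24", "198.51.100.0/24"])
print(len(req["Entries"]))
```

The prefix list would then be shared through AWS RAM so member accounts can reference it in their security group rules instead of hard-coding CIDRs.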
Question #: : 658
A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high
performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads
and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access
from the file system.
Which solution will meet these requirements with the LEAST latency? (Choose two.)
• A. Deploy compute optimized EC2 instances into a cluster placement group.
• B. Deploy compute optimized EC2 instances into a partition placement group.
• C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
• D. Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
• E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
Hide Answer
Suggested Answer: AE
Question #: : 659
A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The
existing data center has a Site-to-Site VPN connection to AWS that is 90% utilized.
Which AWS service should a solutions architect use to meet these requirements?
• A. AWS DataSync with a VPC endpoint
• B. AWS Direct Connect
• C. AWS Snowball Edge Storage Optimized
• D. AWS Storage Gateway
Hide Answer
Suggested Answer: C
Question #: : 660
A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application
peak hours occur at the same time each day. Application users report slow application performance at the start of
peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure
that the application works properly at the start of peak hours.
Question #: : 661
A company runs applications on AWS that connect to the company's Amazon RDS database. The applications
scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its
applications that connect to the database.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database.
Change the applications to use the DynamoDB endpoint.
• B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS
Proxy endpoint.
• C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the
applications to use the custom proxy endpoint.
• D. Use an AWS Lambda function to provide connection pooling with a target group configuration for
the database. Change the applications to use the Lambda function.
Hide Answer
Suggested Answer: B
Question #: : 662
A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block
Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase
additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon
EBS Elastic Volumes to reduce the size of the EBS volumes.
• B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of
the EBS volumes.
• C. Delete all expired and unused snapshots to reduce snapshot costs.
• D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the
snapshots according to the company's snapshot policy requirements.
Hide Answer
Suggested Answer: D
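Option D's Amazon Data Lifecycle Manager policy reduces to a policy-details document like the following sketch; the target tag and retention count are placeholders, not the company's actual policy:

```python
# Hypothetical DLM policy details: snapshot tagged EBS volumes daily and
# keep the 30 most recent snapshots, so old ones age out automatically.
dlm_policy_details = {
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "Daily"}],  # placeholder tag
    "Schedules": [
        {
            "Name": "daily-snapshots",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS"},
            "RetainRule": {"Count": 30},  # snapshots beyond 30 are deleted
        }
    ],
}
print(dlm_policy_details["Schedules"][0]["RetainRule"]["Count"])
```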
Question #: : 663
A company is developing a new application on AWS. The application consists of an Amazon Elastic Container
Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS
for MySQL database that contains the dataset for the application. The dataset contains sensitive information. The
company wants to ensure that only the ECS cluster can access the data in the RDS for MySQL database and the
data in the S3 bucket.
Hide Answer
Suggested Answer: A
Question #: : 664
A company has a web application that runs on premises. The application experiences latency issues during peak
hours. The latency issues occur twice each month. At the start of a latency issue, the application's CPU utilization
immediately increases to 10 times its normal amount.
The company wants to migrate the application to AWS to improve latency. The company also wants to scale the
application automatically when application demand increases. The company will use AWS Elastic Beanstalk for
application deployment.
Hide Answer
Suggested Answer: B
Question #: : 665
A company has customers located across the world. The company wants to use automation to secure its systems
and network infrastructure. The company's security team must be able to track and audit all incremental changes
to the infrastructure.
Hide Answer
Suggested Answer: B
Question #: : 666
A startup company is hosting a website for its customers on an Amazon EC2 instance. The website consists of a
stateless Python application and a MySQL database. The website serves only a small amount of traffic. The
company is concerned about the reliability of the instance and needs to migrate to a highly available architecture.
The company cannot modify the application code.
Which combination of actions should a solutions architect take to achieve high availability for the website?
(Choose two.)
• A. Provision an internet gateway in each Availability Zone in use.
• B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
• C. Migrate the database to Amazon DynamoDB, and enable DynamoDB auto scaling.
• D. Use AWS DataSync to synchronize the database data across multiple EC2 instances.
• E. Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances
that are distributed across two Availability Zones.
Hide Answer
Suggested Answer: BE
Question #: : 667
A company is moving its data and applications to AWS during a multiyear migration project. The company wants
to securely access data on Amazon S3 from the company's AWS Region and from the company's on-premises
location. The data must not traverse the internet. The company has established an AWS Direct Connect
connection between its Region and its on-premises location.
Hide Answer
Suggested Answer: A
Question #: : 668
A company created a new organization in AWS Organizations. The organization has multiple accounts for the
company's development teams. The development team members use AWS IAM Identity Center (AWS Single
Sign-On) to access the accounts. For each of the company's applications, the development teams must use a
predefined application name to tag resources that are created.
A solutions architect needs to design a solution that gives the development team the ability to create resources
only if the application name tag has an approved value.
Hide Answer
Suggested Answer: D
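A common way to enforce an approved tag value across an organization is a service control policy with an aws:RequestTag condition. A hedged sketch, assuming a hypothetical tag key AppName and approved values, with only ec2:RunInstances shown for brevity (a real policy would list each creation action to control):

```python
import json

APPROVED_APPS = ["inventory", "billing"]  # hypothetical approved app names

# Deny resource creation unless the request tags AppName with an approved value.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestTag/AppName": APPROVED_APPS}
            },
        }
    ],
}
print(json.dumps(scp, indent=2))
```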
Question #: : 669
A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage
the master user password by rotating the password every 30 days.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every
30 days.
• B. Use the modify-db-instance command in the AWS CLI to change the password.
• C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
• D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate
password rotation.
Hide Answer
Suggested Answer: C
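Option C's moving parts reduce to two boto3-style parameter sets: have RDS manage the master password in Secrets Manager, then attach a 30-day rotation schedule. All identifiers below are placeholders:

```python
# Let Amazon RDS store and manage the master user password in Secrets Manager.
modify_db_kwargs = {
    "DBInstanceIdentifier": "example-postgres",  # placeholder
    "ManageMasterUserPassword": True,
    "ApplyImmediately": True,
}

# Rotate the managed secret every 30 days.
rotation_kwargs = {
    "SecretId": "rds-master-secret",  # placeholder secret name
    "RotationRules": {"AutomaticallyAfterDays": 30},
}
print(rotation_kwargs["RotationRules"]["AutomaticallyAfterDays"])
```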
Question #: : 670
A company performs tests on an application that uses an Amazon DynamoDB table. The tests run for 4 hours
once a week. The company knows how many read and write operations the application performs to the table each
second during the tests. The company does not currently use DynamoDB for any other use case. A solutions
architect needs to optimize the costs for the table.
Which solution will meet these requirements?
• A. Choose on-demand mode. Update the read and write capacity units appropriately.
• B. Choose provisioned mode. Update the read and write capacity units appropriately.
• C. Purchase DynamoDB reserved capacity for a 1-year term.
• D. Purchase DynamoDB reserved capacity for a 3-year term.
Hide Answer
Suggested Answer: A
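A back-of-the-envelope comparison shows why pay-per-request pricing can win for a 4-hours-per-week burst: provisioned capacity bills for every hour it stays provisioned, all week. The prices and the assumed write rate below are illustrative only, not current DynamoDB pricing:

```python
# Assumed test load and illustrative rates (NOT current AWS prices):
# on-demand writes at $1.25 per million, provisioned WCUs at $0.00065/WCU-hour.
WRITES_PER_SEC = 1000
TEST_HOURS_PER_WEEK = 4
HOURS_PER_WEEK = 168

on_demand_writes = WRITES_PER_SEC * 3600 * TEST_HOURS_PER_WEEK
on_demand_cost = on_demand_writes / 1_000_000 * 1.25

# Provisioned capacity left in place is billed for all 168 hours.
provisioned_cost = WRITES_PER_SEC * 0.00065 * HOURS_PER_WEEK

print(round(on_demand_cost, 2), round(provisioned_cost, 2))  # 18.0 109.2
```

Provisioned mode could still come out ahead if the capacity were scaled down outside the test window, which is why this question is debated; the sketch only illustrates the always-on case.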
Question #: : 671
A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments
of its AWS costs. The company recently identified unusual spending.
The company needs a solution to prevent unusual spending. The solution must monitor costs and notify
responsible stakeholders in the event of unusual spending.
Hide Answer
Suggested Answer: C
Question #: : 672
A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign.
The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine
whether to process the data further in the data pipeline.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
• B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
• C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
• D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use
SQL to query the data.
Hide Answer
Suggested Answer: B
Question #: : 673
A company runs an SMB file server in its data center. The file server stores large files that the company frequently
accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files
with a maximum retrieval time of 24 hours.
Hide Answer
Suggested Answer: D
Question #: : 674
A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a
database that runs on an Amazon RDS for PostgreSQL DB instance. The application performs slowly when traffic
increases. The database experiences a heavy read load during periods of high traffic.
Which actions should a solutions architect take to resolve these performance issues? (Choose two.)
• A. Turn on auto scaling for the DB instance.
• B. Create a read replica for the DB instance. Configure the application to send read traffic to the read
replica.
• C. Convert the DB instance to a Multi-AZ DB instance deployment. Configure the application to send
read traffic to the standby DB instance.
• D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the
ElastiCache cluster.
• E. Configure the Auto Scaling group subnets to ensure that the EC2 instances are provisioned in the
same Availability Zone as the DB instance.
Hide Answer
Suggested Answer: BD
Question #: : 675
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an
application. The company creates one snapshot of each EBS volume every day to meet compliance requirements.
The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots.
The solution must not change the administrative rights of the storage administrator user.
Which solution will meet these requirements with the LEAST administrative effort?
• A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance.
Use the AWS CLI from the new EC2 instance to delete snapshots.
• B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator
user.
• C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
• D. Lock the EBS snapshots to prevent deletion.
Hide Answer
Suggested Answer: C
Question #: : 676
A company's application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and
databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and
from the network interfaces in near real time in its Amazon VPC. The company wants to send the information to
Amazon OpenSearch Service for analysis.
Hide Answer
Suggested Answer: B
Question #: : 677
A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon
EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances.
The company needs a dedicated EKS cluster for development work. The company will use the development cluster
infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.
Hide Answer
Suggested Answer: D
Question #: : 678
A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The
company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal
effort for any data that must be encrypted.
Hide Answer
Suggested Answer: A
Question #: : 679
A company wants to back up its on-premises virtual machines (VMs) to AWS. The company's backup solution
exports on-premises backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days
and must be automatically deleted after 30 days.
Hide Answer
Suggested Answer: CEF
Question #: : 680
A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon
EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original
S3 bucket consistently. The copied files should be overwritten only if the source file changes.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create
a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has
changed.
• B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event
notification to invoke the function when files are created and changed in Amazon S3. Configure the function to
copy files to the file system and the destination S3 bucket.
• C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create
a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
• D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create
a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and
the mounted file system.
Hide Answer
Suggested Answer: A
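Option A's "transfer only data that has changed" behavior corresponds to the DataSync task Options structure. A sketch with placeholder location ARNs:

```python
# DataSync task options for incremental, overwrite-on-change copies.
task_options = {
    "TransferMode": "CHANGED",  # skip files already identical at the destination
    "OverwriteMode": "ALWAYS",  # replace the destination copy when the source differs
}

create_task_kwargs = {
    "SourceLocationArn": "arn:aws:datasync:region:123456789012:location/loc-src",  # placeholder
    "DestinationLocationArn": "arn:aws:datasync:region:123456789012:location/loc-dst",  # placeholder
    "Options": task_options,
}
print(create_task_kwargs["Options"]["TransferMode"])
```

One such task would target the destination S3 bucket and another the EFS file system, each run on a schedule.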
Question #: : 681
A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes.
The company must ensure that all data is encrypted at rest by using AWS Key Management Service (AWS KMS).
The company must be able to control rotation of the encryption keys.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create a customer managed key. Use the key to encrypt the EBS volumes.
• B. Use an AWS managed key to encrypt the EBS volumes. Use the key to configure automatic key rotation.
• C. Create an external KMS key with imported key material. Use the key to encrypt the EBS volumes.
• D. Use an AWS owned key to encrypt the EBS volumes.
Hide Answer
Suggested Answer: A
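A customer managed key is the key type whose rotation the company can configure itself (AWS managed and AWS owned keys rotate on AWS's schedule). A boto3-style sketch with placeholder values:

```python
# Create a symmetric customer managed KMS key for EBS encryption.
create_key_kwargs = {
    "Description": "EBS encryption key",  # placeholder description
    "KeySpec": "SYMMETRIC_DEFAULT",
    "KeyUsage": "ENCRYPT_DECRYPT",
}

# Turn on automatic rotation with a company-controlled cadence.
enable_rotation_kwargs = {
    "KeyId": "alias/ebs-key",        # placeholder alias
    "RotationPeriodInDays": 180,     # rotation period chosen by the company
}
print(enable_rotation_kwargs["RotationPeriodInDays"])
```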
Question #: : 682
A company needs a solution to enforce data encryption at rest on Amazon EC2 instances. The solution must
automatically identify noncompliant resources and enforce compliance policies on findings.
Which solution will meet these requirements with the LEAST administrative overhead?
• A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon
EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of
unencrypted EBS volumes.
• B. Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block
Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and
remediation of unencrypted EBS volumes.
• C. Use Amazon Macie to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use
AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.
• D. Use Amazon Inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes.
Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.
Hide Answer
Suggested Answer: A
Question #: : 683
A company is migrating its multi-tier on-premises application to AWS. The application consists of a single-node
MySQL database and a multi-node web tier. The company must minimize changes to the application during the
migration. The company wants to improve application resiliency after the migration.
Hide Answer
Suggested Answer: CE
Question #: : 684
A company wants to migrate its web applications from on premises to AWS. The company is located close to the
eu-central-1 Region. Because of regulations, the company cannot launch some of its applications in eu-central-1.
The company wants to achieve single-digit millisecond latency.
Hide Answer
Suggested Answer: B
Question #: : 685
A company’s ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a
private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database
performance and ensure that the Lambda invocations do not overload the database with too many connections.
Hide Answer
Suggested Answer: B
Question #: : 686
A company is creating an application. The company stores data from tests of the application in multiple on-
premises locations.
The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The
number of accounts and VPCs will increase during the next year. The network architecture must simplify the
administration of new connections and must provide the ability to scale.
Which solution will meet these requirements with the LEAST administrative overhead?
• A. Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the
on-premises locations.
• B. Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection
to connect all VPCs and on-premises locations.
• C. Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments
for the on-premises connections.
• D. Create an AWS Direct Connect connection between the on-premises locations and a central VPC.
Connect the central VPC to other VPCs by using peering connections.
Hide Answer
Suggested Answer: C
Question #: : 687
A company that uses AWS needs a solution to predict the resources needed for manufacturing processes each
month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company
has no machine learning (ML) experience and wants to use a managed service for the training and predictions.
Hide Answer
Suggested Answer: CD
Question #: : 688
A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On)
and AWS Control Tower are configured for the accounts. The company wants to manage multiple user
permissions across all the accounts.
The permissions will be used by multiple IAM users and must be split between the developer and administrator
teams. Each team requires different permissions. The company wants a solution that includes new users that are
hired on both teams.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create individual users in IAM Identity Center for each account. Create separate developer and
administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Create a custom IAM
policy for each group to set fine-grained permissions.
• B. Create individual users in IAM Identity Center for each account. Create separate developer and
administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Attach AWS managed
IAM policies to each user as needed for fine-grained permissions.
• C. Create individual users in IAM Identity Center. Create new developer and administrator groups in
IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign
the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users
are hired, add them to the appropriate group.
• D. Create individual users in IAM Identity Center. Create new permission sets that include the
appropriate IAM policies for each user. Assign the users to the appropriate accounts. Grant additional IAM
permissions to the users from within specific accounts. When new users are hired, add them to IAM Identity
Center and assign them to the accounts.
Hide Answer
Suggested Answer: C
Question #: : 689
A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The
company also wants to minimize the cost and configuration effort required to operate the volume encryption check.
Hide Answer
Suggested Answer: C
Community vote distribution
D (100%)
by Andy_09 at Feb. 5, 2024, 3:43 p.m.
Question #: : 690
A company regularly uploads GB-sized files to Amazon S3. After the company uploads the files, the company uses
a fleet of Amazon EC2 Spot Instances to transcode the file format. The company needs to scale throughput when
the company uploads data from the on-premises data center to Amazon S3 and when the company downloads
data from Amazon S3 to the EC2 instances.
Hide Answer
Suggested Answer: AC
Question #: : 691
A solutions architect is designing a shared storage solution for a web application that is deployed across multiple
Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The
company plans to make frequent changes to the content. The solution must have strong consistency in returning
the new content as soon as the changes occur.
Hide Answer
Suggested Answer: AD
Question #: : 692
A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route
53 will be used to distribute traffic between these Regions.
Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
• A. Create an A record with a latency policy.
• B. Create an A record with a geolocation policy.
• C. Create a CNAME record with a failover policy.
• D. Create a CNAME record with a geoproximity policy.
Hide Answer
Suggested Answer: A
Question #: : 693
A company has a web application that includes an embedded NoSQL database. The application runs on Amazon
EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling
group in a single Availability Zone.
A recent increase in traffic requires the application to be highly available and for the database to be eventually
consistent.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Replace the ALB with a Network Load Balancer. Maintain the embedded NoSQL database with its
replication service on the EC2 instances.
• B. Replace the ALB with a Network Load Balancer. Migrate the embedded NoSQL database to Amazon
DynamoDB by using AWS Database Migration Service (AWS DMS).
• C. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Maintain the
embedded NoSQL database with its replication service on the EC2 instances.
• D. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the
embedded NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).
Hide Answer
Suggested Answer: D
Question #: : 694
A company is building a shopping application on AWS. The application offers a catalog that changes once each
month and needs to scale with traffic volume. The company wants the lowest possible latency from the application.
Data from each user's shopping cart needs to be highly available. User session data must be available even if the
user is disconnected and reconnects.
What should a solutions architect do to ensure that the shopping cart data is preserved at all times?
• A. Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for
access to the catalog in Amazon Aurora.
• B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and
shopping cart data from the user's session.
• C. Configure Amazon OpenSearch Service to cache catalog data from Amazon DynamoDB and shopping
cart data from the user's session.
• D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the
catalog and shopping cart. Configure automated snapshots.
Hide Answer
Suggested Answer: B
Question #: : 695
A company is building a microservices-based application that will be deployed on Amazon Elastic Kubernetes
Service (Amazon EKS). The microservices will interact with each other. The company wants to ensure that the
application is observable to identify performance issues in the future.
Hide Answer
Suggested Answer: A
Question #: : 696
A company needs to provide customers with secure access to its data. The company processes customer data and
stores the results in an Amazon S3 bucket.
All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each
customer must be able to access only their data from their AWS account. Company employees must not be able
to access the data.
Which solution will meet these requirements?
• A. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-
side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the
customer provides.
• B. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the
data server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the
customer provides.
• C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the
data server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the
customer provides.
• D. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-
side. In the public certificate policy, deny access to the certificate for all principals except an IAM role that the
customer provides.
Suggested Answer: C
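The per-customer key approach can be sketched as a KMS key policy. This is an illustrative sketch, not official AWS policy text; the role ARN is a made-up placeholder for the IAM role the customer provides.

```python
import json

def customer_key_policy(customer_role_arn: str) -> dict:
    """Sketch of a per-customer KMS key policy: only the customer-supplied
    IAM role may decrypt; decryption is explicitly denied to every other
    principal, including the company's own employees."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # allow the customer's role to use the key for decryption
                "Sid": "AllowCustomerDecrypt",
                "Effect": "Allow",
                "Principal": {"AWS": customer_role_arn},
                "Action": ["kms:Decrypt"],
                "Resource": "*",
            },
            {   # deny decryption to all other principals
                "Sid": "DenyEveryoneElse",
                "Effect": "Deny",
                "Principal": "*",
                "Action": ["kms:Decrypt"],
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:PrincipalArn": customer_role_arn}
                },
            },
        ],
    }

policy = customer_key_policy("arn:aws:iam::444455556666:role/CustomerAccess")
print(json.dumps(policy, indent=2))
```

Because the deny statement lives in the key policy rather than the bucket policy, it governs every use of the key, which is why a key policy (not a bucket policy) is the right place for this control.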
Question #: : 697
A solutions architect creates a VPC that includes two public subnets and two private subnets. A corporate security
mandate requires the solutions architect to launch all Amazon EC2 instances in a private subnet. However, when
the solutions architect launches an EC2 instance that runs a web server on ports 80 and 443 in a private subnet,
no external internet traffic can connect to the server.
Question #: : 698
A company is deploying a new application to Amazon Elastic Kubernetes Service (Amazon EKS) with an AWS
Fargate cluster. The application needs a storage solution for data persistence. The solution must be highly
available and fault tolerant. The solution also must be shared between multiple application containers.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where EKS
worker nodes are placed. Register the volumes in a StorageClass object on an EKS cluster. Use EBS Multi-Attach
to share the data between containers.
• B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a
StorageClass object on an EKS cluster. Use the same file system for all containers.
• C. Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume in a StorageClass
object on an EKS cluster. Use the same volume for all containers.
• D. Create Amazon Elastic File System (Amazon EFS) file systems in the same Availability Zones where
EKS worker nodes are placed. Register the file systems in a StorageClass object on an EKS cluster. Create an AWS
Lambda function to synchronize the data between file systems.
Suggested Answer: B
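The EFS-on-EKS approach can be sketched as the two Kubernetes objects it implies: an EFS-backed StorageClass and a ReadWriteMany claim that every application container can mount. This is a hedged sketch shaped like the aws-efs-csi-driver examples; the file system ID is a placeholder.

```python
# StorageClass backed by the EFS CSI driver; "efs-ap" provisions an
# EFS access point per volume. fileSystemId is a placeholder.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "efs-sc"},
    "provisioner": "efs.csi.aws.com",
    "parameters": {
        "provisioningMode": "efs-ap",
        "fileSystemId": "fs-0123456789abcdef0",  # placeholder
        "directoryPerms": "700",
    },
}

# A ReadWriteMany claim lets multiple containers share the same data,
# which EBS volumes cannot do across Availability Zones.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "shared-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "storageClassName": "efs-sc",
        "resources": {"requests": {"storage": "5Gi"}},
    },
}
```

EFS fits here because Fargate pods cannot attach EBS volumes, and EFS is already replicated across Availability Zones, so no extra synchronization is needed.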
Question #: : 699
A company has an application that uses Docker containers in its local data center. The application runs on a
container host that stores persistent data in a volume on the host. The container instances use the stored persistent
data.
The company wants to move the application to a fully managed service because the company does not want to
manage any servers or storage infrastructure.
Suggested Answer: B
700-799
■
Question #: : 700
A gaming company wants to launch a new internet-facing application in multiple AWS Regions. The application
will use the TCP and UDP protocols for communication. The company needs to provide high availability and
minimum latency for global users.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
• A. Create internal Network Load Balancers in front of the application in each Region.
• B. Create external Application Load Balancers in front of the application in each Region.
• C. Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.
• D. Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic.
• E. Configure Amazon CloudFront to handle the traffic and route requests to the application in each
Region
Suggested Answer: AC
Explanation:
Use AWS Global Accelerator:
AWS Global Accelerator is a service that improves the availability and performance of your applications with local
or global users.
Configure AWS Global Accelerator with TCP and UDP listeners to route traffic to the application deployed in
multiple AWS Regions.
AWS Global Accelerator intelligently routes traffic to the nearest healthy endpoint based on latency and health
checks, minimizing latency for global users and providing high availability.
Ensure that the internet-facing application endpoints in each AWS Region are registered as accelerators in AWS
Global Accelerator.
Deploy Network Load Balancers (NLBs):
Use Network Load Balancers to distribute TCP and UDP traffic to the application instances within each AWS
Region.
Network Load Balancers are highly scalable and capable of handling millions of requests per second with low
latency.
Configure NLBs with target groups containing the application instances within each AWS Region to ensure high
availability and fault tolerance.
By combining AWS Global Accelerator with Network Load Balancers, the gaming company can achieve high
availability and minimum latency for global users accessing the application using both TCP and UDP protocols.
AWS Global Accelerator intelligently routes traffic to the nearest healthy endpoint, while Network Load Balancers
distribute traffic within each AWS Region, ensuring scalability and fault tolerance.
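The combination described above can be sketched as the Global Accelerator configuration it implies: TCP and UDP listeners plus one endpoint group per Region pointing at that Region's Network Load Balancer. This is an illustrative sketch shaped like the Global Accelerator API inputs; the ports and NLB ARNs are placeholders.

```python
# One listener per protocol; Global Accelerator supports TCP and UDP.
listeners = [
    {"Protocol": "TCP", "PortRanges": [{"FromPort": 443, "ToPort": 443}]},
    {"Protocol": "UDP", "PortRanges": [{"FromPort": 3000, "ToPort": 3100}]},
]

# One endpoint group per Region; each endpoint is that Region's NLB.
endpoint_groups = [
    {
        "EndpointGroupRegion": region,
        "Endpoints": [{"EndpointId": nlb_arn, "Weight": 100}],
        "HealthCheckProtocol": "TCP",  # GA health-checks each NLB
    }
    for region, nlb_arn in [
        ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game/abc"),
        ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/game/def"),
    ]
]
```

Global Accelerator then gives users two static anycast IPs and routes each connection to the nearest healthy endpoint group, while the NLBs fan traffic out inside each Region.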
■
Question #: : 701
A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer
(ALB). The application's users have reported sporadic performance, which appears to be related to DDoS attacks
originating from random IP addresses. The city needs a solution that requires minimal configuration changes and
provides an audit trail for the DDoS sources.
Suggested Answer: B
Explanation:
AWS Shield Advanced provides comprehensive DDoS protection for AWS resources, including Amazon EC2 instances behind an Application Load Balancer (ALB). It offers advanced detection and mitigation against volumetric, state-exhaustion, and application-layer attacks.
Minimal configuration changes: Enabling AWS Shield Advanced for the ALB requires minimal configuration changes. It involves subscribing to the service and enabling it for the desired AWS resources, such as the ALB. No changes to application code or architecture are needed.
Audit trail for DDoS sources: AWS Shield Advanced provides detailed attack logs and reports, including information about the sources of DDoS attacks. The city can use these logs and reports to analyze attack patterns, identify the IP addresses behind the attacks, and take appropriate mitigation measures. This audit trail clarifies the nature and scope of the DDoS attacks and enables proactive security measures.
■
Question #: : 702
A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized devices.
The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and gas
deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-
throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the
devices back to AWS.
Suggested Answer: C
Explanation:
Designed for HPC: FSx for Lustre is a high-performance parallel file system specifically built for HPC workloads.
It offers low latency and high throughput, ideal for the company's needs.
Direct Data Import: Importing data directly into the FSx for Lustre file system eliminates the need for
intermediate storage solutions like S3 or EFS, minimizing latency.
■
Question #: : 703
A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data
to Amazon S3.
Suggested Answer: D
Question #: : 704
An online video game company must maintain ultra-low latency for its game servers. The game servers run on
Amazon EC2 instances. The company needs a solution that can handle millions of UDP internet traffic requests
each second.
Suggested Answer: A
Question #: : 705
A company runs a three-tier application in a VPC. The database tier uses an Amazon RDS for MySQL DB instance.
The company plans to migrate the RDS for MySQL DB instance to an Amazon Aurora PostgreSQL DB cluster.
The company needs a solution that replicates the data changes that happen during the migration to the new
database.
Suggested Answer: AD
A. Using AWS DMS Schema Conversion to convert the database objects:
The AWS DMS Schema Conversion Tool (SCT) helps convert the database schema from MySQL to PostgreSQL, ensuring compatibility and proper structure in the target Aurora PostgreSQL DB cluster.
D. Defining an AWS DMS task with change data capture (CDC) to migrate the data:
By defining an AWS DMS task with CDC enabled, the data changes that occur during the migration process will
be captured and replicated to the target Aurora PostgreSQL DB cluster in real-time.
This ensures that any changes made to the MySQL database during the migration process are replicated to the
Aurora PostgreSQL DB cluster, allowing for a seamless transition without data loss or inconsistencies.
Combining these two steps ensures that both the database schema and data changes are properly replicated from
the RDS for MySQL DB instance to the Aurora PostgreSQL DB cluster, facilitating a smooth migration process
while minimizing downtime and ensuring data consistency.
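The CDC task described above can be sketched as the settings a DMS replication task would carry. This is an abbreviated, hedged sketch shaped like the DMS task inputs (a real task also needs a replication instance and endpoints); all ARNs are placeholders.

```python
import json

# Select every schema and table for migration.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# "full-load-and-cdc" performs the initial copy and then keeps
# replicating ongoing changes (change data capture) to the target.
dms_task = {
    "ReplicationTaskIdentifier": "mysql-to-aurora-postgres",
    "SourceEndpointArn": "arn:aws:dms:us-east-1:111122223333:endpoint:src",  # RDS for MySQL
    "TargetEndpointArn": "arn:aws:dms:us-east-1:111122223333:endpoint:tgt",  # Aurora PostgreSQL
    "MigrationType": "full-load-and-cdc",
    "TableMappings": json.dumps(table_mappings),
}
```

The "full-load-and-cdc" migration type is what lets writes made during the migration reach the new cluster without a second pass.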
■
Question #: : 706
A company hosts a database that runs on an Amazon RDS instance that is deployed to multiple Availability Zones.
The company periodically runs a script against the database to report new entries that are added to the database.
The script that runs against the database negatively affects the performance of a critical application. The company
needs to improve application performance with minimal costs.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Add functionality to the script to identify the instance that has the fewest active connections.
Configure the script to read from that instance to report the total new entries.
• B. Create a read replica of the database. Configure the script to query only the read replica to report the
total new entries.
• C. Instruct the development team to manually export the new entries for the day in the database at the
end of each day.
• D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.
Suggested Answer: B
Explanation:
Create a read replica: By creating a read replica of the Amazon RDS DB instance, the company can offload the read-intensive reporting workload from the primary database to the read replica. This improves the performance of the critical application that relies on the primary database.
Minimal operational overhead: Setting up a read replica in Amazon RDS is straightforward and requires minimal operational overhead. Once configured, the read replica automatically replicates data from the primary DB instance. There is no need to modify the script or add extra logic to identify the instance with the fewest active connections, because the read replica is automatically available for queries.
By configuring the reporting script to query only the read replica, the company reduces the load on the primary database and minimizes the impact on the critical application's performance, all with minimal operational overhead. This solution ensures that the reporting script can continue to run without negatively affecting the primary database or the critical application.
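The split between the application and the reporting script can be sketched as simple endpoint routing; the endpoint names below are placeholders.

```python
# The application keeps using the primary endpoint; only the reporting
# workload is pointed at the read replica endpoint.
PRIMARY = "mydb.abcdefgh.us-east-1.rds.amazonaws.com"
REPLICA = "mydb-replica.abcdefgh.us-east-1.rds.amazonaws.com"

def endpoint_for(workload: str) -> str:
    """Route the reporting workload to the replica; everything else
    (i.e. the critical application) stays on the primary."""
    return REPLICA if workload == "reporting" else PRIMARY

assert endpoint_for("reporting") == REPLICA
assert endpoint_for("application") == PRIMARY
```

The only change the company makes is the hostname in the script's connection string; replication to the replica is handled by RDS.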
■
Question #: : 707
A company is using an Application Load Balancer (ALB) to present its application to the internet. The company
finds abnormal traffic access patterns across the application. A solutions architect needs to improve visibility into
the infrastructure to help the company understand these abnormalities better.
What is the MOST operationally efficient solution that meets these requirements?
• A. Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information.
• B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
• C. Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the
relevant information.
• D. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic
access log information.
Suggested Answer: B
Explanation:
ALB Access Logging: By enabling access logging on the Application Load Balancer (ALB), the company can
capture detailed information about incoming requests, including client IP addresses, request times, request paths,
response codes, etc.
Amazon S3: Enabling ALB access logging sends the log files to an Amazon S3 bucket, providing a durable and
scalable storage solution for the logs.
Amazon Athena: Amazon Athena allows for interactive querying of data stored in Amazon S3 using standard SQL
syntax. By creating a table in Athena that points to the ALB access logs stored in S3, the company can easily query
and analyze the logs to gain insights into abnormal traffic access patterns without the need for manual processing
or parsing.
Option A (Create a table in Amazon Athena for AWS CloudTrail logs) does not directly address the abnormal
traffic access patterns observed across the application. CloudTrail logs provide information about AWS API
activity and management events, which may not be relevant for diagnosing application-level abnormalities.
Option C (Manually searching through ALB access log files) is highly inefficient and impractical, especially for
large volumes of log data. It would require significant manual effort and is prone to errors.
Option D (Using Amazon EMR on a dedicated EC2 instance to directly query the ALB) introduces unnecessary
complexity and operational overhead. Amazon EMR is a managed big data processing service primarily used for
processing large datasets, which might be overkill for this scenario. Additionally, setting up and managing an EMR
cluster adds operational complexity compared to leveraging native AWS services like Athena.
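Once a table over the ALB access logs exists in Athena (here assumed to be named alb_logs, with the column names AWS documents for ALB logs), a query like the following sketch could surface the heaviest sources behind the abnormal traffic:

```python
# Illustrative Athena SQL, held as a string: count requests per client IP,
# focusing on error responses, to spot abnormal access patterns.
top_talkers_sql = """
SELECT client_ip,
       count(*) AS request_count
FROM alb_logs
WHERE elb_status_code >= 400
GROUP BY client_ip
ORDER BY request_count DESC
LIMIT 20;
"""
```

Because Athena queries the logs in place in S3, no servers or ETL pipelines are needed, which is what makes option B the most operationally efficient choice.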
■
Question #: : 708
A company wants to use NAT gateways in its AWS environment. The company's Amazon EC2 instances in private
subnets must be able to connect to the public internet through the NAT gateways.
Suggested Answer: C
NAT gateways allow instances in private subnets to reach the internet while remaining private: they translate the instances' private IP addresses to a public IP address before sending traffic out.
Placing NAT gateways in public subnets ensures that they have access to the internet gateway and can route traffic to the public internet. Private subnets do not have direct access to the internet gateway, which is why NAT gateways need to be in public subnets.
The private EC2 instances in the private subnets can then route their internet-bound traffic through the public
NAT gateways located in the public subnets.
Options A and B are incorrect because placing NAT gateways in private subnets would not allow them to access
the internet gateway, preventing them from performing the required translation of private IP addresses to public
IP addresses.
Option D is incorrect because NAT gateways should be placed in public subnets to access the internet gateway.
Placing them in public subnets does not make them "public" NAT gateways; they are still private NAT gateways
because they serve private instances in private subnets.
Therefore, Option C is the correct solution to ensure that Amazon EC2 instances in private subnets can connect
to the public internet through the NAT gateways.
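The routing described above can be sketched as the two route tables involved; the CIDR and gateway IDs are placeholders.

```python
# Public subnet: default route goes to the internet gateway.
public_route_table = {
    "routes": [
        {"destination": "10.0.0.0/16", "target": "local"},
        {"destination": "0.0.0.0/0", "target": "igw-0abc1234"},  # internet gateway
    ]
}

# Private subnet: default route goes to a NAT gateway that itself
# lives in a public subnet, so return traffic can reach the internet.
private_route_table = {
    "routes": [
        {"destination": "10.0.0.0/16", "target": "local"},
        {"destination": "0.0.0.0/0", "target": "nat-0def5678"},  # NAT gateway
    ]
}
```

The key detail is that only the NAT gateway's subnet has the igw- route; the private instances never route directly to the internet gateway.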
■
Question #: : 709
A company has an organization in AWS Organizations. The company runs Amazon EC2 instances across four
AWS accounts in the root organizational unit (OU). There are three nonproduction accounts and one production
account. The company wants to prohibit users from launching EC2 instances of a certain size in the nonproduction
accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the
prohibited types.
Which solutions to deploy the SCP will meet these requirements? (Choose two.)
• A. Attach the SCP to the root OU for the organization.
• B. Attach the SCP to the three nonproduction Organizations member accounts.
• C. Attach the SCP to the Organizations management account.
• D. Create an OU for the production account. Attach the SCP to the OU. Move the production member
account into the new OU.
• E. Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member
accounts into the new OU.
Suggested Answer: BE
B. Attach the SCP to the three nonproduction Organizations member accounts.
This applies the policy directly to the specific accounts that need the restrictions.
E. Create an OU for the nonproduction accounts. Attach the SCP to the OU. Move the nonproduction member
accounts into the new OU.
This approach groups the non-production accounts under a dedicated OU and then applies the SCP to the OU.
This can be useful for managing policies for multiple accounts at once.
A. Attach the SCP to the root OU: This would apply the restriction to all accounts in the organization, including
the production account, which is not desirable.
C. Attach the SCP to the Organizations management account: The management account is used for managing
the organization itself, not for controlling individual accounts' actions.
D. Create an OU for the production account and attach SCP: This solution creates unnecessary complexity by
creating a separate OU for a single account. Additionally, it doesn't address restricting launches in the non-
production accounts.
In conclusion, either attaching the SCP directly to the non-production accounts (B) or creating a dedicated OU
for them and attaching the SCP to the OU (E) will achieve the desired outcome of restricting EC2 instance
launches of specific sizes in those accounts. Choose the approach that best suits your organizational structure and
management preferences.
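The SCP itself can be sketched as follows; the instance types listed are placeholders for whatever sizes the company wants to prohibit.

```python
# Sketch of a service control policy denying launches of specific
# EC2 instance types. Attached to the nonproduction accounts (B) or
# to an OU containing them (E), it blocks RunInstances for those types.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyProhibitedInstanceTypes",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    # placeholder list of prohibited sizes
                    "ec2:InstanceType": ["x1e.32xlarge", "p4d.24xlarge"]
                }
            },
        }
    ],
}
```

Because SCPs apply to every entity attached below them, the same document restricts all three accounts whether it is attached to them individually or to a shared OU.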
■
Question #: : 710
A company’s website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to
security concerns, the company requires a private and secure connection between its EC2 resources and Amazon
S3.
Suggested Answer: A
By creating a VPC endpoint for Amazon S3 in the company's Virtual Private Cloud (VPC), you can establish a private, secure connection between the EC2 instances and S3 without traversing the public internet. The VPC endpoint lets EC2 instances in the VPC access S3 over private IP addresses, improving security by keeping the traffic inside the AWS network.
■
Question #: : 711
An ecommerce company runs its application on AWS. The application uses an Amazon Aurora PostgreSQL cluster
in Multi-AZ mode for the underlying database. During a recent promotional campaign, the application
experienced heavy read load and write load. Users experienced timeout issues when they attempted to access the
application.
A solutions architect needs to make the application architecture more scalable and highly available.
Which solution will meet these requirements with the LEAST downtime?
• A. Create an Amazon EventBridge rule that has the Aurora cluster as a source. Create an AWS Lambda
function to log the state change events of the Aurora cluster. Add the Lambda function as a target for the
EventBridge rule. Add additional reader nodes to fail over to.
• B. Modify the Aurora cluster and activate the zero-downtime restart (ZDR) feature. Use Database
Activity Streams on the cluster to track the cluster status.
• C. Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for
the Aurora cluster.
• D. Create an Amazon ElastiCache for Redis cache. Replicate data from the Aurora cluster to Redis by
using AWS Database Migration Service (AWS DMS) with a write-around approach.
Suggested Answer: C
Adding additional reader instances: By adding more reader instances to the Aurora cluster, the read load is distributed across multiple instances, which handles heavy read traffic more efficiently and improves scalability.
Creating an Amazon RDS Proxy target group: Amazon RDS Proxy helps to manage connections to the Aurora cluster, improving scalability and fault tolerance for read-heavy applications. It provides features like connection pooling and automated failover, which can help improve application availability.
This solution addresses the scalability and availability requirements of the application architecture with minimal
downtime. It leverages Aurora's Multi-AZ configuration and enhances it by adding more read replicas to handle
heavy read loads efficiently. Additionally, using RDS Proxy helps to manage connections more effectively and
provides fault tolerance capabilities.
Options A, B, and D may introduce additional complexity or involve changes that could potentially cause
downtime or disrupt the application. For example:
Option A: Involves setting up EventBridge rules and Lambda functions, which may not directly address the
scalability and availability requirements of the database.
Option B: Activating zero-downtime restart (ZDR) and using Database Activity Streams may not directly address
the heavy read load issue or improve scalability.
Option D: Adding an ElastiCache for Redis cache with data replication from Aurora via DMS introduces a separate
caching layer, which may not directly address the scalability and availability requirements of the database and
could introduce additional complexity. Additionally, using a write-around approach may not be suitable for all use
cases and may not provide the desired level of consistency.
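The RDS Proxy part of the solution can be sketched as the configuration it implies, shaped like the RDS Proxy API inputs; the names, ARNs, and subnet IDs are placeholders.

```python
# Sketch of an RDS Proxy in front of the Aurora cluster: the proxy
# pools connections, and the application connects to the proxy
# endpoint instead of the cluster endpoint.
proxy = {
    "DBProxyName": "aurora-proxy",
    "EngineFamily": "POSTGRESQL",
    "Auth": [{
        "AuthScheme": "SECRETS",  # credentials come from Secrets Manager
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
    }],
    "RoleArn": "arn:aws:iam::111122223333:role/rds-proxy-role",
    "VpcSubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],
}

# The target group binds the proxy to the Aurora cluster (all of its
# writer and reader instances).
target_group = {
    "DBProxyName": "aurora-proxy",
    "TargetGroupName": "default",
    "DBClusterIdentifiers": ["ecommerce-aurora-cluster"],
}
```

Connection pooling at the proxy absorbs the connection storms that caused the timeouts, while the extra reader instances absorb the read load itself.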
■
Question #: : 712
A company is designing a web application on AWS. The application will use a VPN connection between the
company’s existing data centers and the company's VPCs.
The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to
communicate with the on-premises services from a VPC.
Which solution will meet these requirements in the MOST secure manner?
• A. Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with
the VPC.
• B. Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with
the VPC.
• C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
• D. Create a Route 53 public hosted zone. Create a record for each service to allow service communication
Suggested Answer: A
Create a Route 53 Resolver outbound endpoint: The outbound endpoint enables communication from the VPC to the on-premises networks. It allows DNS queries that originate from resources in the VPC to be forwarded securely to the DNS resolvers in the on-premises environment.
Create a resolver rule: With the resolver rule, you specify the domain names for which DNS queries should be forwarded to the outbound endpoint. In this case, you would define the domain names corresponding to the on-premises services.
Associate the resolver rule with the VPC: By associating the resolver rule with the VPC, you ensure that DNS queries from resources within the VPC for the specified domain names are forwarded to the on-premises DNS resolvers via the outbound endpoint.
This solution uses Route 53 Resolver to route DNS queries from the VPC to the on-premises environment securely, ensuring that private DNS records for the on-premises services can be resolved safely.
Option B: Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with
the VPC.
Explanation: Inbound endpoints are used to route DNS queries from on-premises networks into the VPC. They
are not intended for resolving DNS queries originating from resources within the VPC to on-premises services.
Therefore, using an inbound endpoint would not address the requirement for the web application to communicate
with on-premises services using private DNS records.
Option C: Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
Explanation: While private hosted zones can be used to define custom DNS records for private communication
within a VPC, they are not suitable for resolving DNS queries to on-premises services. Private hosted zones are
specific to AWS environments and do not extend to on-premises networks. Therefore, this option would not
enable resolution of on-premises service names from the VPC.
Option D: Create a Route 53 public hosted zone. Create a record for each service to allow service communication.
Explanation: Public hosted zones are intended for resolving DNS queries from the public internet and are not
suitable for private communication within a VPC or with on-premises services. Additionally, exposing on-
premises service names in a public hosted zone could pose security risks by exposing internal infrastructure details
to the public internet.
In summary, options B, C, and D are not the most appropriate solutions because they do not address the
requirement for secure communication between the web application in the VPC and the on-premises services
using private DNS records. Option A, which involves creating an outbound endpoint and resolver rule, is the
correct solution as it enables DNS resolution from the VPC to on-premises services securely.
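As an illustration of option A, the three steps could be expressed as boto3-style request payloads. Every value below (subnet and security group IDs, the corp.example.com domain, the 10.0.0.2 resolver address, endpoint and rule IDs) is a hypothetical placeholder, not taken from the question:

```python
# Step 1: an OUTBOUND Resolver endpoint in two subnets for availability.
create_outbound_endpoint = {
    "CreatorRequestId": "app-dns-001",
    "Direction": "OUTBOUND",
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "IpAddresses": [
        {"SubnetId": "subnet-aaa"},
        {"SubnetId": "subnet-bbb"},
    ],
}

# Step 2: a FORWARD rule that sends queries for the on-premises domain
# to the on-premises DNS resolver through the outbound endpoint.
create_resolver_rule = {
    "CreatorRequestId": "corp-forward-rule",
    "RuleType": "FORWARD",
    "DomainName": "corp.example.com",                 # on-premises private domain
    "TargetIps": [{"Ip": "10.0.0.2", "Port": 53}],    # on-premises DNS resolver
    "ResolverEndpointId": "rslvr-out-0123456789abcdef0",
}

# Step 3: associate the rule with the VPC so its resources use it.
associate_rule = {
    "ResolverRuleId": "rslvr-rr-0123456789abcdef0",
    "VPCId": "vpc-0123456789abcdef0",
}
```

These dicts mirror the parameters of the Route53Resolver `create_resolver_endpoint`, `create_resolver_rule`, and `associate_resolver_rule` calls; queries for `*.corp.example.com` from the VPC would then be forwarded to 10.0.0.2.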
■
Question #: 713
A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple
countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less
than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata
to determine which photos to display to each user.
Hide Answer
Suggested Answer: B
DynamoDB for storing metadata: DynamoDB is a highly scalable, fully managed NoSQL database service. Storing
photo metadata and their S3 locations in DynamoDB allows for efficient querying and retrieval of photo
information based on user requests. DynamoDB's flexible scaling and low latency retrieval make it suitable for
storing and querying metadata.
Option A (Using DynamoDB with DAX) may not be the most cost-effective solution as DynamoDB can be more
expensive compared to S3, especially considering the volume of data involved in storing photos.
Option C (Using S3 Standard with Lifecycle policy) and Option D (Using S3 Glacier with Lifecycle policy and
OpenSearch Service) are not the most cost-effective solutions considering the varying access patterns of the
photos. Storing all photos in a single storage class without automatically optimizing for access patterns may lead
to higher costs, especially if frequently accessed photos are stored in more expensive storage classes.
Therefore, Option B is the most appropriate solution as it effectively addresses the requirements of the photo
hosting service while ensuring cost-effectiveness.
Amazon S3 Intelligent-Tiering: This storage class automatically optimizes storage costs by moving objects
between two access tiers: frequent access and infrequent access. Because some photos are heavily viewed for
months while others are viewed for shorter periods, Intelligent-Tiering efficiently manages storage for both kinds
of photos, ensuring cost-effectiveness.
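A minimal sketch of option B's data layout, assuming a bucket name, key format, and table attributes that are not given in the question: the photo bytes go to S3 with the INTELLIGENT_TIERING storage class, and the metadata plus the object's S3 location go into a DynamoDB item.

```python
# Hypothetical S3 put_object arguments: the photo is stored directly in
# Intelligent-Tiering so S3 moves it between access tiers automatically.
put_object_args = {
    "Bucket": "photo-hosting-bucket",              # placeholder bucket
    "Key": "photos/user123/beach.jpg",
    "StorageClass": "INTELLIGENT_TIERING",
}

# Hypothetical DynamoDB item (low-level attribute-value format) holding
# the photo metadata and a pointer back to the S3 object.
metadata_item = {
    "photo_id": {"S": "user123/beach.jpg"},
    "s3_location": {"S": "s3://photo-hosting-bucket/photos/user123/beach.jpg"},
    "uploaded_at": {"S": "2024-05-01T12:00:00Z"},
    "views": {"N": "0"},
}
```

The application queries DynamoDB to decide which photos to show, then fetches the bytes from S3 using the stored location.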
■
Question #: 714
A company runs a highly available web application on Amazon EC2 instances behind an Application Load Balancer.
The company uses Amazon CloudWatch metrics.
As the traffic to the web application increases, some EC2 instances become overloaded with many outstanding
requests. The CloudWatch metrics show that the number of requests processed and the time to receive the
responses from some EC2 instances are both higher compared to other EC2 instances. The company does not
want new requests to be forwarded to the EC2 instances that are already overloaded.
Hide Answer
Suggested Answer: C
Question #: 715
A company uses Amazon EC2, AWS Fargate, and AWS Lambda to run multiple workloads in the company's AWS
account. The company wants to fully make use of its Compute Savings Plans. The company wants to receive
notification when coverage of the Compute Savings Plans drops.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Create a daily budget for the Savings Plans by using AWS Budgets. Configure the budget with a
coverage threshold to send notifications to the appropriate email message recipients.
• B. Create a Lambda function that runs a coverage report against the Savings Plans. Use Amazon Simple
Email Service (Amazon SES) to email the report to the appropriate email message recipients.
• C. Create an AWS Budgets report for the Savings Plans budget. Set the frequency to daily.
• D. Create a Savings Plans alert subscription. Enable all notification options. Enter an email address to
receive notifications.
Hide Answer
Suggested Answer: A
Coverage Threshold: With AWS Budgets, you can set up a budget specifically for Savings Plans coverage. By
configuring a coverage threshold, you can define the minimum coverage percentage that you want to maintain. If
the coverage drops below this threshold, AWS Budgets will automatically trigger a notification.
Email Notifications: AWS Budgets supports sending notifications via email when budget thresholds are breached.
This ensures that the appropriate stakeholders are promptly informed when the coverage of Savings Plans drops
below the specified threshold.
This solution offers the most operational efficiency because it leverages AWS Budgets, which is specifically
designed for managing cost and usage budgets in AWS. It automates the monitoring process and sends
notifications directly to the relevant stakeholders when coverage drops, minimizing manual intervention and
ensuring timely awareness of any deviations from expected Savings Plans coverage.
Option B: Creating a Lambda function to run a coverage report against the Savings Plans and using Amazon SES
to email the report is less efficient because it involves custom development and maintenance overhead. You would
need to regularly update and maintain the Lambda function to generate the coverage report accurately, which
adds complexity and operational burden.
Option C: Creating an AWS Budgets report for the Savings Plans budget with a daily frequency is less efficient
because it only generates a report without actively monitoring the coverage in real-time. While you can view
historical data, it does not provide immediate notifications when the coverage drops below the threshold, resulting
in delayed awareness of potential issues.
Option D: Creating a Savings Plans alert subscription and enabling all notification options is less efficient because
it lacks granularity in defining specific thresholds for coverage. Additionally, it may lead to unnecessary
notifications for minor fluctuations in coverage, potentially causing alert fatigue. This option also doesn't allow
for customization of notification recipients or thresholds.
AWS Budgets: AWS Budgets is a service that lets you set custom cost and usage budgets. You can create budgets
to track your spending and usage across different AWS services, including Savings Plans.
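As a concrete sketch of option A, the coverage budget and its alert could be created with a single boto3-style `create_budget` request. All values below (account ID, threshold, e-mail address) are illustrative placeholders:

```python
# Hypothetical AWS Budgets request: a Savings Plans coverage budget that
# notifies subscribers when actual coverage drops below 90%.
create_budget_request = {
    "AccountId": "123456789012",                       # placeholder account
    "Budget": {
        "BudgetName": "savings-plans-coverage",
        "BudgetType": "SAVINGS_PLANS_COVERAGE",
        "TimeUnit": "MONTHLY",
        "BudgetLimit": {"Amount": "90", "Unit": "PERCENT"},  # target coverage
    },
    "NotificationsWithSubscribers": [
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "LESS_THAN",     # alert when coverage falls
                "Threshold": 90,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"}
            ],
        }
    ],
}
```

Once created, AWS Budgets evaluates the coverage automatically and e-mails the subscribers whenever the threshold is breached, with no custom code to maintain.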
■
Question #: 716
A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of
Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private
subnets across three Availability Zones.
A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The
data in transit must also be encrypted.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update
the MSK cluster security settings to enable mutual TLS authentication.
• B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the
MSK cluster security settings to enable mutual TLS authentication.
• C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security
group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
• D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for
HTTPS communication over the internet.
Hide Answer
Suggested Answer: C
Question #: 717
A company wants to migrate an on-premises legacy application to AWS. The application ingests customer order
files from an on-premises enterprise resource planning (ERP) system. The application then uploads the files to
an SFTP server. The application uses a scheduled job that checks for order files every hour.
The company already has an AWS account that has connectivity to the on-premises network. The new application
on AWS must support integration with the existing ERP system. The new application must be secure and resilient
and must use the SFTP protocol to process orders from the ERP system immediately.
Hide Answer
Suggested Answer: A
Creating an AWS Transfer Family SFTP internal server in two Availability Zones ensures high availability and
resilience.
Using Amazon S3 storage for the order files provides durability and scalability.
Creating an AWS Lambda function to process the order files enables immediate processing.
Using a Transfer Family managed workflow to invoke the Lambda function ensures efficient and reliable execution.
This option meets the requirements to process order files securely and resiliently over the SFTP protocol, to
integrate with the existing ERP system, and to process orders immediately.
Option A: using S3 Event Notifications to trigger a Lambda function every hour may not meet the requirement
for immediate processing of order files.
Option B:
Similar to option A, this solution lacks resilience and high availability as it deploys the SFTP server in only one
Availability Zone.
Although it utilizes Amazon EFS storage, which supports file system access from multiple EC2 instances and AWS
Lambda, it may introduce unnecessary complexity and may not be the most cost-effective solution for this use
case.
Option C:
Deploying an SFTP internal server in two Availability Zones addresses the resilience requirement, but it doesn't
support immediate processing of order files. Instead, it relies on periodic checks using Amazon EventBridge
Scheduler, which may not meet the requirement for immediate processing.
Using Amazon EFS storage adds complexity without clear benefits for this use case.
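Returning to the selected option, the server-plus-workflow wiring could look roughly like this boto3-style `create_server` payload. The workflow ID, role ARN, and endpoint details are invented placeholders:

```python
# Hypothetical Transfer Family create_server request: an SFTP server backed
# by Amazon S3, with a managed workflow that fires as soon as an upload
# completes, so orders are processed immediately rather than hourly.
create_server_request = {
    "Protocols": ["SFTP"],                  # ERP system pushes orders over SFTP
    "Domain": "S3",                         # order files land in Amazon S3
    "IdentityProviderType": "SERVICE_MANAGED",
    "EndpointType": "VPC",                  # internal endpoint, reachable on premises
    "WorkflowDetails": {
        "OnUpload": [
            {
                # The managed workflow step invokes a Lambda function on each
                # completed upload (IDs/ARNs below are placeholders).
                "WorkflowId": "w-0123456789abcdef0",
                "ExecutionRole": "arn:aws:iam::123456789012:role/transfer-workflow",
            }
        ]
    },
}
```

The event-driven `OnUpload` hook is what replaces the legacy hourly polling job.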
■
Question #: 718
A company’s applications use Apache Hadoop and Apache Spark to process data on premises. The existing
infrastructure is not scalable and is complex to manage.
A solutions architect must design a scalable solution that reduces operational complexity. The solution must keep
the data processing on premises.
Hide Answer
Suggested Answer: C
Amazon EMR on AWS Outposts: This option involves migrating the Apache Hadoop and Apache Spark
applications to Amazon EMR clusters deployed on AWS Outposts. AWS Outposts extend AWS infrastructure to
on-premises environments, allowing you to run AWS services locally. By deploying EMR clusters on AWS
Outposts, you can leverage the scalability and managed services of EMR while keeping your data processing on
premises.
Scalability and Reduced Complexity: EMR clusters on AWS Outposts provide scalability without the need to
manage complex on-premises infrastructure. AWS manages the underlying infrastructure, including hardware
provisioning, cluster setup, and software updates, reducing operational complexity for your team.
Integration with On-Premises Data: This solution allows seamless integration with on-premises data sources, as
the EMR clusters can access data stored on local Hadoop Distributed File System (HDFS) clusters. It ensures that
data processing remains localized while benefiting from the scalability and managed services of AWS EMR.
Operational Efficiency: By migrating to EMR on AWS Outposts, you can achieve operational efficiency by
offloading infrastructure management tasks to AWS, allowing your team to focus on data processing tasks rather
than infrastructure maintenance.
In contrast, the other options involve either migrating data to the cloud (Option D) or using AWS services in the
cloud without keeping data processing on premises (Options A and B), which do not align with the requirement
to keep data processing on premises. Therefore, Option C is the most appropriate solution for the given scenario.
■
Question #: 719
A company is migrating a large amount of data from on-premises storage to AWS. Windows, Mac, and Linux
based Amazon EC2 instances in the same AWS Region will access the data by using SMB and NFS storage
protocols. The company will access a portion of the data routinely. The company will access the remaining data
infrequently.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an Amazon Elastic File System (Amazon EFS) volume that uses EFS Intelligent-Tiering. Use
AWS DataSync to migrate the data to the EFS volume.
• B. Create an Amazon FSx for ONTAP instance. Create an FSx for ONTAP file system with a root volume
that uses the auto tiering policy. Migrate the data to the FSx for ONTAP volume.
• C. Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by
using an AWS Storage Gateway Amazon S3 File Gateway.
• D. Create an Amazon FSx for OpenZFS file system. Migrate the data to the new volume.
Hide Answer
Suggested Answer: B
• Amazon FSx for ONTAP supports both SMB and NFS protocols, making it suitable for the mixed
environment of Windows, Mac, and Linux-based EC2 instances.
• Auto tiering policy in FSx for ONTAP automatically moves data between different storage tiers based on
access patterns, ensuring that frequently accessed data remains on faster storage, while infrequently accessed data
is moved to cost-effective storage, minimizing operational overhead.
• The solution keeps the data within the AWS ecosystem, making it easier to manage and integrate with
other AWS services.
Options A, C, and D do not fully meet the requirements:
• Option A (Amazon EFS with Intelligent-Tiering): While Amazon EFS supports NFS, it does not natively
support SMB. Additionally, Intelligent-Tiering may not provide optimal performance for data that requires
frequent access.
• Option C (Amazon S3 with S3 Intelligent-Tiering): While S3 supports both SMB and NFS access
through various methods, using an S3 bucket directly might introduce additional complexity for accessing the data
from EC2 instances, especially when both SMB and NFS access is required.
• Option D (Amazon FSx for OpenZFS): While FSx for OpenZFS supports NFS, it does not support SMB,
which is needed for Windows-based EC2 instances. Additionally, it might not offer the same level of optimization
for frequently and infrequently accessed data as FSx for ONTAP with its auto-tiering policy.
FSx for NetApp ONTAP is a fully managed file storage service with advanced features and support for multiple
protocols, while S3 with S3 File Gateway is a hybrid cloud storage solution that enables on-premises applications
to access data stored in Amazon S3.
■
Question #: 720
A manufacturing company runs its report generation application on AWS. The application generates each report
in about 20 minutes. The application is built as a monolith that runs on a single Amazon EC2 instance. The
application requires frequent updates to its tightly coupled modules. The application becomes complex to
maintain as the company adds new features.
Each time the company patches a software module, the application experiences downtime. Report generation must
restart from the beginning after any interruptions. The company wants to redesign the application so that the
application can be flexible, scalable, and gradually improved. The company wants to minimize application
downtime.
Hide Answer
Suggested Answer: B
Question #: 721
A company wants to rearchitect a large-scale web application to a serverless microservices architecture. The
application uses Amazon EC2 instances and is written in Python.
The company selected one component of the web application to test as a microservice. The component supports
hundreds of requests each second. The company wants to create and test the microservice on an AWS solution
that supports Python. The solution must also scale automatically and require minimal infrastructure and minimal
operational support.
Hide Answer
Suggested Answer: C
Question #: 722
A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS
account has 30 different VPCs in the same AWS Region. The VPCs use private virtual interfaces (VIFs). Each
VPC has a CIDR block that does not overlap with other networks under the company's control.
The company wants to centrally manage the networking architecture while still allowing each VPC to
communicate with all other VPCs and on-premises networks.
Which solution will meet these requirements with the LEAST amount of operational overhead?
• A. Create a transit gateway, and associate the Direct Connect connection with a new transit VIF. Turn
on the transit gateway's route propagation feature.
• B. Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each
VPC by creating new virtual private gateways.
• C. Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering
connection between all other VPCs in the Region. Update the route tables.
• D. Create AWS Site-to-Site VPN connections from on premises to each VPC. Ensure that both VPN
tunnels are UP for each connection. Turn on the route propagation feature.
Hide Answer
Suggested Answer: A
AWS Transit Gateway is a highly scalable service that simplifies network management by acting as a hub for
connecting multiple VPCs and VPN connections.
By attaching all 30 VPCs to a single Transit Gateway, you can centrally manage the routing and connectivity
between these VPCs and on-premises networks.
Transit Gateway allows for efficient communication between attached VPCs without the need for individual VPC
peering connections.
With Transit Gateway, you can easily scale your network as your organization grows by adding new VPCs or VPN
connections without significant operational overhead.
It provides a simplified and centralized approach to managing network connectivity, reducing administrative
burden and operational complexity.
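The hub-and-spoke setup described above can be sketched as two boto3-style payloads (all IDs are placeholders). With default route table association and propagation enabled, each attached VPC learns the routes of the others automatically:

```python
# Hypothetical transit gateway with automatic association/propagation, so
# attaching a VPC is the only per-VPC step needed.
create_transit_gateway = {
    "Description": "hub for 30 VPCs and Direct Connect",
    "Options": {
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",  # routes propagate automatically
    },
}

# One attachment per VPC (repeated 30 times); subnet IDs are placeholders.
attach_vpc = {
    "TransitGatewayId": "tgw-0123456789abcdef0",
    "VpcId": "vpc-0123456789abcdef0",
    "SubnetIds": ["subnet-aaa", "subnet-bbb"],
}

# The on-premises side connects through a transit VIF on the Direct Connect
# connection, associated with the transit gateway via a Direct Connect gateway.
```

Compare this with full-mesh VPC peering, which would need 30 × 29 / 2 = 435 peering connections for the same connectivity.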
Question #: 723
A company has applications that run on Amazon EC2 instances. The EC2 instances connect to Amazon RDS
databases by using an IAM role that has associated policies. The company wants to use AWS Systems Manager to
patch the EC2 instances without disrupting the running applications.
Hide Answer
Suggested Answer: C
AWS Systems Manager's Default Host Management Configuration feature lets you manage EC2 instances without
manually attaching IAM policies to an IAM role.
With Default Host Management Configuration enabled, Systems Manager automatically manages the EC2
instances, including patching, without manual IAM role configuration.
This approach reduces operational overhead by leveraging Systems Manager's built-in capabilities for managing
EC2 instances.
It simplifies the management process and ensures that Systems Manager can patch the EC2 instances without
disrupting the running applications.
This solution aligns with best practices for managing EC2 instances with Systems Manager and ensures efficient
and effective resource management.
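Once the instances are managed by Systems Manager, patching can be driven by the AWS-managed AWS-RunPatchBaseline document; a hedged boto3-style `send_command` sketch (the tag value is a placeholder):

```python
# Hypothetical Systems Manager Run Command request that applies the patch
# baseline to instances selected by tag, rather than by instance ID.
send_command_request = {
    "DocumentName": "AWS-RunPatchBaseline",   # AWS-managed patching document
    "Targets": [
        {"Key": "tag:PatchGroup", "Values": ["app-servers"]}  # placeholder tag
    ],
    "Parameters": {
        "Operation": ["Install"],  # "Scan" would only report missing patches
    },
}
```

Pairing this with a maintenance window keeps patch installs inside approved hours so running applications are not disrupted unexpectedly.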
Question #: 724
A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the
Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect
notices that the number of nodes does not automatically scale out when the existing nodes have reached maximum
capacity in the cluster, which causes performance issues.
Which solution will resolve this issue with the LEAST administrative overhead?
• A. Scale out the nodes by tracking the memory usage.
• B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
• C. Use an AWS Lambda function to resize the EKS cluster automatically.
• D. Use an Amazon EC2 Auto Scaling group to distribute the workload.
Hide Answer
Suggested Answer: B
Enabling the Cluster Autoscaler requires minimal administrative overhead because it automates the scaling
process based on workload demand, without manual intervention.
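Concretely, the Cluster Autoscaler's auto-discovery mode finds the node groups it may scale via tags on the backing EC2 Auto Scaling groups; the conventional tag keys look like this (the cluster name is a placeholder):

```python
# Tags the Kubernetes Cluster Autoscaler looks for, in auto-discovery mode,
# on each EC2 Auto Scaling group that backs an EKS node group.
# "my-eks-cluster" is a placeholder cluster name.
autoscaler_discovery_tags = {
    "k8s.io/cluster-autoscaler/enabled": "true",
    "k8s.io/cluster-autoscaler/my-eks-cluster": "owned",
}
```

When pods are unschedulable because nodes are full, the autoscaler raises the desired capacity of a tagged group; the Horizontal Pod Autoscaler continues to handle pod-level scaling.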
■
Question #: 725
A company maintains about 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each
typically around 50 GB in size and are frequently replaced with multipart uploads by their global application. The
number and size of S3 objects remain constant, but the company's S3 storage costs are increasing each month.
Hide Answer
Suggested Answer: B
Given the scenario provided, where the company maintains a large amount of data in Amazon S3 Standard storage
and experiences increasing storage costs, the most effective approach to reduce costs would be:
B. Enable an S3 Lifecycle policy that deletes incomplete multipart uploads.
Explanation:
• Multipart uploads incur storage costs even if they are incomplete. Enabling a lifecycle policy to delete
incomplete multipart uploads will help prevent unnecessary storage costs for objects that are not fully uploaded.
• Since the objects are frequently replaced with multipart uploads, there might be instances where uploads
are started but not completed, leading to unnecessary storage costs.
• By configuring a lifecycle policy to delete incomplete multipart uploads, the company can avoid incurring
storage costs for data that is not fully uploaded or required.
Options A, C, and D are not directly relevant to the issue of reducing storage costs for the given scenario:
• Option A suggests switching to Amazon S3 Transfer Acceleration, which is a feature to accelerate data
transfers to and from Amazon S3, but it doesn't address the issue of reducing storage costs.
• Option C mentions configuring S3 inventory, which is used for generating reports on S3 object metadata,
but it doesn't directly impact storage costs.
• Option D suggests using Amazon CloudFront, which is a content delivery network service, but it's not
directly related to reducing storage costs in Amazon S3.
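The lifecycle rule from answer B can be expressed as the configuration body passed to S3's put-bucket-lifecycle-configuration call; the seven-day window below is an arbitrary example, not a value from the question:

```python
# Hypothetical S3 lifecycle configuration: abort multipart uploads that are
# still incomplete 7 days after they were initiated, so their uploaded parts
# stop accruing storage charges.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "abort-stale-multipart-uploads",
            "Status": "Enabled",
            "Filter": {},  # empty filter applies the rule to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}
```

Because the application replaces 50 GB objects via multipart uploads constantly, any interrupted upload leaves orphaned parts behind; this rule cleans them up automatically.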
Question #: 726
A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players
based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations.
The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During
peak usage periods, the database is unable to maintain the performance that is needed for reading and writing
updates. The game's user base is increasing rapidly.
What should a solutions architect do to improve the performance of the data tier?
• A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
• B. Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards.
• C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game
to use DAX.
• D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the
game to use Redis.
Hide Answer
Suggested Answer: D
Question #: : 727
A company stores critical data in Amazon DynamoDB tables in the company's AWS account. An IT administrator
accidentally deleted a DynamoDB table. The deletion caused a significant loss of data and disrupted the company's
operations. The company wants to prevent this type of disruption in the future.
Which solution will meet this requirement with the LEAST operational overhead?
• A. Configure a trail in AWS CloudTrail. Create an Amazon EventBridge rule for delete actions. Create
an AWS Lambda function to automatically restore deleted DynamoDB tables.
• B. Create a backup and restore plan for the DynamoDB tables. Recover the DynamoDB tables manually.
• C. Configure deletion protection on the DynamoDB tables.
• D. Enable point-in-time recovery on the DynamoDB tables.
Hide Answer
Suggested Answer: C
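Assuming deletion protection (option C) is the intended safeguard, it is a single table setting. A minimal sketch of the `update_table` parameters follows; the table name is hypothetical and the call is commented out so the snippet runs offline.

```python
# DynamoDB deletion protection: once enabled, DeleteTable requests fail
# until the flag is explicitly turned off again.
params = {
    "TableName": "critical-data",       # hypothetical table name
    "DeletionProtectionEnabled": True,  # block accidental table deletion
}

# import boto3
# boto3.client("dynamodb").update_table(**params)
```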
Question #: : 728
A company has an on-premises data center that is running out of storage capacity. The company wants to migrate
its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate
retrieval of data at no additional cost.
How can these requirements be met?
• A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity
for the workload.
• B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon
S3 while retaining copies of frequently accessed data subsets locally.
• C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to
asynchronously back up point-in-time snapshots of the data to Amazon S3.
• D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage
Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data
to Amazon S3.
Hide Answer
Suggested Answer: B
Question #: : 729
A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances
run in an Auto Scaling group for the application tier.
The company needs to make an automated scaling plan that will analyze each resource's daily and weekly historical
workload trends. The configuration must scale resources appropriately according to both the forecast and live
changes in utilization.
Which scaling strategy should a solutions architect recommend to meet these requirements?
• A. Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
• B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking
• C. Create an automated scheduled scaling action based on the traffic patterns of the web application.
• D. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
Hide Answer
Suggested Answer: B
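Option B combines two policies: predictive scaling for the forecast and target tracking for live changes. A sketch of both as EC2 Auto Scaling `put_scaling_policy` payloads follows; group and policy names are hypothetical, and the API calls are commented out so the snippet runs offline.

```python
# Predictive scaling: forecasts from daily/weekly historical load and
# pre-provisions capacity ahead of the predicted demand.
predictive_policy = {
    "AutoScalingGroupName": "app-tier-asg",  # hypothetical ASG name
    "PolicyName": "forecast-capacity",
    "PolicyType": "PredictiveScaling",
    "PredictiveScalingConfiguration": {
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
}

# Target tracking: reacts to live utilization changes the forecast missed.
target_tracking_policy = {
    "AutoScalingGroupName": "app-tier-asg",
    "PolicyName": "live-cpu-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
}

# import boto3
# asg = boto3.client("autoscaling")
# asg.put_scaling_policy(**predictive_policy)
# asg.put_scaling_policy(**target_tracking_policy)
```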
Question #: : 730
A package delivery company has an application that uses Amazon EC2 instances and an Amazon Aurora MySQL
DB cluster. As the application becomes more popular, EC2 instance usage increases only slightly. DB cluster usage
increases at a much faster rate.
The company adds a read replica, which reduces the DB cluster usage for a short period of time. However, the
load continues to increase. The operations that cause the increase in DB cluster usage are all repeated read
statements that are related to delivery details. The company needs to alleviate the effect of repeated reads on the
DB cluster.
Hide Answer
Suggested Answer: A
A. Implement an Amazon ElastiCache for Redis cluster between the application and the DB cluster.
Here's why this option is the best fit:
• Caching Solution: Amazon ElastiCache for Redis provides an in-memory caching solution that can
significantly reduce the load on the DB cluster by caching frequently accessed data. Since the operations causing
the increase in DB cluster usage are repeated read statements related to delivery details, caching this data in Redis
can help alleviate the need for repeated reads from the DB cluster.
• Reduces Database Load: By caching frequently accessed data in ElastiCache for Redis, the number of
repeated reads hitting the DB cluster can be reduced, leading to lower database load and improved performance.
Option B, adding an additional read replica to the DB cluster, may help distribute the read workload, but it may
not effectively address the issue of repeated reads causing increased DB cluster usage. Additionally, adding more
read replicas may increase costs without directly addressing the root cause of the problem.
Option C, configuring Aurora Auto Scaling for the Aurora read replicas, helps with scaling the read capacity of
the DB cluster but does not directly address the issue of repeated reads.
Option D, modifying the DB cluster to have multiple writer instances, is not appropriate because the issue is
related to read operations, not write operations; adding writers would do nothing to reduce the repeated read load.
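The caching pattern described above is usually implemented as cache-aside: check the cache first, and only query the database on a miss. The sketch below illustrates the idea in pure Python; a plain dict stands in for ElastiCache for Redis and `fetch_from_db` for the Aurora query, so nothing here is a real AWS call.

```python
# Cache-aside sketch: repeated reads for the same delivery hit the cache,
# so the (simulated) database is queried only once per key.
cache = {}
db_reads = {"count": 0}

def fetch_from_db(delivery_id):
    """Stand-in for the Aurora query; counts how often the DB is hit."""
    db_reads["count"] += 1
    return {"delivery_id": delivery_id, "status": "in transit"}

def get_delivery_details(delivery_id):
    if delivery_id in cache:           # cache hit: DB is not touched
        return cache[delivery_id]
    details = fetch_from_db(delivery_id)
    cache[delivery_id] = details       # populate the cache for later readers
    return details

for _ in range(100):                   # 100 repeated reads of one delivery...
    get_delivery_details("D-42")
print(db_reads["count"])               # → 1 (only the first read reached the DB)
```

In production the cache entries would also carry a TTL so stale delivery details eventually expire.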
Question #: : 731
A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers
that many requests to the table are not returning the latest data. The company's users have not reported any other
issues with database performance. Latency is in an acceptable range.
Hide Answer
Suggested Answer: C
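The symptom here (reads not returning the latest data, with acceptable latency) matches DynamoDB's default eventually consistent reads; strongly consistent reads are requested per call. A sketch of the `get_item` parameters follows; the table name and key are hypothetical, and the call is commented out so the snippet runs offline.

```python
# Opting in to a strongly consistent read for a single GetItem call.
params = {
    "TableName": "app-table",            # hypothetical table name
    "Key": {"pk": {"S": "item-1"}},      # hypothetical partition key
    "ConsistentRead": True,              # default is False (eventually consistent)
}

# import boto3
# boto3.client("dynamodb").get_item(**params)
```

Strongly consistent reads cost twice the read capacity of eventually consistent ones, which is why they are opt-in rather than the default.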
Question #: : 732
A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company
used the principle of least privilege to configure the database access credentials. The company's security team
wants to protect the application and the database from SQL injection and other web-based attacks.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use security groups and network ACLs to secure the database and application servers.
• B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security
settings.
• C. Use AWS Network Firewall to protect the application and the database.
• D. Use different database accounts in the application code for different functions. Avoid granting
excessive privileges to the database users.
Hide Answer
Suggested Answer: B
1. AWS WAF (Web Application Firewall): AWS WAF helps protect web applications from common web
exploits that could affect application availability, compromise security, or consume excessive resources. By
configuring AWS WAF, you can define rules to filter web traffic and block potentially harmful requests, including
those attempting SQL injection and other web-based attacks. AWS WAF integrates seamlessly with Amazon
CloudFront, Application Load Balancer (ALB), and API Gateway, making it easy to protect your applications
without significant changes to your architecture.
2. RDS Parameter Groups: RDS parameter groups allow you to configure database engine settings to meet
specific requirements, including security settings. While they may not directly prevent web-based attacks like SQL
injection, they enable you to configure security-related parameters such as enforcing SSL/TLS connections,
setting timeouts, and enabling encryption. These settings help enhance the security posture of your RDS database
with minimal operational overhead.
Option A (using security groups and network ACLs) provides network-level security controls but does not
specifically address web-based attacks like SQL injection. Additionally, managing security groups and network
ACLs might require more effort compared to configuring AWS WAF and RDS parameter groups.
Option C (using AWS Network Firewall) is more focused on network traffic filtering rather than protecting web
applications against specific web-based attacks like SQL injection. While it provides network-level protection, it
may involve more operational overhead compared to using AWS WAF and RDS parameter groups.
Option D (using different database accounts in the application code) is a good security practice to limit the scope
of privileges granted to database users. However, it alone does not protect the application from web-based attacks
like SQL injection. It's a complementary measure that should be used in conjunction with other security
mechanisms like AWS WAF and secure coding practices.
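The AWS WAF protection described above is typically attached as a managed rule group on a web ACL. The sketch below shows one rule entry for the WAFv2 `create_web_acl` payload referencing the AWS managed SQL injection rule set; the ACL name and scope are illustrative, and the API call is commented out so the snippet runs offline.

```python
# Web ACL rule that attaches the AWS-managed SQLi rule group.
sqli_rule = {
    "Name": "sqli-managed-rules",
    "Priority": 0,
    "Statement": {
        "ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesSQLiRuleSet",  # managed SQL injection rules
        }
    },
    "OverrideAction": {"None": {}},  # use the rule group's own actions
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "sqli-managed-rules",
    },
}

# import boto3
# boto3.client("wafv2").create_web_acl(
#     Name="app-web-acl",   # illustrative name
#     Scope="REGIONAL",     # REGIONAL for an ALB origin
#     DefaultAction={"Allow": {}},
#     Rules=[sqli_rule],
#     VisibilityConfig=sqli_rule["VisibilityConfig"],
# )
```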
Question #: : 733
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations.
The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to
prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.
Which solution will meet these requirements in the MOST operationally efficient way?
• A. Attach service control policies (SCPs) to the root of the organization to identify the failed login
attempts.
• B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the
organization.
• C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a
central Amazon S3 bucket.
• D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
Hide Answer
Suggested Answer: B
Operational efficiency: enabling RDS Protection in GuardDuty is a straightforward process that can be performed
centrally for all member accounts in the organization. Once enabled, GuardDuty automatically analyzes RDS login
activity to identify potential threats, including anomalous failed and incomplete login attempts to the databases.
This approach minimizes the operational overhead of building and managing custom monitoring solutions such as
SCPs, CloudWatch Logs, or CloudTrail configurations.
Option A (using SCPs) does not provide any capability to identify abnormal login attempts to the Aurora
PostgreSQL databases. SCPs control access and permissions within an AWS organization; they are not designed for
threat detection or monitoring.
Option C (publishing the Aurora general logs to CloudWatch Logs) and Option D (publishing database events to
CloudTrail) are valid ways to record database activity, but they require additional configuration and management
to extract meaningful insights and detect abnormal logins. These options involve more operational overhead than
GuardDuty RDS Protection, which provides built-in threat detection tailored to RDS databases.
Question #: : 734
A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1
Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection
between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company
and the corporation do not overlap. The company requires connectivity between two Regions and the data centers.
The company needs a solution that is scalable while reducing operational overhead.
Hide Answer
Suggested Answer: D
Question #: : 735
A company is developing a mobile game that streams score updates to a backend processor and then posts results
on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the
mobile game updates in order of receipt, and store the processed updates in a highly available database. The
company also wants to minimize the management overhead required to maintain the solution.
Hide Answer
Suggested Answer: C
Question #: : 736
A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are
stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution
that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational
overhead.
Hide Answer
Suggested Answer: B
Question #: : 737
A company has an application that delivers on-demand training videos to students around the world. The
application also allows authorized content developers to upload videos. The data is stored in an Amazon S3 bucket
in the us-east-2 Region.
The company has created an S3 bucket in the eu-west-2 Region and an S3 bucket in the ap-southeast-1 Region.
The company wants to replicate the data to the new S3 buckets. The company needs to minimize latency for
developers who upload videos and students who stream videos near eu-west-2 and ap-southeast-1.
Which combination of steps will meet these requirements with the FEWEST changes to the application? (Choose
two.)
• A. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure
one-way replication from the us-east-2 S3 bucket to the ap-southeast-1 S3 bucket.
• B. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure
one-way replication from the eu-west-2 S3 bucket to the ap-southeast-1 S3 bucket.
• C. Configure two-way (bidirectional) replication among the S3 buckets that are in all three Regions.
• D. Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name
(ARN) of the Multi-Region Access Point for video streaming. Do not modify the application for video uploads.
• E. Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name
(ARN) of the Multi-Region Access Point for video streaming and uploads.
Hide Answer
Suggested Answer: CE
Question #: : 738
A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users
also can post photos and videos from inside the app.
Users access content often in the first minutes after the content is posted. New content quickly replaces older
content, and then the older content disappears. The local nature of the news means that users consume 90% of
the content within the AWS Region where it is uploaded.
Which solution will optimize the user experience by providing the LOWEST latency for content uploads?
• A. Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
• B. Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
• C. Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to
Amazon S3.
• D. Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple
distributions of Amazon CloudFront.
Hide Answer
Suggested Answer: D
Question #: : 739
A company is building a new application that uses serverless architecture. The architecture will consist of an
Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.
The company wants to add a service that can send messages received from the API Gateway REST API to multiple
target Lambda functions for processing. The service must offer message filtering that gives the target Lambda
functions the ability to receive only the messages the functions need.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Send the requests from the API Gateway REST API to an Amazon Simple Notification Service
(Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic.
Configure the target Lambda functions to poll the different SQS queues.
• B. Send the requests from the API Gateway REST API to Amazon EventBridge. Configure EventBridge
to invoke the target Lambda functions.
• C. Send the requests from the API Gateway REST API to Amazon Managed Streaming for Apache Kafka
(Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.
• D. Send the requests from the API Gateway REST API to multiple Amazon Simple Queue Service
(Amazon SQS) queues. Configure the target Lambda functions to poll the different SQS queues.
Hide Answer
Suggested Answer: A
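The SNS message filtering in option A means each SQS queue subscription carries a filter policy that SNS evaluates against the message attributes before delivery. The sketch below illustrates the core of that evaluation in pure Python for exact-match string policies (the simplest form SNS supports); it is a conceptual model, not the SNS implementation.

```python
# Minimal model of SNS exact-match filter-policy evaluation:
# a message is delivered only if, for every key in the policy,
# the message attribute's value is one of the allowed values.
def matches(filter_policy, message_attributes):
    return all(
        message_attributes.get(key) in allowed
        for key, allowed in filter_policy.items()
    )

# Hypothetical policy on the refunds queue's subscription.
refunds_policy = {"event_type": ["refund"]}

print(matches(refunds_policy, {"event_type": "refund"}))    # → True
print(matches(refunds_policy, {"event_type": "purchase"}))  # → False
```

Real SNS policies also support prefix, anything-but, and numeric matching, but the delivery decision works the same way: filtering happens in SNS, so each target Lambda function only ever polls messages it needs.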
Question #: : 740
A company migrated millions of archival files to Amazon S3. A solutions architect needs to implement a solution
that will encrypt all the archival data by using a customer-provided key. The solution must encrypt existing
unencrypted objects and future objects.
Hide Answer
Suggested Answer: A
Community vote distribution
A (100%)
by Andy_09 at Feb. 5, 2024, 8:54 p.m.
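Whichever option letter applies, encrypting an existing S3 object with a customer-provided key (SSE-C) works by copying the object onto itself with the key supplied; new uploads pass the same headers. A minimal sketch of the `copy_object` parameters follows; the bucket, key, and 32-byte key value are hypothetical, and the call is commented out so the snippet runs offline.

```python
# SSE-C in-place re-encryption: copy the object over itself with the
# customer-provided 256-bit key. boto3 base64-encodes the key and
# computes its MD5 header automatically.
key_bytes = b"0" * 32  # hypothetical 256-bit customer-provided key

params = {
    "Bucket": "archive-bucket",  # hypothetical bucket
    "Key": "file-0001",          # hypothetical object key
    "CopySource": {"Bucket": "archive-bucket", "Key": "file-0001"},
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": key_bytes,
}

# import boto3
# boto3.client("s3").copy_object(**params)  # re-encrypts the object in place
```

Once an object is SSE-C encrypted, every subsequent GET or copy of it must also present the same key via the `CopySourceSSECustomerKey`/`SSECustomerKey` parameters.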
Question #: : 741
The DNS provider that hosts a company's domain name records is experiencing outages that cause service
disruption for a website running on AWS. The company needs to migrate to a more resilient managed DNS service
and wants the service to run on AWS.
What should a solutions architect do to rapidly migrate the DNS hosting service?
• A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing
the domain records hosted by the previous provider.
• B. Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing
the domain records hosted by the previous provider.
• C. Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS
Directory Service for Microsoft Active Directory for the domain records.
• D. Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the
provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the
domain to the IP addresses that are specified in the inbound endpoint.
Hide Answer
Suggested Answer: A
Question #: : 742
A company is building an application on AWS that connects to an Amazon RDS database. The company wants to
manage the application configuration and to securely store and retrieve credentials for the database and other
services.
Which solution will meet these requirements with the LEAST administrative overhead?
• A. Use AWS AppConfig to store and manage the application configuration. Use AWS Secrets Manager
to store and retrieve the credentials.
• B. Use AWS Lambda to store and manage the application configuration. Use AWS Systems Manager
Parameter Store to store and retrieve the credentials.
• C. Use an encrypted application configuration file. Store the file in Amazon S3 for the application
configuration. Create another S3 file to store and retrieve the credentials.
• D. Use AWS AppConfig to store and manage the application configuration. Use Amazon RDS to store
and retrieve the credentials.
Hide Answer
Suggested Answer: A
Question #: : 743
To meet security requirements, a company needs to encrypt all of its application data in transit while
communicating with an Amazon RDS MySQL DB instance. A recent security audit revealed that encryption at
rest is enabled using AWS Key Management Service (AWS KMS), but data in transit is not enabled.
Hide Answer
Suggested Answer: A
Question #: : 744
A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load
Balancing (ELB) load balancer. However, many of the web service clients can only reach IP addresses authorized
on their firewalls.
What should a solutions architect recommend to meet the clients’ needs?
• A. A Network Load Balancer with an associated Elastic IP address.
• B. An Application Load Balancer with an associated Elastic IP address.
• C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address.
• D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.
Hide Answer
Suggested Answer: A
Question #: : 745
A company has established a new AWS account. The account is newly provisioned and no changes have been
made to the default settings. The company is concerned about the security of the AWS account root user.
Hide Answer
Suggested Answer: A
Question #: : 746
A company is deploying an application that processes streaming data in near-real time. The company plans to use
Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest
possible latency between nodes.
Which combination of network solutions will meet these requirements? (Choose two.)
• A. Enable and configure enhanced networking on each EC2 instance.
• B. Group the EC2 instances in separate accounts.
• C. Run the EC2 instances in a cluster placement group.
• D. Attach multiple elastic network interfaces to each EC2 instance.
• E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Hide Answer
Suggested Answer: AC
Question #: : 747
A financial services company wants to shut down two data centers and migrate more than 100 TB of data to AWS.
The data has an intricate directory structure with millions of small files stored in deep hierarchies of subfolders.
Most of the data is unstructured, and the company’s file storage consists of SMB-based storage types from multiple
vendors. The company does not want to change its applications to access the data after migration.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?
• A. Use AWS Direct Connect to migrate the data to Amazon S3.
• B. Use AWS DataSync to migrate the data to Amazon FSx for Lustre.
• C. Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.
• D. Use AWS Direct Connect to migrate the data on-premises file storage to an AWS Storage Gateway
volume gateway.
Hide Answer
Suggested Answer: C
Question #: : 748
A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The
company sets up a dedicated monitoring member account in the organization. The company wants to query and
visualize observability data across the accounts by using Amazon CloudWatch.
Hide Answer
Suggested Answer: C
Question #: : 749
A company’s website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto
Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution,
and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront
distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from
accessing the website.
Hide Answer
Suggested Answer: A
Question #: : 750
A company sets up an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect
must design a solution to provide access to the accounts for several thousand employees. The company has an
existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS.
Hide Answer
Suggested Answer: B
Question #: : 751
A solutions architect is designing an AWS Identity and Access Management (IAM) authorization model for a
company's AWS account. The company has designated five specific employees to have full access to AWS services
and resources in the AWS account.
The solutions architect has created an IAM user for each of the five designated employees and has created an IAM
user group.
Hide Answer
Suggested Answer: C
Question #: : 752
A company has a multi-tier payment processing application that is based on virtual machines (VMs). The
communication between the tiers occurs asynchronously through a third-party middleware solution that
guarantees exactly-once delivery.
The company needs a solution that requires the least amount of infrastructure management. The solution must
guarantee exactly-once delivery for application messaging.
Hide Answer
Suggested Answer: AD
Question #: : 753
A company has a nightly batch processing routine that analyzes report files that an on-premises file system receives
daily through SFTP. The company wants to move the solution to the AWS Cloud. The solution must be highly
available and resilient. The solution also must minimize operational effort.
Hide Answer
Suggested Answer: B
Question #: : 754
A company has users all around the world accessing its HTTP-based application deployed on Amazon EC2
instances in multiple AWS Regions. The company wants to improve the availability and performance of the
application. The company also wants to protect the application against common web exploits that may affect
availability, compromise security, or consume excessive resources. Static IP addresses are required.
Hide Answer
Suggested Answer: C
Question #: : 755
A company’s data platform uses an Amazon Aurora MySQL database. The database has multiple read replicas and
multiple DB instances across different Availability Zones. Users have recently reported errors from the database
that indicate that there are too many connections. The company wants to reduce the failover time by 20% when
a read replica is promoted to primary writer.
Question #: : 756
A company stores text files in Amazon S3. The text files include customer chat messages, date and time
information, and customer personally identifiable information (PII).
The company needs a solution to provide samples of the conversations to an external service provider for quality
control. The external service provider needs to randomly pick sample conversations up to the most recent
conversation. The company must not share the customer PII with the external service provider. The solution must
scale when the number of customer conversations increases.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when
the function reads the file. Instruct the external service provider to access the Object Lambda Access Point.
• B. Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII
from the files, and writes the redacted files to a different S3 bucket. Instruct the external service provider to access
the bucket that does not contain the PII.
• C. Create a web application on an Amazon EC2 instance that presents a list of the files, redacts the PII from the
files, and allows the external service provider to download new versions of the files that have the PII redacted.
• D. Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only the data in the
files that does not contain PII. Configure the Lambda function to store the non-PII data in the DynamoDB table
when a new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.
Hide Answer
Suggested Answer: A
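The Object Lambda Access Point in option A runs a Lambda function that transforms each object as it is read. The sketch below shows the redaction core such a function could apply to a chat transcript; the regex patterns are illustrative, not an exhaustive PII detector.

```python
import re

# Illustrative PII patterns: email addresses and US-style phone numbers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b")

def redact(text):
    """Replace recognized PII with placeholder tokens before returning the file."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

print(redact("Call me at 555-123-4567 or mail jane@example.com"))
# → Call me at [PHONE] or mail [EMAIL]
```

Because the transformation runs on read, the original files keep their PII and only the external provider's access path sees the redacted view, with no second copy of the data to maintain.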
Question #: : 757
A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified, and
the system cannot run on more than one instance. A solutions architect must design a resilient solution that can
improve the recovery time for the system.
What should the solutions architect recommend to meet these requirements?
• A. Enable termination protection for the EC2 instance.
• B. Configure the EC2 instance for Multi-AZ deployment.
• C. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
• D. Launch the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID
configurations for storage redundancy.
Hide Answer
Suggested Answer: C
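Assuming the CloudWatch recovery alarm (option C) is the intended fix, it is a status-check alarm whose action is the built-in EC2 recover automation. A sketch of the `put_metric_alarm` parameters follows; the alarm name, Region, and instance ID are hypothetical, and the call is commented out so the snippet runs offline.

```python
# Alarm on the system status check; the automate action recovers the
# instance onto healthy hardware with the same instance ID and volumes.
alarm = {
    "AlarmName": "recover-legacy-instance",  # hypothetical name
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 2,           # two consecutive failed minutes
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"],  # hypothetical Region
}

# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**alarm)
```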
Question #: : 758
A company wants to deploy its containerized application workloads to a VPC across three Availability Zones. The
company needs a solution that is highly available across Availability Zones. The solution must require minimal
changes to the application.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS Service Auto Scaling
to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with
an Availability Zone attribute.
• B. Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure Application
Auto Scaling to use target tracking scaling. Set the minimum capacity to 3.
• C. Use Amazon EC2 Reserved Instances. Launch three EC2 instances in a spread placement group.
Configure an Auto Scaling group to use target tracking scaling. Set the minimum capacity to 3.
• D. Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure
Application Auto Scaling to use Lambda as a scalable target. Set the minimum capacity to 3.
Hide Answer
Suggested Answer: B
Question #: : 759
A media company stores movies in Amazon S3. Each movie is stored in a single video file that ranges from 1 GB
to 10 GB in size.
The company must be able to provide the streaming content of a movie within 5 minutes of a user purchase. There
is higher demand for movies that are less than 20 years old than for movies that are more than 20 years old. The
company wants to minimize hosting service costs based on demand.
Hide Answer
Suggested Answer: A
Question #: : 760
A solutions architect needs to design the architecture for an application that a vendor provides as a Docker
container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be
serverless.
Which solution meets these requirements with the LEAST operational overhead?
• A. Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted
volume that has more than 50 GB of space.
• B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block
Store (Amazon EBS) volume that has more than 50 GB of space.
• C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch
type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume.
Create a service with that task definition.
• D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch
type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task
definition for the container image. Create a service with that task definition.
Hide Answer
Suggested Answer: B
Question #: : 761
A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management
Console. The directory service is not compatible with Security Assertion Markup Language (SAML).
Hide Answer
Suggested Answer: C
Question #: : 762
A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2
instances. The AMIs contain critical data and configurations that are necessary for the company’s operations. The
company wants to implement a solution that will recover accidentally deleted AMIs quickly and efficiently.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a
separate AWS account.
• B. Copy all AMIs to another AWS account periodically.
• C. Create a retention rule in Recycle Bin.
• D. Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.
Hide Answer
Suggested Answer: D
Question #: : 763
A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud
within the next month. The company’s current network connection allows up to 100 Mbps uploads for this purpose
during the night only.
What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
• A. Use AWS Snowmobile to ship the data to AWS.
• B. Order multiple AWS Snowball devices to ship the data to AWS.
• C. Enable Amazon S3 Transfer Acceleration and securely upload the data.
• D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.
Hide Answer
Suggested Answer: A
Question #: : 764
A company wants to migrate its three-tier application from on premises to AWS. The web tier and the application
tier are running on third-party virtual machines (VMs). The database tier is running on MySQL.
The company needs to migrate the application by making the fewest possible changes to the architecture. The
company also needs a database solution that can restore data to a specific point in time.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Migrate the web tier and the application tier to Amazon EC2 instances in private subnets. Migrate the
database tier to Amazon RDS for MySQL in private subnets.
• B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2
instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
• C. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2
instances in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets.
• D. Migrate the web tier and the application tier to Amazon EC2 instances in public subnets. Migrate the
database tier to Amazon Aurora MySQL in public subnets.
Hide Answer
Suggested Answer: A
Question #: : 765
A development team is collaborating with another company to create an integrated product. The other company
needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development
team's account. The other company wants to poll the queue without giving up its own account permissions to do
so.
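The standard way to grant this kind of cross-account polling access is a resource-based policy on the queue itself, so the partner assumes no role and shares no credentials. A minimal sketch in Python, with a hypothetical account ID and queue ARN:

```python
import json

# Hypothetical identifiers for illustration only.
DEV_TEAM_QUEUE_ARN = "arn:aws:sqs:us-east-1:111111111111:integration-queue"
PARTNER_ACCOUNT_ID = "222222222222"

def cross_account_sqs_policy(queue_arn: str, partner_account_id: str) -> str:
    """Build a queue policy that lets the partner account poll the queue
    (receive and delete messages) without any credential sharing."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowPartnerPolling",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{partner_account_id}:root"},
                "Action": [
                    "sqs:ReceiveMessage",
                    "sqs:DeleteMessage",
                    "sqs:GetQueueAttributes",
                ],
                "Resource": queue_arn,
            }
        ],
    }
    return json.dumps(policy)

policy_json = cross_account_sqs_policy(DEV_TEAM_QUEUE_ARN, PARTNER_ACCOUNT_ID)
```

The policy document would be attached to the queue via its Policy attribute; the partner then polls with its own IAM principals.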
Question #: : 766
A company’s developers want a secure way to gain SSH access on the company's Amazon EC2 instances that run
the latest version of Amazon Linux. The developers work remotely and in the corporate office.
The company wants to use AWS services as a part of the solution. The EC2 instances are hosted in a VPC private
subnet and access the internet through a NAT gateway that is deployed in a public subnet.
Hide Answer
Suggested Answer: B
Question #: : 767
A pharmaceutical company is developing a new drug. The volume of data that the company generates has grown
exponentially over the past few months. The company's researchers regularly require a subset of the entire dataset
to be immediately available with minimal lag. However, the entire dataset does not need to be accessed on a daily
basis. All the data currently resides in on-premises storage arrays, and the company wants to reduce ongoing
capital expenses.
Which storage solution should a solutions architect recommend to meet these requirements?
• A. Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing
basis.
• B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate
the data to the Storage Gateway appliance.
• C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as
the target storage. Migrate the data to the Storage Gateway appliance.
• D. Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate
data to an Amazon Elastic File System (Amazon EFS) file system.
Hide Answer
Suggested Answer: B
Question #: : 768
A company has a business-critical application that runs on Amazon EC2 instances. The application stores data in
an Amazon DynamoDB table. The company must be able to revert the table to any point within the last 24 hours.
Which solution meets these requirements with the LEAST operational overhead?
• A. Configure point-in-time recovery for the table.
• B. Use AWS Backup for the table.
• C. Use an AWS Lambda function to make an on-demand backup of the table every hour.
• D. Turn on streams on the table to capture a log of all changes to the table in the last 24 hours. Store a
copy of the stream in an Amazon S3 bucket.
Hide Answer
Suggested Answer: C
Question #: : 769
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are
processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads vary
from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to
design a cost-effective architecture that will meet these requirements.
Hide Answer
Suggested Answer: C
Question #: : 770
A company’s application is deployed on Amazon EC2 instances and uses AWS Lambda functions for an event-
driven architecture. The company uses nonproduction development environments in a different AWS account to
test new features before the company deploys the features to production.
The production instances show constant usage because of customers in different time zones. The company uses
nonproduction instances only during business hours on weekdays. The company does not use the nonproduction
instances on the weekends. The company wants to optimize the costs to run its application on AWS.
Hide Answer
Suggested Answer: D
Question #: : 771
A company stores data in an on-premises Oracle relational database. The company needs to make the data
available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS Site-to-Site VPN connection to
connect its on-premises network to AWS.
The company must capture the changes that occur to the source database during the migration to Aurora
PostgreSQL.
Hide Answer
Suggested Answer: D
Question #: : 772
A company built an application with Docker containers and needs to run the application in the AWS Cloud. The
company wants to use a managed service to host the application.
The solution must scale in and out appropriately according to demand on the individual container services. The
solution also must not result in additional operational overhead or infrastructure to manage.
Hide Answer
Suggested Answer: AC
Question #: : 773
An ecommerce company is running a seasonal online sale. The company hosts its website on Amazon EC2
instances spanning multiple Availability Zones. The company wants its website to manage sudden traffic increases
during the sale.
Hide Answer
Suggested Answer: A
Question #: : 774
A solutions architect must provide an automated solution for a company's compliance policy that states security
groups cannot include a rule that allows SSH from 0.0.0.0/0. The company needs to be notified if there is any
breach in the policy. A solution is needed as soon as possible.
What should the solutions architect do to meet these requirements with the LEAST operational overhead?
• A. Write an AWS Lambda script that monitors security groups for SSH being open to 0.0.0.0/0 addresses
and creates a notification every time it finds one.
• B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification
Service (Amazon SNS) notification when a noncompliant rule is created.
• C. Create an IAM role with permissions to globally open security groups and network ACLs. Create an
Amazon Simple Notification Service (Amazon SNS) topic to generate a notification every time the role is assumed
by a user.
• D. Configure a service control policy (SCP) that prevents non-administrative users from creating or
editing security groups. Create a notification in the ticketing system when a user requests a rule that needs
administrator permissions.
Hide Answer
Suggested Answer: C
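For context on option B, the restricted-ssh AWS Config managed rule marks a security group noncompliant when any ingress rule allows TCP port 22 from 0.0.0.0/0. The check it performs can be sketched in plain Python (rule shapes follow the EC2 DescribeSecurityGroups response format):

```python
def violates_restricted_ssh(ip_permissions: list[dict]) -> bool:
    """Return True if any ingress rule opens SSH (TCP port 22) to the
    world, mirroring what the restricted-ssh managed rule detects."""
    for perm in ip_permissions:
        covers_ssh = (
            perm.get("IpProtocol") == "-1"  # "-1" means all traffic
            or (perm.get("IpProtocol") == "tcp"
                and perm.get("FromPort", 0) <= 22 <= perm.get("ToPort", 0))
        )
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
        )
        if covers_ssh and open_to_world:
            return True
    return False
```

AWS Config evaluates this continuously and can publish compliance-change events that an Amazon SNS topic turns into notifications.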
Question #: : 775
A company has deployed an application in an AWS account. The application consists of microservices that run on
AWS Lambda and Amazon Elastic Kubernetes Service (Amazon EKS). A separate team supports each
microservice. The company has multiple AWS accounts and wants to give each team its own account for its
microservices.
A solutions architect needs to design a solution that will provide service-to-service communication over HTTPS
(port 443). The solution also must provide a service registry for service discovery.
Which solution will meet these requirements with the LEAST administrative overhead?
• A. Create an inspection VPC. Deploy an AWS Network Firewall firewall to the inspection VPC. Attach
the inspection VPC to a new transit gateway. Route VPC-to-VPC traffic to the inspection VPC. Apply firewall
rules to allow only HTTPS communication.
• B. Create a VPC Lattice service network. Associate the microservices with the service network. Define
HTTPS listeners for each service. Register microservice compute resources as targets. Identify VPCs that need to
communicate with the services. Associate those VPCs with the service network.
• C. Create a Network Load Balancer (NLB) with an HTTPS listener and target groups for each
microservice. Create an AWS PrivateLink endpoint service for each microservice. Create an interface VPC
endpoint in each VPC that needs to consume that microservice.
• D. Create peering connections between VPCs that contain microservices. Create a prefix list for each
service that requires a connection to a client. Create route tables to route traffic to the appropriate VPC. Create
security groups to allow only HTTPS communication.
Hide Answer
Suggested Answer: A
Question #: : 776
A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game
increased in popularity, developers noticed slowdowns related to the game's metadata load times. Performance
metrics indicate that simply scaling the database will not help. A solutions architect must explore all options that
include capabilities for snapshots, replication, and sub-millisecond response times.
Hide Answer
Suggested Answer: D
Question #: : 777
A company uses AWS Organizations for its multi-account AWS setup. The security organizational unit (OU) of
the company needs to share approved Amazon Machine Images (AMIs) with the development OU. The AMIs are
created by using AWS Key Management Service (AWS KMS) encrypted snapshots.
Hide Answer
Suggested Answer: BC
Question #: : 778
A data analytics company has 80 offices that are distributed globally. Each office hosts 1 PB of data and has
between 1 and 2 Gbps of internet bandwidth.
The company needs to perform a one-time migration of a large amount of data from its offices to Amazon S3. The
company must complete the migration within 4 weeks.
Hide Answer
Suggested Answer: C
Question #: : 779
A company has an Amazon Elastic File System (Amazon EFS) file system that contains a reference dataset. The
company has applications on Amazon EC2 instances that need to read the dataset. However, the applications must
not be able to change the dataset. The company wants to use IAM access control to prevent the applications from
being able to modify or delete the dataset.
Hide Answer
Suggested Answer: A
Community vote distribution
C (67%)
D (17%)
B (17%)
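The answer options for this question are not shown here, but a common IAM-based approach is an EFS file system policy that grants the application role only the mount/read action and omits elasticfilesystem:ClientWrite. A sketch with a hypothetical role ARN:

```python
import json

# Hypothetical role ARN for illustration only.
APP_ROLE_ARN = "arn:aws:iam::111111111111:role/app-readers"

def read_only_efs_policy(role_arn: str) -> str:
    """EFS file system policy that allows mounting and reading but omits
    elasticfilesystem:ClientWrite, so applications cannot modify or
    delete the reference dataset."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": role_arn},
                "Action": ["elasticfilesystem:ClientMount"],  # read-only
            }
        ],
    }
    return json.dumps(policy)

efs_policy = read_only_efs_policy(APP_ROLE_ARN)
```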
Question #: : 780
A company has hired an external vendor to perform work in the company’s AWS account. The vendor uses an
automated tool that is hosted in an AWS account that the vendor owns. The vendor does not have IAM access to
the company’s AWS account. The company needs to grant the vendor access to the company’s AWS account.
Hide Answer
Suggested Answer: A
Question #: : 781
A company wants to run its experimental workloads in the AWS Cloud. The company has a budget for cloud
spending. The company's CFO is concerned about cloud spending accountability for each department. The CFO
wants to receive notification when the spending threshold reaches 60% of the budget.
Hide Answer
Suggested Answer: A
Question #: : 782
A company wants to deploy an internal web application on AWS. The web application must be accessible only
from the company's office. The company needs to download security patches for the web application from the
internet.
The company has created a VPC and has configured an AWS Site-to-Site VPN connection to the company's office.
A solutions architect must design a secure architecture for the web application.
Question #: : 783
A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The
company needs to migrate the data to an AWS managed service for development and maintenance of the
application data. The solution must require minimal operational support and provide immutable,
cryptographically verifiable logs of data changes.
Hide Answer
Suggested Answer: D
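The AWS service built for immutable, cryptographically verifiable change logs is Amazon QLDB, which hash-chains every revision in its journal. The verification idea behind such a ledger can be sketched in a few lines (illustrative only, not QLDB's actual digest format):

```python
import hashlib
import json

def chain_entries(entries: list[dict]) -> list[str]:
    """Hash-chain a sequence of records the way a verifiable ledger does:
    each digest covers the record plus the previous digest, so any edit
    to history changes every later digest."""
    digests = []
    prev = b""
    for entry in entries:
        payload = prev + json.dumps(entry, sort_keys=True).encode()
        prev = hashlib.sha256(payload).digest()
        digests.append(prev.hex())
    return digests

ledger = [{"acct": "1001", "debit": 250}, {"acct": "1002", "credit": 250}]
original = chain_entries(ledger)
# Tampering with the first record invalidates every subsequent digest.
tampered = chain_entries([{"acct": "1001", "debit": 9999}, ledger[1]])
```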
Question #: : 784
A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket. A series of data
preparation jobs aggregate the data for reporting. The data preparation jobs need to run at regular intervals in
parallel. A few jobs need to run in a specific order later.
The company wants to remove the operational overhead of job error handling, retry logic, and state management.
Hide Answer
Suggested Answer: C
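A common managed answer to this pattern is AWS Step Functions: a Parallel state runs the interval jobs concurrently, a following Task runs the ordered job, and retries, error handling, and state tracking come built in. A minimal state-machine definition sketch (the job names and Glue integration are hypothetical):

```python
import json

# Amazon States Language definition: two prep jobs in parallel,
# then an ordered aggregation job, with a simple retry policy.
definition = {
    "Comment": "Run prep jobs in parallel, then an ordered aggregation job",
    "StartAt": "PrepInParallel",
    "States": {
        "PrepInParallel": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "JobA", "States": {"JobA": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::glue:startJobRun.sync",
                    "Parameters": {"JobName": "prep-job-a"}, "End": True}}},
                {"StartAt": "JobB", "States": {"JobB": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::glue:startJobRun.sync",
                    "Parameters": {"JobName": "prep-job-b"}, "End": True}}},
            ],
            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 2}],
            "Next": "AggregateReport",
        },
        "AggregateReport": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "aggregate-job"}, "End": True},
    },
}
definition_json = json.dumps(definition)
```

An Amazon EventBridge Scheduler rule would start this state machine at the regular intervals.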
Question #: : 785
A solutions architect is designing a payment processing application that runs on AWS Lambda in private subnets
across multiple Availability Zones. The application uses multiple Lambda functions and processes millions of
transactions each day.
The architecture must ensure that the application does not process duplicate payments.
Hide Answer
Suggested Answer: C
Community vote distribution
C (63%)
D (38%)
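Whichever option is chosen, avoiding duplicate payments ultimately rests on an idempotency key checked atomically against a data store; with DynamoDB this is a conditional PutItem with attribute_not_exists. The pattern, sketched with an in-memory store standing in for the table:

```python
processed: set[str] = set()

def process_payment(transaction_id: str, amount: float) -> bool:
    """Process a payment at most once. In production the membership check
    and insert would be a single DynamoDB conditional PutItem
    (attribute_not_exists), which fails atomically on duplicates."""
    if transaction_id in processed:
        return False  # duplicate delivery; skip processing
    processed.add(transaction_id)
    # ... charge the customer here ...
    return True

first = process_payment("txn-123", 49.99)
duplicate = process_payment("txn-123", 49.99)
```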
Question #: : 786
A company runs multiple workloads in its on-premises data center. The company's data center cannot scale fast
enough to meet the company's expanding business needs. The company wants to collect usage and configuration
data about the on-premises servers and workloads to plan a migration to AWS.
Hide Answer
Suggested Answer: B
Question #: : 787
A company has an organization in AWS Organizations that has all features enabled. The company requires that
all API calls and logins in any existing or new AWS account must be audited. The company needs a managed
solution to prevent additional work and to minimize costs. The company also needs to know when any AWS
account is not compliant with the AWS Foundational Security Best Practices (FSBP) standard.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS
Security Hub and AWS Control Tower Account Factory in the environment.
• B. Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable
AWS Security Hub and AWS Control Tower Account Factory in the environment.
• C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ).
Submit an RFC to self-service provision Amazon GuardDuty in the MALZ.
• D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ).
Submit an RFC to self-service provision AWS Security Hub in the MALZ.
Hide Answer
Suggested Answer: A
Question #: : 788
A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. The company
occasionally needs to use SQL to analyze the log files.
Hide Answer
Suggested Answer: C
Question #: : 789
A company needs a solution to prevent AWS CloudFormation stacks from deploying AWS Identity and Access
Management (IAM) resources that include an inline policy or “*” in the statement. The solution must also prohibit
deployment of Amazon EC2 instances with public IP addresses. The company has AWS Control Tower enabled
in its organization in AWS Organizations.
Hide Answer
Suggested Answer: D
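With AWS Control Tower, requirements like these map to proactive controls, which evaluate CloudFormation resources before provisioning. The kind of check involved can be sketched as a simplified template scan (real controls cover many more resource shapes):

```python
def find_violations(template: dict) -> list[str]:
    """Flag IAM roles that carry inline policies or use '*' in a policy
    statement, and EC2 instances that request a public IP (simplified)."""
    violations = []
    for name, res in template.get("Resources", {}).items():
        rtype = res.get("Type", "")
        props = res.get("Properties", {})
        if rtype.startswith("AWS::IAM::"):
            for pol in props.get("Policies", []):  # inline policies
                violations.append(f"{name}: inline policy")
                for stmt in pol.get("PolicyDocument", {}).get("Statement", []):
                    if "*" in str(stmt.get("Action")) or stmt.get("Resource") == "*":
                        violations.append(f"{name}: wildcard in statement")
        if rtype == "AWS::EC2::Instance":
            for ni in props.get("NetworkInterfaces", []):
                if ni.get("AssociatePublicIpAddress"):
                    violations.append(f"{name}: public IP requested")
    return violations
```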
Question #: : 790
A company's web application that is hosted in the AWS Cloud recently increased in popularity. The web
application currently exists on a single Amazon EC2 instance in a single public subnet. The web application has
not been able to meet the demand of the increased web traffic.
The company needs a solution that will provide high availability and scalability to meet the increased user demand
without rewriting the web application.
Question #: : 791
A company has AWS Lambda functions that use environment variables. The company does not want its developers
to see environment variables in plaintext.
Hide Answer
Suggested Answer: D
Question #: : 792
An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs
to offer a web analytics service to millions of users. Users must be verified by using an authentication service to
access the APIs.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway
REST APIs with a Cognito authorizer.
• B. Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway
HTTP APIs with a Cognito authorizer.
• C. Configure an AWS Lambda function to handle user authentication. Implement Amazon API Gateway
REST APIs with a Lambda authorizer.
• D. Configure an IAM user to handle user authentication. Implement Amazon API Gateway HTTP APIs
with an IAM authorizer.
Hide Answer
Suggested Answer: D
Question #: : 793
A company has a mobile app for customers. The app’s data is sensitive and must be encrypted at rest. The company
uses AWS Key Management Service (AWS KMS).
The company needs a solution that prevents the accidental deletion of KMS keys. The solution must use Amazon
Simple Notification Service (Amazon SNS) to send an email notification to administrators when a user attempts
to delete a KMS key.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an Amazon EventBridge rule that reacts when a user tries to delete a KMS key. Configure an
AWS Config rule that cancels any deletion of a KMS key. Add the AWS Config rule as a target of the EventBridge
rule. Create an SNS topic that notifies the administrators.
• B. Create an AWS Lambda function that has custom logic to prevent KMS key deletion. Create an
Amazon CloudWatch alarm that is activated when a user tries to delete a KMS key. Create an Amazon EventBridge
rule that invokes the Lambda function when the DeleteKey operation is performed. Create an SNS topic.
Configure the EventBridge rule to publish an SNS message that notifies the administrators.
• C. Create an Amazon EventBridge rule that reacts when the KMS DeleteKey operation is performed.
Configure the rule to initiate an AWS Systems Manager Automation runbook. Configure the runbook to cancel
the deletion of the KMS key. Create an SNS topic. Configure the EventBridge rule to publish an SNS message
that notifies the administrators.
• D. Create an AWS CloudTrail trail. Configure the trail to deliver logs to a new Amazon CloudWatch log
group. Create a CloudWatch alarm based on the metric filter for the CloudWatch log group. Configure the alarm
to use Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.
Hide Answer
Suggested Answer: D
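One detail worth noting for the EventBridge-based options: KMS has no DeleteKey API; deletion is requested with ScheduleKeyDeletion, which starts a cancellable waiting period, and that is the CloudTrail event a rule would match. A sketch of the event pattern:

```python
import json

# EventBridge pattern matching CloudTrail records for attempts
# to schedule deletion of a KMS key.
event_pattern = {
    "source": ["aws.kms"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["kms.amazonaws.com"],
        "eventName": ["ScheduleKeyDeletion"],
    },
}
pattern_json = json.dumps(event_pattern)
```

The rule's targets would then be the Systems Manager Automation runbook (to call CancelKeyDeletion) and the SNS topic for the administrator email.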
Question #: : 794
A company wants to analyze and generate reports to track the usage of its mobile app. The app is popular and has
a global user base. The company uses a custom report building program to analyze application usage.
The program generates multiple reports during the last week of each month. The program takes less than 10
minutes to produce each report. The company rarely uses the program to generate reports outside of the last week
of each month. The company wants to generate reports in the least amount of time when the reports are requested.
Hide Answer
Suggested Answer: B
Question #: : 795
A company is designing a tightly coupled high performance computing (HPC) environment in the AWS Cloud.
The company needs to include features that will optimize the HPC environment for networking and storage.
Hide Answer
Suggested Answer: BD
Question #: : 796
A company needs a solution to prevent photos with unwanted content from being uploaded to the company's web
application. The solution must not involve training a machine learning (ML) model.
Hide Answer
Suggested Answer: B
Question #: : 797
A company uses AWS to run its ecommerce platform. The platform is critical to the company's operations and has
a high volume of traffic and transactions. The company configures a multi-factor authentication (MFA) device to
secure its AWS account root user credentials. The company wants to ensure that it will not lose access to the root
user account if the MFA device is lost.
Hide Answer
Suggested Answer: B
Question #: : 798
A social media company is creating a rewards program website for its users. The company gives users points when
users create and upload videos to the website. Users redeem their points for gifts or discounts from the company's
affiliated partners. A unique ID identifies users. The partners refer to this ID to verify user eligibility for rewards.
The partners want to receive notification of user IDs through an HTTP endpoint when the company gives users
points. Hundreds of vendors are interested in becoming affiliated partners every day. The company wants to
design an architecture that gives the website the ability to add partners rapidly in a scalable way.
Which solution will meet these requirements with the LEAST implementation effort?
• A. Create an Amazon Timestream database to keep a list of affiliated partners. Implement an AWS
Lambda function to read the list. Configure the Lambda function to send user IDs to each partner when the
company gives users points.
• B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Choose an endpoint protocol.
Subscribe the partners to the topic. Publish user IDs to the topic when the company gives users points.
• C. Create an AWS Step Functions state machine. Create a task for every affiliated partner. Invoke the
state machine with user IDs as input when the company gives users points.
• D. Create a data stream in Amazon Kinesis Data Streams. Implement producer and consumer
applications. Store a list of affiliated partners in the data stream. Send user IDs when the company gives users
points.
Hide Answer
Suggested Answer: A
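The SNS topic in option B illustrates the fan-out pattern: each new partner is a single Subscribe call against an HTTPS endpoint, and every publish is delivered to all subscribers. A minimal in-memory sketch of the pattern (callbacks stand in for HTTP endpoints; this is not the SNS API itself):

```python
from typing import Callable

class Topic:
    """Tiny stand-in for an SNS topic: subscribers register a callback
    (for SNS, an HTTPS endpoint) and every publish fans out to all of them."""
    def __init__(self) -> None:
        self._subscribers: list[Callable[[str], None]] = []

    def subscribe(self, endpoint: Callable[[str], None]) -> None:
        self._subscribers.append(endpoint)

    def publish(self, user_id: str) -> int:
        for deliver in self._subscribers:
            deliver(user_id)
        return len(self._subscribers)

# Two hypothetical partners subscribe; one publish reaches both.
received: dict[str, list[str]] = {"partner_a": [], "partner_b": []}
topic = Topic()
topic.subscribe(received["partner_a"].append)
topic.subscribe(received["partner_b"].append)
delivered = topic.publish("user-42")
```

Because the publisher never enumerates partners, onboarding hundreds of vendors a day requires no change to the publishing code.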
Question #: : 799
A company needs to extract the names of ingredients from recipe records that are stored as text files in an Amazon
S3 bucket. A web application will use the ingredient names to query an Amazon DynamoDB table and determine
a nutrition score.
The application can handle non-food records and errors. The company does not have any employees who have
machine learning knowledge to develop this solution.
Hide Answer
Suggested Answer: D
Question #: : 804
A company has an Amazon S3 data lake. The company needs a solution that transforms the data from the data
lake and loads the data into a data warehouse every day. The data warehouse must have massively parallel
processing (MPP) capabilities.
Data analysts then need to create and train machine learning (ML) models by using SQL commands on the data.
The solution must use serverless AWS services wherever possible.
Hide Answer
Suggested Answer: B
Question #: : 802
A company wants to run its payment application on AWS. The application receives payment notifications from
mobile devices. Payment notifications require a basic validation before they are sent for further processing.
The backend processing application is long running and requires compute and memory to be adjusted. The
company does not want to manage the infrastructure.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon
EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment
notifications and send the notifications to the backend application. Deploy the backend application on Amazon
Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
• B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine
to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications
and send the notifications to the backend application. Deploy the backend application on Amazon Elastic
Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
• C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon
EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment
notifications and send the notifications to the backend application. Deploy the backend application on Amazon
EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
• D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment
notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the
notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service
(Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.
Hide Answer
Suggested Answer: C
Question #: : 803
A solutions architect is designing a user authentication solution for a company. The solution must invoke two-
factor authentication for users that log in from inconsistent geographical locations, IP addresses, or devices. The
solution must also be able to scale up to accommodate millions of users.
Hide Answer
Suggested Answer: C
Question #: : 801
A financial company needs to handle highly sensitive data. The company will store the data in an Amazon S3
bucket. The company needs to ensure that the data is encrypted in transit and at rest. The company must manage
the encryption keys outside the AWS Cloud.
Hide Answer
Suggested Answer: A
Question #: : 800
A company needs to create an AWS Lambda function that will run in a VPC in the company's primary AWS
account. The Lambda function needs to access files that the company stores in an Amazon Elastic File System
(Amazon EFS) file system. The EFS file system is located in a secondary AWS account. As the company adds files
to the file system, the solution must scale to meet the demand.
Hide Answer
Suggested Answer: A
Explain:
VPC peering allows communication between VPCs in different AWS accounts as if they were in the same network.
- By creating a VPC peering connection between the VPC containing the Lambda function in the primary account
and the VPC containing the EFS file system in the secondary account, the Lambda function can access the files
stored in the EFS file system.
- This solution ensures secure communication between the resources in different accounts without incurring data
transfer costs, as data transfer over VPC peering connections within the same AWS Region is not charged for.
- Additionally, VPC peering provides low latency and high bandwidth connectivity, which is suitable for accessing
files stored in an EFS file system.
- This solution also ensures scalability as the demand for accessing files stored in the EFS file system increases.
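One caveat the explanation omits: a VPC peering connection cannot be created between VPCs whose IPv4 CIDR blocks overlap, so that should be verified first. A quick pre-check sketch using the standard library (the CIDRs are hypothetical):

```python
from ipaddress import ip_network

def can_peer(cidr_a: str, cidr_b: str) -> bool:
    """VPC peering cannot be established between VPCs whose IPv4 CIDR
    blocks overlap, so check the ranges before creating the connection."""
    return not ip_network(cidr_a).overlaps(ip_network(cidr_b))

# Hypothetical CIDRs for the two accounts' VPCs.
ok = can_peer("10.0.0.0/16", "10.1.0.0/16")
clash = can_peer("10.0.0.0/16", "10.0.128.0/17")
```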
Question #: : 805
A company runs containers in a Kubernetes environment in the company's local data center. The company wants
to use Amazon Elastic Kubernetes Service (Amazon EKS) and other AWS managed services. Data must remain
locally in the company's data center and cannot be stored in any remote site or cloud to maintain compliance.
Hide Answer
Suggested Answer: B
Question #: : 806
A social media company has workloads that collect and process data. The workloads store the data in on-premises
NFS storage. The data store cannot scale fast enough to meet the company’s expanding business needs. The
company wants to migrate the current data store to AWS.
Hide Answer
Suggested Answer: D
2. **Scalability**: With Amazon S3 serving as the backend storage for the File Gateway, the company gains access
to virtually limitless scalability. They can scale their storage capacity in AWS S3 to meet their expanding business
needs without facing constraints.
3. **Migration**: The File Gateway simplifies the migration process by providing a bridge between the on-
premises NFS storage and AWS S3. It allows for a gradual migration strategy, minimizing disruption to operations
and ensuring a smooth transition of data to the cloud.
4. **Cost-Effectiveness**: Utilizing the Amazon S3 File Gateway offers a highly cost-effective solution. The
company pays only for the storage capacity they use in AWS S3, eliminating the need for upfront hardware
investments and reducing ongoing maintenance costs associated with managing on-premises storage
infrastructure.
Question #: : 807
A company uses high concurrency AWS Lambda functions to process a constantly increasing number of messages
in a message queue during marketing events. The Lambda functions use CPU intensive code to process the
messages. The company wants to reduce the compute costs and to maintain service latency for its customers.
Hide Answer
Suggested Answer: C
Community vote distribution
D (67%)
A (33%)
by 1dd at March 9, 2024, 4:14 a.m.
Explain:
Provisioned concurrency is a feature in AWS Lambda that keeps functions initialized and hyper-ready to respond
in double-digit milliseconds. This could be useful for their high concurrency use case. By configuring provisioned
concurrency, the company can ensure that there are always a set number of instances ready to respond to the
requests, reducing the cold start latency.
Increasing the memory allocation for the Lambda functions also increases the CPU power available to them,
because Lambda allocates CPU in proportion to the configured memory. This helps the functions process the
CPU-intensive messages more quickly. AWS Compute Optimizer can recommend the right amount of memory
to allocate for optimal performance.
Options A and C suggest decreasing the memory allocation, which may reduce costs but could also increase
latency, especially for CPU-intensive functions. Decreasing memory also decreases the available CPU
capacity, which could adversely affect the functions' performance.
Option B suggests using reserved concurrency which reserves a specific number of instances for a function. While
this can prevent other functions from using all the available concurrency, it does not help in keeping the functions
warm like provisioned concurrency does, which might be beneficial in a high concurrency use case to maintain
low latency.
• Reserved concurrency – This represents the maximum number of concurrent instances allocated to
your function. When a function has reserved concurrency, no other function can use that concurrency.
Configuring reserved concurrency for a function incurs no additional charges.
• Provisioned concurrency – This is the number of pre-initialized execution environments allocated to
your function. These execution environments are ready to respond immediately to incoming function requests.
Configuring provisioned concurrency incurs additional charges to your AWS account.
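The cost side of the memory/CPU tradeoff above can be sanity-checked with simple arithmetic: Lambda compute charges are proportional to GB-seconds, so doubling the memory while halving the duration of CPU-bound work leaves the compute cost roughly unchanged while cutting latency. A hedged sketch (the per-GB-second price is illustrative; always verify against current AWS pricing):

```python
# Hedged sketch: Lambda compute cost is (memory in GB) x (duration in s) x (price per GB-second).
# The price below is illustrative, not a quoted AWS rate.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, duration_s: float) -> float:
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_SECOND

# CPU-bound work: CPU share grows with memory, so duration shrinks roughly in proportion.
cost_small = invocation_cost(1024, 8.0)  # 1 GB for 8 s
cost_large = invocation_cost(2048, 4.0)  # 2 GB for 4 s -> same GB-seconds, half the latency
print(cost_small == cost_large)  # True
```

This is why "decrease memory to save money" can backfire for CPU-heavy functions: the longer duration eats the savings and adds latency.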
Question #: : 808
A company runs its workloads on Amazon Elastic Container Service (Amazon ECS). The container images that
the ECS task definition uses need to be scanned for Common Vulnerabilities and Exposures (CVEs). New
container images that are created also need to be scanned.
Which solution will meet these requirements with the FEWEST changes to the workloads?
• A. Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the
container images. Specify scan on push filters for the ECR basic scan.
• B. Store the container images in an Amazon S3 bucket. Use Amazon Macie to scan the images. Use an
S3 Event Notification to initiate a Macie scan for every event with an s3:ObjectCreated:Put event type.
• C. Deploy the workloads to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic
Container Registry (Amazon ECR) as a private image repository. Specify scan on push filters for the ECR
enhanced scan.
• D. Store the container images in an Amazon S3 bucket that has versioning enabled. Configure an S3
Event Notification for s3:ObjectCreated:* events to invoke an AWS Lambda function. Configure the Lambda
function to initiate an Amazon Inspector scan.
Hide Answer
Suggested Answer: C
Question #: : 809
A company uses an AWS Batch job to run its end-of-day sales process. The company needs a serverless solution
that will invoke a third-party reporting application when the AWS Batch job is successful. The reporting
application has an HTTP API interface that uses username and password authentication.
Hide Answer
Suggested Answer: D
Question #: : 810
A company collects and processes data from a vendor. The vendor stores its data in an Amazon RDS for MySQL
database in the vendor's own AWS account. The company’s VPC does not have an internet gateway, an AWS
Direct Connect connection, or an AWS Site-to-Site VPN connection. The company needs to access the data that
is in the vendor database.
Hide Answer
Suggested Answer: A
Question #: : 811
A company wants to set up Amazon Managed Grafana as its visualization tool. The company wants to visualize
data from its Amazon RDS database as one data source. The company needs a secure solution that will not expose
the data over the internet.
Hide Answer
Suggested Answer: B
Question #: : 812
A company hosts a data lake on Amazon S3. The data lake ingests data in Apache Parquet format from various
data sources. The company uses multiple transformation steps to prepare the ingested data. The steps include
filtering of anomalies, normalizing of data to standard date and time values, and generation of aggregates for
analyses.
The company must store the transformed data in S3 buckets that data analysts access. The company needs a
prebuilt solution for data transformation that does not require code. The solution must provide data lineage and
data profiling. The company needs to share the data transformation steps with employees throughout the company.
Hide Answer
Suggested Answer: B
Question #: : 813
A solutions architect runs a web application on multiple Amazon EC2 instances that are in individual target groups
behind an Application Load Balancer (ALB). Users can reach the application through a public website.
The solutions architect wants to allow engineers to use a development version of the website to access one specific
development EC2 instance to test new features for the application. The solutions architect wants to use an Amazon
Route 53 hosted zone to give the engineers access to the development instance. The solution must automatically
route to the development instance even if the development instance is replaced.
Hide Answer
Suggested Answer: C
Question #: : 814
A company runs a container application on a Kubernetes cluster in the company's data center. The application
uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center
cannot scale fast enough to meet the company’s expanding business needs. The company wants to migrate the
workloads to AWS.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon
Simple Queue Service (Amazon SQS) to retrieve the messages.
• B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon
MQ to retrieve the messages.
• C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the
messages.
• D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon
SQS) to retrieve the messages.
Hide Answer
Suggested Answer: A
Question #: : 815
An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLBs)
across multiple AWS Regions. The NLBs can route requests to targets over the internet. The company wants to
improve the customer playing experience by reducing end-to-end load time for its global customer base.
Hide Answer
Suggested Answer: A
Question #: : 816
A company has an on-premises application that uses SFTP to collect financial data from multiple vendors. The
company is migrating to the AWS Cloud. The company has created an application that uses Amazon S3 APIs to
upload files from vendors.
Some vendors run their systems on legacy applications that do not support S3 APIs. The vendors want to continue
to use SFTP-based applications to upload data. The company wants to use managed services for the needs of the
vendors that use legacy applications.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an AWS Database Migration Service (AWS DMS) instance to replicate data from the storage
of the vendors that use legacy applications to Amazon S3. Provide the vendors with the credentials to access the
AWS DMS instance.
• B. Create an AWS Transfer Family endpoint for vendors that use legacy applications.
• C. Configure an Amazon EC2 instance to run an SFTP server. Instruct the vendors that use legacy
applications to use the SFTP server to upload data.
• D. Configure an Amazon S3 File Gateway for vendors that use legacy applications to upload files to an
SMB file share.
Hide Answer
Suggested Answer: B
AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 using
SFTP, FTPS, and FTP. This enables vendors to continue to use their SFTP-based legacy applications to upload
data without having to modify their applications or manage the underlying servers. This solution would meet the
company's requirements with the least operational overhead. Other solutions would require significantly more
configuration and maintenance, thereby increasing operational overhead.
Question #: : 817
A marketing team wants to build a campaign for an upcoming multi-sport event. The team has news reports from
the past five years in PDF format. The team needs a solution to extract insights about the content and the
sentiment of the news reports. The solution must use Amazon Textract to process the news reports.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Provide the extracted insights to Amazon Athena for analysis. Store the extracted insights and analysis
in an Amazon S3 bucket.
• B. Store the extracted insights in an Amazon DynamoDB table. Use Amazon SageMaker to build a
sentiment model.
• C. Provide the extracted insights to Amazon Comprehend for analysis. Save the analysis to an Amazon
S3 bucket.
• D. Store the extracted insights in an Amazon S3 bucket. Use Amazon QuickSight to visualize and analyze
the data.
Hide Answer
Suggested Answer: C
Amazon Textract extracts text and data from scanned documents. The extracted text and data can then be
analyzed by Amazon Comprehend. Amazon Comprehend uses machine learning to find insights and
relationships in text, including sentiment analysis, which is one of the requirements. The results can be saved
to an S3 bucket, which requires the least operational overhead, instead of using Amazon Athena, Amazon
SageMaker, Amazon DynamoDB, or Amazon QuickSight.
Question #: : 818
A company's application runs on Amazon EC2 instances that are in multiple Availability Zones. The application
needs to ingest real-time data from third-party applications.
The company needs a data ingestion solution that places the ingested raw data in an Amazon S3 bucket.
Hide Answer
Suggested Answer: A
Amazon Kinesis Data Streams can continuously capture gigabytes of data per second from hundreds of
thousands of sources, such as website clickstreams, database event streams, financial transactions, social
media feeds, IT logs, and location-tracking events. The collected data is available within milliseconds,
enabling real-time analytics. Then, with Amazon Kinesis Data Firehose, you can prepare and load the
streaming data into Amazon S3, a durable, secure, and scalable data store.
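On the producer side, one concrete limit applies when writing into Kinesis Data Streams: the PutRecords API accepts at most 500 records per call, so clients typically batch their buffers. A minimal sketch of that chunking logic:

```python
# Sketch of client-side batching for Kinesis Data Streams: the PutRecords API
# accepts at most 500 records per call, so producers chunk their buffer first.
MAX_RECORDS_PER_PUT = 500

def batch_records(records):
    """Yield successive PutRecords-sized batches from a list of records."""
    for start in range(0, len(records), MAX_RECORDS_PER_PUT):
        yield records[start:start + MAX_RECORDS_PER_PUT]

batches = list(batch_records(list(range(1200))))
print([len(b) for b in batches])  # [500, 500, 200]
```

Each batch would then be handed to the stream client; per-record (1 MB) and per-call (5 MB) size limits also apply and are not modeled in this sketch.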
Question #: : 819
A company’s application is receiving data from multiple data sources. The size of the data varies and is expected
to increase over time. The current maximum size is 700 KB. The data volume and data size continue to grow as
more data sources are added.
The company decides to use Amazon DynamoDB as the primary database for the application. A solutions architect
needs to identify a solution that handles the large data sizes.
Which solution will meet these requirements in the MOST operationally efficient way?
• A. Create an AWS Lambda function to filter the data that exceeds DynamoDB item size limits. Store the
larger data in an Amazon DocumentDB (with MongoDB compatibility) database.
• B. Store the large data as objects in an Amazon S3 bucket. In a DynamoDB table, create an item that has
an attribute that points to the S3 URL of the data.
• C. Split all incoming large data into a collection of items that have the same partition key. Write the data
to a DynamoDB table in a single operation by using the BatchWriteItem API operation.
• D. Create an AWS Lambda function that uses gzip compression to compress the large objects as they are
written to a DynamoDB table.
Hide Answer
Suggested Answer: D
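For context, DynamoDB enforces a hard 400 KB per-item limit, which is why 700 KB payloads that keep growing need special handling: compression can shrink an individual item, but only storing the payload in Amazon S3 and keeping a pointer attribute in the item removes the ceiling entirely. A hedged sketch of the size check an application might apply (the helper name and headroom value are illustrative):

```python
import json

DYNAMODB_ITEM_LIMIT_BYTES = 400 * 1024  # DynamoDB's hard per-item limit (400 KB)

def needs_s3_offload(payload: dict, headroom: int = 4096) -> bool:
    """Return True when a payload should be stored in S3 with only a pointer
    (e.g., an S3 URL attribute) kept in the DynamoDB item.
    `headroom` leaves room for key attributes and item overhead."""
    size = len(json.dumps(payload).encode("utf-8"))
    return size > DYNAMODB_ITEM_LIMIT_BYTES - headroom

print(needs_s3_offload({"data": "x" * 700_000}))  # True: 700 KB exceeds the limit
print(needs_s3_offload({"data": "x" * 1_000}))    # False: small item fits
```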
Question #: : 820
A company is migrating a legacy application from an on-premises data center to AWS. The application relies on
hundreds of cron jobs that run between 1 and 20 minutes on different recurring schedules throughout the day.
The company wants a solution to schedule and run the cron jobs on AWS with minimal refactoring. The solution
must support running the cron jobs in response to an event in the future.
Amazon EventBridge Scheduler makes it easier to schedule recurring tasks, and using AWS Fargate lets you
run containers without managing the underlying servers. Creating container images for the existing cron jobs
allows the jobs to be migrated to AWS with minimal refactoring, while EventBridge handles the scheduling.
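Translating the existing crontab entries into EventBridge schedule expressions is largely mechanical: EventBridge uses six fields and requires `?` in one of the two day fields. A hedged sketch of the conversion (it deliberately does not remap numeric day-of-week values, which Unix cron counts 0-6 from Sunday while EventBridge counts 1-7 from Sunday):

```python
def to_eventbridge_cron(unix_cron: str) -> str:
    """Convert a classic 5-field Unix cron line to an EventBridge schedule expression.
    EventBridge uses six fields (minutes hours day-of-month month day-of-week year)
    and requires '?' in exactly one of the two day fields.
    Note: numeric day-of-week values are NOT remapped here."""
    minute, hour, dom, month, dow = unix_cron.split()
    if dow == "*":    # day-of-week unspecified -> '?' per EventBridge rules
        dow = "?"
    elif dom == "*":  # day-of-month unspecified instead
        dom = "?"
    return f"cron({minute} {hour} {dom} {month} {dow} *)"

print(to_eventbridge_cron("*/5 2 * * *"))  # cron(*/5 2 * * ? *)
print(to_eventbridge_cron("0 3 1 * *"))    # cron(0 3 1 * ? *)
```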
Question #: : 821
A company uses Salesforce. The company needs to load existing data and ongoing data changes from Salesforce
to Amazon Redshift for analysis. The company does not want the data to travel over the public internet.
Which solution will meet these requirements with the LEAST development effort?
• A. Establish a VPN connection from the VPC to Salesforce. Use AWS Glue DataBrew to transfer data.
• B. Establish an AWS Direct Connect connection from the VPC to Salesforce. Use AWS Glue DataBrew
to transfer data.
• C. Create an AWS PrivateLink connection in the VPC to Salesforce. Use Amazon AppFlow to transfer
data.
• D. Create a VPC peering connection to Salesforce. Use Amazon AppFlow to transfer data.
Hide Answer
Suggested Answer: C
Explain:
Amazon AppFlow is a fully managed integration service that enables data transfer in a secure and scalable manner
among Software as a Service (SaaS), AWS services, and on-premises applications. AppFlow supports Salesforce
as a data source and can transfer data directly to an Amazon Redshift cluster, greatly reducing the development
effort required.
AWS PrivateLink provides a connection between your VPC and an external service (in this case Salesforce) that
is secured and does not route traffic over the public internet.
Question #: : 822
A company recently migrated its application to AWS. The application runs on Amazon EC2 Linux instances in an
Auto Scaling group across multiple Availability Zones. The application stores data in an Amazon Elastic File
System (Amazon EFS) file system that uses EFS Standard-Infrequent Access storage. The application indexes the
company's files. The index is stored in an Amazon RDS database.
The company needs to optimize storage costs with some application and services changes.
Hide Answer
Suggested Answer: A
This solution is the most cost-effective. Amazon S3 Intelligent-Tiering is designed to optimize costs by
automatically moving data between two access tiers (frequent access and infrequent access) based on
changing access patterns. In addition, Amazon S3 is generally cheaper than Amazon EFS for storage, so
moving the data can save costs, and using the S3 API to store and retrieve the data is straightforward.
Question #: : 823
A robotics company is designing a solution for medical surgery. The robots will use advanced sensors, cameras,
and AI algorithms to perceive their environment and to complete surgeries.
The company needs a public load balancer in the AWS Cloud that will ensure seamless communication with
backend services. The load balancer must be capable of routing traffic based on the query strings to different
target groups. The traffic must also be encrypted.
Hide Answer
Suggested Answer: C
Application Load Balancers operate at the request level (layer 7) and support routing based on the content of
the request, including query strings. They also provide native SSL termination, so you can offload the work of
encrypting and decrypting traffic from your services. You can attach an SSL certificate provided by AWS
Certificate Manager (ACM) to your load balancer to meet the encryption requirement. Network Load
Balancers and Gateway Load Balancers cannot route traffic based on query strings.
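The query-string routing that ALB listener rules perform can be pictured with a small sketch (the rule conditions and target-group names are illustrative, not part of the question):

```python
from urllib.parse import urlparse, parse_qs

# Sketch of the kind of query-string rule an ALB listener evaluates.
# Rule conditions and target-group names are illustrative.
RULES = [
    ({"robot": ["arm"]}, "tg-arm-service"),
    ({"robot": ["vision"]}, "tg-vision-service"),
]
DEFAULT_TARGET_GROUP = "tg-default"

def route(url: str) -> str:
    """Return the target group whose query-string condition matches the request."""
    params = parse_qs(urlparse(url).query)
    for condition, target_group in RULES:
        if all(params.get(k) == v for k, v in condition.items()):
            return target_group
    return DEFAULT_TARGET_GROUP

print(route("https://api.example.com/ops?robot=arm"))     # tg-arm-service
print(route("https://api.example.com/ops?robot=vision"))  # tg-vision-service
print(route("https://api.example.com/ops"))               # tg-default
```

On a real ALB these conditions live in listener rules, with the default action covering unmatched requests.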
Question #: : 824
A company has an application that runs on a single Amazon EC2 instance. The application uses a MySQL database
that runs on the same EC2 instance. The company needs a highly available and automatically scalable solution to
handle increased traffic.
Hide Answer
Suggested Answer: B
Amazon Redshift (Option A) is a data warehousing service and is not suitable for the typical transactional
workloads of an application backend.
A standard Amazon RDS for MySQL cluster (Option B) does not scale automatically.
Amazon ElastiCache for Redis (Option D) is an in-memory data store and not a replacement for a relational
database such as MySQL.
Question #: : 825
A company is planning to migrate data to an Amazon S3 bucket. The data must be encrypted at rest within the S3
bucket. The encryption key must be rotated automatically every year.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Migrate the data to the S3 bucket. Use server-side encryption with Amazon S3 managed keys (SSE-
S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.
• B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key
rotation. Set the S3 bucket's default encryption behavior to use the customer managed KMS key. Migrate the data
to the S3 bucket.
• C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket's
default encryption behavior to use the customer managed KMS key. Migrate the data to the S3 bucket. Manually
rotate the KMS key every year.
• D. Use customer key material to encrypt the data. Migrate the data to the S3 bucket. Create an AWS Key
Management Service (AWS KMS) key without key material. Import the customer key material into the KMS key.
Enable automatic key rotation.
Hide Answer
Suggested Answer: B
This solution meets all the requirements specified. AWS Key Management Service (KMS) allows you to create
and manage cryptographic keys and control their use across a wide range of AWS services and in your applications.
You can set up automatic key rotation for a customer managed key, which will automatically rotate the key every
year so you don't need to do this manually. Setting the S3 bucket's default encryption behavior to use this customer
managed KMS key will ensure data is automatically encrypted at rest with the key when it is loaded to the S3
bucket. This results in the least amount of operational overhead while meeting the key rotation and encryption
requirements.
Question #: : 826
A company is migrating applications from an on-premises Microsoft Active Directory that the company manages
to AWS. The company deploys the applications in multiple AWS accounts. The company uses AWS Organizations
to manage the accounts centrally.
The company's security team needs a single sign-on solution across all the company's AWS accounts. The
company must continue to manage users and groups that are in the on-premises Active Directory.
Hide Answer
Suggested Answer: B
Question #: : 827
A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2 cluster. The
application will receive large amounts of traffic. The company wants to optimize the storage performance of the
cluster as the load on the application increases.
Hide Answer
Suggested Answer: C
Question #: : 828
A financial services company that runs on AWS has designed its security controls to meet industry standards. The
industry standards include the National Institute of Standards and Technology (NIST) and the Payment Card
Industry Data Security Standard (PCI DSS).
The company's third-party auditors need proof that the designed controls have been implemented and are
functioning correctly. The company has hundreds of AWS accounts in a single organization in AWS Organizations.
The company needs to monitor the current state of the controls across accounts.
Hide Answer
Suggested Answer: D
Community vote distribution
D (100%)
by Kaula at March 23, 2024, 2:44 p.m.
Explain:
Security Hub is a central service that collects security findings from AWS Config and other AWS security services.
This aligns perfectly with the initial recommendation for the financial services company.
Each of the mentioned services has a different role within the AWS ecosystem:
1. Amazon Inspector is an automated security assessment service that helps improve the security and
compliance of applications deployed on AWS. It assesses applications for vulnerabilities or deviations from best
practices.
2. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and
unauthorized behavior to protect AWS accounts and workloads.
3. AWS CloudTrail is a service that enables governance, compliance, operations auditing, and risk auditing
of your AWS account. It logs all activities that happen in your AWS environment.
4. AWS Security Hub gives you a comprehensive view of your high-priority security alerts and security
status across your AWS accounts. It aggregates, organizes, and prioritizes your security alerts, or findings, from
multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and AWS CloudTrail.
AWS Security Hub is a central place where you can manage security and compliance across an AWS environment
so you can get a comprehensive understanding of your high-priority security alerts and compliance status across
AWS accounts.
Question #: : 829
A company uses an Amazon S3 bucket as its data lake storage platform. The S3 bucket contains a massive amount
of data that is accessed randomly by multiple teams and hundreds of applications. The company wants to reduce
the S3 storage costs and provide immediate availability for frequently accessed objects.
What is the MOST operationally efficient solution that meets these requirements?
• A. Create an S3 Lifecycle rule to transition objects to the S3 Intelligent-Tiering storage class.
• B. Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to the data.
• C. Use data from S3 storage class analysis to create S3 Lifecycle rules to automatically transition objects
to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
• D. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an
AWS Lambda function to transition objects to the S3 Standard storage class when they are accessed by an
application.
Hide Answer
Suggested Answer: A
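For option A, the Lifecycle rule is a small piece of configuration. A hedged sketch of its shape as a Python dict, in the form boto3's `put_bucket_lifecycle_configuration` expects (the rule ID is illustrative):

```python
# Hedged sketch: the shape of an S3 Lifecycle rule that transitions objects to
# S3 Intelligent-Tiering. The rule ID is illustrative; this dict matches the
# structure boto3's put_bucket_lifecycle_configuration expects.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "move-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to every object
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"},
            ],
        }
    ]
}

rule = lifecycle_configuration["Rules"][0]
print(rule["Transitions"][0]["StorageClass"])  # INTELLIGENT_TIERING
```

With `Days: 0`, newly uploaded objects move into Intelligent-Tiering promptly, after which S3 handles tier movement automatically with no further operational work.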
Question #: : 830
A company has 5 TB of datasets. The datasets consist of 1 million user profiles and 10 million connections. The
user profiles have connections as many-to-many relationships. The company needs a performance efficient way
to find mutual connections up to five levels.
Hide Answer
Suggested Answer: B
Question #: : 831
A company needs a secure connection between its on-premises environment and AWS. This connection does not
need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.
Hide Answer
Suggested Answer: D
Question #: : 832
A company has an on-premises SFTP file transfer solution. The company is migrating to the AWS Cloud to scale
the file transfer solution and to optimize costs by using Amazon S3. The company's employees will use their
credentials for the on-premises Microsoft Active Directory (AD) to access the new solution. The company wants
to keep the current authentication and file access mechanisms.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Configure an S3 File Gateway. Create SMB file shares on the file gateway that use the existing Active
Directory to authenticate.
• B. Configure an Auto Scaling group with Amazon EC2 instances to run an SFTP solution. Configure the
group to scale up at 60% CPU utilization.
• C. Create an AWS Transfer Family server with SFTP endpoints. Choose the AWS Directory Service
option as the identity provider. Use AD Connector to connect the on-premises Active Directory.
• D. Create an AWS Transfer Family SFTP endpoint. Configure the endpoint to use the AWS Directory
Service option as the identity provider to connect to the existing Active Directory.
Hide Answer
Suggested Answer: C
Question #: : 833
A company is designing an event-driven order processing system. Each order requires multiple validation steps
after the order is created. An idempotent AWS Lambda function performs each validation step. Each validation
step is independent from the other validation steps. Individual validation steps need only a subset of the order
event information.
The company wants to ensure that each validation step Lambda function has access to only the information from
the order event that the function requires. The components of the order processing system should be loosely
coupled to accommodate future business changes.
Hide Answer
Suggested Answer: C
Question #: : 834
A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past,
the application users reported poor application performance when creating new entries. These performance issues
were caused by users generating different real-time reports from the application during working hours.
Which solution will improve the performance of the application when it is moved to AWS?
• A. Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application
to use DynamoDB for reports.
• B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed
the on-premises database.
• C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the
application to use the reader endpoint for reports.
• D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup
instance of the cluster as an endpoint for the reports.
Hide Answer
Suggested Answer: C
Question #: : 835
A company is expanding a secure on-premises network to the AWS Cloud by using an AWS Direct Connect
connection. The on-premises network has no direct internet access. An application that runs on the on-premises
network needs to use an Amazon S3 bucket.
Hide Answer
Suggested Answer: C
Community vote distribution
C (100%)
Exam question from Amazon's AWS Certified Solutions Architect - Associate SAA-C03
Question #: : 836
A company serves its website by using an Auto Scaling group of Amazon EC2 instances in a single AWS Region.
The website does not require a database.
The company is expanding, and the company's engineering team deploys the website to a second Region. The
company wants to distribute traffic across both Regions to accommodate growth and for disaster recovery
purposes. The solution should not serve traffic from a Region in which the website is unhealthy.
Which policy or resource should the company use to meet these requirements?
• A. An Amazon Route 53 simple routing policy
• B. An Amazon Route 53 multivalue answer routing policy
• C. An Application Load Balancer in one Region with a target group that specifies the EC2 instance IDs
from both Regions
• D. An Application Load Balancer in one Region with a target group that specifies the IP addresses of the
EC2 instances from both Regions
Hide Answer
Suggested Answer: B
Question #: : 837
A company runs its applications on Amazon EC2 instances that are backed by Amazon Elastic Block Store
(Amazon EBS). The EC2 instances run the most recent Amazon Linux release. The applications are experiencing
availability issues when the company's employees store and retrieve files that are 25 GB or larger. The company
needs a solution that does not require the company to transfer files between EC2 instances. The files must be
available across many EC2 instances and across multiple Availability Zones.
Hide Answer
Suggested Answer: C
Question #: : 838
A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database.
Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest.
Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of
changes to the infrastructure?
• A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database
volume.
• B. Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt database volumes.
• C. Configure SSL encryption using AWS Key Management Service (AWS KMS) keys to encrypt database
volumes.
• D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with
AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.
Hide Answer
Suggested Answer: D
Question #: : 839
A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default route to the
internet through an Amazon EC2 NAT instance. The Lambda function processes input data and saves its output
as an object to Amazon S3.
Intermittently, the Lambda function times out while trying to upload the object because of saturated traffic on the
NAT instance's network. The company wants to access Amazon S3 without traversing the internet.
Hide Answer
Suggested Answer: C
Question #: : 840
A news company that has reporters all over the world is hosting its broadcast system on AWS. The reporters send
live broadcasts to the broadcast system. The reporters use software on their phones to send live streams through
the Real Time Messaging Protocol (RTMP).
A solutions architect must design a solution that gives the reporters the ability to send the highest quality streams.
The solution must provide accelerated TCP connections back to the broadcast system.
Question #: : 841
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) to run its self-managed
database. The company has 350 TB of data spread across all EBS volumes. The company takes daily EBS snapshots
and keeps the snapshots for 1 month. The daily change rate is 5% of the EBS volumes.
Because of new regulations, the company needs to keep the monthly snapshots for 7 years. The company needs
to change its backup strategy to comply with the new regulations and to ensure that data is available with minimal
administrative effort.
Hide Answer
Suggested Answer: A
Question #: : 20
A company wants to improve its ability to clone large amounts of production data into a test environment in the
same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS)
volumes. Modifications to the cloned data must not affect the production environment. The software that accesses
this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test
environment.
Which solution will meet these requirements?
• A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store
volumes in the test environment.
• B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of
the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
• C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach
the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production
EBS snapshots.
• D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature
on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2
instances in the test environment.
Hide Answer
Suggested Answer: D
Explanation:
Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized
at creation. This eliminates the latency of I/O operations on a block when it is accessed for the first time. Volumes
that are created using fast snapshot restore instantly deliver all of their provisioned performance.
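The two-step flow in option D can be sketched with boto3. This is a hypothetical sketch: the snapshot ID, Availability Zone, and volume type are placeholders, and `demo()` is only meant to be run with AWS credentials configured.

```python
def fsr_request(snapshot_id: str, azs: list) -> dict:
    """Build the parameters for ec2.enable_fast_snapshot_restores()."""
    return {"AvailabilityZones": azs, "SourceSnapshotIds": [snapshot_id]}

def demo():
    import boto3  # imported lazily; call demo() with AWS credentials configured
    ec2 = boto3.client("ec2")
    # Step 1: enable FSR on the production snapshot in the test environment's AZ.
    ec2.enable_fast_snapshot_restores(
        **fsr_request("snap-0123456789abcdef0", ["us-east-1a"]))
    # Step 2: volumes created from the snapshot now deliver full provisioned
    # performance immediately, with no first-access initialization latency.
    ec2.create_volume(SnapshotId="snap-0123456789abcdef0",
                      AvailabilityZone="us-east-1a", VolumeType="gp3")
```

Note that FSR carries an hourly charge per snapshot per Availability Zone, which is acceptable here because minimizing clone time is the stated priority.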
Question #: : 842
A company runs an application on several Amazon EC2 instances that store persistent data on an Amazon Elastic
File System (Amazon EFS) file system. The company needs to replicate the data to another AWS Region by using
an AWS managed service solution.
Hide Answer
Suggested Answer: D
Question #: : 843
An ecommerce company is migrating its on-premises workload to the AWS Cloud. The workload currently consists
of a web application and a backend Microsoft SQL database for storage.
The company expects a high volume of customers during a promotional event. The new infrastructure in the AWS
Cloud must be highly available and scalable.
Which solution will meet these requirements with the LEAST administrative overhead?
• A. Migrate the web application to two Amazon EC2 instances across two Availability Zones behind an
Application Load Balancer. Migrate the database to Amazon RDS for Microsoft SQL Server with read replicas in
both Availability Zones.
• B. Migrate the web application to an Amazon EC2 instance that runs in an Auto Scaling group across
two Availability Zones behind an Application Load Balancer. Migrate the database to two EC2 instances across
separate AWS Regions with database replication.
• C. Migrate the web application to Amazon EC2 instances that run in an Auto Scaling group across two
Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS with Multi-AZ
deployment.
• D. Migrate the web application to three Amazon EC2 instances across three Availability Zones behind
an Application Load Balancer. Migrate the database to three EC2 instances across three Availability Zones.
Hide Answer
Suggested Answer: C
Question #: : 844
A company has an on-premises business application that generates hundreds of files each day. These files are
stored on an SMB file share and require a low-latency connection to the application servers. A new company policy
states all application-generated files must be copied to AWS. There is already a VPN connection to AWS.
The application development team does not have time to make the necessary code modifications to move the
application to AWS.
Which service should a solutions architect recommend to allow the application to copy files to AWS?
• A. Amazon Elastic File System (Amazon EFS)
• B. Amazon FSx for Windows File Server
• C. AWS Snowball
• D. AWS Storage Gateway
Hide Answer
Suggested Answer: D
Question #: : 845
A company has 15 employees. The company stores employee start dates in an Amazon DynamoDB table. The
company wants to send an email message to each employee on the day of the employee's work anniversary.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Create a script that scans the DynamoDB table and uses Amazon Simple Notification Service
(Amazon SNS) to send email messages to employees when necessary. Use a cron job to run this script every day
on an Amazon EC2 instance.
• B. Create a script that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon
SQS) to send email messages to employees when necessary. Use a cron job to run this script every day on an
Amazon EC2 instance.
• C. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple
Notification Service (Amazon SNS) to send email messages to employees when necessary. Schedule this Lambda
function to run every day.
• D. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Queue
Service (Amazon SQS) to send email messages to employees when necessary. Schedule this Lambda function to
run every day.
Hide Answer
Suggested Answer: C
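The Lambda function from option C might look like the following hypothetical sketch, scheduled to run once a day (for example by an Amazon EventBridge rule). The table name, attribute names, and topic ARN are illustrative assumptions, not values from the question.

```python
from datetime import date

def is_anniversary(start_date: date, today: date) -> bool:
    """True when today matches the month/day of the start date in a later year."""
    return ((start_date.month, start_date.day) == (today.month, today.day)
            and today.year > start_date.year)

def handler(event, context):
    import boto3  # imported here so the helper above is testable without boto3
    dynamodb = boto3.resource("dynamodb")
    sns = boto3.client("sns")
    today = date.today()
    # With only 15 employees, a full table scan once a day is effectively free.
    for item in dynamodb.Table("EmployeeStartDates").scan()["Items"]:
        start = date.fromisoformat(item["StartDate"])  # e.g. "2019-06-04"
        if is_anniversary(start, today):
            sns.publish(
                TopicArn="arn:aws:sns:us-east-1:111122223333:anniversaries",
                Subject="Happy work anniversary!",
                Message=f"Congratulations, {item['Name']}!")
```

This is why option C is the most operationally efficient choice: no EC2 instance to patch for a cron job, and SNS handles the email delivery.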
Question #: : 846
A company’s application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic
Load Balancing (ELB) load balancer. Based on the application's history, the company anticipates a spike in traffic
during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group
proactively increases capacity to minimize any performance impact on application users.
Hide Answer
Suggested Answer: B
Question #: : 847
A company uses Amazon RDS for PostgreSQL databases for its data tier. The company must implement password
rotation for the databases.
Which solution meets this requirement with the LEAST operational overhead?
• A. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.
• B. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the
parameter.
• C. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that
rotates the password.
• D. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the
AWS KMS key.
Hide Answer
Suggested Answer: A
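Option A can be sketched as a single API call once the secret exists. This is a hypothetical sketch: the secret name and rotation Lambda ARN are placeholders, and for RDS databases the console can provision the rotation function from an AWS-managed template instead.

```python
def rotation_request(secret_id: str, rotation_lambda_arn: str, days: int = 30) -> dict:
    """Build the parameters for secretsmanager.rotate_secret()."""
    return {
        "SecretId": secret_id,
        "RotationLambdaARN": rotation_lambda_arn,
        "RotationRules": {"AutomaticallyAfterDays": days},
    }

def demo():
    import boto3  # call demo() with AWS credentials configured
    sm = boto3.client("secretsmanager")
    sm.rotate_secret(**rotation_request(
        "prod/postgres/master",
        "arn:aws:lambda:us-east-1:111122223333:function:SecretsManagerRotation"))
```

Parameter Store (options B and C) has no built-in rotation, which is why Secrets Manager is the lower-overhead answer.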
Question #: : 848
A company runs its application on Oracle Database Enterprise Edition. The company needs to migrate the
application and the database to AWS. The company can use the Bring Your Own License (BYOL) model while
migrating to AWS. The application uses third-party database features that require privileged access.
Hide Answer
Suggested Answer: B
Question #: : 849
A large international university has deployed all of its compute services in the AWS Cloud. These services include
Amazon EC2, Amazon RDS, and Amazon DynamoDB. The university currently relies on many custom scripts to
back up its infrastructure. However, the university wants to centralize management and automate data backups as
much as possible by using AWS native options.
Hide Answer
Suggested Answer: B
Question #: : 850
A company wants to build a map of its IT infrastructure to identify and enforce policies on resources that pose
security risks. The company's security team must be able to query data in the IT infrastructure map and quickly
identify security risks.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon RDS to store the data. Use SQL to query the data to identify security risks.
• B. Use Amazon Neptune to store the data. Use SPARQL to query the data to identify security risks.
• C. Use Amazon Redshift to store the data. Use SQL to query the data to identify security risks.
• D. Use Amazon DynamoDB to store the data. Use PartiQL to query the data to identify security risks.
Hide Answer
Suggested Answer: B
Question #: : 851
A large company wants to provide its globally located developers separate, limited size, managed PostgreSQL
databases for development purposes. The databases will be low volume. The developers need the databases only
when they are actively working.
Hide Answer
Suggested Answer: C
Question #: : 852
A company is building a web application that serves a content management system. The content management
system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an
Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and
other website assets in the content management system.
A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content
with the least possible lag time.
Hide Answer
Suggested Answer: B
Question #: : 853
A company's web application consists of multiple Amazon EC2 instances that run behind an Application Load
Balancer in a VPC. An Amazon RDS for MySQL DB instance contains the data. The company needs the ability
to automatically detect and respond to suspicious or unexpected behavior in its AWS environment. The company
already has added AWS WAF to its architecture.
Hide Answer
Suggested Answer: A
Question #: : 854
A company is planning to run a group of Amazon EC2 instances that connect to an Amazon Aurora database. The
company has built an AWS CloudFormation template to deploy the EC2 instances and the Aurora DB cluster.
The company wants to allow the instances to authenticate to the database in a secure way. The company does not
want to maintain static database credentials.
Which solution meets these requirements with the LEAST operational effort?
• A. Create a database user with a user name and password. Add parameters for the database user name
and password to the CloudFormation template. Pass the parameters to the EC2 instances when the instances are
launched.
• B. Create a database user with a user name and password. Store the user name and password in AWS
Systems Manager Parameter Store. Configure the EC2 instances to retrieve the database credentials from
Parameter Store.
• C. Configure the DB cluster to use IAM database authentication. Create a database user to use with IAM
authentication. Associate a role with the EC2 instances to allow applications on the instances to access the
database.
• D. Configure the DB cluster to use IAM database authentication with an IAM user. Create a database
user that has a name that matches the IAM user. Associate the IAM user with the EC2 instances to allow
applications on the instances to access the database.
Hide Answer
Suggested Answer: C
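On the application side, option C replaces the static password with a short-lived token generated through the instance role. A hypothetical sketch (hostname, port, and database user are placeholders):

```python
def token_request(host: str, port: int, user: str) -> dict:
    """Build the parameters for rds.generate_db_auth_token()."""
    return {"DBHostname": host, "Port": port, "DBUsername": user}

def demo():
    import boto3  # run on an EC2 instance whose role allows rds-db:connect
    rds = boto3.client("rds")
    token = rds.generate_db_auth_token(**token_request(
        "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com", 3306, "app_user"))
    # Pass the token as the password to the MySQL driver over SSL; tokens
    # expire after 15 minutes, so generate a fresh one per connection.
    return token
```

No credential is stored anywhere, which is what rules out options A and B.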
Question #: : 855
A company wants to configure its Amazon CloudFront distribution to use SSL/TLS certificates. The company
does not want to use the default domain name for the distribution. Instead, the company wants to use a different
domain name for the distribution.
Which solution will deploy the certificate without incurring any additional costs?
• A. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-east-1
Region.
• B. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-west-
1 Region.
• C. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-east-1
Region.
• D. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-west-
1 Region.
Hide Answer
Suggested Answer: A
Question #: : 856
A company creates operations data and stores the data in an Amazon S3 bucket. For the company's annual audit,
an external consultant needs to access an annual report that is stored in the S3 bucket. The external consultant
needs to access the report for 7 days.
The company must implement a solution to allow the external consultant access to only the report.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Create a new S3 bucket that is configured to host a public static website. Migrate the operations data
to the new S3 bucket. Share the S3 website URL with the external consultant.
• B. Enable public access to the S3 bucket for 7 days. Remove access to the S3 bucket when the external
consultant completes the audit.
• C. Create a new IAM user that has access to the report in the S3 bucket. Provide the access keys to the
external consultant. Revoke the access keys after 7 days.
• D. Generate a presigned URL that has the required access to the location of the report on the S3 bucket.
Share the presigned URL with the external consultant.
Hide Answer
Suggested Answer: D
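Option D fits the 7-day window exactly, because SigV4 presigned URLs can live for at most 7 days (604,800 seconds). A hypothetical sketch, with bucket and key names as placeholders:

```python
SEVEN_DAYS = 7 * 24 * 60 * 60  # 604800 seconds, the SigV4 presigning maximum

def presign_request(bucket: str, key: str) -> dict:
    """Build the parameters for s3.generate_presigned_url()."""
    return {
        "ClientMethod": "get_object",
        "Params": {"Bucket": bucket, "Key": key},
        "ExpiresIn": SEVEN_DAYS,
    }

def demo():
    import boto3  # call demo() with credentials that can read the object
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(**presign_request(
        "operations-data", "audit/annual-report.pdf"))
```

One caveat: a presigned URL also stops working when the credentials that signed it expire, so it should be generated with long-lived (IAM user or role session of sufficient duration) credentials.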
Question #: : 857
A company plans to run a high performance computing (HPC) workload on Amazon EC2 Instances. The workload
requires low-latency network performance and high network throughput with tightly coupled node-to-node
communication.
Hide Answer
Suggested Answer: D
Question #: : 859
A company runs several Amazon RDS for Oracle On-Demand DB instances that have high utilization. The RDS
DB instances run in member accounts that are in an organization in AWS Organizations.
The company's finance team has access to the organization's management account and member accounts. The
finance team wants to find ways to optimize costs by using AWS Trusted Advisor.
Hide Answer
Suggested Answer: BC
Question #: : 860
A solutions architect is creating an application. The application will run on Amazon EC2 instances in private
subnets across multiple Availability Zones in a VPC. The EC2 instances will frequently access large files that
contain confidential information. These files are stored in Amazon S3 buckets for processing. The solutions
architect must optimize the network architecture to minimize data transfer costs.
Hide Answer
Suggested Answer: C
Question #: : 861
A company wants to relocate its on-premises MySQL database to AWS. The database accepts regular imports
from a client-facing application, which causes a high volume of write operations. The company is concerned that
the amount of traffic might be causing performance issues within the application.
Hide Answer
Suggested Answer: A
Community vote distribution
A (100%)
Question #: : 862
A company runs an application in the AWS Cloud that generates sensitive archival data files. The company wants
to rearchitect the application's data storage. The company wants to encrypt the data files and to ensure that third
parties do not have access to the data before the data is encrypted and sent to AWS. The company has already
created an Amazon S3 bucket.
Hide Answer
Suggested Answer: D
Question #: : 863
A company uses Amazon RDS with default backup settings for its database tier. The company needs to make a
daily backup of the database to meet regulatory requirements. The company must retain the backups for 30 days.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Write an AWS Lambda function to create an RDS snapshot every day.
• B. Modify the RDS database to have a retention period of 30 days for automated backups.
• C. Use AWS Systems Manager Maintenance Windows to modify the RDS backup retention period.
• D. Create a manual snapshot every day by using the AWS CLI. Modify the RDS backup retention period.
Hide Answer
Suggested Answer: B
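Option B is a one-line configuration change, sketched hypothetically below (the instance identifier is a placeholder):

```python
def retention_request(instance_id: str, days: int = 30) -> dict:
    """Build the parameters for rds.modify_db_instance()."""
    return {
        "DBInstanceIdentifier": instance_id,
        "BackupRetentionPeriod": days,  # keep automated daily backups for 30 days
        "ApplyImmediately": True,
    }

def demo():
    import boto3  # call demo() with AWS credentials configured
    boto3.client("rds").modify_db_instance(**retention_request("prod-db"))
```

RDS automated backups already run daily, so raising the retention period from the default satisfies both requirements with no scheduled jobs to maintain.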
Question #: : 865
A company's near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the
data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of
incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.
Which combination of steps should the solutions architect take? (Choose two.)
• A. Use Amazon Kinesis Data Firehose to ingest the data.
• B. Use AWS Lambda with AWS Step Functions to process the data.
• C. Use AWS Database Migration Service (AWS DMS) to ingest the data.
• D. Use Amazon EC2 instances in an Auto Scaling group to process the data.
• E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
Hide Answer
Suggested Answer: AB
Question #: : 866
A company runs a web application on multiple Amazon EC2 instances in a VPC. The application needs to write
sensitive data to an Amazon S3 bucket. The data cannot be sent over the public internet.
Hide Answer
Suggested Answer: A
Question #: : 867
A company runs its production workload on Amazon EC2 instances with Amazon Elastic Block Store (Amazon
EBS) volumes. A solutions architect needs to analyze the current EBS volume cost and to recommend
optimizations. The recommendations need to include estimated monthly saving opportunities.
Hide Answer
Suggested Answer: D
Question #: : 868
A global company runs its workloads on AWS. The company's application uses Amazon S3 buckets across AWS
Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets
daily. The company wants to identify all S3 buckets that are not versioning-enabled.
Hide Answer
Suggested Answer: B
For the specific requirement of identifying S3 buckets that do not have versioning enabled across Regions, Storage Lens can provide detailed information about each bucket's configuration, including whether versioning is enabled. This lets the company easily identify non-versioned buckets across all Regions.
※A Config rule that checks whether versioning is enabled for your S3 buckets. Optionally, the rule checks if MFA
delete is enabled for your S3 buckets.
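As a one-off alternative to the dashboard, the same check can be scripted: `get_bucket_versioning` returns no `Status` key at all for a bucket where versioning was never turned on. A hypothetical sketch:

```python
def unversioned(status_by_bucket):
    """Return bucket names whose versioning status is not 'Enabled'.

    status_by_bucket maps bucket name -> 'Enabled', 'Suspended', or None.
    """
    return [b for b, s in status_by_bucket.items() if s != "Enabled"]

def demo():
    import boto3  # call demo() with s3:GetBucketVersioning on all buckets
    s3 = boto3.client("s3")
    statuses = {
        b["Name"]: s3.get_bucket_versioning(Bucket=b["Name"]).get("Status")
        for b in s3.list_buckets()["Buckets"]
    }
    return unversioned(statuses)
```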
Question #: : 869
A company wants to enhance its ecommerce order-processing application that is deployed on AWS. The
application must process each order exactly once without affecting the customer experience during unpredictable
traffic surges.
Which solution will meet these requirements?
• A. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Put all the orders in the SQS
queue. Configure an AWS Lambda function as the target to process the orders.
• B. Create an Amazon Simple Notification Service (Amazon SNS) standard topic. Publish all the orders
to the SNS standard topic. Configure the application as a notification target.
• C. Create a flow by using Amazon AppFlow. Send the orders to the flow. Configure an AWS Lambda
function as the target to process the orders.
• D. Configure AWS X-Ray in the application to track the order requests. Configure the application to
process the orders by pulling the orders from Amazon CloudWatch.
Hide Answer
Suggested Answer: A
Amazon SQS FIFO (First-In-First-Out) queues are designed to preserve message order and to deliver messages exactly once after removing any duplicates. This approach also absorbs traffic surges, because orders simply wait in the queue until they can be processed. AWS Lambda can process these messages easily and scale with incoming traffic. This solution ensures each order is processed exactly once and that the application handles traffic spikes smoothly without affecting the customer experience.
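From the producer side, option A might look like the following hypothetical sketch. Attaching a deduplication ID means a retried submission of the same order is silently dropped; the queue URL and order fields are placeholders.

```python
import json

def order_message(queue_url: str, order: dict) -> dict:
    """Build the parameters for sqs.send_message() on a FIFO queue."""
    return {
        "QueueUrl": queue_url,
        "MessageBody": json.dumps(order),
        "MessageGroupId": str(order["customer_id"]),       # ordering per customer
        "MessageDeduplicationId": str(order["order_id"]),  # drops duplicate submits
    }

def demo():
    import boto3  # call demo() with AWS credentials configured
    boto3.client("sqs").send_message(**order_message(
        "https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo",
        {"order_id": "o-1001", "customer_id": "c-42", "total": 59.90}))
```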
Question #: : 870
A company has two AWS accounts: Production and Development. The company needs to push code changes in
the Development account to the Production account. In the alpha phase, only two senior developers on the
development team need access to the Production account. In the beta phase, more developers will need access to
perform testing.
Hide Answer
Suggested Answer: C
Trust policy for the Development account: by defining a trust policy on the IAM role in the Production account that names the Development account, access is limited to principals authenticated in the Development account. This ensures that only authorized users from the Development account can assume the role in the Production account.
Gradual access expansion: initially, only the senior developers need access to the Production account. As more developers are needed in the beta phase, they can be added to the role's policy in the Development account, allowing them to assume the role and gain access to the Production account. This allows controlled expansion of access as needed.
Question #: : 871
A company wants to restrict access to the content of its web application. The company needs to protect the content
by using authorization techniques that are available on AWS. The company also wants to implement a serverless
architecture for authorization and authentication that has low login latency.
The solution must integrate with the web application and serve web content globally. The application currently
has a small user base, but the company expects the application's user base to increase.
Hide Answer
Suggested Answer: A
Amazon Cognito provides robust authentication and user directory management. Lambda@Edge integrates with CloudFront (AWS's content delivery network service), letting you run authorization logic closer to users and thereby reduce latency. Amazon CloudFront can serve the web application globally and reliably, delivering low latency and high transfer speeds.
Question #: : 872
A development team uses multiple AWS accounts for its development, staging, and production environments.
Team members have been launching large Amazon EC2 instances that are underutilized. A solutions architect
must prevent large instances from being launched in all accounts.
How can the solutions architect meet this requirement with the LEAST operational overhead?
• A. Update the IAM policies to deny the launch of large EC2 instances. Apply the policies to all users.
• B. Define a resource in AWS Resource Access Manager that prevents the launch of large EC2 instances.
• C. Create an IAM role in each account that denies the launch of large EC2 instances. Grant the
developers IAM group access to the role.
• D. Create an organization in AWS Organizations in the management account with the default policy.
Create a service control policy (SCP) that denies the launch of large EC2 instances, and apply it to the AWS
accounts.
Hide Answer
Suggested Answer: D
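The SCP from option D can be sketched as follows. This is a hypothetical sketch: the instance-type patterns, policy name, and root ID are placeholders, and `demo()` must be run from the Organizations management account.

```python
import json

def deny_large_instances_scp(patterns=("*.8xlarge", "*.16xlarge", "*.24xlarge")):
    """Build an SCP document denying EC2 launches of the given instance types."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyLargeEC2",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"StringLike": {"ec2:InstanceType": list(patterns)}},
        }],
    }

def demo():
    import boto3  # call demo() from the Organizations management account
    org = boto3.client("organizations")
    policy = org.create_policy(
        Name="deny-large-ec2",
        Description="Block large EC2 instance launches",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(deny_large_instances_scp()))
    # Attaching once at the root makes the deny apply to every member account,
    # which is why this is less overhead than per-account IAM policies.
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                      TargetId="r-examplerootid")
```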
Question #: : 873
A company has migrated a fleet of hundreds of on-premises virtual machines (VMs) to Amazon EC2 instances.
The instances run a diverse fleet of Windows Server versions along with several Linux distributions. The company
wants a solution that will automate inventory and updates of the operating systems. The company also needs a
summary of common vulnerabilities of each instance for regular monthly reviews.
Hide Answer
Suggested Answer: A
AWS Systems Manager Patch Manager automates OS patching and updates, while Amazon Inspector analyzes the EC2 instances for vulnerabilities and provides monthly reports. AWS Security Hub does not report on OS vulnerabilities, AWS Shield Advanced is better suited to DDoS protection, and Amazon GuardDuty is for threat detection rather than OS patch or vulnerability management.
Question #: : 874
A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances in an Auto
Scaling group behind an Elastic Load Balancing (ELB) load balancer. The application connects to an Amazon
DynamoDB table.
For disaster recovery (DR) purposes, the company wants to ensure that the application is available from another
AWS Region with minimal downtime.
Which solution will meet these requirements with the LEAST downtime?
• A. Create an Auto Scaling group and an ELB in the DR Region. Configure the DynamoDB table as a
global table. Configure DNS failover to point to the new DR Region's ELB.
• B. Create an AWS CloudFormation template to create EC2 instances, ELBs, and DynamoDB tables to
be launched when necessary. Configure DNS failover to point to the new DR Region's ELB.
• C. Create an AWS CloudFormation template to create EC2 instances and an ELB to be launched when
necessary. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new DR
Region's ELB.
• D. Create an Auto Scaling group and an ELB in the DR Region. Configure the DynamoDB table as a
global table. Create an Amazon CloudWatch alarm with an evaluation period of 10 minutes to invoke an AWS
Lambda function that updates Amazon Route 53 to point to the DR Region's ELB.
Hide Answer
Suggested Answer: A
Question #: : 875
A company runs an application on Amazon EC2 instances in a private subnet. The application needs to store and
retrieve data in Amazon S3 buckets. According to regulatory requirements, the data must not travel across the
public internet.
Hide Answer
Suggested Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html
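The linked page covers interface endpoints (AWS PrivateLink); for EC2 instances inside the VPC, a gateway endpoint for S3 is the no-cost variant that also keeps traffic off the public internet. A hypothetical sketch (VPC ID, route table ID, and Region are placeholders):

```python
def endpoint_request(vpc_id: str, route_table_ids: list, region: str = "us-east-1") -> dict:
    """Build the parameters for ec2.create_vpc_endpoint()."""
    return {
        "VpcEndpointType": "Gateway",  # gateway endpoints for S3 have no hourly cost
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,  # S3 routes are added to these tables
    }

def demo():
    import boto3  # call demo() with AWS credentials configured
    boto3.client("ec2").create_vpc_endpoint(**endpoint_request(
        "vpc-0abc1234", ["rtb-0def5678"]))
```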
Question #: : 876
A company hosts an application on Amazon EC2 instances that run in a single Availability Zone. The application
is accessible by using the transport layer of the Open Systems Interconnection (OSI) model. The company needs
the application architecture to have high availability.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
• A. Configure new EC2 instances in a different Availability Zone. Use Amazon Route 53 to route traffic
to all instances.
• B. Configure a Network Load Balancer in front of the EC2 instances.
• C. Configure a Network Load Balancer for TCP traffic to the instances. Configure an Application Load
Balancer for HTTP and HTTPS traffic to the instances.
• D. Create an Auto Scaling group for the EC2 instances. Configure the Auto Scaling group to use multiple
Availability Zones. Configure the Auto Scaling group to run application health checks on the instances.
• E. Create an Amazon CloudWatch alarm. Configure the alarm to restart EC2 instances that transition to
a stopped state.
Hide Answer
Suggested Answer: CD
Question #: : 877
A company uses Amazon S3 to host its static website. The company wants to add a contact form to the webpage.
The contact form will have dynamic server-side components for users to input their name, email address, phone
number, and user message.
The company expects fewer than 100 site visits each month. The contact form must notify the company by email
when a customer fills out the form.
Question #: : 878
A company creates dedicated AWS accounts in AWS Organizations for its business units. Recently, an important
notification was sent to the root user email address of a business unit account instead of the assigned account
owner. The company wants to ensure that all future notifications can be sent to different employees based on the
notification categories of billing, operations, or security.
Hide Answer
Suggested Answer: B
Question #: : 879
A company runs an ecommerce application on AWS. Amazon EC2 instances process purchases and store the
purchase details in an Amazon Aurora PostgreSQL DB cluster.
Customers are experiencing application timeouts during times of peak usage. A solutions architect needs to
rearchitect the application so that the application can scale to meet peak usage demands.
Which combination of actions will meet these requirements MOST cost-effectively? (Choose two.)
• A. Configure an Auto Scaling group of new EC2 instances to retry the purchases until the processing is
complete. Update the applications to connect to the DB cluster by using Amazon RDS Proxy.
• B. Configure the application to use an Amazon ElastiCache cluster in front of the Aurora PostgreSQL
DB cluster.
• C. Update the application to send the purchase requests to an Amazon Simple Queue Service (Amazon
SQS) queue. Configure an Auto Scaling group of new EC2 instances that read from the SQS queue.
• D. Configure an AWS Lambda function to retry the ticket purchases until the processing is complete.
• E. Configure an Amazon API Gateway REST API with a usage plan.
Hide Answer
Suggested Answer: AC
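The queue-based half of the answer (option C) decouples purchase intake from processing so that EC2 workers can scale with queue depth. A minimal sketch of the producer side, assuming a hypothetical message schema:

```python
import json

def build_purchase_message(order_id, amount_cents):
    """Serialize a purchase as an SQS message body (the schema is an assumption)."""
    return json.dumps({"order_id": order_id, "amount_cents": amount_cents})

def enqueue_purchase(sqs_client, queue_url, order_id, amount_cents):
    """Send a purchase request to the queue; workers in the Auto Scaling group consume it."""
    return sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=build_purchase_message(order_id, amount_cents),
    )
```

During peak usage the queue absorbs the burst, and RDS Proxy (option A) pools the worker connections so the Aurora cluster is not overwhelmed.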
Question #: : 880
A company that uses AWS Organizations runs 150 applications across 30 different AWS accounts. The company
used AWS Cost and Usage Report to create a new report in the management account. The report is delivered to
an Amazon S3 bucket that is replicated to a bucket in the data collection account.
The company’s senior leadership wants to view a custom dashboard that provides NAT gateway costs each day
starting at the beginning of the current month.
Amazon Athena lets you query data stored in Amazon S3 by using standard SQL. Because Cost and Usage Reports are delivered to S3, Athena can serve as a direct query layer over the reporting data. The query results can then be visualized in Amazon QuickSight and shared with senior leadership.
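The daily NAT gateway cost view could be produced with an Athena query over the CUR table, along these lines (the database and table names are placeholders; CUR column names follow the standard data dictionary, but partitioning varies with the report setup):

```python
# SQL for Athena over a Cost and Usage Report table ("cur_db"."cur_table"
# are placeholder names).
NAT_GATEWAY_COST_QUERY = """
SELECT date(line_item_usage_start_date) AS usage_day,
       SUM(line_item_unblended_cost)    AS daily_cost
FROM cur_db.cur_table
WHERE line_item_usage_type LIKE '%NatGateway%'
  AND line_item_usage_start_date >= date_trunc('month', current_date)
GROUP BY 1
ORDER BY 1
"""

def run_cur_query(athena_client, database, output_s3_uri):
    """Submit the query; results land in the given S3 output location."""
    return athena_client.start_query_execution(
        QueryString=NAT_GATEWAY_COST_QUERY,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3_uri},
    )
```

QuickSight can then use the Athena table (or this query as a dataset) as the source for the leadership dashboard.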
Question #: : 881
A company is hosting a high-traffic static website on Amazon S3 with an Amazon CloudFront distribution that
has a default TTL of 0 seconds. The company wants to implement caching to improve performance for the website.
However, the company also wants to ensure that stale content is not served for more than a few minutes after a
deployment.
Which combination of caching methods should a solutions architect implement to meet these requirements?
(Choose two.)
• A. Set the CloudFront default TTL to 2 minutes.
• B. Set a default TTL of 2 minutes on the S3 bucket.
• C. Add a Cache-Control private directive to the objects in Amazon S3.
• D. Create an AWS Lambda@Edge function to add an Expires header to HTTP responses. Configure the
function to run on viewer response.
• E. Add a Cache-Control max-age directive of 24 hours to the objects in Amazon S3. On deployment,
create a CloudFront invalidation to clear any changed files from edge caches.
Hide Answer
Suggested Answer: AE
"Set a default TTL of 2 minutes on the S3 bucket": Unfortunately, this statement is technically incorrect
because S3 does not utilize a TTL setting. Instead, S3 uses metadata (like Cache-Control) on the objects to
influence caching behavior. TTL is a concept associated with caching systems like CloudFront, not storage systems
like S3. Setting this metadata on S3 objects will not have S3 evict or delete objects after that time. It's actually a
hint for HTTP caches (like CloudFront) that tells them how long to retain a copy of the object before checking
for a new version.
The Cache-Control private directive and the Cache-Control max-age directive serve distinct purposes in
HTTP caching.
Cache-Control: private: The private directive in the Cache-Control header signifies that the response is intended for a single user (usually a specific browser) and must not be stored by shared or intermediate caches such as proxies or CDNs (like CloudFront). Consequently, every request is served directly from the origin server (in this case, the Amazon S3 bucket), which can increase load time for end users, especially users who are geographically distant from the S3 bucket's Region.
Cache-Control: max-age: The max-age directive tells all caches (private, public, or shared) how long the response is considered fresh. For example, Cache-Control: max-age=86400 means the file is considered fresh for 24 hours. During this period, the file is served from the cache (for example, CloudFront) without contacting the origin (S3 in this case), reducing load on the origin and improving performance, especially for frequently accessed files. If the file changes at the origin within this freshness period, end users might still see the older version unless a cache invalidation is performed.
In the context of this question, using Cache-Control: max-age together with a CloudFront invalidation on each deployment provides the best balance between caching (for performance) and content freshness (avoiding stale content).
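The two halves of answer E can be sketched with boto3 (bucket name, distribution ID, and object keys are placeholders):

```python
import time

def upload_with_cache_control(s3_client, bucket, key, body):
    """Upload an object with Cache-Control: max-age=86400 so edge caches keep it for 24 hours."""
    return s3_client.put_object(
        Bucket=bucket, Key=key, Body=body,
        CacheControl="max-age=86400",
    )

def build_invalidation_batch(paths):
    """InvalidationBatch payload; CallerReference must be unique per request."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),
    }

def invalidate_changed_paths(cloudfront_client, distribution_id, paths):
    """On deployment, clear only the changed files from CloudFront edge caches."""
    return cloudfront_client.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=build_invalidation_batch(paths),
    )
```

A deployment script would call `invalidate_changed_paths(cf, "E1EXAMPLE", ["/index.html", "/css/site.css"])` after syncing the changed files to S3.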
Question #: : 882
A company runs its application by using Amazon EC2 instances and AWS Lambda functions. The EC2 instances
run in private subnets of a VPC. The Lambda functions need direct network access to the EC2 instances for the
application to work.
The application will run for 1 year. The number of Lambda functions that the application uses will increase during
the 1-year period. The company must minimize costs on all application resources.
Which solution will meet these requirements?
• A. Purchase an EC2 Instance Savings Plan. Connect the Lambda functions to the private subnets that
contain the EC2 instances.
• B. Purchase an EC2 Instance Savings Plan. Connect the Lambda functions to new public subnets in the
same VPC where the EC2 instances run.
• C. Purchase a Compute Savings Plan. Connect the Lambda functions to the private subnets that contain
the EC2 instances.
• D. Purchase a Compute Savings Plan. Keep the Lambda functions in the Lambda service VPC.
Hide Answer
Suggested Answer: C
Question #: : 883
A company has deployed a multi-account strategy on AWS by using AWS Control Tower. The company has
provided individual AWS accounts to each of its developers. The company wants to implement controls to limit
AWS resource costs that the developers incur.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Instruct each developer to tag all their resources with a tag that has a key of CostCenter and a value
of the developer's name. Use the required-tags AWS Config managed rule to check for the tag. Create an AWS
Lambda function to terminate resources that do not have the tag. Configure AWS Cost Explorer to send a daily
report to each developer to monitor their spending.
• B. Use AWS Budgets to establish budgets for each developer account. Set up budget alerts for actual and
forecast values to notify developers when they exceed or expect to exceed their assigned budget. Use AWS Budgets
actions to apply a DenyAll policy to the developer's IAM role to prevent additional resources from being launched
when the assigned budget is reached.
• C. Use AWS Cost Explorer to monitor and report on costs for each developer account. Configure Cost
Explorer to send a daily report to each developer to monitor their spending. Use AWS Cost Anomaly Detection
to detect anomalous spending and provide alerts.
• D. Use AWS Service Catalog to allow developers to launch resources within a limited cost range. Create
AWS Lambda functions in each AWS account to stop running resources at the end of each work day. Configure
the Lambda functions to resume the resources at the start of each work day.
Hide Answer
Suggested Answer: B
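The per-account budget in answer B could be set up programmatically; a minimal sketch, assuming placeholder names and a 100% actual-spend alert (the DenyAll budget action itself is configured separately through AWS Budgets actions):

```python
def build_monthly_budget(account_id, limit_usd):
    """Monthly cost budget definition for one developer account (names are placeholders)."""
    return {
        "BudgetName": f"dev-{account_id}-monthly",
        "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }

def create_budget_with_alert(budgets_client, account_id, limit_usd, email):
    """Create the budget and notify the developer when actual spend reaches 100%."""
    return budgets_client.create_budget(
        AccountId=account_id,
        Budget=build_monthly_budget(account_id, limit_usd),
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 100.0,
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
        }],
    )
```

A second notification with `"NotificationType": "FORECASTED"` would cover the "expect to exceed" alert that the answer describes.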
Question #: : 884
A solutions architect is designing a three-tier web application. The architecture consists of an internet-facing
Application Load Balancer (ALB) and a web tier that is hosted on Amazon EC2 instances in private subnets. The
application tier with the business logic runs on EC2 instances in private subnets. The database tier consists of
Microsoft SQL Server that runs on EC2 instances in private subnets. Security is a high priority for the company.
Which combination of security group configurations should the solutions architect use? (Choose three.)
• A. Configure the security group for the web tier to allow inbound HTTPS traffic from the security group
for the ALB.
• B. Configure the security group for the web tier to allow outbound HTTPS traffic to 0.0.0.0/0.
• C. Configure the security group for the database tier to allow inbound Microsoft SQL Server traffic from
the security group for the application tier.
• D. Configure the security group for the database tier to allow outbound HTTPS traffic and Microsoft
SQL Server traffic to the security group for the web tier.
• E. Configure the security group for the application tier to allow inbound HTTPS traffic from the security
group for the web tier.
• F. Configure the security group for the application tier to allow outbound HTTPS traffic and Microsoft
SQL Server traffic to the security group for the web tier.
Hide Answer
Suggested Answer: ACE
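The pattern behind A, C, and E is chaining security groups: each tier allows inbound traffic only from the security group of the tier in front of it. A sketch of the ingress rule shape (group IDs are placeholders):

```python
def sg_ingress_from_sg(port, source_sg_id):
    """IpPermissions entry allowing inbound TCP on one port, only from another security group."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": source_sg_id}],
    }

def authorize_tier(ec2_client, sg_id, port, source_sg_id):
    """Apply the rule: e.g. web tier 443 from ALB SG, app tier 443 from web SG,
    database tier 1433 (Microsoft SQL Server) from app SG."""
    return ec2_client.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[sg_ingress_from_sg(port, source_sg_id)],
    )
```

Because the source is a security group rather than a CIDR range, the rules keep working as instances are replaced or scaled.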
Question #: : 885
A company has released a new version of its production application. The company's workload uses Amazon EC2,
AWS Lambda, AWS Fargate, and Amazon SageMaker.
The company wants to cost optimize the workload now that usage is at a steady state. The company wants to cover
the most services with the fewest savings plans.
Which combination of savings plans will meet these requirements? (Choose two.)
• A. Purchase an EC2 Instance Savings Plan for Amazon EC2 and SageMaker.
• B. Purchase a Compute Savings Plan for Amazon EC2, Lambda, and SageMaker.
• C. Purchase a SageMaker Savings Plan.
• D. Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2.
• E. Purchase an EC2 Instance Savings Plan for Amazon EC2 and Fargate.
Hide Answer
Suggested Answer: BD
Question #: : 886
A company uses a Microsoft SQL Server database. The company's applications are connected to the database.
The company wants to migrate to an Amazon Aurora PostgreSQL database with minimal changes to the
application code.
Hide Answer
Suggested Answer: CD
Question #: : 889
A global company runs its workloads on AWS. The company's application uses Amazon S3 buckets across AWS
Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets
daily. The company wants to identify all S3 buckets that are not versioning-enabled.
Which solution will meet these requirements?
• A. Set up an AWS CloudTrail event that has a rule to identify all S3 buckets that are not versioning-
enabled across Regions.
• B. Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions.
• C. Enable IAM Access Analyzer for S3 to identify all S3 buckets that are not versioning-enabled across
Regions.
• D. Create an S3 Multi-Region Access Point to identify all S3 buckets that are not versioning-enabled
across Regions.
Hide Answer
Suggested Answer: B
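S3 Storage Lens (the suggested answer) surfaces versioning status across Regions without custom code. For comparison, an equivalent one-off check can be scripted with boto3; `get_bucket_versioning` returns no `Status` key until versioning has ever been enabled on the bucket:

```python
def is_versioning_enabled(versioning_response):
    """True only when the bucket's versioning Status is 'Enabled'
    (absent or 'Suspended' both count as not enabled)."""
    return versioning_response.get("Status") == "Enabled"

def buckets_without_versioning(s3_client):
    """Return the names of all buckets in the account that are not versioning-enabled."""
    unversioned = []
    for bucket in s3_client.list_buckets()["Buckets"]:
        resp = s3_client.get_bucket_versioning(Bucket=bucket["Name"])
        if not is_versioning_enabled(resp):
            unversioned.append(bucket["Name"])
    return unversioned
```

At millions of objects across many buckets, the managed Storage Lens dashboard is the lower-maintenance choice; the script is only an illustration of what it reports.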
Question #: : 890
A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot
be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.
The company must store the files for 4 years before the files can be deleted. The files must be immediately
accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed
after the first 30 days.
Hide Answer
Suggested Answer: D
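The answer choices are not reproduced here, but given the access pattern (hot for 30 days, then rarely accessed, immediately retrievable, deletable after 4 years), an S3 Lifecycle rule that transitions objects to S3 Standard-IA at 30 days and expires them at 4 years would fit. A sketch (bucket name and rule ID are placeholders):

```python
def build_lifecycle_configuration():
    """Transition all objects to Standard-IA after 30 days; expire after 4 years (1460 days)."""
    return {
        "Rules": [{
            "ID": "ia-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix: applies to every object
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 1460},
        }]
    }

def apply_lifecycle(s3_client, bucket):
    return s3_client.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration=build_lifecycle_configuration(),
    )
```

Standard-IA keeps the files immediately accessible at a lower storage price; because the files cannot be recreated, One Zone-IA (single-AZ durability scope) would be the riskier variant.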
Question #: : 892
A company is migrating a data center from its on-premises location to AWS. The company has several legacy
applications that are hosted on individual virtual servers. Changes to the application designs cannot be made.
Each individual virtual server currently runs as its own EC2 instance. A solutions architect needs to ensure that
the applications are reliable and fault tolerant after migration to AWS. The applications will run on Amazon EC2
instances.
Hide Answer
Suggested Answer: C
Question #: : 896
A company is designing its production application's disaster recovery (DR) strategy. The application is backed by
a MySQL database on an Amazon Aurora cluster in the us-east-1 Region. The company has chosen the us-west-1 Region as its DR Region.
The company's target recovery point objective (RPO) is 5 minutes and the target recovery time objective (RTO)
is 20 minutes. The company wants to minimize configuration changes.
Which solution will meet these requirements with the MOST operational efficiency?
• A. Create an Aurora read replica in us-west-1 similar in size to the production application's Aurora
MySQL cluster writer instance.
• B. Convert the Aurora cluster to an Aurora global database. Configure managed failover.
• C. Create a new Aurora cluster in us-west-1 that has Cross-Region Replication.
• D. Create a new Aurora cluster in us-west-1. Use AWS Database Migration Service (AWS DMS) to sync
both clusters.
Hide Answer
Suggested Answer: B
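Converting to an Aurora global database is a two-call operation: promote the existing cluster to the primary of a global cluster, then attach a secondary cluster in the DR Region. A sketch of the parameters (identifiers are placeholders; the secondary call uses an RDS client in us-west-1):

```python
def global_cluster_params(global_cluster_id, source_cluster_arn):
    """Parameters for rds.create_global_cluster: wrap the existing us-east-1
    cluster as the primary of a new global database."""
    return {
        "GlobalClusterIdentifier": global_cluster_id,
        "SourceDBClusterIdentifier": source_cluster_arn,
    }

def secondary_cluster_params(global_cluster_id, secondary_cluster_id):
    """Parameters for rds.create_db_cluster in us-west-1: a headless secondary
    that receives storage-level replication from the primary."""
    return {
        "DBClusterIdentifier": secondary_cluster_id,
        "Engine": "aurora-mysql",
        "GlobalClusterIdentifier": global_cluster_id,
    }
```

Callers would pass these dicts as `rds.create_global_cluster(**global_cluster_params(...))` and so on. Replication lag is typically about a second, well inside the 5-minute RPO, and managed failover covers the 20-minute RTO.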
Question #: : 898
A company runs workloads in the AWS Cloud. The company wants to centrally collect security data to assess
security across the entire company and to improve workload protection.
Which solution will meet these requirements with the LEAST development effort?
• A. Configure a data lake in AWS Lake Formation. Use AWS Glue crawlers to ingest the security data into
the data lake.
• B. Configure an AWS Lambda function to collect the security data in .csv format. Upload the data to an
Amazon S3 bucket.
• C. Configure a data lake in Amazon Security Lake to collect the security data. Upload the data to an
Amazon S3 bucket.
• D. Configure an AWS Database Migration Service (AWS DMS) replication instance to load the security
data into an Amazon RDS cluster.
Hide Answer
Suggested Answer: C
Question #: : 895
A company is implementing a shared storage solution for a media application that the company hosts on AWS.
The company needs the ability to use SMB clients to access stored data.
Which solution will meet these requirements with the LEAST administrative overhead?
• A. Create an AWS Storage Gateway Volume Gateway. Create a file share that uses the required client
protocol. Connect the application server to the file share.
• B. Create an AWS Storage Gateway Tape Gateway. Configure tapes to use Amazon S3. Connect the
application server to the Tape Gateway.
• C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the
instance. Connect the application server to the file share.
• D. Create an Amazon FSx for Windows File Server file system. Connect the application server to the file
system.
Hide Answer
Suggested Answer: D
Question #: : 894
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website
serves static content. Website traffic is increasing. The company wants to minimize the website hosting costs.
Hide Answer
Suggested Answer: A
Question #: : 893
A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a
solution that centrally manages networking components for the workloads. The solution also must create accounts
with automatic security controls (guardrails).
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private
subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the
workload accounts.
• B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private
subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the
workload accounts.
• C. Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each
VPC to route through an inspection VPC by using a transit gateway attachment.
• D. Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each
VPC to route through an inspection VPC by using a transit gateway attachment.
Hide Answer
Suggested Answer: A
Question #: : 901
A company is migrating its workloads to AWS. The company has sensitive and critical data in on-premises
relational databases that run on SQL Server instances.
The company wants to use the AWS Cloud to increase security and reduce operational overhead for the databases.
Hide Answer
Suggested Answer: B
Question #: : 904
A company has an application that customers use to upload images to an Amazon S3 bucket. Each night, the
company launches an Amazon EC2 Spot Fleet that processes all the images that the company received that day.
The processing for each image takes 2 minutes and requires 512 MB of memory.
A solutions architect needs to change the application to process the images when the images are uploaded.
Hide Answer
Suggested Answer: A
Question #: : 903
A company manages a data lake in an Amazon S3 bucket that numerous applications access. The S3 bucket
contains a unique prefix for each application. The company wants to restrict each application to its specific prefix
and to have granular control of the objects under each prefix.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create dedicated S3 access points and access point policies for each application.
• B. Create an S3 Batch Operations job to set the ACL permissions for each object in the S3 bucket.
• C. Replicate the objects in the S3 bucket to new S3 buckets for each application. Create replication rules
by prefix.
• D. Replicate the objects in the S3 bucket to new S3 buckets for each application. Create dedicated S3
access points for each application.
Hide Answer
Suggested Answer: A
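Each application gets its own access point whose policy scopes object actions to that application's prefix. A sketch of such a policy (account ID, Region, access point name, role ARN, and prefix are all placeholders):

```python
def build_access_point_policy(account_id, access_point_name, app_role_arn, prefix):
    """Access point policy restricting one application role to one prefix."""
    ap_arn = f"arn:aws:s3:us-east-1:{account_id}:accesspoint/{access_point_name}"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": app_role_arn},
            "Action": ["s3:GetObject", "s3:PutObject"],
            # Object access through an access point uses the /object/ ARN form.
            "Resource": f"{ap_arn}/object/{prefix}/*",
        }],
    }
```

The bucket policy still needs a statement delegating access control to the account's access points; after that, per-application permissions are managed entirely on the access points, with no data duplication.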
Question #: : 902
A company wants to migrate an application to AWS. The company wants to increase the application's current
availability. The company wants to use AWS WAF in the application's architecture.
Hide Answer
Suggested Answer: A