Exam Question AWS (Test)

500-530

Question #: : 500

A company has multiple Windows file servers on premises. The company wants to migrate and consolidate its files
into an Amazon FSx for Windows File Server file system. File permissions must be preserved to ensure that access
rights do not change.

Which solutions will meet these requirements? (Choose two.)


• A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the FSx
for Windows File Server file system.
• B. Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule AWS
DataSync tasks to transfer the data to the FSx for Windows File Server file system.
• C. Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3. Schedule
AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
• D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS
DataSync agents on the device. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server
file system.
• E. Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises
network. Copy data to the device by using the AWS CLI. Ship the device back to AWS for import into Amazon S3.
Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.

Hide Answer
Suggested Answer: AD
Community vote distribution
AD (93%)
7%

Question #: : 501

A company wants to ingest customer payment data into the company's data lake in Amazon S3. The company
receives payment data every minute on average. The company wants to analyze the payment data in real time.
Then the company wants to ingest the data into the data lake.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Use Amazon Kinesis Data Streams to ingest data. Use AWS Lambda to analyze the data in real time.
• B. Use AWS Glue to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
• C. Use Amazon Kinesis Data Firehose to ingest data. Use Amazon Kinesis Data Analytics to analyze the
data in real time.
• D. Use Amazon API Gateway to ingest data. Use AWS Lambda to analyze the data in real time.
Hide Answer
Suggested Answer: A

Community vote distribution


C (94%)
6%
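
For reference, the ingestion piece described in options A and C can be set up with a few API calls. Below is a minimal boto3 sketch that creates a Kinesis Data Firehose delivery stream writing to an S3 data lake bucket; the stream name, bucket ARN, and IAM role ARN are placeholders rather than values from the question.

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Placeholder ARNs -- the role must allow Firehose to write to the bucket.
DELIVERY_ROLE_ARN = "arn:aws:iam::111122223333:role/firehose-delivery-role"
DATA_LAKE_BUCKET_ARN = "arn:aws:s3:::example-payments-data-lake"

firehose.create_delivery_stream(
    DeliveryStreamName="payments-ingest",
    DeliveryStreamType="DirectPut",  # producers call PutRecord directly
    ExtendedS3DestinationConfiguration={
        "RoleARN": DELIVERY_ROLE_ARN,
        "BucketARN": DATA_LAKE_BUCKET_ARN,
        "Prefix": "payments/",
        # Buffer up to 60 seconds or 5 MB before each S3 object is written.
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
    },
)

# Producers then send payment records to the stream:
firehose.put_record(
    DeliveryStreamName="payments-ingest",
    Record={"Data": b'{"payment_id": "p-123", "amount": 42.50}\n'},
)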

Question #: : 502

A company runs a website that uses a content management system (CMS) on Amazon EC2. The CMS runs on a
single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images
are stored on an Amazon Elastic Block Store (Amazon EBS) volume that is mounted inside the EC2 instance.

Which combination of actions should a solutions architect take to improve the performance and resilience of the
website? (Choose two.)
• A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance
• B. Share the website images by using an NFS share from the primary EC2 instance. Mount this share on
the other EC2 instances.
• C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is
mounted on every EC2 instance.
• D. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision
new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling
group to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the
website
• E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision
new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling
group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.

Hide Answer
Suggested Answer: DE

Community vote distribution


CE (63%)
AE (37%)

Question #: : 503

A company runs an infrastructure monitoring service. The company is building a new feature that will enable the
service to monitor data in customer AWS accounts. The new feature will call AWS APIs in customer accounts to
describe Amazon EC2 instances and read Amazon CloudWatch metrics.
What should the company do to obtain access to customer accounts in the MOST secure way?
• A. Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch
permissions and a trust policy to the company’s account.
• B. Create a serverless API that implements a token vending machine to provide temporary AWS
credentials for a role with read-only EC2 and CloudWatch permissions.
• C. Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch
permissions. Encrypt and store customer access and secret keys in a secrets management system.
• D. Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with
read-only EC2 and CloudWatch permissions. Encrypt and store the Amazon Cognito user and password in a
secrets management system.

Hide Answer
Suggested Answer: A

Community vote distribution


A (92%)
8%
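
As a sketch of what option A asks each customer to do, the following boto3 snippet creates a read-only role that trusts the monitoring company's account. The account ID, external ID, and role name are illustrative placeholders, not values given in the question.

import json
import boto3

iam = boto3.client("iam")

MONITORING_ACCOUNT_ID = "111122223333"  # placeholder: the monitoring company's account
EXTERNAL_ID = "example-external-id"     # placeholder: guards against the confused-deputy problem

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MONITORING_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam.create_role(
    RoleName="MonitoringReadOnlyRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the AWS managed read-only policies for EC2 and CloudWatch.
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
    "arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
):
    iam.attach_role_policy(RoleName="MonitoringReadOnlyRole", PolicyArn=policy_arn)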

Question #: : 504

A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The
company's networking team has its own AWS account to manage the cloud network.

What is the MOST operationally efficient solution to connect the VPCs?


• A. Set up VPC peering connections between each VPC. Update each associated subnet’s route table
• B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the
internet
• C. Create an AWS Transit Gateway in the networking team’s AWS account. Configure static routes from
each VPC.
• D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team’s AWS account to
connect to each VPC.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)

Question #: : 505

A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an
Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance will reprocess
the job. The batch jobs run between 12:00 AM and 06:00 AM local time every day.

Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
• A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling
group that the batch job uses.
• B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the
instances in the Auto Scaling group that the batch job uses.
• C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a
policy to scale out based on CPU usage.
• D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to
scale out based on CPU usage.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)

Question #: : 506

A social media company is building a feature for its website. The feature will give users the ability to upload photos.
The company expects significant increases in demand during large events and must ensure that the website can
handle the upload traffic from users.

Which solution meets these requirements with the MOST scalability?


• A. Upload files from the user's browser to the application servers. Transfer the files to an Amazon S3
bucket.
• B. Provision an AWS Storage Gateway file gateway. Upload files directly from the user's browser to the
file gateway.
• C. Generate Amazon S3 presigned URLs in the application. Upload files directly from the user's browser
into an S3 bucket.
• D. Provision an Amazon Elastic File System (Amazon EFS) file system. Upload files directly from the
user's browser to the file system.

Hide Answer
Suggested Answer: C

Community vote distribution


C (95%)
5%
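
The presigned-URL pattern in option C can be sketched in a few lines of boto3; the bucket and key names below are placeholders.

import boto3

s3 = boto3.client("s3")

# The application generates a short-lived URL scoped to one object key.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "example-photo-uploads", "Key": "user-123/event-photo.jpg"},
    ExpiresIn=300,  # URL is valid for 5 minutes
)

# The browser (or any HTTP client) then PUTs the file straight to S3,
# bypassing the application servers entirely, e.g.:
#   requests.put(upload_url, data=open("event-photo.jpg", "rb"))
print(upload_url)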
Question #: : 507

A company has a web application for travel ticketing. The application is based on a database that runs in a single
data center in North America. The company wants to expand the application to serve a global user base. The
company needs to deploy the application to multiple AWS Regions. Average latency must be less than 1 second
on updates to the reservation database.

The company wants to have separate deployments of its web platform across multiple Regions. However, the
company must maintain a single primary reservation database that is globally consistent.

Which solution should a solutions architect recommend to meet these requirements?


• A. Convert the application to use Amazon DynamoDB. Use a global table for the central reservation table.
Use the correct Regional endpoint in each Regional deployment.
• B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in each
Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
• C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each
Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
• D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the database
to each Region. Use the correct Regional endpoint in each Regional deployment to access the database. Use AWS
Lambda functions to process event streams in each Region to synchronize the databases.

Hide Answer
Suggested Answer: B

Community vote distribution


A (54%)
B (46%)
by dacosa at May 18, 2023, 4:22 a.m.

Question #: : 508

A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2 instances that run in the
us-west-1 Region. The company manually backs up the workloads to create an image as needed.

In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the
us-west-2 Region. The company wants no more than 24 hours of data loss on the EC2 instances. The company
also wants to automate any backups of the EC2 instances.

Which solutions will meet these requirements with the LEAST administrative effort? (Choose two.)
• A. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup
based on tags. Schedule the backup to run twice daily. Copy the image on demand.
• B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup
based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2 Region.
• C. Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for
the EC2 instances based on tag values. Create an AWS Lambda function to run as a scheduled job to copy the
backup data to us-west-2.
• D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2
instances based on tag values. Define the destination for the copy as us-west-2. Specify the backup schedule to
run twice daily.
• E. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2
instances based on tag values. Specify the backup schedule to run twice daily. Copy on demand to us-west-2.

Hide Answer
Suggested Answer: BC

Community vote distribution


BD (100%)
by nosense at May 17, 2023, 1:56 p.m.

Question #: : 509

A company operates a two-tier application for image processing. The application uses two Availability Zones, each
with one public subnet and one private subnet. An Application Load Balancer (ALB) for the web tier uses the
public subnets. Amazon EC2 instances for the application tier use the private subnets.

Users report that the application is running more slowly than expected. A security audit of the web server log files
shows that the application is receiving millions of illegitimate requests from a small number of IP addresses. A
solutions architect needs to resolve the immediate performance problem while the company investigates a more
permanent solution.

What should the solutions architect recommend to meet this requirement?


• A. Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are
consuming resources.
• B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that
are consuming resources.
• C. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that
are consuming resources.
• D. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP
addresses that are consuming resources.
Hide Answer
Suggested Answer: B

Community vote distribution


B (86%)
14%
by nosense at May 17, 2023, 1:58 p.m.

Question #: : 510

A global marketing company has applications that run in the ap-southeast-2 Region and the eu-west-1 Region.
Applications that run in a VPC in eu-west-1 need to communicate securely with databases that run in a VPC in
ap-southeast-2.

Which network design will meet these requirements?


• A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC. Create
an inbound rule in the eu-west-1 application security group that allows traffic from the database server IP
addresses in the ap-southeast-2 security group.
• B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC.
Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that
references the security group ID of the application servers in eu-west-1.
• C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update
the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that allows traffic
from the eu-west-1 application server IP addresses.
• D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-southeast-
2 VPC. After the transit gateways are properly peered and routing is configured, create an inbound rule in the
database security group that references the security group ID of the application servers in eu-west-1.

Hide Answer
Suggested Answer: B

Community vote distribution


C (84%)
B (16%)
by cloudenthusiast at May 19, 2023, 1:37 p.m.

Question #: : 511
A company is developing software that uses a PostgreSQL database schema. The company needs to configure
multiple development environments and databases for the company's developers. On average, each development
environment is used for half of the 8-hour workday.

Which solution will meet these requirements MOST cost-effectively?


• A. Configure each development environment with its own Amazon Aurora PostgreSQL database
• B. Configure each development environment with its own Amazon RDS for PostgreSQL Single-AZ DB
instances
• C. Configure each development environment with its own Amazon Aurora On-Demand PostgreSQL-
Compatible database
• D. Configure each development environment with its own Amazon S3 bucket by using Amazon S3
Object Select

Hide Answer
Suggested Answer: B

Community vote distribution


C (56%)
B (44%)
by nosense at May 17, 2023, 2:10 p.m.

Question #: : 512

A company uses AWS Organizations with resources tagged by account. The company also uses AWS Backup to
back up its AWS infrastructure resources. The company needs to back up all AWS resources.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use AWS Config to identify all untagged resources. Tag the identified resources programmatically.
Use tags in the backup plan.
• B. Use AWS Config to identify all resources that are not running. Add those resources to the backup
vault.
• C. Require all AWS account owners to review their resources to identify the resources that need to be
backed up.
• D. Use Amazon Inspector to identify all noncompliant resources.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by cloudenthusiast at May 19, 2023, 1:44 p.m.
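
A rough sketch of the "tag programmatically" step from option A, using the Resource Groups Tagging API; the tag key and value that the backup plan would select on are assumptions.

import boto3

tagging = boto3.client("resourcegroupstaggingapi")

BACKUP_TAG = {"backup-plan": "default"}  # assumed tag key/value used by the backup plan

# Collect ARNs of resources that currently have no tags at all.
untagged_arns = []
paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate():
    for mapping in page["ResourceTagMappingList"]:
        if not mapping.get("Tags"):
            untagged_arns.append(mapping["ResourceARN"])

# TagResources accepts up to 20 ARNs per call.
for i in range(0, len(untagged_arns), 20):
    tagging.tag_resources(ResourceARNList=untagged_arns[i:i + 20], Tags=BACKUP_TAG)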

Question #: : 513

A social media company wants to allow its users to upload images in an application that is hosted in the AWS
Cloud. The company needs a solution that automatically resizes the images so that the images can be displayed
on multiple device types. The application experiences unpredictable traffic patterns throughout the day. The
company is seeking a highly available solution that maximizes scalability.

What should a solutions architect do to meet these requirements?


• A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images
and store the images in an Amazon S3 bucket.
• B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the
images and store the images in an Amazon RDS database.
• C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a
process that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket.
• D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service
(Amazon ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon SQS). Set up an image-
resizing program that runs on an Amazon EC2 instance to process the resize jobs.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by cloudenthusiast at May 19, 2023, 2:20 p.m.

Question #: : 514

A company is running a microservices application on Amazon EC2 instances. The company wants to migrate the
application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for scalability. The company must
configure the Amazon EKS control plane with endpoint private access set to true and endpoint public access set
to false to maintain security compliance. The company must also put the data plane in private subnets. However,
the company has received error notifications because the node cannot join the cluster.

Which solution will allow the node to join the cluster?


• A. Grant the required permission in AWS Identity and Access Management (IAM) to the
AmazonEKSNodeRole IAM role.
• B. Create interface VPC endpoints to allow nodes to access the control plane.
• C. Recreate nodes in the public subnet. Restrict security groups for EC2 nodes.
• D. Allow outbound traffic in the security group of the nodes.

Hide Answer
Suggested Answer: B

Community vote distribution


B (50%)
A (50%)
by nosense at May 17, 2023, 2:25 p.m.

Question #: : 515

A company is migrating an on-premises application to AWS. The company wants to use Amazon Redshift as a
solution.

Which use cases are suitable for Amazon Redshift in this scenario? (Choose three.)
• A. Supporting data APIs to access data with traditional, containerized, and event-driven applications
• B. Supporting client-side and server-side encryption
• C. Building analytics workloads during specified hours and when the application is not active
• D. Caching data to reduce the pressure on the backend database
• E. Scaling globally to support petabytes of data and tens of millions of requests per minute
• F. Creating a secondary replica of the cluster by using the AWS Management Console

Hide Answer
Suggested Answer: BCE

Community vote distribution


BCE (51%)
ACE (17%)
9%
Other
by nosense at May 17, 2023, 2:27 p.m.

Question #: : 516
A company provides an API interface to customers so the customers can retrieve their financial information. The
company expects a larger number of requests during peak usage times of the year.

The company requires the API to respond consistently with low latency to ensure customer satisfaction. The
company needs to provide a compute host for the API.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use an Application Load Balancer and Amazon Elastic Container Service (Amazon ECS).
• B. Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.
• C. Use an Application Load Balancer and an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
• D. Use Amazon API Gateway and AWS Lambda functions with reserved concurrency.

Hide Answer
Suggested Answer: B

Community vote distribution


B (73%)
A (27%)
by cloudenthusiast at May 19, 2023, 2:32 p.m.

Question #: : 517

A company wants to send all AWS Systems Manager Session Manager logs to an Amazon S3 bucket for archival
purposes.

Which solution will meet this requirement with the MOST operational efficiency?
• A. Enable S3 logging in the Systems Manager console. Choose an S3 bucket to send the session data to.
• B. Install the Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Export the logs to an
S3 bucket from the group for archival purposes.
• C. Create a Systems Manager document to upload all server logs to a central S3 bucket. Use Amazon
EventBridge to run the Systems Manager document against all servers that are in the account daily.
• D. Install an Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Create a CloudWatch
logs subscription that pushes any incoming log events to an Amazon Kinesis Data Firehose delivery stream. Set
Amazon S3 as the destination.

Hide Answer
Suggested Answer: D
Community vote distribution
A (90%)
10%
by nosense at May 17, 2023, 2:35 p.m.

Question #: : 518

An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A
solutions architect wants to increase the disk space without downtime.

Which solution meets these requirements with the LEAST amount of effort?
• A. Enable storage autoscaling in RDS
• B. Increase the RDS database instance size
• C. Change the RDS database instance storage type to Provisioned IOPS
• D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous
instance

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by cloudenthusiast at May 19, 2023, 2:38 p.m.
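
Storage autoscaling (option A) is enabled by setting a maximum allocated storage on the instance, which can be applied without downtime; the instance identifier and ceiling below are placeholders.

import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="example-mysql-db",  # placeholder instance name
    MaxAllocatedStorage=1000,                 # autoscaling ceiling in GiB
    ApplyImmediately=True,                    # storage autoscaling needs no downtime
)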

Question #: : 519

A consulting company provides professional services to customers worldwide. The company provides solutions
and tools for customers to expedite gathering and analyzing data on AWS. The company needs to centrally manage
and deploy a common set of solutions and tools for customers to use for self-service purposes.

Which solution will meet these requirements?


• A. Create AWS CloudFormation templates for the customers.
• B. Create AWS Service Catalog products for the customers.
• C. Create AWS Systems Manager templates for the customers.
• D. Create AWS Config items for the customers.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by cloudenthusiast at May 19, 2023, 2:39 p.m.

Question #: : 520

A company is designing a new web application that will run on Amazon EC2 Instances. The application will use
Amazon DynamoDB for backend data storage. The application traffic will be unpredictable. The company expects
that the application read and write throughput to the database will be moderate to high. The company needs to
scale in response to application traffic.

Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
• A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class.
Set DynamoDB auto scaling to a maximum defined capacity.
• B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
• C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard Infrequent
Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity.
• D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access
(DynamoDB Standard-IA) table class.

Hide Answer
Suggested Answer: B

Community vote distribution


B (57%)
A (37%)
3%
by nosense at May 17, 2023, 2:44 p.m.

Question #: : 521

A retail company has several businesses. The IT team for each business manages its own AWS account. Each team
account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an
Amazon DynamoDB table in the team's own AWS account.

The company is deploying a central inventory reporting application into a shared AWS account. The application
must be able to read items from all the teams' DynamoDB tables.

Which authentication option will meet these requirements MOST securely?


• A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the
application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table.
Schedule secret rotation for every 30 days.
• B. In every business account, create an IAM user that has programmatic access. Configure the application
to use the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table.
Manually rotate IAM access keys every 30 days.
• C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access
to the DynamoDB table and a trust policy to trust a specific role in the inventory application account. In the
inventory account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation.
Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB
table.
• D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to
authenticate DynamoDB. Configure the application to use the correct certificate to authenticate and read the
DynamoDB table.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by nosense at May 19, 2023, 11:58 a.m.
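
Option C's runtime flow is, roughly, the standard STS AssumeRole hop; the role names come from the question, while the account ID, session name, and table name below are placeholders.

import boto3

sts = boto3.client("sts")

# The application (running as APP_ROLE) assumes BU_ROLE in a business account.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::444455556666:role/BU_ROLE",  # placeholder business account ID
    RoleSessionName="inventory-report",
)["Credentials"]

# The temporary credentials are then used to read that team's DynamoDB table.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
items = dynamodb.scan(TableName="ProductInventory")["Items"]  # placeholder table name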

Question #: : 522

A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS). The
company's workload is not consistent throughout the day. The company wants Amazon EKS to scale in and out
according to the workload.

Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
• A. Use an AWS Lambda function to resize the EKS cluster.
• B. Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.
• C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
• D. Use Amazon API Gateway and connect it to Amazon EKS.
• E. Use AWS App Mesh to observe network activity.

Hide Answer
Suggested Answer: BC
Community vote distribution
BC (100%)
by nosense at May 19, 2023, 11:56 a.m.

Question #: : 523

A company runs a microservice-based serverless web application. The application must be able to retrieve data
from multiple Amazon DynamoDB tables. A solutions architect needs to give the application the ability to retrieve
the data with no impact on the baseline performance of the application.

Which solution will meet these requirements in the MOST operationally efficient way?
• A. AWS AppSync pipeline resolvers
• B. Amazon CloudFront with Lambda@Edge functions
• C. Edge-optimized Amazon API Gateway with AWS Lambda functions
• D. Amazon Athena Federated Query with a DynamoDB connector

Hide Answer
Suggested Answer: A

Community vote distribution


D (36%)
A (33%)
B (31%)
by nosense at May 19, 2023, 11:56 a.m.

Question #: : 524

A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to
IAM permissions. The company has AWS CloudTrail turned on.

Which solution will meet these requirements with the LEAST effort?
• A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
• B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
• C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
• D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.

Hide Answer
Suggested Answer: C
Community vote distribution
C (64%)
D (36%)
by alexandercamachop at June 7, 2023, 7:35 p.m.
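
For option C, once a CloudTrail table has been defined in Athena (per the AWS documentation), a query like the one below surfaces the errors; the table name, database, and output location are placeholders.

import boto3

athena = boto3.client("athena")

QUERY = """
SELECT eventtime, useridentity.arn, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs  -- placeholder table defined over the trail's S3 prefix
WHERE errorcode IN ('AccessDenied', 'Client.UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)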

Question #: : 525

A company wants to add its existing AWS usage cost to its operation cost dashboard. A solutions architect needs
to recommend a solution that will give the company access to its usage cost programmatically. The company must
be able to access cost data for the current year and forecast costs for the next 12 months.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.
• B. Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
• C. Configure AWS Budgets actions to send usage cost data to the company through FTP.
• D. Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.

Hide Answer
Suggested Answer: D

Community vote distribution


A (100%)
by oras2023 at June 6, 2023, 4:06 p.m.
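
Option A maps to the Cost Explorer API's GetCostAndUsage call (paginated with NextPageToken) plus GetCostForecast; the date ranges below are illustrative only.

import boto3

ce = boto3.client("ce")

# Actual cost for the current year, month by month, following pagination.
results, token = [], None
while True:
    kwargs = {"NextPageToken": token} if token else {}
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-12-31"},  # illustrative range
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        **kwargs,
    )
    results.extend(resp["ResultsByTime"])
    token = resp.get("NextPageToken")
    if not token:
        break

# Forecast for the next 12 months (the Start date must be today or later).
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2025-01-01", "End": "2025-12-31"},  # illustrative range
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(forecast["Total"]["Amount"])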

Question #: : 526

A solutions architect is reviewing the resilience of an application. The solutions architect notices that a database
administrator recently failed over the application's Amazon Aurora PostgreSQL database writer instance as part
of a scaling exercise. The failover resulted in 3 minutes of downtime for the application.

Which solution will reduce the downtime for scaling exercises with the LEAST operational overhead?
• A. Create more Aurora PostgreSQL read replicas in the cluster to handle the load during failover.
• B. Set up a secondary Aurora PostgreSQL cluster in the same AWS Region. During failover, update the
application to use the secondary cluster's writer endpoint.
• C. Create an Amazon ElastiCache for Memcached cluster to handle the load during failover.
• D. Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.
Hide Answer
Suggested Answer: D

Community vote distribution


D (71%)
B (21%)
7%
by AshishRocks at June 6, 2023, 9:15 a.m.

Question #: : 527

A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture
consists of web servers and application servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling
groups behind Elastic Load Balancers. The architecture includes an Amazon Aurora global database cluster that
extends across multiple Availability Zones.

The company wants to expand globally and to ensure that its application has minimal downtime.

Which solution will provide the MOST fault tolerance?


• A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in
Availability Zones in a second Region. Use an Aurora global database to deploy the database in the primary Region
and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
• B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-
Region Aurora Replica in the second Region. Use Amazon Route 53 health checks with a failover routing policy
to the second Region. Promote the secondary to primary as needed.
• C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL
database in the second Region. Use AWS Database Migration Service (AWS DMS) to replicate the primary
database to the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second
Region.
• D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global
database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks
with a failover routing policy to the second Region. Promote the secondary to primary as needed.

Hide Answer
Suggested Answer: B

Community vote distribution


D (92%)
4%
by alexandercamachop at June 7, 2023, 7:57 p.m.
Question #: : 528

A data analytics company wants to migrate its batch processing system to AWS. The company receives thousands
of small data files periodically during the day through FTP. An on-premises batch job processes the data files
overnight. However, the batch job takes hours to finish running.

The company wants the AWS solution to process incoming data files as soon as possible with minimal changes to
the FTP clients that send the files. The solution must delete the incoming data files after the files have been
processed successfully. Processing for each file needs to take 3-8 minutes.

Which solution will meet these requirements in the MOST operationally efficient way?
• A. Use an Amazon EC2 instance that runs an FTP server to store incoming files as objects in Amazon S3
Glacier Flexible Retrieval. Configure a job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job
to process the objects nightly from S3 Glacier Flexible Retrieval. Delete the objects after the job has processed the
objects.
• B. Use an Amazon EC2 instance that runs an FTP server to store incoming files on an Amazon Elastic
Block Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use Amazon EventBridge rules to
invoke the job to process the files nightly from the EBS volume. Delete the files after the job has processed the
files.
• C. Use AWS Transfer Family to create an FTP server to store incoming files on an Amazon Elastic Block
Store (Amazon EBS) volume. Configure a job queue in AWS Batch. Use an Amazon S3 event notification when
each file arrives to invoke the job in AWS Batch. Delete the files after the job has processed the files.
• D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard.
Create an AWS Lambda function to process the files and to delete the files after they are processed. Use an S3
event notification to invoke the Lambda function when the files arrive.

Hide Answer
Suggested Answer: B

Community vote distribution


D (93%)
7%
by Bill1000 at June 6, 2023, 10:59 p.m.

Question #: : 529

A company is migrating its workloads to AWS. The company has transactional and sensitive data in its databases.
The company wants to use AWS Cloud solutions to increase security and reduce operational overhead for the
databases.

Which solution will meet these requirements?


• A. Migrate the databases to Amazon EC2. Use an AWS Key Management Service (AWS KMS) AWS
managed key for encryption.
• B. Migrate the databases to Amazon RDS. Configure encryption at rest.
• C. Migrate the data to Amazon S3. Use Amazon Macie for data security and protection.
• D. Migrate the database to Amazon RDS. Use Amazon CloudWatch Logs for data security and protection.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by AshishRocks at June 6, 2023, 9:26 a.m.

Question #: : 530

A company has an online gaming application that has TCP and UDP multiplayer gaming capabilities. The
company uses Amazon Route 53 to point the application traffic to multiple Network Load Balancers (NLBs) in
different AWS Regions. The company needs to improve application performance and decrease latency for the
online game in preparation for user growth.

Which solution will meet these requirements?


• A. Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control max-age
parameter.
• B. Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-based
routing.
• C. Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use
the correct listener ports.
• D. Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method
caching for the different stages.

Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by oras2023 at June 6, 2023, 3:16 p.m.

531-555
ーーーーー

Question #: : 531

A company needs to integrate with a third-party data feed. The data feed sends a webhook to notify an external
service when new data is ready for consumption. A developer wrote an AWS Lambda function to retrieve data
when the company receives a webhook callback. The developer must make the Lambda function available for the
third party to call.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third party
for the webhook.
• B. Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL
to the third party for the webhook.
• C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the Lambda
function. Provide the public hostname of the SNS topic to the third party for the webhook.
• D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the Lambda
function. Provide the public hostname of the SQS queue to the third party for the webhook.

Hide Answer
Suggested Answer: B

Community vote distribution


A (100%)
by alexandercamachop at June 7, 2023, 8:03 p.m.
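
Option A's function URL can be created with two API calls; the function name is a placeholder, and AuthType NONE (with the matching permission) is shown only because a third-party webhook caller cannot sign requests with IAM.

import boto3

lam = boto3.client("lambda")

FUNCTION_NAME = "webhook-receiver"  # placeholder function name

# Create a public HTTPS endpoint for the function.
url_config = lam.create_function_url_config(
    FunctionName=FUNCTION_NAME,
    AuthType="NONE",  # the third party cannot sign requests with SigV4
)

# Resource-based permission that allows unauthenticated invocation via the URL.
lam.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)

print(url_config["FunctionUrl"])  # give this URL to the third party for the webhook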
ーーーーー

Question #: : 532

A company has a workload in an AWS Region. Customers connect to and access the workload by using an Amazon
API Gateway REST API. The company uses Amazon Route 53 as its DNS provider. The company wants to provide
individual and secure URLs for all customers.

Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)
• A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53
hosted zone and a record in the zone that points to the API Gateway endpoint.
• B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a
different Region.
• C. Create hosted zones for each customer as required in Route 53. Create zone records that point to the
API Gateway endpoint.
• D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager
(ACM) in the same Region.
• E. Create multiple API endpoints for each customer in API Gateway.
• F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS
Certificate Manager (ACM).

Hide Answer
Suggested Answer: CFD

Community vote distribution


ADF (100%)
by AncaZalog at June 7, 2023, 8:57 a.m.
ーーーーー

Question #: : 533

A company stores data in Amazon S3. According to regulations, the data must not contain personally identifiable
information (PII). The company recently discovered that S3 buckets have some objects that contain PII. The
company needs to automatically detect PII in S3 buckets and to notify the company’s security team.

Which solution will meet these requirements?


• A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type from
Macie findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security
team.
• B. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from
GuardDuty findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the
security team.
• C. Use Amazon Macie. Create an Amazon EventBridge rule to filter the
SensitiveData:S3Object/Personal event type from Macie findings and to send an Amazon Simple Queue Service
(Amazon SQS) notification to the security team.
• D. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type
from GuardDuty findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the
security team.

Hide Answer
Suggested Answer: C

Community vote distribution


A (80%)
C (20%)
by alexandercamachop at June 7, 2023, 8:15 p.m.
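
A sketch of the EventBridge rule described in option A; the rule name and SNS topic ARN are placeholders, and the Macie detail-type and event pattern shown here are the commonly documented ones rather than values stated in the question.

import json
import boto3

events = boto3.client("events")

SECURITY_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-team"  # placeholder

# Match Macie findings whose type starts with "SensitiveData".
pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},
}

events.put_rule(Name="macie-pii-findings", EventPattern=json.dumps(pattern))
events.put_targets(
    Rule="macie-pii-findings",
    Targets=[{"Id": "notify-security-team", "Arn": SECURITY_TOPIC_ARN}],
)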
ーーーーー

Question #: : 534

A company wants to build a logging solution for its multiple AWS accounts. The company currently stores the
logs from all accounts in a centralized account. The company has created an Amazon S3 bucket in the centralized
account to store the VPC flow logs and AWS CloudTrail logs. All logs must be highly available for 30 days for
frequent analysis, retained for an additional 60 days for backup purposes, and deleted 90 days after creation.

Which solution will meet these requirements MOST cost-effectively?


• A. Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration action
that directs Amazon S3 to delete objects after 90 days.
• B. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days after
creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration action
that directs Amazon S3 to delete objects after 90 days.
• C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an
expiration action that directs Amazon S3 to delete objects after 90 days.
• D. Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30 days
after creation. Move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days. Write an expiration
action that directs Amazon S3 to delete objects after 90 days.

Hide Answer
Suggested Answer: B

Community vote distribution


C (60%)
A (30%)
10%
by alexandercamachop at June 7, 2023, 8:18 p.m.
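
Whichever storage-class transition is chosen, the lifecycle mechanics look the same; the sketch below implements option C's rule (transition at 30 days, expire at 90 days) on a placeholder bucket.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-central-logs",  # placeholder centralized log bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "logs-30d-glacier-90d-expire",
                "Status": "Enabled",
                "Filter": {},  # apply the rule to every object in the bucket
                "Transitions": [
                    # GLACIER corresponds to S3 Glacier Flexible Retrieval
                    {"Days": 30, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": 90},
            }
        ]
    },
)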

Question #: : 535

A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets
that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.

Which solution will meet these requirements?


• A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage,
rotate, and store all secrets in Amazon EKS.
• B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets
encryption on the Amazon EKS cluster.
• C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon
EBS) Container Storage Interface (CSI) driver as an add-on.
• D. Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable
default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.

Hide Answer
Suggested Answer: D

Community vote distribution


B (93%)
7%
by AncaZalog at June 7, 2023, 9:14 a.m.

ーーーーー

Question #: : 536

A company wants to provide data scientists with near real-time read-only access to the company's production
Amazon RDS for PostgreSQL database. The database is currently configured as a Single-AZ database. The data
scientists use complex queries that will not affect the production database. The company needs a solution that is
highly available.

Which solution will meet these requirements MOST cost-effectively?


• A. Scale the existing production database in a maintenance window to provide enough power for the data
scientists.
• B. Change the setup from a Single-AZ to a Multi-AZ instance deployment with a larger secondary
standby instance. Provide the data scientists access to the secondary instance.
• C. Change the setup from a Single-AZ to a Multi-AZ instance deployment. Provide two additional read
replicas for the data scientists.
• D. Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby
instances. Provide read endpoints to the data scientists.

Hide Answer
Suggested Answer: C

Community vote distribution


D (81%)
Other
by alexandercamachop at June 7, 2023, 8:24 p.m.
ーーーーー

Question #: : 537

A company runs a three-tier web application in the AWS Cloud that operates across three Availability Zones. The
application architecture has an Application Load Balancer, an Amazon EC2 web server that hosts user session
states, and a MySQL database that runs on an EC2 instance. The company expects sudden increases in application
traffic. The company wants to be able to scale to meet future application capacity demands and to ensure high
availability across all three Availability Zones.

Which solution will meet these requirements?


• A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment.
Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web
server to an Auto Scaling group that is in three Availability Zones.
• B. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment.
Use Amazon ElastiCache for Memcached with high availability to store session data and to cache reads. Migrate
the web server to an Auto Scaling group that is in three Availability Zones.
• C. Migrate the MySQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to cache
reads. Store the session data in DynamoDB. Migrate the web server to an Auto Scaling group that is in three
Availability Zones.
• D. Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use Amazon
ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an
Auto Scaling group that is in three Availability Zones.

Hide Answer
Suggested Answer: B

Community vote distribution


A (74%)
B (26%)
by AncaZalog at June 7, 2023, 9:30 a.m.
ーーーーー

Question #: : 538

A global video streaming company uses Amazon CloudFront as a content distribution network (CDN). The
company wants to roll out content in a phased manner across multiple countries. The company needs to ensure
that viewers who are outside the countries to which the company rolls out content are not able to view the content.
Which solution will meet these requirements?
• A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom error
message.
• B. Set up a new URL for restricted content. Authorize access by using a signed URL and cookies. Set up
a custom error message.
• C. Encrypt the data for the content that the company distributes. Set up a custom error message.
• D. Create a new URL for restricted content. Set up a time-restricted access policy for signed URLs.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by AncaZalog at June 7, 2023, 9:34 a.m.
ーーーーー

Question #: : 539

A company wants to use the AWS Cloud to improve its on-premises disaster recovery (DR) configuration. The
company's core production business application uses Microsoft SQL Server Standard, which runs on a virtual
machine (VM). The application has a recovery point objective (RPO) of 30 seconds or fewer and a recovery time
objective (RTO) of 60 minutes. The DR solution needs to minimize costs wherever possible.

Which solution will meet these requirements?


• A. Configure a multi-site active/active setup between the on-premises server and AWS by using
Microsoft SQL Server Enterprise with Always On availability groups.
• B. Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS Database
Migration Service (AWS DMS) to use change data capture (CDC).
• C. Use AWS Elastic Disaster Recovery configured to replicate disk changes to AWS as a pilot light.
• D. Use third-party backup software to capture backups every night. Store a secondary set of backups in
Amazon S3.

Hide Answer
Suggested Answer: D

Community vote distribution


B (54%)
C (46%)
by Bill1000 at June 6, 2023, 6:25 p.m.
ーーーーー

Question #: : 540

A company has an on-premises server that uses an Oracle database to process and store customer information.
The company wants to use an AWS database service to achieve higher availability and to improve application
performance. The company also wants to offload reporting from its primary database system.

Which solution will meet these requirements in the MOST operationally efficient way?
• A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple
AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance.
• B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in
the same zone as the primary DB instance. Direct the reporting functions to the read replica.
• C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct
the reporting functions to use the reader instance in the cluster deployment.
• D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database.
Direct the reporting functions to the reader instances.

Hide Answer
Suggested Answer: D

Community vote distribution


D (64%)
C (36%)
by alexandercamachop at June 7, 2023, 10:12 p.m.
ーーーーー

Question #: : 541

A company wants to build a web application on AWS. Client access requests to the website are not predictable
and can be idle for a long time. Only customers who have paid a subscription fee can have the ability to sign in
and use the web application.

Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
• A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create an
Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
• B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load
Balancer to retrieve user information from Amazon RDS. Create an Amazon API Gateway endpoint to accept
RESTful APIs. Send the API calls to the Lambda function.
• C. Create an Amazon Cognito user pool to authenticate users.
• D. Create an Amazon Cognito identity pool to authenticate users.
• E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated
Amazon CloudFront configuration.
• F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the
frontend web content.

Hide Answer
Suggested Answer: ACE

Community vote distribution


ACE (78%)
14%
6%
by alexandercamachop at June 7, 2023, 10:14 p.m.
ーーーーー

Question #: : 542

A media company uses an Amazon CloudFront distribution to deliver content over the internet. The company
wants only premium customers to have access to the media streams and file content. The company stores all
content in an Amazon S3 bucket. The company also delivers content on demand to customers for a specific
purpose, such as movie rentals or music downloads.

Which solution will meet these requirements?


• A. Generate and provide S3 signed cookies to premium customers.
• B. Generate and provide CloudFront signed URLs to premium customers.
• C. Use origin access control (OAC) to limit the access of non-premium customers.
• D. Generate and activate field-level encryption to block non-premium customers.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by alexandercamachop at June 7, 2023, 10:19 p.m.
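
Generating a CloudFront signed URL (option B) in Python uses botocore's CloudFrontSigner together with the distribution's key pair; the domain, object path, key-pair ID, and private key file below are placeholders.

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "KEXAMPLE123456"           # placeholder CloudFront public key ID
PRIVATE_KEY_FILE = "cf_private_key.pem"  # placeholder private key file

def rsa_signer(message: bytes) -> bytes:
    # CloudFront signed URLs are signed with RSA SHA-1.
    with open(PRIVATE_KEY_FILE, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# The URL expires 4 hours after it is issued to the premium customer.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/rentals/movie.mp4",  # placeholder object URL
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=4),
)
print(signed_url)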
ーーーーー

Question #: : 543
A company runs Amazon EC2 instances in multiple AWS accounts that are individually billed. The company
recently purchased a Savings Plan. Because of changes in the company’s business requirements, the company has
decommissioned a large number of EC2 instances. The company wants to use its Savings Plan discounts on its
other AWS accounts.

Which combination of steps will meet these requirements? (Choose two.)


• A. From the AWS Account Management Console of the management account, turn on discount sharing
from the billing preferences section.
• B. From the AWS Account Management Console of the account that purchased the existing Savings Plan,
turn on discount sharing from the billing preferences section. Include all accounts.
• C. From the AWS Organizations management account, use AWS Resource Access Manager (AWS RAM)
to share the Savings Plan with other accounts.
• D. Create an organization in AWS Organizations in a new payer account. Invite the other AWS accounts
to join the organization from the management account.
• E. Create an organization in AWS Organizations in the existing AWS account with the existing EC2
instances and Savings Plan. Invite the other AWS accounts to join the organization from the management account.

Hide Answer
Suggested Answer: AE

Community vote distribution


AE (50%)
AD (31%)
Other
by alexandercamachop at June 7, 2023, 10:21 p.m.
ーーーーー

Question #: : 544

A retail company uses a regional Amazon API Gateway API for its public REST APIs. The API Gateway endpoint
is a custom domain name that points to an Amazon Route 53 alias record. A solutions architect needs to create a
solution that has minimal effects on customers and minimal data loss to release the new version of APIs.

Which solution will meet these requirements?


• A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an
appropriate percentage of traffic to the canary stage. After API verification, promote the canary stage to the
production stage.
• B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file format.
Use the import-to-update operation in merge mode into the API in API Gateway. Deploy the new version of the
API to the production stage.
• C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file format. Use
the import-to-update operation in overwrite mode into the API in API Gateway. Deploy the new version of the
API to the production stage.
• D. Create a new API Gateway endpoint with new versions of the API definitions. Create a custom domain
name for the new API Gateway API. Point the Route 53 alias record to the new API Gateway API custom domain
name.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by alexandercamachop at June 7, 2023, 10:46 p.m.

ーーーーー

Question #: : 545

A company wants to direct its users to a backup static error page if the company's primary website is unavailable.
The primary website's DNS records are hosted in Amazon Route 53. The domain is pointing to an Application
Load Balancer (ALB). The company needs a solution that minimizes changes and infrastructure overhead.

Which solution will meet these requirements?


• A. Update the Route 53 records to use a latency routing policy. Add a static error page that is hosted in
an Amazon S3 bucket to the records so that the traffic is sent to the most responsive endpoints.
• B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is
hosted in an Amazon S3 bucket when Route 53 health checks determine that the ALB endpoint is unhealthy.
• C. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a
static error page as endpoints. Configure Route 53 to send requests to the instance only if the health checks fail
for the ALB.
• D. Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct
traffic to the website if the health check passes. Direct traffic to a static error page that is hosted in Amazon S3 if
the health check does not pass.

Hide Answer
Suggested Answer: B

Community vote distribution


B (89%)
11%
by Bmaster at Aug. 1, 2023, 12:45 p.m.
ーーーーー

Question #: : 546

A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief
information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the
use of physical backup tapes. The company must preserve the existing investment in the on-premises backup
applications and workflows.

What should a solutions architect recommend?


• A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
• B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
• C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
• D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape
library (VTL) interface.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Bmaster at Aug. 1, 2023, 12:57 p.m.
ーーーーー

Question #: : 547

A company has data collection sensors at different locations. The data collection sensors stream a high volume of
data to the company. The company wants to design a platform on AWS to ingest and process high-volume
streaming data. The solution must be scalable and support data collection in near real time. The company must
store the data in Amazon S3 for future reporting.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
• B. Use AWS Glue to deliver streaming data to Amazon S3.
• C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
• D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.

Hide Answer
Suggested Answer: A

Community vote distribution


A (80%)
D (20%)
by Bmaster at Aug. 1, 2023, 1:03 p.m.
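
A brief sketch of the option A approach, assuming boto3; the delivery stream name, bucket ARN, and role ARN are illustrative only. Firehose handles buffering, batching, and delivery to S3, so there are no servers to manage.

    import boto3

    firehose = boto3.client("firehose")

    # Delivery stream that buffers sensor records and writes them to S3 (names are placeholders).
    firehose.create_delivery_stream(
        DeliveryStreamName="sensor-ingest",
        DeliveryStreamType="DirectPut",
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
            "BucketARN": "arn:aws:s3:::sensor-data-lake",
            "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 60},
        },
    )

    # Sensors (or a thin collector in front of them) push records; Firehose does the rest.
    firehose.put_record(
        DeliveryStreamName="sensor-ingest",
        Record={"Data": b'{"sensor_id": "s-001", "reading": 21.7}\n'},
    )
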
ーーーーー

Question #: : 548

A company has separate AWS accounts for its finance, data analytics, and development departments. Because of
costs and security concerns, the company wants to control which services each AWS account can use.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use AWS Systems Manager templates to control which AWS services each department can use.
• B. Create organizational units (OUs) for each department in AWS Organizations. Attach service control
policies (SCPs) to the OUs.
• C. Use AWS CloudFormation to automatically provision only the AWS services that each department
can use.
• D. Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the
usage of specific AWS services.

Hide Answer
Suggested Answer: B

Community vote distribution


B (89%)
11%
by Bmaster at Aug. 1, 2023, 1:04 p.m.
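
A short sketch of option B with boto3: an allow-list SCP attached to a department OU. The permitted services and the OU ID are made up for illustration; a real SCP would be tailored per department.

    import json
    import boto3

    org = boto3.client("organizations")

    # Allow-list SCP: anything not listed here is implicitly denied for accounts under the OU.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:*", "s3:*", "cloudwatch:*", "logs:*"],
            "Resource": "*",
        }],
    }

    policy = org.create_policy(
        Name="finance-allowed-services",
        Description="Services the finance department may use",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )

    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-abcd-11111111",  # placeholder OU ID for the finance department
    )
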
ーーーーー

Question #: : 549

A company has created a multi-tier application for its ecommerce website. The website uses an Application Load
Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on
Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing
information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy
that maximizes security without increasing operational overhead.

What should the solutions architect do to meet these requirements?


• A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
• B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all
internet-bound traffic to the NAT gateway.
• C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct
internet-bound traffic to the internet gateway.
• D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to
direct internet-bound traffic to the virtual private gateway.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Bmaster at Aug. 1, 2023, 1:10 p.m.
ーーーーー

Question #: : 550

A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment
variables. A solutions architect needs to ensure that the required permissions are in place to decrypt and use the
environment variables.

Which steps must the solutions architect take to implement the correct permissions? (Choose two.)
• A. Add AWS KMS permissions in the Lambda resource policy.
• B. Add AWS KMS permissions in the Lambda execution role.
• C. Add AWS KMS permissions in the Lambda function policy.
• D. Allow the Lambda execution role in the AWS KMS key policy.
• E. Allow the Lambda resource policy in the AWS KMS key policy.

Hide Answer
Suggested Answer: BD

Community vote distribution


BD (100%)
by Bmaster at Aug. 1, 2023, 1:11 p.m.
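
A condensed sketch of what options B and D amount to: KMS permissions granted in the Lambda execution role, and that execution role allowed in the KMS key policy. Role names, key IDs, and the account ID are placeholders, and the key policy below is shown as a full replacement only for brevity; in practice it would be merged with the existing key policy statements.

    import json
    import boto3

    iam = boto3.client("iam")
    kms = boto3.client("kms")

    KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/1234abcd-00aa-11bb-22cc-example"
    ROLE_ARN = "arn:aws:iam::123456789012:role/lambda-exec-role"

    # Option B: let the execution role call kms:Decrypt on the key.
    iam.put_role_policy(
        RoleName="lambda-exec-role",
        PolicyName="decrypt-env-vars",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow", "Action": "kms:Decrypt", "Resource": KEY_ARN}],
        }),
    )

    # Option D: the key policy must also allow that role to use the key.
    kms.put_key_policy(
        KeyId="1234abcd-00aa-11bb-22cc-example",
        PolicyName="default",
        Policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {"Sid": "AllowAccountAdmin", "Effect": "Allow",
                 "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
                 "Action": "kms:*", "Resource": "*"},
                {"Sid": "AllowLambdaRoleDecrypt", "Effect": "Allow",
                 "Principal": {"AWS": ROLE_ARN},
                 "Action": "kms:Decrypt", "Resource": "*"},
            ],
        }),
    )
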
ーーーーー

Question #: : 551
A company has a financial application that produces reports. The reports average 50 KB in size and are stored in
Amazon S3. The reports are frequently accessed during the first week after production and must be stored for
several years. The reports must be retrievable within 6 hours.

Which solution meets these requirements MOST cost-effectively?


• A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
• B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access
(S3 Standard-IA) after 7 days.
• C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-
Infrequent Access (S3 Standard-IA) and S3 Glacier.
• D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after
7 days.

Hide Answer
Suggested Answer: B

Community vote distribution


A (64%)
C (33%)
3%
by Josantru at July 31, 2023, 2:25 p.m.
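
If the Glacier transition route (option A, the community favorite) is chosen, a minimal boto3 sketch of the lifecycle rule looks like the following. The bucket name and prefix are placeholders; S3 Glacier Flexible Retrieval standard retrievals typically complete in roughly 3-5 hours, which fits the 6-hour requirement.

    import boto3

    s3 = boto3.client("s3")

    # Keep new reports in S3 Standard for the first week, then move them to Glacier.
    s3.put_bucket_lifecycle_configuration(
        Bucket="financial-reports",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "reports-to-glacier-after-7-days",
                "Filter": {"Prefix": "reports/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }],
        },
    )
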
ーーーーー

Question #: : 552

A company needs to optimize the cost of its Amazon EC2 instances. The company also needs to change the type
and family of its EC2 instances every 2-3 months.

What should the company do to meet these requirements?


• A. Purchase Partial Upfront Reserved Instances for a 3-year term.
• B. Purchase a No Upfront Compute Savings Plan for a 1-year term.
• C. Purchase All Upfront Reserved Instances for a 1-year term.
• D. Purchase an All Upfront EC2 Instance Savings Plan for a 1-year term.

Hide Answer
Suggested Answer: D

Community vote distribution


B (100%)
by Josantru at July 31, 2023, 2:28 p.m.
ーーーーー

Question #: : 553

A solutions architect needs to review a company's Amazon S3 buckets to discover personally identifiable
information (PII). The company stores the PII data in the us-east-1 Region and us-west-2 Region.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.
• B. Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that is
in Amazon S3.
• C. Configure Amazon Inspector to analyze the data that is in Amazon S3.
• D. Configure Amazon GuardDuty to analyze the data that is in Amazon S3.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Deepakin96 at Aug. 3, 2023, 11 a.m.
ーーーーー

Question #: : 554

A company's SAP application has a backend SQL Server database in an on-premises environment. The company
wants to migrate its on-premises application and database server to AWS. The company needs an instance type
that meets the high demands of its SAP database. On-premises performance data shows that both the SAP
application and the database have high memory utilization.

Which solution will meet these requirements?


• A. Use the compute optimized instance family for the application. Use the memory optimized instance
family for the database.
• B. Use the storage optimized instance family for both the application and the database.
• C. Use the memory optimized instance family for both the application and the database.
• D. Use the high performance computing (HPC) optimized instance family for the application. Use the
memory optimized instance family for the database.

Hide Answer
Suggested Answer: C
Community vote distribution
C (100%)
by mrsoa at Aug. 6, 2023, 1:25 a.m.
ーーーーー

Question #: : 555

A company runs an application in a VPC with public and private subnets. The VPC extends across multiple
Availability Zones. The application runs on Amazon EC2 instances in private subnets. The application uses an
Amazon Simple Queue Service (Amazon SQS) queue.

A solutions architect needs to design a secure solution to establish a connection between the EC2 instances and
the SQS queue.

Which solution will meet these requirements?


• A. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private
subnets. Add to the endpoint a security group that has an inbound access rule that allows traffic from the EC2
instances that are in the private subnets.
• B. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public
subnets. Attach to the interface endpoint a VPC endpoint policy that allows access from the EC2 instances that
are in the private subnets.
• C. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public
subnets. Attach an Amazon SQS access policy to the interface VPC endpoint that allows requests from only a
specified VPC endpoint.
• D. Implement a gateway endpoint for Amazon SQS. Add a NAT gateway to the private subnets. Attach
an IAM role to the EC2 instances that allows access to the SQS queue.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Bmaster at Aug. 1, 2023, 2:10 p.m.
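
A minimal sketch of option A with boto3: a security group that only accepts HTTPS from the application instances, and an interface endpoint for SQS placed in the private subnets. All VPC, subnet, and security group IDs are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Security group for the endpoint network interfaces: allow HTTPS only from the
    # application instances' security group.
    endpoint_sg = ec2.create_security_group(
        GroupName="sqs-endpoint-sg",
        Description="Allow HTTPS from app instances to the SQS interface endpoint",
        VpcId="vpc-0abc1234",
    )["GroupId"]

    ec2.authorize_security_group_ingress(
        GroupId=endpoint_sg,
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
            "UserIdGroupPairs": [{"GroupId": "sg-0appinstances"}],
        }],
    )

    # Interface endpoint for SQS in the private subnets.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.sqs",
        SubnetIds=["subnet-0private-a", "subnet-0private-b"],
        SecurityGroupIds=[endpoint_sg],
        PrivateDnsEnabled=True,
    )
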

556-599

Question #: : 556

A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web
application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB
tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly
accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials
in the template.

What should the solutions architect do to meet these requirements?


• A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances
by referencing an instance profile.
• B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables.
Add the role to the EC2 instance profile, and associate the instance profile with the application instances.
• C. Use the parameter section in the AWS CloudFormation template to have the user input access and
secret keys from an already-created IAM user that has the required permissions to read and write from the
DynamoDB tables.
• D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read
and write from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys, and pass
them to the application instances through the user data.

Hide Answer
Suggested Answer: B

Community vote distribution


B (85%)
A (15%)
by kangho at Aug. 5, 2023, 6:41 p.m.
ーーー

Question #: : 557

A solutions architect manages an analytics application. The application stores large amounts of semistructured
data in an Amazon S3 bucket. The solutions architect wants to use parallel data processing to process the data
more quickly. The solutions architect also wants to use information that is stored in an Amazon Redshift database
to enrich the data.

Which solution will meet these requirements?


• A. Use Amazon Athena to process the S3 data. Use AWS Glue with the Amazon Redshift data to enrich
the S3 data.
• B. Use Amazon EMR to process the S3 data. Use Amazon EMR with the Amazon Redshift data to enrich
the S3 data.
• C. Use Amazon EMR to process the S3 data. Use Amazon Kinesis Data Streams to move the S3 data into
Amazon Redshift so that the data can be enriched.
• D. Use AWS Glue to process the S3 data. Use AWS Lake Formation with the Amazon Redshift data to
enrich the S3 data.

Hide Answer
Suggested Answer: D

Community vote distribution


B (63%)
A (38%)
by zjcorpuz at Aug. 4, 2023, 3:30 p.m.

Question #: : 558

A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company
needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the
VPCs each month.

What is the MOST cost-effective solution to connect these VPCs?


• A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use
the transit gateway for inter-VPC communication.
• B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC
to use the VPN tunnel for inter-VPC communication.
• C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the
VPC peering connection for inter-VPC communication.
• D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each
VPC to use the Direct Connect connection for inter-VPC communication.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by luiscc at Aug. 2, 2023, 7:17 a.m.

ーーー

Question #: : 559

A company hosts multiple applications on AWS for different product lines. The applications use different compute
resources, including Amazon EC2 instances and Application Load Balancers. The applications run in different
AWS accounts under the same organization in AWS Organizations across multiple AWS Regions. Teams for each
product line have tagged each compute resource in the individual accounts.

The company wants more details about the cost for each product line from the consolidated billing feature in
Organizations.

Which combination of steps will meet these requirements? (Choose two.)


• A. Select a specific AWS generated tag in the AWS Billing console.
• B. Select a specific user-defined tag in the AWS Billing console.
• C. Select a specific user-defined tag in the AWS Resource Groups console.
• D. Activate the selected tag from each AWS account.
• E. Activate the selected tag from the Organizations management account.

Hide Answer
Suggested Answer: BE

Community vote distribution


BE (100%)
by Kiki_Pass at Aug. 5, 2023, 11:33 a.m.
ーーー

Question #: : 560

A company's solutions architect is designing an AWS multi-account solution that uses AWS Organizations. The
solutions architect has organized the company's accounts into organizational units (OUs).

The solutions architect needs a solution that will identify any changes to the OU hierarchy. The solution also
needs to notify the company's operations team of any changes.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify
the changes to the OU hierarchy.
• B. Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to
identify the changes to the OU hierarchy.
• C. Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail organization
trail to identify the changes to the OU hierarchy.
• D. Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection
operation on a stack to identify the changes to the OU hierarchy.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Bmaster at Aug. 2, 2023, 2:17 a.m.
ーーー

Question #: : 561

A company's website handles millions of requests each day, and the number of requests continues to increase. A
solutions architect needs to improve the response time of the web application. The solutions architect determines
that the application needs to decrease latency when retrieving product details from the Amazon DynamoDB table.

Which solution will meet these requirements with the LEAST amount of operational overhead?
• A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
• B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route
all read requests through Redis.
• C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application.
Route all read requests through Memcached.
• D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and
populate Amazon ElastiCache. Route all read requests through ElastiCache.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Bmaster at Aug. 2, 2023, 2:19 a.m.
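
A short sketch of provisioning the DAX cluster from option A with boto3; the cluster name, role ARN, and subnet group are placeholders. Application reads then go through the cluster endpoint (using the DAX SDK client) instead of the regular DynamoDB endpoint.

    import boto3

    dax = boto3.client("dax")

    # Three-node DAX cluster in front of the product-details table.
    dax.create_cluster(
        ClusterName="product-details-cache",
        NodeType="dax.r5.large",
        ReplicationFactor=3,
        IamRoleArn="arn:aws:iam::123456789012:role/dax-access-to-dynamodb",
        SubnetGroupName="private-subnets",
    )
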

ーーー

Question #: : 562

A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC
do not travel across the internet.

Which combination of steps should the solutions architect take to meet this requirement? (Choose two.)
• A. Create a route table entry for the endpoint.
• B. Create a gateway endpoint for DynamoDB.
• C. Create an interface endpoint for Amazon EC2.
• D. Create an elastic network interface for the endpoint in each of the subnets of the VPC.
• E. Create a security group entry in the endpoint's security group to provide access.

Hide Answer
Suggested Answer: AB

Community vote distribution


AB (67%)
BE (29%)
4%
by Soei at Aug. 3, 2023, 3:42 p.m.
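
A minimal sketch covering options A and B together, assuming boto3 and placeholder IDs: when the gateway endpoint is created with the private route table IDs, the endpoint routes are added for you, keeping DynamoDB traffic on the AWS network.

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoint for DynamoDB; passing RouteTableIds adds the endpoint route entries.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0private-a", "rtb-0private-b"],
    )
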

ーーー

Question #: : 563

A company runs its applications on both Amazon Elastic Kubernetes Service (Amazon EKS) clusters and on-
premises Kubernetes clusters. The company wants to view all clusters and workloads from a central location.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon CloudWatch Container Insights to collect and group the cluster information.
• B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.
• C. Use AWS Systems Manager to collect and view the cluster information.
• D. Use Amazon EKS Anywhere as the primary cluster to view the other clusters with native Kubernetes
commands.

Hide Answer
Suggested Answer: B

Community vote distribution


B (89%)
11%
by Bmaster at Aug. 2, 2023, 2:24 a.m.
ーーー

Question #: : 564

A company is building an ecommerce application and needs to store sensitive customer information. The company
needs to give customers the ability to complete purchase transactions on the website. The company also needs to
ensure that sensitive customer data is protected, even from database administrators.

Which solution meets these requirements?


• A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS encryption to
encrypt the data. Use an IAM instance role to restrict access.
• B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS KMS)
client-side encryption to encrypt the data.
• C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side
encryption to encrypt the data. Use S3 bucket policies to restrict access.
• D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application servers.
Use Windows file permissions to restrict access.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Bmaster at Aug. 2, 2023, 2:27 a.m.
ーーー

Question #: : 565

A company has an on-premises MySQL database that handles transactional data. The company is migrating the
database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications
that use the database. The migrated database also must scale automatically during periods of increased demand.

Which migration solution will meet these requirements?


• A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic
storage scaling.
• B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for
the Amazon Redshift cluster.
• C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn
on Aurora Auto Scaling.
• D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB.
Configure an Auto Scaling policy.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Bmaster at Aug. 2, 2023, 2:39 a.m.
ーーー

Question #: : 566

A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances
host applications that use a hierarchical directory structure. The applications need to read and write rapidly and
concurrently to shared storage.

What should a solutions architect do to meet these requirements?


• A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
• B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from
each EC2 instance.
• C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS)
volume. Attach the EBS volume to all the EC2 instances.
• D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each
EC2 instance. Synchronize the EBS volumes across the different EC2 instances.

Hide Answer
Suggested Answer: A

Community vote distribution


B (92%)
8%
by Josantru at July 31, 2023, 2:45 p.m.

ーーー

Question #: : 567

A solutions architect is designing a workload that will store hourly energy consumption by business tenants in a
building. The sensors will feed a database through HTTP requests that will add up usage for each tenant. The
solutions architect must use managed services when possible. The workload will receive more features in the future
as the solutions architect adds independent components.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process
the data, and store the data in an Amazon DynamoDB table.
• B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to
receive and process the data from the sensors. Use an Amazon S3 bucket to store the processed data.
• C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process
the data, and store the data in a Microsoft SQL Server Express database on an Amazon EC2 instance.
• D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances
to receive and process the data from the sensors. Use an Amazon Elastic File System (Amazon EFS) shared file
system to store the processed data.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Bmaster at Aug. 2, 2023, 2:48 a.m.
ーーー

Question #: : 568

A solutions architect is designing the storage architecture for a new web application used for storing and viewing
engineering drawings. All application components will be deployed on the AWS infrastructure.

The application design must support caching to minimize the amount of time that users wait for the engineering
drawings to load. The application must be able to store petabytes of data.

Which combination of storage and caching should the solutions architect use?
• A. Amazon S3 with Amazon CloudFront
• B. Amazon S3 Glacier with Amazon ElastiCache
• C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
• D. AWS Storage Gateway with Amazon ElastiCache

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Bmaster at Aug. 2, 2023, 2:48 a.m.

Question #: : 569

An Amazon EventBridge rule targets a third-party API. The third-party API has not received any incoming traffic.
A solutions architect needs to determine whether the rule conditions are being met and if the rule's target is being
invoked.

Which solution will meet these requirements?


• A. Check for metrics in Amazon CloudWatch in the namespace for AWS/Events.
• B. Review events in the Amazon Simple Queue Service (Amazon SQS) dead-letter queue.
• C. Check for the events in Amazon CloudWatch Logs.
• D. Check the trails in AWS CloudTrail for the EventBridge events.

Hide Answer
Suggested Answer: A

Community vote distribution


A (63%)
D (22%)
C (15%)
by Jayhawk_Kim at Aug. 5, 2023, 6:59 a.m.
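
For option A, a brief boto3 sketch of pulling the relevant AWS/Events metrics: TriggeredRules shows whether the rule conditions matched any events, while Invocations and FailedInvocations show whether the target was actually called. The rule name is a placeholder.

    import datetime
    import boto3

    cloudwatch = boto3.client("cloudwatch")
    now = datetime.datetime.utcnow()

    for metric in ("TriggeredRules", "Invocations", "FailedInvocations"):
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/Events",
            MetricName=metric,
            Dimensions=[{"Name": "RuleName", "Value": "third-party-api-rule"}],
            StartTime=now - datetime.timedelta(hours=24),
            EndTime=now,
            Period=3600,
            Statistics=["Sum"],
        )
        total = sum(point["Sum"] for point in stats["Datapoints"])
        print(metric, total)
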

ーーー

Question #: : 570

A company has a large workload that runs every Friday evening. The workload runs on Amazon EC2 instances
that are in two Availability Zones in the us-east-1 Region. Normally, the company must run no more than two
instances at all times. However, the company wants to scale up to six instances each Friday to handle a regularly
repeating increased workload.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create a reminder in Amazon EventBridge to scale the instances.
• B. Create an Auto Scaling group that has a scheduled action.
• C. Create an Auto Scaling group that uses manual scaling.
• D. Create an Auto Scaling group that uses automatic scaling.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Josantru at July 31, 2023, 2:49 p.m.
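
For the scheduled-action approach in option B (the community choice), a minimal boto3 sketch: one action scales the group to six instances on Friday evening, another returns it to two afterwards. The group name and exact times are placeholders; Recurrence is a cron expression evaluated in UTC.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale up to six instances every Friday at 18:00 UTC.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="friday-workload-asg",
        ScheduledActionName="friday-scale-up",
        Recurrence="0 18 * * 5",
        MinSize=6, MaxSize=6, DesiredCapacity=6,
    )

    # Scale back down to two instances on Saturday morning.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="friday-workload-asg",
        ScheduledActionName="saturday-scale-down",
        Recurrence="0 6 * * 6",
        MinSize=2, MaxSize=2, DesiredCapacity=2,
    )
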
Question #: : 571

A company is creating a REST API. The company has strict requirements for the use of TLS. The company
requires TLSv1.3 on the API endpoints. The company also requires a specific public third-party certificate
authority (CA) to sign the TLS certificate.

Which solution will meet these requirements?


• A. Use a local machine to create a certificate that is signed by the third-party CA. Import the certificate into
AWS Certificate Manager (ACM). Create an HTTP API in Amazon API Gateway with a custom domain.
Configure the custom domain to use the certificate.
• B. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create
an HTTP API in Amazon API Gateway with a custom domain. Configure the custom domain to use the certificate.
• C. Use AWS Certificate Manager (ACM) to create a certificate that is signed by the third-party CA.
Import the certificate into AWS Certificate Manager (ACM). Create an AWS Lambda function with a Lambda
function URL. Configure the Lambda function URL to use the certificate.
• D. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create
an AWS Lambda function with a Lambda function URL. Configure the Lambda function URL to use the certificate.

Hide Answer
Suggested Answer: A

Community vote distribution


A (63%)
B (37%)
by Josantru at July 31, 2023, 3:08 p.m.

ーーー

Question #: : 572

A company runs an application on AWS. The application receives inconsistent amounts of usage. The application
uses AWS Direct Connect to connect to an on-premises MySQL-compatible database. The on-premises database
consistently uses a minimum of 2 GiB of memory.

The company wants to migrate the on-premises database to a managed AWS service. The company wants to use
auto scaling capabilities to manage unexpected workload increases.

Which solution will meet these requirements with the LEAST administrative overhead?
• A. Provision an Amazon DynamoDB database with default read and write capacity settings.
• B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
• C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity
unit (ACU).
• D. Provision an Amazon RDS for MySQL database with 2 GiB of memory.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Bmaster at Aug. 2, 2023, 3:09 a.m.

Question #: : 573

A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce
startup latency for Lambda functions that run on Java 11. The company does not have strict latency requirements
for the applications. The company wants to reduce cold starts and outlier latencies when a function scales up.

Which solution will meet these requirements MOST cost-effectively?


• A. Configure Lambda provisioned concurrency.
• B. Increase the timeout of the Lambda functions.
• C. Increase the memory of the Lambda functions.
• D. Configure Lambda SnapStart.

Hide Answer
Suggested Answer: C

Community vote distribution


D (100%)
by RaksAWS at July 31, 2023, 6:41 p.m.
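
For the SnapStart route in option D (the community choice), a small boto3 sketch; the function name is a placeholder. SnapStart is enabled on the function configuration and takes effect on published versions (and aliases that point to them), which is why a version is published at the end.

    import boto3

    lambda_client = boto3.client("lambda")

    # Enable SnapStart for the Java 11 function.
    lambda_client.update_function_configuration(
        FunctionName="orders-processor",
        SnapStart={"ApplyOn": "PublishedVersions"},
    )

    # Wait for the configuration update to finish, then publish a version that uses SnapStart.
    lambda_client.get_waiter("function_updated_v2").wait(FunctionName="orders-processor")
    lambda_client.publish_version(FunctionName="orders-processor", Description="SnapStart enabled")
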

Question #: : 574

A financial services company launched a new application that uses an Amazon RDS for MySQL database. The
company uses the application to track stock market trends. The company needs to operate the application for only
2 hours at the end of each week. The company needs to optimize the cost of running the database.

Which solution will meet these requirements MOST cost-effectively?


• A. Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database cluster.
• B. Migrate the existing RDS for MySQL database to an Aurora MySQL database cluster.
• C. Migrate the existing RDS for MySQL database to an Amazon EC2 instance that runs MySQL.
Purchase an instance reservation for the EC2 instance.
• D. Migrate the existing RDS for MySQL database to an Amazon Elastic Container Service (Amazon
ECS) cluster that uses MySQL container images to run tasks.

Hide Answer
Suggested Answer: A

Community vote distribution


A (85%)
B (15%)
by mrsoa at Aug. 3, 2023, 7:55 p.m.

Question #: : 575

A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application
Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL database engine. The
company wants the data in the database to be highly available. The company also needs increased capacity for
read workloads.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Create an Amazon DynamoDB database table configured with global tables.
• B. Create an Amazon RDS database with Multi-AZ deployments.
• C. Create an Amazon RDS database with Multi-AZ DB cluster deployment.
• D. Create an Amazon RDS database configured with cross-Region read replicas.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by luiscc at Aug. 1, 2023, 10:41 a.m.

Question #: : 576

A company is building a RESTful serverless web application on AWS by using Amazon API Gateway and AWS
Lambda. The users of this web application will be geographically distributed, and the company wants to reduce
the latency of API requests to these users.

Which type of endpoint should a solutions architect use to meet these requirements?
• A. Private endpoint
• B. Regional endpoint
• C. Interface VPC endpoint
• D. Edge-optimized endpoint

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Josantru at July 31, 2023, 3:14 p.m.

ーーー

Question #: : 577

A company uses an Amazon CloudFront distribution to serve content pages for its website. The company needs
to ensure that clients use a TLS certificate when accessing the company's website. The company wants to automate
the creation and renewal of the TLS certificates.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Use a CloudFront security policy to create a certificate.
• B. Use a CloudFront origin access control (OAC) to create a certificate.
• C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
• D. Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.

Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by Bmaster at Aug. 2, 2023, 4:27 a.m.
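
For the ACM DNS-validation route in option C (the community choice), a brief boto3 sketch; the domain names are placeholders. Once the returned CNAME record is published in DNS (for example in Route 53), ACM validates the domain and renews the certificate automatically for as long as the record remains in place. A certificate used with CloudFront must be requested in us-east-1.

    import boto3

    acm = boto3.client("acm", region_name="us-east-1")

    cert_arn = acm.request_certificate(
        DomainName="www.example.com",
        ValidationMethod="DNS",
        SubjectAlternativeNames=["example.com"],
    )["CertificateArn"]

    # Print the CNAME record ACM wants to see in DNS (it can take a few seconds to appear).
    details = acm.describe_certificate(CertificateArn=cert_arn)
    for option in details["Certificate"]["DomainValidationOptions"]:
        record = option.get("ResourceRecord")
        if record:
            print(record["Name"], record["Type"], record["Value"])
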



ーーー

Question #: : 578

A company deployed a serverless application that uses Amazon DynamoDB as a database layer. The application
has experienced a large increase in users. The company wants to improve database response time from
milliseconds to microseconds and to cache requests to the database.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use DynamoDB Accelerator (DAX).
• B. Migrate the database to Amazon Redshift.
• C. Migrate the database to Amazon RDS.
• D. Use Amazon ElastiCache for Redis.

Hide Answer
Suggested Answer: A

Community vote distribution


A (92%)
8%
by Bmaster at Aug. 2, 2023, 4:31 a.m.
ーーー

Question #: : 579

A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on
weekdays during business hours. The company wants to optimize costs and reduce operational overhead based on
this usage.

Which solution will meet these requirements?


• A. Use the Instance Scheduler on AWS to configure start and stop schedules.
• B. Turn off automatic backups. Create weekly manual snapshots of the database.
• C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU
utilization.
• D. Purchase All Upfront reserved DB instances.

Hide Answer
Suggested Answer: C

Community vote distribution


A (94%)
6%
by luiscc at July 31, 2023, 6:40 p.m.

Question #: : 580
A company uses locally attached storage to run a latency-sensitive application on premises. The company is using
a lift and shift method to move the application to the AWS Cloud. The company does not want to change the
application architecture.

Which solution will meet these requirements MOST cost-effectively?


• A. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for Lustre file
system to run the application.
• B. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS)
GP2 volume to run the application.
• C. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for OpenZFS
file system to run the application.
• D. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS)
GP3 volume to run the application.

Hide Answer
Suggested Answer: B

Community vote distribution


D (100%)
by Ale1973 at Aug. 9, 2023, 1:55 p.m.

Question #: : 581

A company runs a stateful production application on Amazon EC2 instances. The application requires at least two
EC2 instances to always be running.

A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The
solutions architect creates an Auto Scaling group of EC2 instances.

Which set of additional steps should the solutions architect take to meet these requirements?
• A. Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one
Availability Zone and one On-Demand Instance in a second Availability Zone.
• B. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two On-Demand Instances in a second Availability Zone.
• C. Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one Availability
Zone.
• D. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one
Availability Zone and two Spot Instances in a second Availability Zone.

Hide Answer
Suggested Answer: D

Community vote distribution


B (71%)
A (29%)
by luiscc at Aug. 1, 2023, 10:32 a.m.

Question #: : 582

An ecommerce company uses Amazon Route 53 as its DNS provider. The company hosts its website on premises
and in the AWS Cloud. The company's on-premises data center is near the us-west-1 Region. The company uses
the eu-central-1 Region to host the website. The company wants to minimize load time for the website as much
as possible.

Which solution will meet these requirements?


• A. Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data
center. Send the traffic that is near eu-central-1 to eu-central-1.
• B. Set up a simple routing policy that routes all traffic that is near eu-central-1 to eu-central-1 and routes
all traffic that is near the on-premises datacenter to the on-premises data center.
• C. Set up a latency routing policy. Associate the policy with us-west-1.
• D. Set up a weighted routing policy. Split the traffic evenly between eu-central-1 and the on-premises
data center.

Hide Answer
Suggested Answer: A

Community vote distribution


A (80%)
C (20%)
by Guru4Cloud at Aug. 21, 2023, 1:55 p.m.

Question #: : 583

A company has 5 PB of archived data on physical tapes. The company needs to preserve the data on the tapes for
another 10 years for compliance purposes. The company wants to migrate to AWS in the next 6 months. The data
center that stores the tapes has a 1 Gbps uplink internet connectivity.

Which solution will meet these requirements MOST cost-effectively?


• A. Read the data from the tapes on premises. Stage the data in a local NFS storage. Use AWS DataSync
to migrate the data to Amazon S3 Glacier Flexible Retrieval.
• B. Use an on-premises backup application to read the data from the tapes and to write directly to Amazon
S3 Glacier Deep Archive.
• C. Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual
tapes in Snowball. Ship the Snowball devices to AWS. Create a lifecycle policy to move the tapes to Amazon S3
Glacier Deep Archive.
• D. Configure an on-premises Tape Gateway. Create virtual tapes in the AWS Cloud. Use backup
software to copy the physical tape to the virtual tape.

Hide Answer
Suggested Answer: C

Community vote distribution


C (96%)
4%
by Deepakin96 at Aug. 3, 2023, 11:46 a.m.

Question #: : 584

A company is deploying an application that processes large quantities of data in parallel. The company plans to
use Amazon EC2 instances for the workload. The network architecture must be configurable to prevent groups of
nodes from sharing the same underlying hardware.

Which networking solution meets these requirements?


• A. Run the EC2 instances in a spread placement group.
• B. Group the EC2 instances in separate accounts.
• C. Configure the EC2 instances with dedicated tenancy.
• D. Configure the EC2 instances with shared tenancy.

Hide Answer
Suggested Answer: A

Community vote distribution


A (75%)
C (25%)
by czyboi at Aug. 31, 2023, 12:19 a.m.

Question #: : 585

A solutions architect is designing a disaster recovery (DR) strategy to provide Amazon EC2 capacity in a failover
AWS Region. Business requirements state that the DR strategy must meet capacity in the failover Region.

Which solution will meet these requirements?


• A. Purchase On-Demand Instances in the failover Region.
• B. Purchase an EC2 Savings Plan in the failover Region.
• C. Purchase regional Reserved Instances in the failover Region.
• D. Purchase a Capacity Reservation in the failover Region.

Hide Answer
Suggested Answer: C

Community vote distribution


D (90%)
10%
by ErnShm at Sept. 1, 2023, 12:32 p.m.

Question #: : 586

A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU
correlates to the five businesses that the company owns. The company's research and development (R&D)
business is separating from the company and will need its own organization. A solutions architect creates a
separate new management account for this purpose.

What should the solutions architect do next in the new management account?
• A. Have the R&D AWS account be part of both organizations during the transition.
• B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left
the prior organization.
• C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D
AWS account to the new R&D AWS account.
• D. Have the R&D AWS account join the new organization. Make the new management account a
member of the prior organization.
Hide Answer
Suggested Answer: C

Community vote distribution


B (76%)
C (24%)
by gispankaj at Sept. 1, 2023, 12:43 p.m.

Question #: : 587

A company is designing a solution to capture customer activity in different web applications to process analytics
and make predictions. Customer activity in the web applications is unpredictable and can increase suddenly. The
company requires a solution that integrates with other web applications. The solution must include an
authorization step for security purposes.

Which solution will meet these requirements?


• A. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service
(Amazon ECS) container instance that stores the information that the company receives in an Amazon Elastic
File System (Amazon EFS) file system. Authorization is resolved at the GWLB.
• B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores
the information that the company receives in an Amazon S3 bucket. Use an AWS Lambda function to resolve
authorization.
• C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores
the information that the company receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to
resolve authorization.
• D. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service
(Amazon ECS) container instance that stores the information that the company receives on an Amazon Elastic
File System (Amazon EFS) file system. Use an AWS Lambda function to resolve authorization.

Hide Answer
Suggested Answer: D

Community vote distribution


C (91%)
9%
by ralfj at Aug. 31, 2023, 5:05 p.m.

Question #: : 588
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft
SQL Server Enterprise Edition. The company's current recovery point objective (RPO) and recovery time
objective (RTO) are 24 hours.

Which solution will meet these requirements MOST cost-effectively?


• A. Create a cross-Region read replica and promote the read replica to the primary instance.
• B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
• C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
• D. Copy automatic snapshots to another Region every 24 hours.

Hide Answer
Suggested Answer: B

Community vote distribution


D (100%)
by TiagueteVital at Sept. 3, 2023, 12:06 a.m.

Question #: : 589

A company runs a web application on Amazon EC2 instances in an Auto Scaling group behind an Application
Load Balancer that has sticky sessions enabled. The web server currently hosts the user session state. The company
wants to ensure high availability and avoid user session state loss in the event of a web server outage.

Which solution will meet these requirements?


• A. Use an Amazon ElastiCache for Memcached instance to store the session data. Update the application
to use ElastiCache for Memcached to store the session state.
• B. Use Amazon ElastiCache for Redis to store the session state. Update the application to use ElastiCache
for Redis to store the session state.
• C. Use an AWS Storage Gateway cached volume to store session data. Update the application to use
AWS Storage Gateway cached volume to store the session state.
• D. Use Amazon RDS to store the session state. Update the application to use Amazon RDS to store the
session state.

Hide Answer
Suggested Answer: D

Community vote distribution


B (89%)
11%
by czyboi at Aug. 31, 2023, 12:39 a.m.

Question #: : 590

A company migrated a MySQL database from the company's on-premises data center to an Amazon RDS for
MySQL DB instance. The company sized the RDS DB instance to meet the company's average daily workload.
Once a month, the database performs slowly when the company runs queries for a report. The company wants to
have the ability to run reports and maintain the performance of the daily workloads.

Which solution will meet these requirements?


• A. Create a read replica of the database. Direct the queries to the read replica.
• B. Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the
new database.
• C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
• D. Resize the DB instance to accommodate the additional workload.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
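
For option A, a minimal boto3 sketch of creating the reporting replica; the instance identifiers and class are placeholders. The monthly report job then connects to the replica's endpoint instead of the primary, so the daily workload is unaffected.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-mysql-reporting-replica",
        SourceDBInstanceIdentifier="app-mysql-primary",
        DBInstanceClass="db.r6g.large",
    )
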

Question #: : 591

A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon EKS). The
application includes microservices that manage customers and place orders. The company needs to route
incoming requests to the appropriate microservices.

Which solution will meet this requirement MOST cost-effectively?


• A. Use the AWS Load Balancer Controller to provision a Network Load Balancer.
• B. Use the AWS Load Balancer Controller to provision an Application Load Balancer.
• C. Use an AWS Lambda function to connect the requests to Amazon EKS.
• D. Use Amazon API Gateway to connect the requests to Amazon EKS.

Hide Answer
Suggested Answer: C
Community vote distribution
B (64%)
D (36%)
by ralfj at Aug. 31, 2023, 4:25 p.m.

Question #: : 592

A company uses AWS and sells access to copyrighted images. The company’s global customer base needs to be
able to access these images quickly. The company must deny access to users from specific countries. The company
wants to minimize costs as much as possible.

Which solution will meet these requirements?


• A. Use Amazon S3 to store the images. Turn on multi-factor authentication (MFA) and public bucket
access. Provide customers with a link to the S3 bucket.
• B. Use Amazon S3 to store the images. Create an IAM user for each customer. Add the users to a group
that has permission to access the S3 bucket.
• C. Use Amazon EC2 instances that are behind Application Load Balancers (ALBs) to store the images.
Deploy the instances only in the countries the company services. Provide customers with links to the ALBs for
their specific country's instances.
• D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with geographic
restrictions. Provide a signed URL for each customer to access the data in CloudFront.

Hide Answer
Suggested Answer: C

Community vote distribution


D (100%)
by ralfj at Aug. 31, 2023, 4:20 p.m.

Question #: : 593

A solutions architect is designing a highly available Amazon ElastiCache for Redis based solution. The solutions
architect needs to ensure that failures do not result in performance degradation or loss of data locally and within
an AWS Region. The solution needs to provide high availability at the node level and at the Region level.

Which solution will meet these requirements?


• A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
• B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
• C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
• D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.

Hide Answer
Suggested Answer: A

Community vote distribution


A (67%)
C (19%)
14%
by ralfj at Aug. 31, 2023, 4:18 p.m.

Question #: : 594

A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its application. During the
migration testing phase, a technical team observes that the application takes a long time to launch and load
memory to become fully productive.

Which solution will reduce the launch time of the application during the next testing phase?
• A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2
On-Demand Instances available during the next testing phase.
• B. Launch EC2 Spot Instances to support the application and to scale the application so it is available
during the next testing phase.
• C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling
warm pools during the next testing phase.
• D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances
during the next testing phase.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by ralfj at Aug. 31, 2023, 4:10 p.m.
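
A sketch of option C with boto3, using placeholder names and IDs: hibernation is turned on in the launch template (it also requires an encrypted root volume and a supported instance type/AMI), and a warm pool of hibernated instances is attached to the Auto Scaling group so scale-out resumes instances with memory already loaded instead of booting cold.

    import boto3

    ec2 = boto3.client("ec2")
    autoscaling = boto3.client("autoscaling")

    # Launch template with hibernation enabled and an encrypted root volume.
    ec2.create_launch_template(
        LaunchTemplateName="app-hibernate",
        LaunchTemplateData={
            "ImageId": "ami-0123456789abcdef0",
            "InstanceType": "m5.xlarge",
            "HibernationOptions": {"Configured": True},
            "BlockDeviceMappings": [{
                "DeviceName": "/dev/xvda",
                "Ebs": {"VolumeSize": 100, "Encrypted": True},
            }],
        },
    )

    # Warm pool of pre-initialized, hibernated instances for the Auto Scaling group.
    autoscaling.put_warm_pool(
        AutoScalingGroupName="app-asg",
        PoolState="Hibernated",
        MinSize=2,
    )
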

Question #: : 595
A company's applications run on Amazon EC2 instances in Auto Scaling groups. The company notices that its
applications experience sudden traffic increases on random days of the week. The company wants to maintain
application performance during sudden traffic increases.

Which solution will meet these requirements MOST cost-effectively?


• A. Use manual scaling to change the size of the Auto Scaling group.
• B. Use predictive scaling to change the size of the Auto Scaling group.
• C. Use dynamic scaling to change the size of the Auto Scaling group.
• D. Use scheduled scaling to change the size of the Auto Scaling group.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by ralfj at Aug. 31, 2023, 4:03 p.m.

Question #: : 596

An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly
sales event, database usage increases and causes database connection issues for the application. The traffic is
unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to
maintain performance when there is an unpredictable increase in traffic.

Which solution resolves this issue in the MOST cost-effective way?


• A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
• B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased
usage.
• C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
• D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.

Hide Answer
Suggested Answer: C

Community vote distribution


A (91%)
9%
by anikety123 at Aug. 31, 2023, 7:33 p.m.
Question #: : 597

A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda.
The company’s employees report issues with high latency when they begin using the application each day. The
company wants to reduce latency.

Which solution will meet these requirements?


• A. Increase the API Gateway throttling limit.
• B. Set up a scheduled scaling to increase Lambda provisioned concurrency before employees begin to
use the application each day.
• C. Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at the
beginning of each day.
• D. Increase the Lambda function memory.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by oayoade at Aug. 30, 2023, 7:06 p.m.
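
For option B, a brief sketch using the Application Auto Scaling API via boto3; the function name, alias, capacities, and schedule are placeholders. Provisioned concurrency for the alias is registered as a scalable target, and a scheduled action raises it shortly before employees start work each weekday.

    import boto3

    appscaling = boto3.client("application-autoscaling")

    resource_id = "function:internal-app:live"           # function:<name>:<alias>
    dimension = "lambda:function:ProvisionedConcurrency"

    appscaling.register_scalable_target(
        ServiceNamespace="lambda",
        ResourceId=resource_id,
        ScalableDimension=dimension,
        MinCapacity=1,
        MaxCapacity=100,
    )

    appscaling.put_scheduled_action(
        ServiceNamespace="lambda",
        ScheduledActionName="warm-before-business-hours",
        ResourceId=resource_id,
        ScalableDimension=dimension,
        Schedule="cron(0 8 ? * MON-FRI *)",   # 08:00 UTC on weekdays
        ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 100},
    )
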

Question #: : 598

A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS
Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share.
Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically
throughout the day.

Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
• A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
• B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
• C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
• D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon
S3. Provide access to analysts.
• E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
• F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
Hide Answer
Suggested Answer: CEF

Community vote distribution


ACF (95%)
5%
by ralfj at Aug. 31, 2023, 3:44 p.m.
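
For options C and F (with the .csv files landing in S3 through the File Gateway from option A), a minimal boto3 sketch; the bucket, role, database, and table names are placeholders. The Glue crawler builds the table definition from the files, and analysts then query it with standard SQL through Athena.

    import boto3

    glue = boto3.client("glue")
    athena = boto3.client("athena")

    # Crawler over the prefix the S3 File Gateway writes the .csv files into.
    glue.create_crawler(
        Name="sensor-csv-crawler",
        Role="arn:aws:iam::123456789012:role/glue-crawler-role",
        DatabaseName="sensor_data",
        Targets={"S3Targets": [{"Path": "s3://device-csv-landing/incoming/"}]},
    )
    glue.start_crawler(Name="sensor-csv-crawler")

    # Analysts run periodic SQL queries against the crawled table.
    athena.start_query_execution(
        QueryString="SELECT device_id, AVG(reading) FROM measurements GROUP BY device_id",
        QueryExecutionContext={"Database": "sensor_data"},
        ResultConfiguration={"OutputLocation": "s3://device-csv-landing/athena-results/"},
    )
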

Question #: : 599

A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB
instances to build and run a payment processing application. The company will run the application in its on-
premises data center for compliance purposes.

A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with
the company's operational team to build the application.

Which activities are the responsibility of the company's operational team? (Choose three.)
• A. Providing resilient power and network connectivity to the Outposts racks
• B. Managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts
• C. Physical security and access controls of the data center environment
• D. Availability of the Outposts infrastructure including the power supplies, servers, and networking
equipment within the Outposts racks
• E. Physical maintenance of Outposts components
• F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events

Hide Answer
Suggested Answer: ACE

Community vote distribution


ACF (46%)
ACE (26%)
ACD (25%)
2%
by SOMEONE1675 at Aug. 31, 2023, 9:25 a.m.

600-620

Question #: : 600
A company is planning to migrate a TCP-based application into the company's VPC. The application is publicly
accessible on a nonstandard TCP port through a hardware appliance in the company's data center. This public
endpoint can process up to 3 million requests per second with low latency. The company requires the same level
of performance for the new public endpoint in AWS.

What should a solutions architect recommend to meet this requirement?


• A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP
port that the application requires.
• B. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the
TCP port that the application requires.
• C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires.
Use an Application Load Balancer as the origin.
• D. Deploy an Amazon API Gateway API that is configured with the TCP port that the application
requires. Configure AWS Lambda functions with provisioned concurrency to process the requests.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by taustin2 at Sept. 22, 2023, 8:56 p.m.

Question #: : 601

A company runs its critical database on an Amazon RDS for PostgreSQL DB instance. The company wants to
migrate to Amazon Aurora PostgreSQL with minimal downtime and data loss.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create a DB snapshot of the RDS for PostgreSQL DB instance to populate a new Aurora PostgreSQL
DB cluster.
• B. Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read
replica to a new Aurora PostgreSQL DB cluster.
• C. Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL DB cluster.
• D. Use the pg_dump utility to back up the RDS for PostgreSQL database. Restore the backup to a new
Aurora PostgreSQL DB cluster.

Hide Answer
Suggested Answer: B
Community vote distribution
B (79%)
A (21%)
by taustin2 at Sept. 22, 2023, 10:27 p.m.

Question #: : 602

A company's infrastructure consists of hundreds of Amazon EC2 instances that use Amazon Elastic Block Store
(Amazon EBS) storage. A solutions architect must ensure that every EC2 instance can be recovered after a disaster.

What should the solutions architect do to meet this requirement with the LEAST amount of effort?
• A. Take a snapshot of the EBS storage that is attached to each EC2 instance. Create an AWS
CloudFormation template to launch new EC2 instances from the EBS storage.
• B. Take a snapshot of the EBS storage that is attached to each EC2 instance. Use AWS Elastic Beanstalk
to set the environment based on the EC2 template and attach the EBS storage.
• C. Use AWS Backup to set up a backup plan for the entire group of EC2 instances. Use the AWS Backup
API or the AWS CLI to speed up the restore process for multiple EC2 instances.
• D. Create an AWS Lambda function to take a snapshot of the EBS storage that is attached to each EC2
instance and copy the Amazon Machine Images (AMIs). Create another Lambda function to perform the restores
with the copied AMIs and attach the EBS storage.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by taustin2 at Sept. 22, 2023, 10:44 p.m.

Question #: : 603

A company recently migrated to the AWS Cloud. The company wants a serverless solution for large-scale parallel
on-demand processing of a semistructured dataset. The data consists of logs, media files, sales transactions, and
IoT sensor data that is stored in Amazon S3. The company wants the solution to process thousands of items in the
dataset in parallel.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Use the AWS Step Functions Map state in Inline mode to process the data in parallel.
• B. Use the AWS Step Functions Map state in Distributed mode to process the data in parallel.
• C. Use AWS Glue to process the data in parallel.
• D. Use several AWS Lambda functions to process the data in parallel.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by [deleted] at Sept. 22, 2023, 10:32 p.m.

Question #: : 604

A company will migrate 10 PB of data to Amazon S3 in 6 weeks. The current data center has a 500 Mbps uplink
to the internet. Other on-premises applications share the uplink. The company can use 80% of the internet
bandwidth for this one-time migration task.

Which solution will meet these requirements?


• A. Configure AWS DataSync to migrate the data to Amazon S3 and to automatically verify the data.
• B. Use rsync to transfer the data directly to Amazon S3.
• C. Use the AWS CLI and multiple copy processes to send the data directly to Amazon S3.
• D. Order multiple AWS Snowball devices. Copy the data to the devices. Send the devices to AWS to copy
the data to Amazon S3.

Hide Answer
Suggested Answer: A

Community vote distribution


D (92%)
8%
by kambarami at Sept. 22, 2023, 4:12 p.m.

Question #: : 605

A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network storage servers.
The company wants to reduce the number of these servers by moving to the AWS Cloud. A solutions architect
must provide low-latency access to frequently used data and reduce the dependency on on-premises servers with
a minimal number of infrastructure changes.
Which solution will meet these requirements?
• A. Deploy an Amazon S3 File Gateway.
• B. Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3.
• C. Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes.
• D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.

Hide Answer
Suggested Answer: C

Community vote distribution


D (100%)
by nnecode at Sept. 22, 2023, 12:55 p.m.

Question #: : 606

A solutions architect is designing an application that will allow business users to upload objects to Amazon S3.
The solution needs to maximize object durability. Objects also must be readily available at any time and for any
length of time. Users will access objects frequently within the first 30 days after the objects are uploaded, but users
are much less likely to access objects that are older than 30 days.

Which solution meets these requirements MOST cost-effectively?


• A. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Glacier
after 30 days.
• B. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Standard-
Infrequent Access (S3 Standard-IA) after 30 days.
• C. Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 One Zone-
Infrequent Access (S3 One Zone-IA) after 30 days.
• D. Store all the objects in S3 Intelligent-Tiering with an S3 Lifecycle rule to transition the objects to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by taustin2 at Sept. 22, 2023, 11:02 p.m.
Question #: : 607

A company has migrated a two-tier application from its on-premises data center to the AWS Cloud. The data tier
is a Multi-AZ deployment of Amazon RDS for Oracle with 12 TB of General Purpose SSD Amazon Elastic Block
Store (Amazon EBS) storage. The application is designed to process and store documents in the database as binary
large objects (blobs) with an average document size of 6 MB.

The database size has grown over time, reducing the performance and increasing the cost of storage. The company
must improve the database performance and needs a solution that is highly available and resilient.

Which solution will meet these requirements MOST cost-effectively?


• A. Reduce the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type
to Magnetic.
• B. Increase the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to
Provisioned IOPS.
• C. Create an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the
object metadata in the existing database.
• D. Create an Amazon DynamoDB table. Update the application to use DynamoDB. Use AWS Database
Migration Service (AWS DMS) to migrate data from the Oracle database to DynamoDB.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by taustin2 at Sept. 22, 2023, 11:04 p.m.

Question #: : 608

A company has an application that serves clients that are deployed in more than 20,000 retail storefront locations
around the world. The application consists of backend web services that are exposed over HTTPS on port 443.
The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The retail
locations communicate with the web application over the public internet. The company allows each retail location
to register the IP address that the retail location has been allocated by its local ISP.

The company's security team recommends to increase the security of the application endpoint by restricting access
to only the IP addresses registered by the retail locations.
What should a solutions architect do to meet these requirements?
• A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update
the IP addresses in the rule to include the registered IP addresses.
• B. Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the
ALB. Modify the firewall rules to include the registered IP addresses.
• C. Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization
function on the ALB to validate that incoming requests are from the registered IP addresses.
• D. Configure the network ACL on the subnet that contains the public interface of the ALB. Update the
ingress rules on the network ACL with entries for each of the registered IP addresses.

Hide Answer
Suggested Answer: A

Community vote distribution


A (85%)
C (15%)
by taustin2 at Sept. 22, 2023, 11:12 p.m.
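Option A can be sketched with the WAFv2 API: build an IP set from the registered addresses, reference it from a web ACL whose default action blocks everything else, and associate the web ACL with the ALB. The sketch below is a minimal illustration with hypothetical names, example CIDRs, and a made-up ALB ARN, not the company's actual configuration.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# IP set holding the registered retail-location addresses (example CIDRs).
ip_set = wafv2.create_ip_set(
    Name="registered-retail-ips",
    Scope="REGIONAL",                       # REGIONAL scope is required for ALBs
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32", "198.51.100.0/24"],
)["Summary"]

# Web ACL that allows only traffic matching the IP set and blocks everything else.
acl = wafv2.create_web_acl(
    Name="retail-allow-list",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},
    Rules=[{
        "Name": "allow-registered-ips",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "allow-registered-ips"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "retail-allow-list"},
)["Summary"]

# Attach the web ACL to the Application Load Balancer (hypothetical ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/retail-alb/50dc6c495c0c9188",
)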

Question #: : 609

A company is building a data analysis platform on AWS by using AWS Lake Formation. The platform will ingest
data from different sources such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent
access to portions of the data that contain sensitive information.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an IAM role that includes permissions to access Lake Formation tables.
• B. Create data filters to implement row-level security and cell-level security.
• C. Create an AWS Lambda function that removes sensitive information before Lake Formation ingests
the data.
• D. Create an AWS Lambda function that periodically queries and removes sensitive information from
Lake Formation tables.

Hide Answer
Suggested Answer: C

Community vote distribution


B (100%)
by nnecode at Sept. 22, 2023, 11:33 a.m.
Question #: : 610

A company deploys Amazon EC2 instances that run in a VPC. The EC2 instances load source data into Amazon
S3 buckets so that the data can be processed in the future. According to compliance laws, the data must not be
transmitted over the public internet. Servers in the company's on-premises data center will consume the output
from an application that runs on the EC2 instances.

Which solution will meet these requirements?


• A. Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection
between the company and the VPC.
• B. Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between
the on-premises network and the VPC.
• C. Set up an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-
Site VPN connection between the company and the VPC.
• D. Set up proxy EC2 instances that have routes to NAT gateways. Configure the proxy EC2 instances to
fetch S3 data and feed the application instances.

Hide Answer
Suggested Answer: B

Community vote distribution


B (83%)
A (17%)
by taustin2 at Sept. 22, 2023, 11:19 p.m.

Question #: : 611

A company has an application with a REST-based interface that allows data to be received in near-real time from
a third-party vendor. Once received, the application processes and stores the data for further analysis. The
application is running on Amazon EC2 instances.

The third-party vendor has received many 503 Service Unavailable Errors when sending data to the application.
When the data volume spikes, the compute capacity reaches its maximum limit and the application is unable to
process all requests.

Which design should a solutions architect recommend to provide a more scalable solution?
• A. Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
• B. Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit
for the third-party vendor.
• C. Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in
an Auto Scaling group behind an Application Load Balancer.
• D. Repackage the application as a container. Deploy the application using Amazon Elastic Container
Service (Amazon ECS) using the EC2 launch type with an Auto Scaling group.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by taustin2 at Sept. 22, 2023, 11:26 p.m.

Question #: : 612

A company has an application that runs on Amazon EC2 instances in a private subnet. The application needs to
process sensitive information from an Amazon S3 bucket. The application must not use the internet to connect to
the S3 bucket.

Which solution will meet these requirements?


• A. Configure an internet gateway. Update the S3 bucket policy to allow access from the internet gateway.
Update the application to use the new internet gateway.
• B. Configure a VPN connection. Update the S3 bucket policy to allow access from the VPN connection.
Update the application to use the new VPN connection.
• C. Configure a NAT gateway. Update the S3 bucket policy to allow access from the NAT gateway. Update
the application to use the new NAT gateway.
• D. Configure a VPC endpoint. Update the S3 bucket policy to allow access from the VPC endpoint.
Update the application to use the new VPC endpoint.

Hide Answer
Suggested Answer: A

Community vote distribution


D (100%)
by nnecode at Sept. 22, 2023, 11:07 a.m.
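A gateway VPC endpoint (option D) keeps the S3 traffic on the AWS network. A minimal sketch, assuming hypothetical VPC, route table, and bucket names: create the endpoint, then restrict the bucket to that endpoint with an aws:SourceVpce condition.

import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint for S3, attached to the private subnet's route table.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                   # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],         # hypothetical route table
)["VpcEndpoint"]

# Bucket policy that denies requests arriving from anywhere except the endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-sensitive-bucket",
                     "arn:aws:s3:::example-sensitive-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": endpoint["VpcEndpointId"]}},
    }],
}
s3.put_bucket_policy(Bucket="example-sensitive-bucket", Policy=json.dumps(policy))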

Question #: : 613
A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to run a container application. The EKS
cluster stores sensitive information in the Kubernetes secrets object. The company wants to ensure that the
information is encrypted.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use the container application to encrypt the information by using AWS Key Management Service
(AWS KMS).
• B. Enable secrets encryption in the EKS cluster by using AWS Key Management Service (AWS KMS).
• C. Implement an AWS Lambda function to encrypt the information by using AWS Key Management
Service (AWS KMS).
• D. Use AWS Systems Manager Parameter Store to encrypt the information by using AWS Key
Management Service (AWS KMS).

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by nnecode at Sept. 22, 2023, 11:04 a.m.
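Envelope encryption of Kubernetes secrets (option B) can be enabled on an existing cluster with a single API call. A minimal sketch, assuming a hypothetical cluster name and KMS key ARN:

import boto3

eks = boto3.client("eks")

# Associate a KMS key with the cluster so that Kubernetes secrets
# are envelope-encrypted at rest with that key.
eks.associate_encryption_config(
    clusterName="example-cluster",                           # hypothetical name
    encryptionConfig=[{
        "resources": ["secrets"],
        "provider": {"keyArn": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"},
    }],
)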

Question #: : 614

A company is designing a new multi-tier web application that consists of the following components:

• Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
• An Amazon RDS DB instance for data storage

A solutions architect needs to limit access to the application servers so that only the web servers can access them.

Which solution will meet these requirements?


• A. Deploy AWS PrivateLink in front of the application servers. Configure the network ACL to allow only
the web servers to access the application servers.
• B. Deploy a VPC endpoint in front of the application servers. Configure the security group to allow only
the web servers to access the application servers.
• C. Deploy a Network Load Balancer with a target group that contains the application servers' Auto
Scaling group. Configure the network ACL to allow only the web servers to access the application servers.
• D. Deploy an Application Load Balancer with a target group that contains the application servers' Auto
Scaling group. Configure the security group to allow only the web servers to access the application servers.
Hide Answer
Suggested Answer: A

Community vote distribution


D (83%)
B (17%)
by nnecode at Sept. 22, 2023, 11 a.m.

Question #: : 615

A company runs a critical, customer-facing application on Amazon Elastic Kubernetes Service (Amazon EKS).
The application has a microservices architecture. The company needs to implement a solution that collects,
aggregates, and summarizes metrics and logs from the application in a centralized location.

Which solution meets these requirements?


• A. Run the Amazon CloudWatch agent in the existing EKS cluster. View the metrics and logs in the
CloudWatch console.
• B. Run AWS App Mesh in the existing EKS cluster. View the metrics and logs in the App Mesh console.
• C. Configure AWS CloudTrail to capture data events. Query CloudTrail by using Amazon OpenSearch
Service.
• D. Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and
logs in the CloudWatch console.

Hide Answer
Suggested Answer: C

Community vote distribution


D (86%)
14%
by nnecode at Sept. 22, 2023, 10:54 a.m.

Question #: : 616

A company has deployed its newest product on AWS. The product runs in an Auto Scaling group behind a Network
Load Balancer. The company stores the product’s objects in an Amazon S3 bucket.

The company recently experienced malicious attacks against its systems. The company needs a solution that
continuously monitors for malicious activity in the AWS account, workloads, and access patterns to the S3 bucket.
The solution must also report suspicious activity and display the information on a dashboard.

Which solution will meet these requirements?


• A. Configure Amazon Macie to monitor and report findings to AWS Config.
• B. Configure Amazon Inspector to monitor and report findings to AWS CloudTrail.
• C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.
• D. Configure AWS Config to monitor and report findings to Amazon EventBridge.

Hide Answer
Suggested Answer: A

Community vote distribution


C (100%)
by awslearnerin2022 at Sept. 22, 2023, 1:42 p.m.

Question #: : 617

A company wants to migrate an on-premises data center to AWS. The data center hosts a storage server that stores
data in an NFS-based file system. The storage server holds 200 GB of data. The company needs to migrate the
data without interruption to existing services. Multiple resources in AWS must be able to access the data by using
the NFS protocol.

Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
• A. Create an Amazon FSx for Lustre file system.
• B. Create an Amazon Elastic File System (Amazon EFS) file system.
• C. Create an Amazon S3 bucket to receive the data.
• D. Manually use an operating system copy command to push the data into the AWS destination.
• E. Install an AWS DataSync agent in the on-premises data center. Use a DataSync task between the on-
premises location and AWS.

Hide Answer
Suggested Answer: AB

Community vote distribution


BE (100%)
by awslearnerin2022 at Sept. 22, 2023, 1:39 p.m.

Question #: : 618
A company wants to use Amazon FSx for Windows File Server for its Amazon EC2 instances that have an SMB
file share mounted as a volume in the us-east-1 Region. The company has a recovery point objective (RPO) of 5
minutes for planned system maintenance or unplanned service disruptions. The company needs to replicate the
file system to the us-west-2 Region. The replicated data must not be deleted by any user for 5 years.

Which solution will meet these requirements?


• A. Create an FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment
type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-
2. Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
• B. Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type.
Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in governance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
• C. Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type.
Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
• D. Create an FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment
type. Use AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-
2. Configure AWS Backup Vault Lock in governance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by taustin2 at Sept. 22, 2023, 11:52 p.m.

Question #: : 619

A solutions architect is designing a security solution for a company that wants to provide developers with
individual AWS accounts through AWS Organizations, while also maintaining standard security controls. Because
the individual developers will have AWS account root user-level access to their own accounts, the solutions
architect wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer
accounts is not modified.
Which action meets these requirements?
• A. Create an IAM policy that prohibits changes to CloudTrail. and attach it to the root user.
• B. Create a new trail in CloudTrail from within the developer accounts with the organization trails option
enabled.
• C. Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it the developer
accounts.
• D. Create a service-linked role for CloudTrail with a policy condition that allows changes only from an
Amazon Resource Name (ARN) in the management account.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by taustin2 at Sept. 22, 2023, 11:55 p.m.
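Option C can be expressed as a deny-only service control policy attached to the developer accounts or their OU. A minimal sketch, assuming a hypothetical target account ID and a small set of CloudTrail actions to block:

import json
import boto3

org = boto3.client("organizations")

# Deny any attempt to alter or disable CloudTrail, regardless of IAM permissions.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail",
                   "cloudtrail:UpdateTrail", "cloudtrail:PutEventSelectors"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="DenyCloudTrailChanges",
    Description="Prevent developer accounts from modifying CloudTrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)["Policy"]["PolicySummary"]

# Attach to a developer account (or to the OU that contains the accounts).
org.attach_policy(PolicyId=policy["Id"], TargetId="111122223333")   # hypothetical account ID

Because SCPs apply even to the account root user, the mandatory CloudTrail configuration stays protected without touching IAM in the developer accounts.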

Question #: : 620

A company is planning to deploy a business-critical application in the AWS Cloud. The application requires
durable storage with consistent, low-latency performance.

Which type of storage should a solutions architect recommend to meet these requirements?
• A. Instance store volume
• B. Amazon ElastiCache for Memcached cluster
• C. Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume
• D. Throughput Optimized HDD Amazon Elastic Block Store (Amazon EBS) volume

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by taustin2 at Sept. 22, 2023, 11:57 p.m.

621-640
ーーーーー

Question #: : 621
An online photo-sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 Region.
The company needs to store a copy of all new photos in the us-east-1 Region.

Which solution will meet this requirement with the LEAST operational effort?
• A. Create a second S3 bucket in us-east-1. Use S3 Cross-Region Replication to copy photos from the
existing S3 bucket to the second S3 bucket.
• B. Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-
east-1 in the CORS rule's AllowedOrigin element.
• C. Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle rule
to save photos into the second S3 bucket.
• D. Create a second S3 bucket in us-east-1. Configure S3 event notifications on object creation and update
events to invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by taustin2 at Sept. 22, 2023, 11:59 p.m.
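Cross-Region Replication (option A) requires versioning on both buckets and an IAM role that S3 can assume. A minimal sketch, assuming hypothetical bucket names and role ARN:

import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on both the source and destination buckets.
for bucket in ("photos-us-west-1", "photos-us-east-1"):
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={"Status": "Enabled"})

# Replicate every new object from the source bucket to the us-east-1 copy.
s3.put_bucket_replication(
    Bucket="photos-us-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",    # hypothetical role
        "Rules": [{
            "ID": "replicate-all-photos",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},                                          # all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::photos-us-east-1"},
        }],
    },
)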
ーーーーー

Question #: : 622

A company is creating a new web application for its subscribers. The application will consist of a static single page
and a persistent database layer. The application will have millions of users for 4 hours in the morning, but the
application will have only a few thousand users during the rest of the day. The company's data architects have
requested the ability to rapidly evolve their schema.

Which solutions will meet these requirements and provide the MOST scalability? (Choose two.)
• A. Deploy Amazon DynamoDB as the database solution. Provision on-demand capacity.
• B. Deploy Amazon Aurora as the database solution. Choose the serverless DB engine mode.
• C. Deploy Amazon DynamoDB as the database solution. Ensure that DynamoDB auto scaling is enabled.
• D. Deploy the static content into an Amazon S3 bucket. Provision an Amazon CloudFront distribution
with the S3 bucket as the origin.
• E. Deploy the web servers for static content across a fleet of Amazon EC2 instances in Auto Scaling
groups. Configure the instances to periodically refresh the content from an Amazon Elastic File System (Amazon
EFS) volume.

Hide Answer
Suggested Answer: CD

Community vote distribution


AD (45%)
CD (45%)
8%
by taustin2 at Sept. 23, 2023, 12:04 a.m.

ーーーーー

Question #: : 623

A company uses Amazon API Gateway to manage its REST APIs that third-party service providers access. The
company must protect the REST APIs from SQL injection and cross-site scripting attacks.

What is the MOST operationally efficient solution that meets these requirements?
• A. Configure AWS Shield.
• B. Configure AWS WAF.
• C. Set up API Gateway with an Amazon CloudFront distribution. Configure AWS Shield in CloudFront.
• D. Set up API Gateway with an Amazon CloudFront distribution. Configure AWS WAF in CloudFront.

Hide Answer
Suggested Answer: A

Community vote distribution


B (93%)
7%
by awslearnerin2022 at Sept. 22, 2023, 1:14 p.m.

ーーーーー

Question #: : 624

A company wants to provide users with access to AWS resources. The company has 1,500 users and manages their
access to on-premises resources through Active Directory user groups on the corporate network. However, the
company does not want users to have to maintain another identity to access the resources. A solutions architect
must manage user access to the AWS resources while preserving access to the on-premises resources.

What should the solutions architect do to meet these requirements?


• A. Create an IAM user for each user in the company. Attach the appropriate policies to each user.
• B. Use Amazon Cognito with an Active Directory user pool. Create roles with the appropriate policies
attached.
• C. Define cross-account roles with the appropriate policies attached. Map the roles to the Active
Directory groups.
• D. Configure Security Assertion Markup Language (SAML) 2.0-based federation. Create roles with the
appropriate policies attached. Map the roles to the Active Directory groups.

Hide Answer
Suggested Answer: D

Community vote distribution


D (85%)
B (15%)
by dilaaziz at Nov. 4, 2023, 8:23 a.m.
ーーーーー

Question #: : 625

A company is hosting a website behind multiple Application Load Balancers. The company has different
distribution rights for its content around the world. A solutions architect needs to ensure that users are served the
correct content without violating distribution rights.

Which configuration should the solutions architect choose to meet these requirements?
• A. Configure Amazon CloudFront with AWS WAF.
• B. Configure Application Load Balancers with AWS WAF
• C. Configure Amazon Route 53 with a geolocation policy
• D. Configure Amazon Route 53 with a geoproximity routing policy

Hide Answer
Suggested Answer: A

Community vote distribution


C (68%)
A (32%)
by dilaaziz at Nov. 4, 2023, 8:27 a.m.
ーーーーー

Question #: : 626
A company stores its data on premises. The amount of data is growing beyond the company's available capacity.

The company wants to migrate its data from the on-premises location to an Amazon S3 bucket. The company
needs a solution that will automatically validate the integrity of the data after the transfer.

Which solution will meet these requirements?


• A. Order an AWS Snowball Edge device. Configure the Snowball Edge device to perform the online data
transfer to an S3 bucket
• B. Deploy an AWS DataSync agent on premises. Configure the DataSync agent to perform the online
data transfer to an S3 bucket.
• C. Create an Amazon S3 File Gateway on premises. Configure the S3 File Gateway to perform the online
data transfer to an S3 bucket.
• D. Configure an accelerator in Amazon S3 Transfer Acceleration on premises. Configure the accelerator
to perform the online data transfer to an S3 bucket.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by dilaaziz at Nov. 4, 2023, 8:42 a.m.
ーーーーー

Question #: : 627

A company wants to migrate two DNS servers to AWS. The servers host a total of approximately 200 zones and
receive 1 million requests each day on average. The company wants to maximize availability while minimizing the
operational overhead that is related to the management of the two servers.

What should a solutions architect recommend to meet these requirements?


• A. Create 200 new hosted zones in the Amazon Route 53 console. Import zone files.
• B. Launch a single large Amazon EC2 instance. Import zone files. Configure Amazon CloudWatch alarms
and notifications to alert the company about any downtime.
• C. Migrate the servers to AWS by using AWS Server Migration Service (AWS SMS). Configure Amazon
CloudWatch alarms and notifications to alert the company about any downtime.
• D. Launch an Amazon EC2 instance in an Auto Scaling group across two Availability Zones. Import zone
files. Set the desired capacity to 1 and the maximum capacity to 3 for the Auto Scaling group. Configure scaling
alarms to scale based on CPU utilization.

Hide Answer
Suggested Answer: A

Community vote distribution


A (89%)
11%
by potomac at Nov. 7, 2023, 1:29 a.m.
ーーーーー

Question #: : 628

A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's
applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The
company wants to report on incomplete multipart uploads for cost compliance purposes.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Configure AWS Config with a rule to report the incomplete multipart upload object count.
• B. Create a service control policy (SCP) to report the incomplete multipart upload object count.
• C. Configure S3 Storage Lens to report the incomplete multipart upload object count.
• D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by warp at Nov. 5, 2023, 9 p.m.
ーーーーー

Question #: : 629

A company runs a production database on Amazon RDS for MySQL. The company wants to upgrade the database
version for security compliance reasons. Because the database contains critical data, the company wants a quick
solution to upgrade and test functionality without losing any data.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an RDS manual snapshot. Upgrade to the new version of Amazon RDS for MySQL.
• B. Use native backup and restore. Restore the data to the upgraded new version of Amazon RDS for
MySQL.
• C. Use AWS Database Migration Service (AWS DMS) to replicate the data to the upgraded new version
of Amazon RDS for MySQL.
• D. Use Amazon RDS Blue/Green Deployments to deploy and test production changes.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by warp at Nov. 5, 2023, 9:05 p.m.

ーーーーー

Question #: : 630

A solutions architect is creating a data processing job that runs once daily and can take up to 2 hours to complete.
If the job is interrupted, it has to restart from the beginning.

How should the solutions architect address this issue in the MOST cost-effective manner?
• A. Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.
• B. Create an AWS Lambda function triggered by an Amazon EventBridge scheduled event.
• C. Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon
EventBridge scheduled event.
• D. Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by
an Amazon EventBridge scheduled event.

Hide Answer
Suggested Answer: C

Community vote distribution


C (80%)
B (20%)
by potomac at Nov. 7, 2023, 1:44 a.m.
ーーーーー

Question #: : 631

A social media company wants to store its database of user profiles, relationships, and interactions in the AWS
Cloud. The company needs an application to monitor any changes in the database. The application needs to
analyze the relationships between the data entities and to provide recommendations to users.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes
in the database.
• B. Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the
database.
• C. Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Amazon
Kinesis Data Streams to process changes in the database.
• D. Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Neptune
Streams to process changes in the database.

Hide Answer
Suggested Answer: B

Community vote distribution


B (82%)
C (18%)
by AF_1221 at Nov. 1, 2023, 4:05 p.m.

ーーーーー

Question #: : 632

A company is creating a new application that will store a large amount of data. The data will be analyzed hourly
and will be modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones.
The needed amount of storage space will continue to grow for the next 6 months.

Which storage solution should a solutions architect recommend to meet these requirements?
• A. Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the
application instances.
• B. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on
the application instances.
• C. Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on
the application instances.
• D. Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared
between the application instances.

Hide Answer
Suggested Answer: C
Community vote distribution
C (100%)
by AF_1221 at Nov. 1, 2023, 4:07 p.m.
ーーーーー

Question #: : 633

A company manages an application that stores data on an Amazon RDS for PostgreSQL Multi-AZ DB instance.
Increases in traffic are causing performance problems. The company determines that database queries are the
primary reason for the slow performance.

What should a solutions architect do to improve the application's performance?


• A. Serve read traffic from the Multi-AZ standby replica.
• B. Configure the DB instance to use Transfer Acceleration.
• C. Create a read replica from the source DB instance. Serve read traffic from the read replica.
• D. Use Amazon Kinesis Data Firehose between the application and Amazon RDS to increase the
concurrency of database requests.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by warp at Nov. 6, 2023, 11:48 a.m.
ーーーーー

Question #: : 634

A company collects 10 GB of telemetry data daily from various machines. The company stores the data in an
Amazon S3 bucket in a source data account.

The company has hired several consulting agencies to use this data for analysis. Each agency needs read access to
the data for its analysts. The company must share the data from the source data account by choosing a solution
that maximizes security and operational efficiency.

Which solution will meet these requirements?


• A. Configure S3 global tables to replicate data for each agency.
• B. Make the S3 bucket public for a limited time. Inform only the agencies.
• C. Configure cross-account access for the S3 bucket to the accounts that the agencies own.
• D. Set up an IAM user for each analyst in the source data account. Grant each user access to the S3
bucket.

Hide Answer
Suggested Answer: C

Community vote distribution


C (88%)
13%
by potomac at Nov. 7, 2023, 2:11 a.m.
ーーーーー

Question #: : 635

A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares.
Applications that run on Amazon EC2 instances access the file shares. The company needs a storage disaster
recovery (DR) solution in a secondary Region. The data that is replicated in the secondary Region needs to be
accessed by using the same protocols as the primary Region.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket
to the secondary Region.
• B. Create a backup of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the
secondary Region. Create a new FSx for ONTAP instance from the backup.
• C. Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate
data from the primary Region to the secondary Region.
• D. Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume.
Replicate the volume to the secondary Region.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by potomac at Nov. 7, 2023, 2:19 a.m.
ーーーーー

Question #: : 636

A development team is creating an event-based application that uses AWS Lambda functions. Events will be
generated when files are added to an Amazon S3 bucket. The development team currently has Amazon Simple
Notification Service (Amazon SNS) configured as the event target from Amazon S3.

What should a solutions architect do to process the events from Amazon S3 in a scalable way?
• A. Create an SNS subscription that processes the event in Amazon Elastic Container Service (Amazon
ECS) before the event runs in Lambda.
• B. Create an SNS subscription that processes the event in Amazon Elastic Kubernetes Service (Amazon
EKS) before the event runs in Lambda
• C. Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS).
Configure the SQS queue to trigger a Lambda function.
• D. Create an SNS subscription that sends the event to AWS Server Migration Service (AWS SMS).
Configure the Lambda function to poll from the SMS event.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by potomac at Nov. 7, 2023, 2:23 a.m.
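Option C decouples the S3 events from the Lambda function by fanning out through SQS. A minimal sketch, assuming the SNS topic and the Lambda function already exist and that the queue's access policy already allows the topic to send messages (hypothetical ARNs and names throughout):

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")
lam = boto3.client("lambda")

# Queue that buffers the S3 event notifications delivered through SNS.
queue_url = sqs.create_queue(QueueName="s3-events-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"])["Attributes"]["QueueArn"]

# Subscribe the queue to the existing SNS topic (hypothetical topic ARN).
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:s3-upload-events",
    Protocol="sqs",
    Endpoint=queue_arn,
)

# Let Lambda poll the queue and scale its concurrency with the backlog.
lam.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-uploaded-file",    # hypothetical function name
    BatchSize=10,
)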
ーーーーー

Question #: : 637

A solutions architect is designing a new service behind Amazon API Gateway. The request patterns for the service
will be unpredictable and can change suddenly from 0 requests to over 500 per second. The total size of the data
that needs to be persisted in a backend database is currently less than 1 GB with unpredictable future growth.
Data can be queried using simple key-value requests.

Which combination of AWS services would meet these requirements? (Choose two.)
• A. AWS Fargate
• B. AWS Lambda
• C. Amazon DynamoDB
• D. Amazon EC2 Auto Scaling
• E. MySQL-compatible Amazon Aurora

Hide Answer
Suggested Answer: BC

Community vote distribution


BC (100%)
by potomac at Nov. 7, 2023, 2:26 a.m.
ーーーーー

Question #: : 638

A company collects and shares research data with the company's employees all over the world. The company wants
to collect and store the data in an Amazon S3 bucket and process the data in the AWS Cloud. The company will
share the data with the company's employees. The company needs a secure solution in the AWS Cloud that
minimizes operational overhead.

Which solution will meet these requirements?


• A. Use an AWS Lambda function to create an S3 presigned URL. Instruct employees to use the URL.
• B. Create an IAM user for each employee. Create an IAM policy for each employee to allow S3 access.
Instruct employees to use the AWS Management Console.
• C. Create an S3 File Gateway. Create a share for uploading and a share for downloading. Allow employees
to mount shares on their local computers to use S3 File Gateway.
• D. Configure AWS Transfer Family SFTP endpoints. Select the custom identity provider options. Use
AWS Secrets Manager to manage the user credentials Instruct employees to use Transfer Family.

Hide Answer
Suggested Answer: D

Community vote distribution


A (41%)
D (37%)
C (19%)
4%
by potomac at Nov. 7, 2023, 2:21 p.m.

ーーーーー

Question #: : 639

A company is building a new furniture inventory application. The company has deployed the application on a fleet
of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load
Balancer (ALB) in their VPC.

A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for
some requests.
What should the solutions architect do to resolve this issue?
• A. Disable session affinity (sticky sessions) on the ALB
• B. Replace the ALB with a Network Load Balancer
• C. Increase the number of EC2 instances in each Availability Zone
• D. Adjust the frequency of the health checks on the ALB's target group

Hide Answer
Suggested Answer: A

Community vote distribution


A (83%)
B (17%)
by potomac at Nov. 7, 2023, 2:24 p.m.

ーーーーー

Question #: : 640
A company has an application workflow that uses an AWS Lambda function to download and decrypt files from
Amazon S3. These files are encrypted using AWS Key Management Service (AWS KMS) keys. A solutions
architect needs to design a solution that will ensure the required permissions are set correctly.

Which combination of actions accomplish this? (Choose two.)


• A. Attach the kms:decrypt permission to the Lambda function’s resource policy
• B. Grant the decrypt permission for the Lambda IAM role in the KMS key's policy
• C. Grant the decrypt permission for the Lambda resource policy in the KMS key's policy.
• D. Create a new IAM policy with the kms:decrypt permission and attach the policy to the Lambda
function.
• E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda
function.

Hide Answer
Suggested Answer: BE

Community vote distribution


BE (85%)
Other
by potomac at Nov. 7, 2023, 2:48 p.m.
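Options B and E translate into two pieces: an identity policy with kms:Decrypt attached to the Lambda execution role, and a key policy statement that grants that role decrypt access. A minimal sketch with hypothetical role and key identifiers; note that put_key_policy replaces the whole key policy, so the root-account statement is kept in place:

import json
import boto3

iam = boto3.client("iam")
kms = boto3.client("kms")

role_arn = "arn:aws:iam::111122223333:role/lambda-decrypt-role"    # hypothetical role
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"                     # hypothetical key

# Identity side: allow the Lambda execution role to call kms:Decrypt on the key.
iam.put_role_policy(
    RoleName="lambda-decrypt-role",
    PolicyName="allow-kms-decrypt",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "kms:Decrypt",
                       "Resource": f"arn:aws:kms:us-east-1:111122223333:key/{key_id}"}],
    }),
)

# Resource side: the key policy must also allow the execution role to decrypt.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "EnableRootAccess", "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
         "Action": "kms:*", "Resource": "*"},
        {"Sid": "AllowLambdaDecrypt", "Effect": "Allow",
         "Principal": {"AWS": role_arn},
         "Action": "kms:Decrypt", "Resource": "*"},
    ],
}
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(key_policy))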

641-660

Question #: : 641
A company wants to monitor its AWS costs for financial review. The cloud operations team is designing an
architecture in the AWS Organizations management account to query AWS Cost and Usage Reports for all
member accounts. The team must run this query once a month and provide a detailed analysis of the bill.

Which solution is the MOST scalable and cost-effective way to meet these requirements?
• A. Enable Cost and Usage Reports in the management account. Deliver reports to Amazon Kinesis. Use
Amazon EMR for analysis.
• B. Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use
Amazon Athena for analysis.
• C. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon S3. Use Amazon
Redshift for analysis.
• D. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon Kinesis. Use
Amazon QuickSight for analysis.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by potomac at Nov. 7, 2023, 2:58 p.m.

Question #: : 642

A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in
the AWS Cloud. The application will transmit data by using UDP packets. The company wants to ensure that the
application can scale out and in as traffic increases and decreases.

What should a solutions architect do to meet these requirements?


• A. Attach a Network Load Balancer to the Auto Scaling group.
• B. Attach an Application Load Balancer to the Auto Scaling group.
• C. Deploy an Amazon Route 53 record set with a weighted policy to route traffic appropriately.
• D. Deploy a NAT instance that is configured with port forwarding to the EC2 instances in the Auto
Scaling group.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Sugarbear_01 at Nov. 2, 2023, 9:05 p.m.

Question #: : 643

A company runs several websites on AWS for its different brands. Each website generates tens of gigabytes of web
traffic logs each day. A solutions architect needs to design a scalable solution to give the company's developers the
ability to analyze traffic patterns across all the company's websites. This analysis by the developers will occur on
demand once a week over the course of several months. The solution must support queries with standard SQL.

Which solution will meet these requirements MOST cost-effectively?


• A. Store the logs in Amazon S3. Use Amazon Athena for analysis.
• B. Store the logs in Amazon RDS. Use a database client for analysis.
• C. Store the logs in Amazon OpenSearch Service. Use OpenSearch Service for analysis.
• D. Store the logs in an Amazon EMR cluster Use a supported open-source framework for SQL-based
analysis.

Hide Answer
Suggested Answer: A

Community vote distribution


A (88%)
13%
by potomac at Nov. 7, 2023, 3:23 p.m.

Question #: : 644

An international company has a subdomain for each country that the company operates in. The subdomains are
formatted as example.com, country1.example.com, and country2.example.com. The company's workloads are
behind an Application Load Balancer. The company wants to encrypt the website data that is in transit.

Which combination of steps will meet these requirements? (Choose two.)


• A. Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex top
domain example com and a wildcard certificate for *.example.com.
• B. Use the AWS Certificate Manager (ACM) console to request a private certificate for the apex top
domain example.com and a wildcard certificate for *.example.com.
• C. Use the AWS Certificate Manager (ACM) console to request a public and private certificate for the
apex top domain example.com.
• D. Validate domain ownership by email address. Switch to DNS validation by adding the required DNS
records to the DNS provider.
• E. Validate domain ownership for the domain by adding the required DNS records to the DNS provider.

Hide Answer
Suggested Answer: AE

Community vote distribution


AE (100%)
by potomac at Nov. 7, 2023, 3:29 p.m.
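The apex-plus-wildcard certificate from option A, validated through DNS as in option E, can be requested in one call. A minimal sketch using the example domain from the question; the certificate must be requested in the same Region as the ALB:

import boto3

acm = boto3.client("acm")   # use the ALB's Region

# One public certificate covering the apex domain and every country subdomain.
cert_arn = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["*.example.com"],
    ValidationMethod="DNS",
)["CertificateArn"]

# The CNAME records to create at the DNS provider are returned on the
# certificate (they may take a few moments to appear after the request).
details = acm.describe_certificate(CertificateArn=cert_arn)["Certificate"]
for option in details["DomainValidationOptions"]:
    record = option.get("ResourceRecord", {})
    print(option["DomainName"], record.get("Name"), record.get("Value"))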

Question #: : 645

A company is required to use cryptographic keys in its on-premises key manager. The key manager is outside of
the AWS Cloud because of regulatory and compliance requirements. The company wants to manage encryption
and decryption by using cryptographic keys that are retained outside of the AWS Cloud and that support a variety
of external key managers from different vendors.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use AWS CloudHSM key store backed by a CloudHSM cluster.
• B. Use an AWS Key Management Service (AWS KMS) external key store backed by an external key
manager.
• C. Use the default AWS Key Management Service (AWS KMS) managed key store.
• D. Use a custom key store backed by an AWS CloudHSM cluster.

Hide Answer
Suggested Answer: B

Community vote distribution


B (90%)
10%
by potomac at Nov. 7, 2023, 3:32 p.m.

Question #: : 646

A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The
workload will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to
enable distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously.
The workload requires access latency within 1 ms. After processing has completed, engineers will need access to
the dataset for manual postprocessing.

Which solution will meet these requirements?


• A. Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from
Amazon EFS.
• B. Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from
the S3 bucket.
• C. Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for
postprocessing.
• D. Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted
to all instances for processing and postprocessing.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by potomac at Nov. 7, 2023, 3:35 p.m.

Question #: : 647

A gaming company is building an application with Voice over IP capabilities. The application will serve traffic to
users across the world. The application needs to be highly available with an automated failover across AWS
Regions. The company wants to minimize the latency of users without relying on IP address caching on user
devices.

What should a solutions architect do to meet these requirements?


• A. Use AWS Global Accelerator with health checks.
• B. Use Amazon Route 53 with a geolocation routing policy.
• C. Create an Amazon CloudFront distribution that includes multiple origins.
• D. Create an Application Load Balancer that uses path-based routing.

Hide Answer
Suggested Answer: A

Community vote distribution


A (94%)
6%
by Sugarbear_01 at Nov. 2, 2023, 8:48 p.m.

Question #: : 648

A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The
company has a high performance computing (HPC) environment in its data center and wants to expand its
forecasting capabilities.

A solutions architect must identify a highly available cloud storage solution that can handle large amounts of
sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances
that will simultaneously access and process the entire dataset.

What should the solutions architect do to meet these requirements?


• A. Use Amazon FSx for Lustre scratch file systems.
• B. Use Amazon FSx for Lustre persistent file systems.
• C. Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.
• D. Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by potomac at Nov. 7, 2023, 3:42 p.m.

Question #: : 649

An ecommerce company runs a PostgreSQL database on premises. The database stores data by using high IOPS
Amazon Elastic Block Store (Amazon EBS) block storage. The daily peak I/O transactions per second do not
exceed 15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and provision
disk IOPS performance independent of disk storage capacity.

Which solution will meet these requirements MOST cost-effectively?


• A. Configure the General Purpose SSD (gp2) EBS volume storage type and provision 15,000 IOPS.
• B. Configure the Provisioned IOPS SSD (io1) EBS volume storage type and provision 15,000 IOPS.
• C. Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.
• D. Configure the EBS magnetic volume type to achieve maximum IOPS.
Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by potomac at Nov. 7, 2023, 3:46 p.m.
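With gp3 (option C), storage size and IOPS are provisioned separately. A minimal sketch of the relevant create_db_instance parameters, with hypothetical sizing and an instance class chosen only for illustration; on RDS for PostgreSQL, provisioning IOPS above the gp3 baseline requires a volume of at least 400 GiB:

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-postgres",       # hypothetical identifier
    Engine="postgres",
    DBInstanceClass="db.m6g.2xlarge",                # hypothetical instance class
    MasterUsername="admin_user",
    ManageMasterUserPassword=True,                   # let RDS manage the password in Secrets Manager
    AllocatedStorage=400,                            # GiB; minimum for provisioned gp3 IOPS on PostgreSQL
    StorageType="gp3",
    Iops=15000,                                      # provisioned independently of storage capacity
    MultiAZ=True,
)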

Question #: : 650

A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The
company's online application uses the database to process transactions. The data analysis team uses the same
production database to run reports for analytical processing. The company wants to reduce operational overhead
by moving to managed services wherever possible.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.
• B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes.
• C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes.
• D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by potomac at Nov. 7, 2023, 3:47 p.m.

Question #: : 651

A company stores a large volume of image files in an Amazon S3 bucket. The images need to be readily available
for the first 180 days. The images are infrequently accessed for the next 180 days. After 360 days, the images need
to be archived but must be available instantly upon request. After 5 years, only auditors can access the images.
The auditors must be able to retrieve the images within 12 hours. The images cannot be lost during this process.

A developer will use S3 Standard storage for the first 180 days. The developer needs to configure an S3 Lifecycle
rule.

Which solution will meet these requirements MOST cost-effectively?


• A. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days. S3 Glacier
Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
• B. Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days. S3 Glacier
Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
• C. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier
Instant Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
• D. Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier
Flexible Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

Hide Answer
Suggested Answer: C

Community vote distribution


C (76%)
A (20%)
4%
by dilaaziz at Nov. 3, 2023, 8:41 a.m.
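The transitions in option C map directly onto a single lifecycle rule. A minimal sketch, assuming a hypothetical bucket name and using 1,825 days as the 5-year mark:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="image-archive-example",                   # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "image-archival",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                 # apply to every object
            "Transitions": [
                {"Days": 180, "StorageClass": "STANDARD_IA"},    # infrequent access
                {"Days": 360, "StorageClass": "GLACIER_IR"},      # archived, instant retrieval
                {"Days": 1825, "StorageClass": "DEEP_ARCHIVE"},   # auditors, up to 12-hour retrieval
            ],
        }],
    },
)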

Question #: : 652

A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the
process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical
data workload.

Which solution will meet these requirements MOST cost-effectively?


• A. Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances
and the task nodes on Spot Instances.
• B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances
and the task nodes on Spot Instances.
• C. Configure a transient cluster that runs the primary node on an On-Demand Instance and the core
nodes and task nodes on Spot Instances.
• D. Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core
nodes on Spot Instances, and the task nodes on Spot Instances.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by potomac at Nov. 7, 2023, 3:58 p.m.

Question #: : 653

A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an
organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a
specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user
who created the resource.

Which solution will meet these requirements?


• A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the
management account. Create a service control policy (SCP) that requires all existing resources to have the correct
cost center tag before the resources are created. Apply the SCP to the new OU.
• B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the
appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS
CloudTrail events to invoke the Lambda function.
• C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda
function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon
EventBridge scheduled rule to invoke the CloudFormation stack.
• D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon
EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing
the cost center tag.

Hide Answer
Suggested Answer: B

Community vote distribution


B (63%)
A (38%)
by t0nx at Nov. 22, 2023, 10:45 a.m.

Question #: : 654

A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance
to run multiple processes to host the application. The processes include an Apache web server that serves static
content. The Apache web server makes requests to a PHP application that uses a local Redis server for user
sessions.
The company wants to redesign the architecture to be highly available and to use AWS managed solutions.

Which solution will meet these requirements?


• A. Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic
Beanstalk to deploy its EC2 instance into a public subnet. Assign a public IP address.
• B. Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway
REST API to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the
domain name. Configure Amazon ElastiCache for Redis to handle session information.
• C. Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has
Multi-AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to
Amazon S3. Configure the backend code to reference the EC2 instance.
• D. Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is
configured to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic
Container Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the
PHP application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by TariqKipkemei at Dec. 7, 2023, 12:46 p.m.

Question #: : 655

A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group.
The company designed the application to work with session affinity (sticky sessions) for a better user experience.

The application must be available publicly over the internet as an endpoint. A WAF must be applied to the
endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint.

Which combination of steps will meet these requirements? (Choose two.)


• A. Create a public Network Load Balancer. Specify the application target group.
• B. Create a Gateway Load Balancer. Specify the application target group.
• C. Create a public Application Load Balancer. Specify the application target group.
• D. Create a second target group. Add Elastic IP addresses to the EC2 instances.
• E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

Hide Answer
Suggested Answer: CE

Community vote distribution


CE (100%)
by TariqKipkemei at Dec. 7, 2023, 12:51 p.m.

Question #: : 656

A company runs a website that stores images of historical events. Website users need the ability to search and view
images based on the year that the event in the image occurred. On average, users request each image only once
or twice a year. The company wants a highly available solution to store and deliver the images to users.

Which solution will meet these requirements MOST cost-effectively?


• A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon
EC2.
• B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon
EC2.
• C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static
website.
• D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to
directly deliver images by using a static website.

Hide Answer
Suggested Answer: C

Community vote distribution


D (87%)
13%
by chikuwan at Nov. 25, 2023, 5:54 a.m.

Question #: : 657

A company has multiple AWS accounts in an organization in AWS Organizations that different business units use.
The company has multiple offices around the world. The company needs to update security group rules to allow
new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize
the management of security group rules to minimize the administrative overhead that updating CIDR ranges
requires.
Which solution will meet these requirements MOST cost-effectively?
• A. Create VPC security groups in the organization's management account. Update the security groups
when a CIDR range update is necessary.
• B. Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access
Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups
across the organization.
• C. Create an AWS managed prefix list. Use an AWS Security Hub policy to enforce the security group
update across the organization. Use an AWS Lambda function to update the prefix list automatically when the
CIDR ranges change.
• D. Create security groups in a central administrative AWS account. Create an AWS Firewall Manager
common security group policy for the whole organization. Select the previously created security groups as primary
groups in the policy.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by achechen at Nov. 30, 2023, 10:41 a.m.
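Option B combines a customer managed prefix list, an AWS RAM share to the organization, and security group rules that reference the list. A minimal sketch, assuming hypothetical office CIDRs, organization ARN, and security group ID:

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Prefix list that holds every office CIDR range in one place.
pl = ec2.create_managed_prefix_list(
    PrefixListName="office-cidrs",
    AddressFamily="IPv4",
    MaxEntries=20,
    Entries=[{"Cidr": "203.0.113.0/24", "Description": "Office A"},
             {"Cidr": "198.51.100.0/24", "Description": "Office B"}],
)["PrefixList"]

# Share the prefix list with every account in the organization through AWS RAM.
ram.create_resource_share(
    name="office-cidrs-share",
    resourceArns=[pl["PrefixListArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
)

# Security groups in member accounts reference the list instead of raw CIDRs,
# so a future CIDR change only touches the prefix list.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",                   # hypothetical security group
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                    "PrefixListIds": [{"PrefixListId": pl["PrefixListId"]}]}],
)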

Question #: : 658

A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high
performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads
and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access
from the file system.

Which solution will meet these requirements with the LEAST latency? (Choose two.)
• A. Deploy compute optimized EC2 instances into a cluster placement group.
• B. Deploy compute optimized EC2 instances into a partition placement group.
• C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
• D. Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
• E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

Hide Answer
Suggested Answer: AE

Community vote distribution


AE (74%)
AC (22%)
4%
by LemonGremlin at Nov. 22, 2023, 3:43 a.m.

Question #: : 659

A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The
existing data center has a Site-to-Site VPN connection to AWS that is 90% utilized.

Which AWS service should a solutions architect use to meet these requirements?
• A. AWS DataSync with a VPC endpoint
• B. AWS Direct Connect
• C. AWS Snowball Edge Storage Optimized
• D. AWS Storage Gateway

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by TariqKipkemei at Dec. 8, 2023, 8:33 a.m.

Question #: : 660

A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application
peak hours occur at the same time each day. Application users report slow application performance at the start of
peak hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure
that the application works properly at the start of peak hours.

Which solution will meet these requirements?


• A. Configure an Application Load Balancer to distribute traffic properly to the instances.
• B. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on
memory utilization.
• C. Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on CPU
utilization.
• D. Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak
hours.
Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Arnaud92 at Nov. 23, 2023, 10:28 a.m.
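
A minimal sketch of the scheduled scaling action from option D using boto3; the group name, capacities, and recurrence are placeholder assumptions:

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out shortly before the known daily peak so instances are already running
# and warmed up when traffic arrives.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",              # hypothetical group name
    ScheduledActionName="scale-out-before-peak",
    Recurrence="30 7 * * *",                     # cron: 07:30 every day, before an 08:00 peak
    TimeZone="UTC",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)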

661-680

Question #: : 661

A company runs applications on AWS that connect to the company's Amazon RDS database. The applications
scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its
applications that connect to the database.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database.
Change the applications to use the DynamoDB endpoint.
• B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS
Proxy endpoint.
• C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the
applications to use the custom proxy endpoint.
• D. Use an AWS Lambda function to provide connection pooling with a target group configuration for
the database. Change the applications to use the Lambda function.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by TOR_0511 at Dec. 3, 2023, 7:42 p.m.
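
A sketch of option B using boto3, assuming the database credentials are already stored in AWS Secrets Manager; all names, ARNs, and subnet IDs are placeholders:

import boto3

rds = boto3.client("rds")

# Create the proxy in front of the RDS database.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",  # hypothetical name
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",  # placeholder
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",  # placeholder
    VpcSubnetIds=["subnet-aaaa", "subnet-bbbb"],
    RequireTLS=True,
)

# Register the DB instance in the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db-instance"],
)
# Applications then connect to the RDS Proxy endpoint instead of the instance endpoint,
# and the proxy pools and reuses connections as the applications scale.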

Question #: : 662

A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block
Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase
additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon
EBS Elastic Volumes to reduce the size of the EBS volumes.
• B. Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of
the EBS volumes.
• C. Delete all expired and unused snapshots to reduce snapshot costs.
• D. Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the
snapshots according to the company's snapshot policy requirements.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by t0nx at Nov. 22, 2023, 9:58 a.m.

Question #: : 663

A company is developing a new application on AWS. The application consists of an Amazon Elastic Container
Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS
for MySQL database that contains the dataset for the application. The dataset contains sensitive information. The
company wants to ensure that only the ECS cluster can access the data in the RDS for MySQL database and the
data in the S3 bucket.

Which solution will meet these requirements?


• A. Create a new AWS Key Management Service (AWS KMS) customer managed key to encrypt both the
S3 bucket and the RDS for MySQL database. Ensure that the KMS key policy includes encrypt and decrypt
permissions for the ECS task execution role.
• B. Create an AWS Key Management Service (AWS KMS) AWS managed key to encrypt both the S3
bucket and the RDS for MySQL database. Ensure that the S3 bucket policy specifies the ECS task execution role
as a user.
• C. Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC
endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the
subnets that the ECS cluster will generate tasks in.
• D. Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to
allow access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon
S3. Update the S3 bucket policy to allow access from only the S3 VPC endpoint.

Hide Answer
Suggested Answer: A

Community vote distribution


A (48%)
D (48%)
4%
by LemonGremlin at Nov. 22, 2023, 3:48 a.m.

Question #: : 664

A company has a web application that runs on premises. The application experiences latency issues during peak
hours. The latency issues occur twice each month. At the start of a latency issue, the application's CPU utilization
immediately increases to 10 times its normal amount.

The company wants to migrate the application to AWS to improve latency. The company also wants to scale the
application automatically when application demand increases. The company will use AWS Elastic Beanstalk for
application deployment.

Which solution will meet these requirements?


• A. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited
mode. Configure the environment to scale based on requests.
• B. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the
environment to scale based on requests.
• C. Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the
environment to scale on a schedule.
• D. Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited
mode. Configure the environment to scale on predictive metrics.

Hide Answer
Suggested Answer: B

Community vote distribution


D (57%)
A (43%)
by LemonGremlin at Nov. 22, 2023, 3:51 a.m.

Question #: : 665
A company has customers located across the world. The company wants to use automation to secure its systems
and network infrastructure. The company's security team must be able to track and audit all incremental changes
to the infrastructure.

Which solution will meet these requirements?


• A. Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.
• B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
• C. Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
• D. Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by TariqKipkemei at Dec. 8, 2023, 9:14 a.m.

Question #: : 666

A startup company is hosting a website for its customers on an Amazon EC2 instance. The website consists of a
stateless Python application and a MySQL database. The website serves only a small amount of traffic. The
company is concerned about the reliability of the instance and needs to migrate to a highly available architecture.
The company cannot modify the application code.

Which combination of actions should a solutions architect take to achieve high availability for the website?
(Choose two.)
• A. Provision an internet gateway in each Availability Zone in use.
• B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
• C. Migrate the database to Amazon DynamoDB, and enable DynamoDB auto scaling.
• D. Use AWS DataSync to synchronize the database data across multiple EC2 instances.
• E. Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances
that are distributed across two Availability Zones.

Hide Answer
Suggested Answer: BE

Community vote distribution


BE (100%)
by TariqKipkemei at Dec. 11, 2023, 9:08 a.m.
Question #: : 667

A company is moving its data and applications to AWS during a multiyear migration project. The company wants
to securely access data on Amazon S3 from the company's AWS Region and from the company's on-premises
location. The data must not traverse the internet. The company has established an AWS Direct Connect
connection between its Region and its on-premises location.

Which solution will meet these requirements?


• A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from
the Region and the on-premises location.
• B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-
premises location.
• C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data
from the Region and the on-premises location.
• D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region
and the on-premises location.

Hide Answer
Suggested Answer: A

Community vote distribution


C (81%)
Other
by LemonGremlin at Nov. 22, 2023, 3:55 a.m.

Question #: : 668

A company created a new organization in AWS Organizations. The organization has multiple accounts for the
company's development teams. The development team members use AWS IAM Identity Center (AWS Single
Sign-On) to access the accounts. For each of the company's applications, the development teams must use a
predefined application name to tag resources that are created.

A solutions architect needs to design a solution that gives the development team the ability to create resources
only if the application name tag has an approved value.

Which solution will meet these requirements?


• A. Create an IAM group that has a conditional Allow policy that requires the application name tag to be
specified for resources to be created.
• B. Create a cross-account role that has a Deny policy for any resource that has the application name tag.
• C. Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources
in all accounts.
• D. Create a tag policy in Organizations that has a list of allowed application names.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by rcptryk at Dec. 2, 2023, 11:45 p.m.

Question #: : 669

A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage
the master user password by rotating the password every 30 days.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every
30 days.
• B. Use the modify-db-instance command in the AWS CLI to change the password.
• C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
• D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate
password rotation.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by rcptryk at Dec. 2, 2023, 11:40 p.m.
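
A sketch of option C using boto3, assuming the master user credential is already stored as a Secrets Manager secret and a rotation Lambda function (for example, one created from the Secrets Manager RDS rotation template) is available; both ARNs are placeholders:

import boto3

secretsmanager = boto3.client("secretsmanager")

# Turn on automatic rotation of the RDS for PostgreSQL master user password every 30 days.
secretsmanager.rotate_secret(
    SecretId="arn:aws:secretsmanager:us-east-1:111122223333:secret:rds-master",          # placeholder
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rds-rotation",     # placeholder
    RotationRules={"AutomaticallyAfterDays": 30},
)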

Question #: : 670

A company performs tests on an application that uses an Amazon DynamoDB table. The tests run for 4 hours
once a week. The company knows how many read and write operations the application performs to the table each
second during the tests. The company does not currently use DynamoDB for any other use case. A solutions
architect needs to optimize the costs for the table.
Which solution will meet these requirements?
• A. Choose on-demand mode. Update the read and write capacity units appropriately.
• B. Choose provisioned mode. Update the read and write capacity units appropriately.
• C. Purchase DynamoDB reserved capacity for a 1-year term.
• D. Purchase DynamoDB reserved capacity for a 3-year term.

Hide Answer
Suggested Answer: A

Community vote distribution


B (62%)
A (38%)
by meenkaza at Dec. 29, 2023, 4:59 p.m.

Question #: : 671

A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments
of its AWS costs. The company recently identified unusual spending.

The company needs a solution to prevent unusual spending. The solution must monitor costs and notify
responsible stakeholders in the event of unusual spending.

Which solution will meet these requirements?


• A. Use an AWS Budgets template to create a zero spend budget.
• B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
• C. Create AWS Pricing Calculator estimates for the current running workload pricing details.
• D. Use Amazon CloudWatch to monitor costs and to identify unusual spending.

Hide Answer
Suggested Answer: C

Community vote distribution


B (100%)
by meenkaza at Dec. 29, 2023, 5:01 p.m.

Question #: : 672
A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign.
The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine
whether to process the data further in the data pipeline.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
• B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
• C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
• D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use
SQL to query the data.

Hide Answer
Suggested Answer: D

Community vote distribution


B (100%)
by meenkaza at Dec. 29, 2023, 5:03 p.m.
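
A sketch of option B using boto3; the bucket names, crawler role, database, and query are placeholders:

import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream data in S3 to populate the Glue Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",                               # hypothetical name
    Role="arn:aws:iam::111122223333:role/glue-crawler-role",  # placeholder
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://example-clickstream-bucket/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the table exists, query the data in place with Athena (serverless, pay per query).
athena.start_query_execution(
    QueryString="SELECT page, COUNT(*) AS hits FROM clicks GROUP BY page ORDER BY hits DESC LIMIT 10",
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)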

Question #: : 673

A company runs an SMB file server in its data center. The file server stores large files that the company frequently
accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files
with a maximum retrieval time of 24 hours.

Which solution will meet these requirements?


• A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
• B. Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle
policy to transition the data to S3 Glacier Deep Archive after 7 days.
• C. Create an Amazon FSx File Gateway to increase the company's storage space. Create an Amazon S3
Lifecycle policy to transition the data after 7 days.
• D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to
S3 Glacier Flexible Retrieval after 7 days.

Hide Answer
Suggested Answer: D

Community vote distribution


B (75%)
C (25%)
by meenkaza at Dec. 29, 2023, 5:07 p.m.

Question #: : 674

A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a
database that runs on an Amazon RDS for PostgreSQL DB instance. The application performs slowly when traffic
increases. The database experiences a heavy read load during periods of high traffic.

Which actions should a solutions architect take to resolve these performance issues? (Choose two.)
• A. Turn on auto scaling for the DB instance.
• B. Create a read replica for the DB instance. Configure the application to send read traffic to the read
replica.
• C. Convert the DB instance to a Multi-AZ DB instance deployment. Configure the application to send
read traffic to the standby DB instance.
• D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the
ElastiCache cluster.
• E. Configure the Auto Scaling group subnets to ensure that the EC2 instances are provisioned in the
same Availability Zone as the DB instance.

Hide Answer
Suggested Answer: AC

Community vote distribution


BD (79%)
AB (21%)
by meenkaza at Dec. 29, 2023, 5:10 p.m.

Question #: : 675

A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an
application. The company creates one snapshot of each EBS volume every day to meet compliance requirements.
The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots.
The solution must not change the administrative rights of the storage administrator user.

Which solution will meet these requirements with the LEAST administrative effort?
• A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance.
Use the AWS CLI from the new EC2 instance to delete snapshots.
• B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator
user.
• C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
• D. Lock the EBS snapshots to prevent deletion.

Hide Answer
Suggested Answer: C

Community vote distribution


D (100%)
by meenkaza at Dec. 29, 2023, 5:11 p.m.

Question #: : 676

A company's application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and
databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and
from the network interfaces in near real time in its Amazon VPC. The company wants to send the information to
Amazon OpenSearch Service for analysis.

Which solution will meet these requirements?


• A. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to
the log group. Use Amazon Kinesis Data Streams to stream the logs from the log group to OpenSearch Service.
• B. Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to
the log group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.
• C. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use
Amazon Kinesis Data Streams to stream the logs from the trail to OpenSearch Service.
• D. Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use
Amazon Kinesis Data Firehose to stream the logs from the trail to OpenSearch Service.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by meenkaza at Dec. 29, 2023, 5:13 p.m.
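
A sketch of option B using boto3, assuming a Kinesis Data Firehose delivery stream that targets OpenSearch Service already exists; the VPC ID, role ARNs, and stream ARN are placeholders:

import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# Send VPC Flow Logs for the VPC's network interfaces to a CloudWatch Logs log group.
ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],                                      # placeholder
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",   # placeholder
)

# Stream the log group to the existing Firehose delivery stream, which delivers to OpenSearch.
logs.put_subscription_filter(
    logGroupName="vpc-flow-logs",
    filterName="to-opensearch",
    filterPattern="",  # forward every log event
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/flow-to-opensearch",  # placeholder
    roleArn="arn:aws:iam::111122223333:role/cwl-to-firehose-role",              # placeholder
)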

Question #: : 677
A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon
EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances.

The company needs a dedicated EKS cluster for development work. The company will use the development cluster
infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.

Which solution will meet these requirements MOST cost-effectively?


• A. Create a managed node group that contains only Spot Instances.
• B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision
the second node group with Spot Instances.
• C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the
user data to add the nodes to the EKS cluster.
• D. Create a managed node group that contains only On-Demand Instances.

Hide Answer
Suggested Answer: D

Community vote distribution


A (64%)
B (36%)
by Naijaboy99 at Dec. 30, 2023, 4:41 a.m.

Question #: : 678

A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The
company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal
effort for any data that must be encrypted.

Which solution will meet these requirements?


• A. Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the
sensitive data.
• B. Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new
key to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
• C. Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key
to encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
• D. Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed
keys. Upload the encrypted objects back into Amazon S3.

Hide Answer
Suggested Answer: A

Community vote distribution


B (92%)
8%
by meenkaza at Dec. 29, 2023, 5:19 p.m.

Question #: : 679

A company wants to back up its on-premises virtual machines (VMs) to AWS. The company's backup solution
exports on-premises backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days
and must be automatically deleted after 30 days.

Which combination of steps will meet these requirements? (Choose three.)


• A. Create an S3 bucket that has S3 Object Lock enabled.
• B. Create an S3 bucket that has object versioning enabled.
• C. Configure a default retention period of 30 days for the objects.
• D. Configure an S3 Lifecycle policy to protect the objects for 30 days.
• E. Configure an S3 Lifecycle policy to expire the objects after 30 days.
• F. Configure the backup solution to tag the objects with a 30-day retention period

Hide Answer
Suggested Answer: CEF

Community vote distribution


ACE (70%)
ADE (30%)
by meenkaza at Dec. 29, 2023, 5:22 p.m.
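
A sketch of the lifecycle expiration rule from option E using boto3; the bucket name and prefix are placeholders:

import boto3

s3 = boto3.client("s3")

# Automatically delete backup objects 30 days after they are created.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-vm-backups",               # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "expire-backups-after-30-days",
            "Filter": {"Prefix": "backups/"},  # placeholder prefix
            "Status": "Enabled",
            "Expiration": {"Days": 30},
        }]
    },
)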

Question #: : 680

A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon
EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original
S3 bucket consistently. The copied files should be overwritten only if the source file changes.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create
a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has
changed.
• B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event
notification to invoke the function when files are created and changed in Amazon S3. Configure the function to
copy files to the file system and the destination S3 bucket.
• C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create
a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
• D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create
a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and
the mounted file system.

Hide Answer
Suggested Answer: D

Community vote distribution


A (100%)
by meenkaza at Dec. 29, 2023, 5:26 p.m.
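
A sketch of the DataSync task from option A using boto3, assuming the source and destination locations have already been created; the location ARNs and schedule are placeholders. One task is created per destination, so the same pattern applies to the EFS location:

import boto3

datasync = boto3.client("datasync")

datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-source",      # placeholder
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dest",   # placeholder
    Name="copy-changed-objects",
    Options={
        "TransferMode": "CHANGED",   # copy only data that differs from the destination
        "OverwriteMode": "ALWAYS",   # overwrite the destination copy when the source changes
    },
    Schedule={"ScheduleExpression": "rate(1 hour)"},  # placeholder frequency
)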

681-699

Question #: : 681

A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes.
The company must ensure that all data is encrypted at rest by using AWS Key Management Service (AWS KMS).
The company must be able to control rotation of the encryption keys.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create a customer managed key. Use the key to encrypt the EBS volumes.
• B. Use an AWS managed key to encrypt the EBS volumes. Use the key to configure automatic key rotation.
• C. Create an external KMS key with imported key material. Use the key to encrypt the EBS volumes.
• D. Use an AWS owned key to encrypt the EBS volumes.

Hide Answer
Suggested Answer: C

Community vote distribution


A (91%)
9%
by meenkaza at Dec. 29, 2023, 5:28 p.m.
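
A sketch of option A using boto3; the Availability Zone and volume parameters are placeholders:

import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

# Create a customer managed KMS key and turn on automatic key rotation,
# which the company controls (unlike AWS managed or AWS owned keys).
key = kms.create_key(Description="EBS encryption key")["KeyMetadata"]
kms.enable_key_rotation(KeyId=key["KeyId"])

# Use the key when creating EBS volumes so data is encrypted at rest.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder
    Size=100,                       # GiB, placeholder
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId=key["KeyId"],
)
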
Question #: : 682

A company needs a solution to enforce data encryption at rest on Amazon EC2 instances. The solution must
automatically identify noncompliant resources and enforce compliance policies on findings.

Which solution will meet these requirements with the LEAST administrative overhead?
• A. Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon
EBS) volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of
unencrypted EBS volumes.
• B. Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block
Store (Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and
remediation of unencrypted EBS volumes.
• C. Use Amazon Macie to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use
AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.
• D. Use Amazon inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes.
Use AWS Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.

Hide Answer
Suggested Answer: B

Community vote distribution


A (100%)
by meenkaza at Dec. 29, 2023, 5:32 p.m.

Question #: : 683

A company is migrating its multi-tier on-premises application to AWS. The application consists of a single-node
MySQL database and a multi-node web tier. The company must minimize changes to the application during the
migration. The company wants to improve application resiliency after the migration.

Which combination of steps will meet these requirements? (Choose two.)


• A. Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load
Balancer.
• B. Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load
Balancer.
• C. Migrate the database to an Amazon RDS Multi-AZ deployment.
• D. Migrate the web tier to an AWS Lambda function.
• E. Migrate the database to an Amazon DynamoDB table.

Hide Answer
Suggested Answer: CE

Community vote distribution


AC (100%)
by meenkaza at Dec. 29, 2023, 5:34 p.m.

Question #: : 684

A company wants to migrate its web applications from on premises to AWS. The company is located close to the
eu-central-1 Region. Because of regulations, the company cannot launch some of its applications in eu-central-1.
The company wants to achieve single-digit millisecond latency.

Which solution will meet these requirements?


• A. Deploy the applications in eu-central-1. Extend the company’s VPC from eu-central-1 to an edge
location in Amazon CloudFront.
• B. Deploy the applications in AWS Local Zones by extending the company's VPC from eu-central-1 to
the chosen Local Zone.
• C. Deploy the applications in eu-central-1. Extend the company’s VPC from eu-central-1 to the regional
edge caches in Amazon CloudFront.
• D. Deploy the applications in AWS Wavelength Zones by extending the company’s VPC from eu-central-
1 to the chosen Wavelength Zone.

Hide Answer
Suggested Answer: B

Community vote distribution


B (64%)
D (36%)
by meenkaza at Dec. 29, 2023, 5:37 p.m.

Question #: : 685

A company’s ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a
private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database
performance and ensure that the Lambda invocations do not overload the database with too many connections.

What should a solutions architect do to meet these requirements?


• A. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
• B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
• C. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
• D. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Moon239 at Feb. 7, 2024, 6:34 p.m.

Question #: : 686

A company is creating an application. The company stores data from tests of the application in multiple on-
premises locations.

The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The
number of accounts and VPCs will increase during the next year. The network architecture must simplify the
administration of new connections and must provide the ability to scale.

Which solution will meet these requirements with the LEAST administrative overhead?
• A. Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the
on-premises locations.
• B. Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection
to connect all VPCs and on-premises locations.
• C. Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments
for the on-premises connections.
• D. Create an AWS Direct Connect connection between the on-premises locations and a central VPC.
Connect the central VPC to other VPCs by using peering connections.

Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 3:23 p.m.

Question #: : 687

A company that uses AWS needs a solution to predict the resources needed for manufacturing processes each
month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company
has no machine learning (ML) experience and wants to use a managed service for the training and predictions.

Which combination of steps will meet these requirements? (Choose two.)


• A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
• B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
• C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to
create predictions based on the inputs.
• D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to
create a prediction based on the inputs.
• E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.

Hide Answer
Suggested Answer: CD

Community vote distribution


DE (63%)
BD (25%)
13%
by Andy_09 at Feb. 5, 2024, 3:32 p.m.

Question #: : 688

A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On)
and AWS Control Tower are configured for the accounts. The company wants to manage multiple user
permissions across all the accounts.

The permissions will be used by multiple IAM users and must be split between the developer and administrator
teams. Each team requires different permissions. The company wants a solution that also covers new users who
are hired onto either team.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create individual users in IAM Identity Center for each account. Create separate developer and
administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Create a custom IAM
policy for each group to set fine-grained permissions.
• B. Create individual users in IAM Identity Center for each account. Create separate developer and
administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Attach AWS managed
IAM policies to each user as needed for fine-grained permissions.
• C. Create individual users in IAM Identity Center. Create new developer and administrator groups in
IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign
the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users
are hired, add them to the appropriate group.
• D. Create individual users in IAM Identity Center. Create new permission sets that include the
appropriate IAM policies for each user. Assign the users to the appropriate accounts. Grant additional IAM
permissions to the users from within specific accounts. When new users are hired, add them to IAM Identity
Center and assign them to the accounts.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 3:35 p.m.

Question #: : 689

A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The
company also wants to minimize the cost and configuration effort required to operate the volume encryption check.

Which solution will meet these requirements?


• A. Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use
Amazon EventBridge to schedule an AWS Lambda function to run the API calls.
• B. Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run the
API calls on an AWS Fargate task.
• C. Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS
volumes. Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt the untagged
resources manually.
• D. Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the
volume if it is not encrypted.

Hide Answer
Suggested Answer: C
Community vote distribution
D (100%)
by Andy_09 at Feb. 5, 2024, 3:43 p.m.
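
A sketch of option D using boto3, registering the AWS managed Config rule that flags unencrypted EBS volumes:

import boto3

config = boto3.client("config")

# The AWS managed rule ENCRYPTED_VOLUMES evaluates attached EBS volumes
# and marks any unencrypted volume as NON_COMPLIANT.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
    }
)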

Question #: : 690

A company regularly uploads GB-sized files to Amazon S3. After the company uploads the files, the company uses
a fleet of Amazon EC2 Spot Instances to transcode the file format. The company needs to scale throughput when
the company uploads data from the on-premises data center to Amazon S3 and when the company downloads
data from Amazon S3 to the EC2 instances.

Which solutions will meet these requirements? (Choose two.)


• A. Use the S3 bucket access point instead of accessing the S3 bucket directly.
• B. Upload the files into multiple S3 buckets.
• C. Use S3 multipart uploads.
• D. Fetch multiple byte-ranges of an object in parallel.
• E. Add a random prefix to each object when uploading the files.

Hide Answer
Suggested Answer: AC

Community vote distribution


CD (100%)
by Andy_09 at Feb. 5, 2024, 4:24 p.m.
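
A sketch of options C and D using boto3's managed transfers; the thresholds, bucket, and file names are placeholders. upload_file splits large objects into parallel multipart uploads, and download_file fetches multiple byte ranges of the object in parallel:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

transfer_config = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # use multipart for objects larger than 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MB parts
    max_concurrency=16,                    # parts / byte ranges transferred in parallel
)

# Upload from the data center to S3 with multipart uploads.
s3.upload_file("video.raw", "example-bucket", "uploads/video.raw", Config=transfer_config)

# Download to the EC2 Spot Instances with parallel byte-range GETs.
s3.download_file("example-bucket", "uploads/video.raw", "/tmp/video.raw", Config=transfer_config)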

Question #: : 691

A solutions architect is designing a shared storage solution for a web application that is deployed across multiple
Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The
company plans to make frequent changes to the content. The solution must have strong consistency in returning
the new content as soon as the changes occur.

Which solutions meet these requirements? (Choose two.)


• A. Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI)
block storage that is mounted to the individual EC2 instances.
• B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the
individual EC2 instances.
• C. Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the
individual EC2 instances.
• D. Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto
Scaling group.
• E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header
to no-cache. Use Amazon CloudFront to deliver the content.

Hide Answer
Suggested Answer: AD

Community vote distribution


BE (100%)
by Andy_09 at Feb. 5, 2024, 4:29 p.m.

Question #: : 692

A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route
53 will be used to distribute traffic between these Regions.

Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
• A. Create an A record with a latency policy.
• B. Create an A record with a geolocation policy.
• C. Create a CNAME record with a failover policy.
• D. Create a CNAME record with a geoproximity policy.

Hide Answer
Suggested Answer: D

Community vote distribution


A (60%)
D (20%)
B (20%)
by Andy_09 at Feb. 5, 2024, 4:31 p.m.

Question #: : 693

A company has a web application that includes an embedded NoSQL database. The application runs on Amazon
EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling
group in a single Availability Zone.

A recent increase in traffic requires the application to be highly available and for the database to be eventually
consistent.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Replace the ALB with a Network Load Balancer. Maintain the embedded NoSQL database with its
replication service on the EC2 instances.
• B. Replace the ALB with a Network Load Balancer. Migrate the embedded NoSQL database to Amazon
DynamoDB by using AWS Database Migration Service (AWS DMS).
• C. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Maintain the
embedded NoSQL database with its replication service on the EC2 instances.
• D. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the
embedded NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).

Hide Answer
Suggested Answer: A

Community vote distribution


D (100%)
by Andy_09 at Feb. 5, 2024, 4:32 p.m.

Question #: : 694

A company is building a shopping application on AWS. The application offers a catalog that changes once each
month and needs to scale with traffic volume. The company wants the lowest possible latency from the application.
Data from each user's shopping cart needs to be highly available. User session data must be available even if the
user is disconnected and reconnects.

What should a solutions architect do to ensure that the shopping cart data is preserved at all times?
• A. Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for
access to the catalog in Amazon Aurora.
• B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and
shopping cart data from the user's session.
• C. Configure Amazon OpenSearch Service to cache catalog data from Amazon DynamoDB and shopping
cart data from the user's session.
• D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the
catalog and shopping cart. Configure automated snapshots.
Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 4:34 p.m.

Question #: : 695

A company is building a microservices-based application that will be deployed on Amazon Elastic Kubernetes
Service (Amazon EKS). The microservices will interact with each other. The company wants to ensure that the
application is observable to identify performance issues in the future.

Which solution will meet these requirements?


• A. Configure the application to use Amazon ElastiCache to reduce the number of requests that are sent
to the microservices.
• B. Configure Amazon CloudWatch Container Insights to collect metrics from the EKS clusters.
Configure AWS X-Ray to trace the requests between the microservices.
• C. Configure AWS CloudTrail to review the API calls. Build an Amazon QuickSight dashboard to observe
the microservice interactions.
• D. Use AWS Trusted Advisor to understand the performance of the application.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 4:36 p.m.

Question #: : 696

A company needs to provide customers with secure access to its data. The company processes customer data and
stores the results in an Amazon S3 bucket.

All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each
customer must be able to access only their data from their AWS account. Company employees must not be able
to access the data.
Which solution will meet these requirements?
• A. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-
side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the
customer provides.
• B. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the
data server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the
customer provides.
• C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the
data server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the
customer provides.
• D. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-
side. In the public certificate policy, deny access to the certificate for all principals except an IAM role that the
customer provides.

Hide Answer
Suggested Answer: D

Community vote distribution


C (67%)
B (33%)
by Andy_09 at Feb. 5, 2024, 4:40 p.m.

Question #: : 697

A solutions architect creates a VPC that includes two public subnets and two private subnets. A corporate security
mandate requires the solutions architect to launch all Amazon EC2 instances in a private subnet. However, when
the solutions architect launches an EC2 instance that runs a web server on ports 80 and 443 in a private subnet,
no external internet traffic can connect to the server.

What should the solutions architect do to resolve this issue?


• A. Attach the EC2 instance to an Auto Scaling group in a private subnet. Ensure that the DNS record for
the website resolves to the Auto Scaling group identifier.
• B. Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2
instance to the target group that is associated with the ALB. Ensure that the DNS record for the website resolves to
the ALB.
• C. Launch a NAT gateway in a private subnet. Update the route table for the private subnets to add a
default route to the NAT gateway. Attach a public Elastic IP address to the NAT gateway.
• D. Ensure that the security group that is attached to the EC2 instance allows HTTP traffic on port 80
and HTTPS traffic on port 443. Ensure that the DNS record for the website resolves to the public IP address of
the EC2 instance.
Hide Answer
Suggested Answer: D

Community vote distribution


B (88%)
13%
by Andy_09 at Feb. 5, 2024, 4:43 p.m.

Question #: : 698

A company is deploying a new application to Amazon Elastic Kubernetes Service (Amazon EKS) with an AWS
Fargate cluster. The application needs a storage solution for data persistence. The solution must be highly
available and fault tolerant. The solution also must be shared between multiple application containers.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where EKS
worker nodes are placed. Register the volumes in a StorageClass object on an EKS cluster. Use EBS Multi-Attach
to share the data between containers.
• B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a
StorageClass object on an EKS cluster. Use the same file system for all containers.
• C. Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume in a StorageClass
object on an EKS cluster. Use the same volume for all containers.
• D. Create Amazon Elastic File System (Amazon EFS) file systems in the same Availability Zones where
EKS worker nodes are placed. Register the file systems in a StorageClass object on an EKS cluster. Create an AWS
Lambda function to synchronize the data between file systems.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 4:45 p.m.

Question #: : 699

A company has an application that uses Docker containers in its local data center. The application runs on a
container host that stores persistent data in a volume on the host. The container instances use the stored persistent
data.
The company wants to move the application to a fully managed service because the company does not want to
manage any servers or storage infrastructure.

Which solution will meet these requirements?


• A. Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an Amazon
Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the EBS volume as a
persistent volume mounted in the containers.
• B. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an
Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted
in the containers.
• C. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an
Amazon S3 bucket. Map the S3 bucket as a persistent storage volume mounted in the containers.
• D. Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create an
Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted
in the containers.

Hide Answer
Suggested Answer: B

Community vote distribution


B (75%)
C (25%)
by Andy_09 at Feb. 5, 2024, 4:49 p.m.
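
A sketch of option B using boto3, mounting an existing EFS file system into a Fargate task definition; the container image, file system ID, and execution role ARN are placeholders:

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="app-with-efs",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",   # placeholder
    containerDefinitions=[{
        "name": "app",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:latest",   # placeholder
        "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/data"}],
    }],
    volumes=[{
        "name": "shared-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",  # placeholder
            "transitEncryption": "ENABLED",
        },
    }],
)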

700-799

Question #: : 700
A gaming company wants to launch a new internet-facing application in multiple AWS Regions. The application
will use the TCP and UDP protocols for communication. The company needs to provide high availability and
minimum latency for global users.

Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
• A. Create internal Network Load Balancers in front of the application in each Region.
• B. Create external Application Load Balancers in front of the application in each Region.
• C. Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.
• D. Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic.
• E. Configure Amazon CloudFront to handle the traffic and route requests to the application in each
Region
Hide Answer
Suggested Answer: BC

Community vote distribution


AC (100%)
by Andy_09 at Feb. 5, 2024, 4:52 p.m.

Explanation:
Use AWS Global Accelerator:

AWS Global Accelerator is a service that improves the availability and performance of applications for local or
global users.
Configure AWS Global Accelerator with TCP and UDP listeners to route traffic to the application deployed in
multiple AWS Regions.
AWS Global Accelerator intelligently routes traffic to the nearest healthy endpoint based on latency and health
checks, minimizing latency for global users and providing high availability.
Register the application's load balancer in each AWS Region as an endpoint in the accelerator's endpoint groups.

Deploy Network Load Balancers (NLBs):

Use Network Load Balancers to distribute TCP and UDP traffic to the application instances within each AWS
Region.
Network Load Balancers are highly scalable and capable of handling millions of requests per second with low
latency.
Configure the NLB in each AWS Region with a target group that contains that Region's application instances to
ensure high availability and fault tolerance.

By combining AWS Global Accelerator with Network Load Balancers, the gaming company can achieve high
availability and minimum latency for global users accessing the application over both TCP and UDP. Global
Accelerator routes traffic to the nearest healthy endpoint, while the Network Load Balancer in each AWS Region
distributes traffic to the instances there, ensuring scalability and fault tolerance.
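
A sketch of the accelerator side using boto3, assuming a Network Load Balancer already exists in each Region; the ports, Region, and load balancer ARN are placeholders (the Global Accelerator API is served from the us-west-2 Region):

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", Enabled=True)["Accelerator"]

# One listener per protocol; the game port is a placeholder. Repeat for TCP if needed.
listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 7000, "ToPort": 7000}],
)["Listener"]

# Register the Regional NLB as the endpoint for that Region; repeat per Region.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",  # placeholder Region
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/abc123",  # placeholder
        "Weight": 128,
    }],
)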

Question #: : 701

A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer
(ALB). The application's users have reported sporadic performance, which appears to be related to DDoS attacks
originating from random IP addresses. The city needs a solution that requires minimal configuration changes and
provides an audit trail for the DDoS sources.

Which solution meets these requirements?


• A. Enable an AWS WAF web ACL on the ALB, and configure rules to block traffic from unknown sources.
• B. Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate
mitigating controls into the service.
• C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate
mitigating controls into the service.
• D. Create an Amazon CloudFront distribution for the application, and set the ALB as the origin. Enable
an AWS WAF web ACL on the distribution, and configure rules to block traffic from unknown sources

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 5:01 p.m.
Explanation:
Enable AWS Shield Advanced for the Application Load Balancer (ALB).
1. AWS Shield Advanced provides comprehensive DDoS protection for AWS resources, including Amazon
EC2 instances behind an Application Load Balancer (ALB). It offers advanced detection and mitigation
capabilities to protect against volumetric, state-exhaustion, and application layer attacks.
2. Minimal Configuration Changes: Enabling AWS Shield Advanced for the ALB requires minimal
configuration changes. It involves subscribing to the service and enabling it for the desired AWS resources, such
as the ALB. There is no need to modify the application code or architecture.
3. Audit Trail for DDoS Sources: AWS Shield Advanced provides detailed attack logs and reports, including
information about the sources of DDoS attacks. The city can use these logs and reports to analyze attack patterns,
identify the IP addresses responsible for the attacks, and take appropriate measures to mitigate them. This audit
trail helps in understanding the nature and scope of the DDoS attacks and facilitates proactive security measures.
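
A sketch of the Shield Advanced side using boto3; the subscription is account-wide, and the ALB and role ARNs are placeholders (the ongoing engagement with the DDoS Response Team is coordinated through AWS Support):

import boto3

shield = boto3.client("shield")

# Subscribe the account to AWS Shield Advanced (this starts a paid subscription).
shield.create_subscription()

# Protect the Application Load Balancer.
shield.create_protection(
    Name="alb-ddos-protection",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web-alb/abc123",  # placeholder
)

# Authorize the AWS DDoS Response Team (DRT) to act on the account during an attack.
shield.associate_drt_role(
    RoleArn="arn:aws:iam::111122223333:role/drt-access-role"  # placeholder
)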

Question #: : 702

A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized devices.
The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and gas
deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-
throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the
devices back to AWS.

Which solution will meet these requirements?


• A. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway
file gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
• B. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre
file system, and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster
instances.
• C. Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import
the data into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system
from the HPC cluster instances.
• D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file
system. Access the FSx for Lustre file system from the HPC cluster instances.

Hide Answer
Suggested Answer: C

Community vote distribution


D (64%)
B (36%)
by Andy_09 at Feb. 5, 2024, 5:02 p.m.

Explanation:
Designed for HPC: FSx for Lustre is a high-performance parallel file system specifically built for HPC workloads.
It offers low latency and high throughput, ideal for the company's needs.
Direct Data Import: Importing data directly into the FSx for Lustre file system eliminates the need for
intermediate storage solutions like S3 or EFS, minimizing latency.

Question #: : 703

A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data
to Amazon S3.

Which solution meets these requirements and is MOST cost-effective?


• A. Set up AWS Glue to copy the data from the on-premises servers to Amazon S3.
• B. Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.
• C. Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3.
• D. Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy
the data to Amazon S3.

Hide Answer
Suggested Answer: D

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 5:05 p.m.
Explanation:
AWS DataSync:
AWS DataSync is a data transfer service that makes it easy to automate and accelerate copying data between on-
premises storage systems and Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server.
By setting up an AWS DataSync agent on the on-premises servers, the company can efficiently and securely sync
small amounts of data to Amazon S3.
DataSync optimizes data transfer and minimizes costs associated with data transfer and storage, making it a cost-
effective solution for periodic backups.
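
A sketch of option B using boto3, assuming a DataSync agent has already been deployed and activated on premises; the hostname, paths, ARNs, and schedule are placeholders:

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export, reached through the on-premises DataSync agent.
nfs_location = datasync.create_location_nfs(
    ServerHostname="nfs.example.internal",  # placeholder
    Subdirectory="/exports/backups",        # placeholder
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-example"]},  # placeholder
)["LocationArn"]

# Destination: an S3 bucket, accessed through an IAM role that DataSync assumes.
s3_location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-backup-bucket",  # placeholder
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/datasync-s3-role"},  # placeholder
)["LocationArn"]

# Periodic backup task for the small amounts of changed data.
datasync.create_task(
    SourceLocationArn=nfs_location,
    DestinationLocationArn=s3_location,
    Name="nfs-to-s3-backup",
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # placeholder: daily at 02:00 UTC
)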

Question #: : 704

An online video game company must maintain ultra-low latency for its game servers. The game servers run on
Amazon EC2 instances. The company needs a solution that can handle millions of UDP internet traffic requests
each second.

Which solution will meet these requirements MOST cost-effectively?


• A. Configure an Application Load Balancer with the required protocol and ports for the internet traffic.
Specify the EC2 instances as the targets.
• B. Configure a Gateway Load Balancer for the internet traffic. Specify the EC2 instances as the targets.
• C. Configure a Network Load Balancer with the required protocol and ports for the internet traffic.
Specify the EC2 instances as the targets.
• D. Launch an identical set of game servers on EC2 instances in separate AWS Regions. Route internet
traffic to both sets of EC2 instances.

Hide Answer
Suggested Answer: A

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 5:06 p.m.
Explain:
Network Load Balancer (NLB):
• NLB is designed to handle high-throughput, low-latency traffic and is well-suited for scenarios requiring
ultra-low latency, such as online gaming.
• NLB operates at the connection level (Layer 4) of the OSI model, making it ideal for UDP traffic.
• By configuring an NLB with the required protocol and ports for UDP internet traffic and specifying the
EC2 instances as targets, the company can efficiently distribute incoming traffic while maintaining ultra-low
latency.
• NLB offers high throughput and low-latency load balancing without adding significant overhead, making
it a cost-effective solution for handling millions of UDP internet traffic requests each second.
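A minimal boto3 sketch of the NLB configuration described above, assuming placeholder subnet, VPC, and instance IDs and UDP port 7777 for the game traffic:

import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing Network Load Balancer for the game servers.
nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Subnets=["subnet-aaa111", "subnet-bbb222"],
    Scheme="internet-facing",
    Type="network",
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# UDP target group pointing at the EC2 game servers.
tg = elbv2.create_target_group(
    Name="game-servers-udp",
    Protocol="UDP",
    Port=7777,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": "i-0123456789abcdef0"}])

# UDP listener that forwards traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="UDP",
    Port=7777,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)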

Question #: : 705
A company runs a three-tier application in a VPC. The database tier uses an Amazon RDS for MySQL DB instance.

The company plans to migrate the RDS for MySQL DB instance to an Amazon Aurora PostgreSQL DB cluster.
The company needs a solution that replicates the data changes that happen during the migration to the new
database.

Which combination of steps will meet these requirements? (Choose two.)


• A. Use AWS Database Migration Service (AWS DMS) Schema Conversion to transform the database
objects.
• B. Use AWS Database Migration Service (AWS DMS) Schema Conversion to create an Aurora
PostgreSQL read replica on the RDS for MySQL DB instance.
• C. Configure an Aurora MySQL read replica for the RDS for MySQL DB instance.
• D. Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to
migrate the data.
• E. Promote the Aurora PostgreSQL read replica to a standalone Aurora PostgreSQL DB cluster when
the replica lag is zero.

Hide Answer
Suggested Answer: AE

Community vote distribution


AD (100%)
by Andy_09 at Feb. 5, 2024, 5:11 p.m.
Explanation:

A. Using AWS DMS Schema Conversion to transform the database objects:

AWS DMS Schema Conversion Tool (SCT) helps convert the database schema from MySQL to PostgreSQL,
ensuring compatibility and proper structure in the target Aurora PostgreSQL DB cluster.
D. Defining an AWS DMS task with change data capture (CDC) to migrate the data:

By defining an AWS DMS task with CDC enabled, the data changes that occur during the migration process will
be captured and replicated to the target Aurora PostgreSQL DB cluster in real-time.
This ensures that any changes made to the MySQL database during the migration process are replicated to the
Aurora PostgreSQL DB cluster, allowing for a seamless transition without data loss or inconsistencies.
Combining these two steps ensures that both the database schema and data changes are properly replicated from
the RDS for MySQL DB instance to the Aurora PostgreSQL DB cluster, facilitating a smooth migration process
while minimizing downtime and ensuring data consistency.
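For step D, a hedged boto3 sketch of a full-load-plus-CDC replication task; the endpoint and replication instance ARNs are placeholders, and the schema conversion from step A is handled separately by the DMS Schema Conversion tooling:

import boto3
import json

dms = boto3.client("dms")

# Migrate the existing data, then keep replicating ongoing changes (CDC).
dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)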


Question #: : 706

A company hosts a database that runs on an Amazon RDS instance that is deployed to multiple Availability Zones.
The company periodically runs a script against the database to report new entries that are added to the database.
The script that runs against the database negatively affects the performance of a critical application. The company
needs to improve application performance with minimal costs.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Add functionality to the script to identify the instance that has the fewest active connections.
Configure the script to read from that instance to report the total new entries.
• B. Create a read replica of the database. Configure the script to query only the read replica to report the
total new entries.
• C. Instruct the development team to manually export the new entries for the day in the database at the
end of each day.
• D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by mestule at Feb. 6, 2024, 11:54 p.m.
Explain:
Among the options provided, the solution that will meet the requirements with the least operational overhead is:
Option B. Create a read replica of the database. Configure the script to query only the read replica to report the
total new entries.
Explanation:
Creating a Read Replica: By creating a read replica of the Amazon RDS database instance, the company can offload
read-heavy workloads from the primary database to the read replica. This helps improve the performance of the
critical application hosted on the primary database.
Minimal Operational Overhead: Setting up a read replica in Amazon RDS is straightforward and requires minimal
operational overhead. Once the read replica is configured, it automatically replicates data from the primary
database instance. There's no need to modify the script or add additional logic to identify the instance with the
fewest active connections, as the read replica is automatically available for querying.
By configuring the reporting script to query only the read replica, the company can reduce the load on the primary
database and minimize the impact on the performance of the critical application, all with minimal operational
overhead. This solution ensures that the reporting script can continue to function without negatively affecting the
primary database or the critical application.
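A minimal boto3 sketch of option B, assuming the primary instance is named prod-mysql (all identifiers are placeholders); the reporting script would then point its connection string at the replica endpoint printed at the end:

import boto3

rds = boto3.client("rds")

# Create a read replica that the reporting script will query instead of the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="prod-mysql-reporting-replica",
    SourceDBInstanceIdentifier="prod-mysql",
)

# Wait for the replica, then print its endpoint for the reporting script to use.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="prod-mysql-reporting-replica")
replica = rds.describe_db_instances(DBInstanceIdentifier="prod-mysql-reporting-replica")
print(replica["DBInstances"][0]["Endpoint"]["Address"])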


Question #: : 707

A company is using an Application Load Balancer (ALB) to present its application to the internet. The company
finds abnormal traffic access patterns across the application. A solutions architect needs to improve visibility into
the infrastructure to help the company understand these abnormalities better.

What is the MOST operationally efficient solution that meets these requirements?
• A. Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information.
• B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
• C. Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the
relevant information.
• D. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic
access log information.

Hide Answer
Suggested Answer: C

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 5:20 p.m.
Explain:
Option B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.

Explanation:

ALB Access Logging: By enabling access logging on the Application Load Balancer (ALB), the company can
capture detailed information about incoming requests, including client IP addresses, request times, request paths,
response codes, etc.

Amazon S3: Enabling ALB access logging sends the log files to an Amazon S3 bucket, providing a durable and
scalable storage solution for the logs.

Amazon Athena: Amazon Athena allows for interactive querying of data stored in Amazon S3 using standard SQL
syntax. By creating a table in Athena that points to the ALB access logs stored in S3, the company can easily query
and analyze the logs to gain insights into abnormal traffic access patterns without the need for manual processing
or parsing.
Option A (Create a table in Amazon Athena for AWS CloudTrail logs) does not directly address the abnormal
traffic access patterns observed across the application. CloudTrail logs provide information about AWS API
activity and management events, which may not be relevant for diagnosing application-level abnormalities.

Option C (Manually searching through ALB access log files) is highly inefficient and impractical, especially for
large volumes of log data. It would require significant manual effort and is prone to errors.

Option D (Using Amazon EMR on a dedicated EC2 instance to directly query the ALB) introduces unnecessary
complexity and operational overhead. Amazon EMR is a managed big data processing service primarily used for
processing large datasets, which might be overkill for this scenario. Additionally, setting up and managing an EMR
cluster adds operational complexity compared to leveraging native AWS services like Athena.
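A hedged boto3 sketch of option B: turn on ALB access logging to S3, then query the logs with Athena. The ALB ARN, bucket names, database, and table are placeholders, and the alb_logs table is assumed to have been created beforehand from the AWS-documented DDL for ALB access logs:

import boto3

elbv2 = boto3.client("elbv2")
athena = boto3.client("athena")

# 1) Enable ALB access logging to an S3 bucket.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "web-alb"},
    ],
)

# 2) Query the logs with Athena to spot abnormal access patterns.
athena.start_query_execution(
    QueryString=(
        "SELECT client_ip, count(*) AS requests "
        "FROM alb_logs GROUP BY client_ip ORDER BY requests DESC LIMIT 20"
    ),
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)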

Question #: : 708

A company wants to use NAT gateways in its AWS environment. The company's Amazon EC2 instances in private
subnets must be able to connect to the public internet through the NAT gateways.

Which solution will meet these requirements?


• A. Create public NAT gateways in the same private subnets as the EC2 instances.
• B. Create private NAT gateways in the same private subnets as the EC2 instances.
• C. Create public NAT gateways in public subnets in the same VPCs as the EC2 instances.
• D. Create private NAT gateways in public subnets in the same VPCs as the EC2 instances.

Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 5:26 p.m.
Explain:
NAT gateways are used to allow instances in private subnets to access the internet while remaining private. They
do so by translating the private IP addresses of the instances to a public IP address before sending the traffic to
the internet.

Placing NAT gateways in public subnets ensures that they have access to the internet gateway and can route traffic
to the public internet. Private subnets do not have direct access to the internet gateway, which is why NAT
gateways need to be in public subnets.

The private EC2 instances in the private subnets can then route their internet-bound traffic through the public
NAT gateways located in the public subnets.

Options A and B are incorrect because placing NAT gateways in private subnets would not allow them to access
the internet gateway, preventing them from performing the required translation of private IP addresses to public
IP addresses.

Option D is incorrect because a private NAT gateway has no Elastic IP address and cannot send traffic through an internet gateway, so it cannot provide internet access even when it is placed in a public subnet. Private NAT gateways are intended for routing between private networks, not for internet egress.

Therefore, Option C is the correct solution to ensure that Amazon EC2 instances in private subnets can connect
to the public internet through the NAT gateways.
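A minimal boto3 sketch of option C: allocate an Elastic IP, create a public NAT gateway in a public subnet, and point the private subnet's route table at it (all resource IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

# Elastic IP for the public NAT gateway.
eip = ec2.allocate_address(Domain="vpc")

# The public NAT gateway lives in a PUBLIC subnet of the same VPC.
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0public111",
    AllocationId=eip["AllocationId"],
    ConnectivityType="public",
)
natgw_id = natgw["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw_id])

# The PRIVATE subnet's route table sends its default route to the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0private111",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw_id,
)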

Question #: : 709
A company has an organization in AWS Organizations. The company runs Amazon EC2 instances across four
AWS accounts in the root organizational unit (OU). There are three nonproduction accounts and one production
account. The company wants to prohibit users from launching EC2 instances of a certain size in the nonproduction
accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the
prohibited types.

Which solutions to deploy the SCP will meet these requirements? (Choose two.)
• A. Attach the SCP to the root OU for the organization.
• B. Attach the SCP to the three nonproduction Organizations member accounts.
• C. Attach the SCP to the Organizations management account.
• D. Create an OU for the production account. Attach the SCP to the OU. Move the production member
account into the new OU.
• E. Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member
accounts into the new OU.

Hide Answer
Suggested Answer: DE

Community vote distribution


BE (73%)
AD (18%)
9%
by Andy_09 at Feb. 5, 2024, 5:30 p.m.
Explain:
B. Attach the SCP to the three nonproduction Organizations member accounts.
This directly applies the policy to the specific accounts where you want the restrictions.

E. Create an OU for the nonproduction accounts. Attach the SCP to the OU. Move the nonproduction member
accounts into the new OU.
This approach groups the non-production accounts under a dedicated OU and then applies the SCP to the OU.
This can be useful for managing policies for multiple accounts at once.

Here's why the other options are not ideal:

A. Attach the SCP to the root OU: This would apply the restriction to all accounts in the organization, including
the production account, which is not desirable.
C. Attach the SCP to the Organizations management account: The management account is used for managing
the organization itself, not for controlling individual accounts' actions.
D. Create an OU for the production account and attach SCP: This solution creates unnecessary complexity by
creating a separate OU for a single account. Additionally, it doesn't address restricting launches in the non-
production accounts.
In conclusion, either attaching the SCP directly to the non-production accounts (B) or creating a dedicated OU
for them and attaching the SCP to the OU (E) will achieve the desired outcome of restricting EC2 instance
launches of specific sizes in those accounts. Choose the approach that best suits your organizational structure and
management preferences.
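A hedged boto3 sketch of option E: create a nonproduction OU, move the three accounts into it, and attach the SCP. The root ID, account IDs, and policy ID are placeholders:

import boto3

org = boto3.client("organizations")

ROOT_ID = "r-abcd"                       # placeholder organization root
SCP_ID = "p-denylargeec2"                # placeholder SCP denying the prohibited instance types
NONPROD_ACCOUNTS = ["111111111111", "222222222222", "333333333333"]

# Create an OU for the nonproduction accounts.
ou = org.create_organizational_unit(ParentId=ROOT_ID, Name="NonProduction")
ou_id = ou["OrganizationalUnit"]["Id"]

# Move each nonproduction account from the root into the new OU.
for account_id in NONPROD_ACCOUNTS:
    org.move_account(
        AccountId=account_id,
        SourceParentId=ROOT_ID,
        DestinationParentId=ou_id,
    )

# Attach the SCP to the OU so it applies to every member account inside it.
org.attach_policy(PolicyId=SCP_ID, TargetId=ou_id)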


Question #: : 710

A company’s website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to
security concerns, the company requires a private and secure connection between its EC2 resources and Amazon
S3.

Which solution meets these requirements?


• A. Set up S3 bucket policies to allow access from a VPC endpoint.
• B. Set up an IAM policy to grant read-write access to the S3 bucket.
• C. Set up a NAT gateway to access resources outside the private subnet.
• D. Set up an access key ID and a secret access key to access the S3 bucket.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Ashy1313 at Feb. 9, 2024, 7:46 p.m.
Explain:
By creating VPC Endpoints for Amazon S3 within the company's Virtual Private Cloud (VPC), you can establish
a private and secure connection between the EC2 instances and S3 without the need to traverse the public internet.
VPC Endpoints allow EC2 instances within the VPC to access S3 using private IP addresses, enhancing security
by keeping traffic within the AWS network.
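A hedged boto3 sketch of option A: create a gateway VPC endpoint for S3 and restrict the bucket to traffic that arrives through that endpoint. The VPC, route table, and bucket names are placeholders:

import boto3
import json

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint keeps EC2-to-S3 traffic on the AWS network.
endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0private111"],
)
vpce_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Bucket policy that denies any access not coming through the VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::classified-data", "arn:aws:s3:::classified-data/*"],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": vpce_id}},
    }],
}
s3.put_bucket_policy(Bucket="classified-data", Policy=json.dumps(policy))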


Question #: : 711
An ecommerce company runs its application on AWS. The application uses an Amazon Aurora PostgreSQL cluster
in Multi-AZ mode for the underlying database. During a recent promotional campaign, the application
experienced heavy read load and write load. Users experienced timeout issues when they attempted to access the
application.

A solutions architect needs to make the application architecture more scalable and highly available.

Which solution will meet these requirements with the LEAST downtime?
• A. Create an Amazon EventBridge rule that has the Aurora cluster as a source. Create an AWS Lambda
function to log the state change events of the Aurora cluster. Add the Lambda function as a target for the
EventBridge rule. Add additional reader nodes to fail over to.
• B. Modify the Aurora cluster and activate the zero-downtime restart (ZDR) feature. Use Database
Activity Streams on the cluster to track the cluster status.
• C. Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for
the Aurora cluster.
• D. Create an Amazon ElastiCache for Redis cache. Replicate data from the Aurora cluster to Redis by
using AWS Database Migration Service (AWS DMS) with a write-around approach.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 5:49 p.m.
Explain:
Adding additional reader instances: By adding more reader instances to the Aurora cluster, you distribute the read
load across multiple instances, which helps to handle heavy read loads more efficiently and improves scalability.

Creating an Amazon RDS Proxy target group: Amazon RDS Proxy helps to manage connections to the Aurora
cluster, improving scalability and fault tolerance for read-heavy applications. It provides features like connection
pooling and automated failover, which can help improve application availability.

This solution addresses the scalability and availability requirements of the application architecture with minimal
downtime. It leverages Aurora's Multi-AZ configuration and enhances it by adding more read replicas to handle
heavy read loads efficiently. Additionally, using RDS Proxy helps to manage connections more effectively and
provides fault tolerance capabilities.

Options A, B, and D may introduce additional complexity or involve changes that could potentially cause
downtime or disrupt the application. For example:

Option A: Involves setting up EventBridge rules and Lambda functions, which may not directly address the
scalability and availability requirements of the database.

Option B: Activating zero-downtime restart (ZDR) and using Database Activity Streams may not directly address
the heavy read load issue or improve scalability.
Option D: Adding an ElastiCache for Redis cache with data replication from Aurora via DMS introduces a separate
caching layer, which may not directly address the scalability and availability requirements of the database and
could introduce additional complexity. Additionally, using a write-around approach may not be suitable for all use
cases and may not provide the desired level of consistency.
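A hedged boto3 sketch of option C: add another reader instance to the Aurora cluster and register the cluster with an existing RDS Proxy target group. Cluster, proxy, and instance names are placeholders, and the proxy itself is assumed to have been created beforehand with create_db_proxy:

import boto3

rds = boto3.client("rds")

# Add another reader to the Aurora PostgreSQL cluster to absorb read load.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-reader-2",
    DBClusterIdentifier="app-aurora-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.r6g.large",
)

# Register the cluster with the (pre-existing) RDS Proxy; the application then
# connects to the proxy endpoint instead of the cluster endpoints directly.
rds.register_db_proxy_targets(
    DBProxyName="app-aurora-proxy",
    TargetGroupName="default",
    DBClusterIdentifiers=["app-aurora-cluster"],
)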

Question #: : 712

A company is designing a web application on AWS. The application will use a VPN connection between the
company’s existing data centers and the company's VPCs.

The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to
communicate with the on-premises services from a VPC.

Which solution will meet these requirements in the MOST secure manner?
• A. Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with
the VPC.
• B. Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with
the VPC.
• C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
• D. Create a Route 53 public hosted zone. Create a record for each service to allow service communication

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 5:52 p.m.
Explain:
※Inbound Resolver endpoints allow DNS queries to your VPC from your on-premises network or another VPC.
Outbound Resolver endpoints allow DNS queries from your VPC to your on-premises network or another VPC.
Create a Route 53 Resolver outbound endpoint: This outbound endpoint enables communication from the VPC
to on-premises networks. It allows DNS queries originating from resources in the VPC to be forwarded to the
DNS resolvers in the on-premises environment securely.

Create a resolver rule: With the resolver rule, you specify the domain names for which DNS queries should be
forwarded to the outbound endpoint. In this case, you would define the domain names corresponding to the on-
premises services.

Associate the resolver rule with the VPC: By associating the resolver rule with the VPC, you ensure that DNS
queries from resources within the VPC for the specified domain names are forwarded to the on-premises DNS
resolvers via the outbound endpoint.
This solution uses Route 53 Resolver to route DNS queries from the VPC to the on-premises environment securely, ensuring that the private DNS records for the on-premises services can be resolved.

Option B: Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with
the VPC.

Explanation: Inbound endpoints are used to route DNS queries from on-premises networks into the VPC. They
are not intended for resolving DNS queries originating from resources within the VPC to on-premises services.
Therefore, using an inbound endpoint would not address the requirement for the web application to communicate
with on-premises services using private DNS records.

Option C: Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.

Explanation: While private hosted zones can be used to define custom DNS records for private communication
within a VPC, they are not suitable for resolving DNS queries to on-premises services. Private hosted zones are
specific to AWS environments and do not extend to on-premises networks. Therefore, this option would not
enable resolution of on-premises service names from the VPC.

Option D: Create a Route 53 public hosted zone. Create a record for each service to allow service communication.

Explanation: Public hosted zones are intended for resolving DNS queries from the public internet and are not
suitable for private communication within a VPC or with on-premises services. Additionally, exposing on-
premises service names in a public hosted zone could pose security risks by exposing internal infrastructure details
to the public internet.

In summary, options B, C, and D are not the most appropriate solutions because they do not address the
requirement for secure communication between the web application in the VPC and the on-premises services
using private DNS records. Option A, which involves creating an outbound endpoint and resolver rule, is the
correct solution as it enables DNS resolution from the VPC to on-premises services securely.
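A hedged boto3 sketch of option A; the subnets, security group, domain name, and on-premises DNS server IP are placeholder assumptions, and outbound endpoints need IP addresses in at least two subnets:

import boto3

r53r = boto3.client("route53resolver")

# Outbound endpoint: the exit point for DNS queries leaving the VPC.
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-endpoint-1",
    Name="to-onprem-dns",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-aaa111"}, {"SubnetId": "subnet-bbb222"}],
)

# Forwarding rule: send queries for the on-premises domain to the on-premises resolvers.
rule = r53r.create_resolver_rule(
    CreatorRequestId="forward-corp-rule-1",
    Name="forward-corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

# Associate the rule with the VPC so its resources use it automatically.
r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)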

Question #: : 713
A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple
countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less
than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata
to determine which photos to display to each user.

Which solution provides the appropriate user access MOST cost-effectively?


• A. Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently
viewed items.
• B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its
S3 location in DynamoDB.
• C. Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move
photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object
tags to keep track of metadata.
• D. Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos
older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in
Amazon OpenSearch Service.

Hide Answer
Suggested Answer: D

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 5:59 p.m.
Explain:
Amazon S3 Intelligent-Tiering: This storage class automatically optimizes storage costs by moving objects
between two access tiers: frequent access and infrequent access. As some photos are heavily viewed for months
while others are viewed for a shorter duration, Intelligent-Tiering efficiently manages the storage of both types of
photos, ensuring cost-effectiveness.

DynamoDB for storing metadata: DynamoDB is a highly scalable, fully managed NoSQL database service. Storing
photo metadata and their S3 locations in DynamoDB allows for efficient querying and retrieval of photo
information based on user requests. DynamoDB's flexible scaling and low latency retrieval make it suitable for
storing and querying metadata.

Option A (Using DynamoDB with DAX) may not be the most cost-effective solution as DynamoDB can be more
expensive compared to S3, especially considering the volume of data involved in storing photos.

Option C (Using S3 Standard with Lifecycle policy) and Option D (Using S3 Glacier with Lifecycle policy and
OpenSearch Service) are not the most cost-effective solutions considering the varying access patterns of the
photos. Storing all photos in a single storage class without automatically optimizing for access patterns may lead
to higher costs, especially if frequently accessed photos are stored in more expensive storage classes.

Therefore, Option B is the most appropriate solution as it effectively addresses the requirements of the photo
hosting service while ensuring cost-effectiveness.
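A minimal boto3 sketch of option B: upload a photo into the S3 Intelligent-Tiering storage class and record its metadata and S3 location in DynamoDB. The bucket, table, file, and attribute names are placeholder assumptions:

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PhotoMetadata")   # placeholder table keyed on photo_id

# Store the photo in Intelligent-Tiering so S3 automatically moves it between
# frequent and infrequent access tiers based on how often it is viewed.
with open("beach.jpg", "rb") as f:       # placeholder local file
    s3.put_object(
        Bucket="photo-hosting-bucket",
        Key="photos/user-42/beach.jpg",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )

# Keep the queryable metadata, including the S3 location, in DynamoDB.
table.put_item(Item={
    "photo_id": "user-42/beach.jpg",
    "s3_bucket": "photo-hosting-bucket",
    "s3_key": "photos/user-42/beach.jpg",
    "uploaded_by": "user-42",
    "content_type": "image/jpeg",
})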

Question #: : 714
A company runs a highly available web application on Amazon EC2 instances behind an Application Load Balancer.
The company uses Amazon CloudWatch metrics.

As the traffic to the web application increases, some EC2 instances become overloaded with many outstanding
requests. The CloudWatch metrics show that the number of requests processed and the time to receive the
responses from some EC2 instances are both higher compared to other EC2 instances. The company does not
want new requests to be forwarded to the EC2 instances that are already overloaded.

Which solution will meet these requirements?


• A. Use the round robin routing algorithm based on the RequestCountPerTarget and
ActiveConnectionCount CloudWatch metrics.
• B. Use the least outstanding requests algorithm based on the RequestCountPerTarget and
ActiveConnectionCount CloudWatch metrics.
• C. Use the round robin routing algorithm based on the RequestCount and TargetResponseTime
CloudWatch metrics.
• D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime
CloudWatch metrics.

Hide Answer
Suggested Answer: C

Community vote distribution


B (63%)
D (38%)
by Andy_09 at Feb. 5, 2024, 6:07 p.m.
Explain:
1. Least Outstanding Requests Algorithm: This algorithm directs new requests to the EC2 instances with
the least number of outstanding requests, helping to distribute the load evenly across all instances. By using this
algorithm, the company can prevent new requests from being forwarded to already overloaded instances,
effectively managing the workload distribution.
2. RequestCountPerTarget and ActiveConnectionCount CloudWatch Metrics: These metrics provide
valuable insights into the current workload and the number of active connections on each EC2 instance. By
monitoring these metrics, the company can identify instances that are becoming overloaded and take appropriate
actions to redistribute the load.
Option A (Round Robin Routing Algorithm) and Option C (Round Robin Routing Algorithm based on
RequestCount and TargetResponseTime CloudWatch metrics) do not specifically address the requirement to
avoid forwarding requests to overloaded instances. Round robin routing distributes requests evenly across all
instances, regardless of their current workload, which may lead to overloading.
Option D (Least Outstanding Requests Algorithm based on RequestCount and TargetResponseTime
CloudWatch metrics) is not ideal because the TargetResponseTime metric alone may not accurately reflect the
current workload or the number of outstanding requests on each instance.
Therefore, Option B is the most appropriate solution as it aligns with the company's requirement to prevent new
requests from being forwarded to overloaded EC2 instances based on real-time workload metrics.
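For reference, switching an ALB target group from round robin to least outstanding requests is a single attribute change; a hedged boto3 sketch with a placeholder target group ARN:

import boto3

elbv2 = boto3.client("elbv2")

# Route each new request to the registered target with the fewest in-flight requests.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc",
    Attributes=[
        {"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"},
    ],
)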

Question #: : 715

A company uses Amazon EC2, AWS Fargate, and AWS Lambda to run multiple workloads in the company's AWS
account. The company wants to fully make use of its Compute Savings Plans. The company wants to receive
notification when coverage of the Compute Savings Plans drops.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Create a daily budget for the Savings Plans by using AWS Budgets. Configure the budget with a
coverage threshold to send notifications to the appropriate email message recipients.
• B. Create a Lambda function that runs a coverage report against the Savings Plans. Use Amazon Simple
Email Service (Amazon SES) to email the report to the appropriate email message recipients.
• C. Create an AWS Budgets report for the Savings Plans budget. Set the frequency to daily.
• D. Create a Savings Plans alert subscription. Enable all notification options. Enter an email address to
receive notifications.

Hide Answer
Suggested Answer: B

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 6:08 p.m.
Explain:
AWS Budgets: AWS Budgets is a service that allows you to set custom cost and usage budgets. You can create
budgets to track your spending and usage for various AWS services, including Savings Plans.

Coverage Threshold: With AWS Budgets, you can set up a budget specifically for Savings Plans coverage. By
configuring a coverage threshold, you can define the minimum coverage percentage that you want to maintain. If
the coverage drops below this threshold, AWS Budgets will automatically trigger a notification.

Email Notifications: AWS Budgets supports sending notifications via email when budget thresholds are breached.
This ensures that the appropriate stakeholders are promptly informed when the coverage of Savings Plans drops
below the specified threshold.

This solution offers the most operational efficiency because it leverages AWS Budgets, which is specifically
designed for managing cost and usage budgets in AWS. It automates the monitoring process and sends
notifications directly to the relevant stakeholders when coverage drops, minimizing manual intervention and
ensuring timely awareness of any deviations from expected Savings Plans coverage.
Option B: Creating a Lambda function to run a coverage report against the Savings Plans and using Amazon SES
to email the report is less efficient because it involves custom development and maintenance overhead. You would
need to regularly update and maintain the Lambda function to generate the coverage report accurately, which
adds complexity and operational burden.

Option C: Creating an AWS Budgets report for the Savings Plans budget with a daily frequency is less efficient
because it only generates a report without actively monitoring the coverage in real-time. While you can view
historical data, it does not provide immediate notifications when the coverage drops below the threshold, resulting
in delayed awareness of potential issues.

Option D: Creating a Savings Plans alert subscription and enabling all notification options is less efficient because
it lacks granularity in defining specific thresholds for coverage. Additionally, it may lead to unnecessary
notifications for minor fluctuations in coverage, potentially causing alert fatigue. This option also doesn't allow
for customization of notification recipients or thresholds.
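A heavily hedged boto3 sketch of option A. The overall create_budget call shape is standard, but the specific budget type, time unit, limit, and threshold values shown here are assumptions that should be checked against the AWS Budgets documentation:

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",                    # placeholder payer account
    Budget={
        "BudgetName": "savings-plans-coverage",
        "BudgetType": "SAVINGS_PLANS_COVERAGE",  # assumed budget type for SP coverage
        "TimeUnit": "DAILY",                     # assumed reporting period
        "BudgetLimit": {"Amount": "100", "Unit": "PERCENTAGE"},  # assumed unit
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "LESS_THAN",   # alert when coverage drops below 80%
            "Threshold": 80,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
    }],
)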

Question #: : 716
A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of
Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private
subnets across three Availability Zones.

A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The
data in transit must also be encrypted.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update
the MSK cluster security settings to enable mutual TLS authentication.
• B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the
MSK cluster security settings to enable mutual TLS authentication.
• C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security
group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
• D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for
HTTPS communication over the internet.

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 6:12 p.m.
Explain:
Option A: Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update
the MSK cluster security settings to enable mutual TLS authentication.
Explanation:
• This option leverages the existing VPC infrastructure, reducing the need for additional networking setup
and complexity.
• By configuring public subnets within the existing VPC, the MSK cluster can be deployed directly without
the need for establishing connectivity between separate VPCs.
• Enabling mutual TLS authentication ensures that data in transit is encrypted, meeting the requirement
for secure communication.
• Overall, this option provides a streamlined approach with minimal changes to the existing environment,
making it the most operationally efficient solution.
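As a rough, hedged illustration of the security side only (not the subnet placement), the MSK API exposes calls to require mutual TLS client authentication and to turn on public access for an existing cluster. All ARNs below are placeholders, and the exact call sequence and prerequisites should be verified against the Amazon MSK documentation:

import boto3

kafka = boto3.client("kafka")
CLUSTER_ARN = "arn:aws:kafka:us-east-1:111122223333:cluster/ingest/abc"

# Each update call needs the cluster's current version, which changes after every update.
version = kafka.describe_cluster(ClusterArn=CLUSTER_ARN)["ClusterInfo"]["CurrentVersion"]

# Require mutual TLS: clients must present certificates from this ACM Private CA.
kafka.update_security(
    ClusterArn=CLUSTER_ARN,
    CurrentVersion=version,
    ClientAuthentication={"Tls": {"CertificateAuthorityArnList": [
        "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/xyz"
    ]}},
)

# Re-read the version, then turn on public access to the brokers.
version = kafka.describe_cluster(ClusterArn=CLUSTER_ARN)["ClusterInfo"]["CurrentVersion"]
kafka.update_connectivity(
    ClusterArn=CLUSTER_ARN,
    CurrentVersion=version,
    ConnectivityInfo={"PublicAccess": {"Type": "SERVICE_PROVIDED_EIPS"}},
)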

Question #: : 717
A company wants to migrate an on-premises legacy application to AWS. The application ingests customer order
files from an on-premises enterprise resource planning (ERP) system. The application then uploads the files to
an SFTP server. The application uses a scheduled job that checks for order files every hour.

The company already has an AWS account that has connectivity to the on-premises network. The new application
on AWS must support integration with the existing ERP system. The new application must be secure and resilient
and must use the SFTP protocol to process orders from the ERP system immediately.

Which solution will meet these requirements?


• A. Create an AWS Transfer Family SFTP internet-facing server in two Availability Zones. Use Amazon
S3 storage. Create an AWS Lambda function to process order files. Use S3 Event Notifications to send
s3:ObjectCreated:* events to the Lambda function.
• B. Create an AWS Transfer Family SFTP internet-facing server in one Availability Zone. Use Amazon
Elastic File System (Amazon EFS) storage. Create an AWS Lambda function to process order files. Use a Transfer
Family managed workflow to invoke the Lambda function.
• C. Create an AWS Transfer Family SFTP internal server in two Availability Zones. Use Amazon Elastic
File System (Amazon EFS) storage. Create an AWS Step Functions state machine to process order files. Use
Amazon EventBridge Scheduler to invoke the state machine to periodically check Amazon EFS for order files.
• D. Create an AWS Transfer Family SFTP internal server in two Availability Zones. Use Amazon S3
storage. Create an AWS Lambda function to process order files. Use a Transfer Family managed workflow to
invoke the Lambda function.

Hide Answer
Suggested Answer: A

Community vote distribution


D (88%)
13%
by Andy_09 at Feb. 5, 2024, 6:22 p.m.
Explain:
Option D:
• Creating an AWS Transfer Family SFTP internal server in two Availability Zones ensures high
availability and resilience.
• Using Amazon S3 storage for storing order files provides durability and scalability.
• Creating an AWS Lambda function to process order files allows for immediate processing.
• Utilizing a Transfer Family managed workflow to invoke the Lambda function ensures efficient and
reliable execution.
This option meets the requirement for secure and resilient processing of order files using the SFTP protocol,
integration with the existing ERP system, and immediate processing of orders.
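A hedged boto3 sketch of the core of option D: an internal (VPC) SFTP server backed by S3, with a managed workflow that runs on every completed upload. The VPC details, workflow ID, and role ARN are placeholders; the workflow itself would be created separately with create_workflow and would contain the custom step that invokes the Lambda function:

import boto3

transfer = boto3.client("transfer")

# Internal SFTP endpoint backed by S3; the managed workflow runs after each upload
# and its custom step invokes the order-processing Lambda function.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-aaa111", "subnet-bbb222"],   # two AZs for resilience
    },
    IdentityProviderType="SERVICE_MANAGED",
    WorkflowDetails={
        "OnUpload": [{
            "WorkflowId": "w-0123456789abcdef0",           # placeholder managed workflow
            "ExecutionRole": "arn:aws:iam::111122223333:role/transfer-workflow-role",
        }]
    },
)
print(server["ServerId"])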

Option A: S3 Event Notifications do invoke the Lambda function immediately, but the Transfer Family server is internet-facing. Because the company already has private connectivity to the on-premises network, exposing the SFTP endpoint to the internet is less secure than the internal server in option D.
Option B:

This solution deploys the SFTP server in only one Availability Zone, so it lacks resilience and high availability, and its endpoint is also internet-facing.
Although it utilizes Amazon EFS storage, which supports file system access from multiple EC2 instances and AWS
Lambda, it may introduce unnecessary complexity and may not be the most cost-effective solution for this use
case.
Option C:

Deploying an SFTP internal server in two Availability Zones addresses the resilience requirement, but it doesn't
support immediate processing of order files. Instead, it relies on periodic checks using Amazon EventBridge
Scheduler, which may not meet the requirement for immediate processing.
Using Amazon EFS storage adds complexity without clear benefits for this use case.

Question #: : 718

A company’s applications use Apache Hadoop and Apache Spark to process data on premises. The existing
infrastructure is not scalable and is complex to manage.

A solutions architect must design a scalable solution that reduces operational complexity. The solution must keep
the data processing on premises.

Which solution will meet these requirements?


• A. Use AWS Site-to-Site VPN to access the on-premises Hadoop Distributed File System (HDFS) data
and application. Use an Amazon EMR cluster to process the data.
• B. Use AWS DataSync to connect to the on-premises Hadoop Distributed File System (HDFS) cluster.
Create an Amazon EMR cluster to process the data.
• C. Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters
on AWS Outposts. Use the EMR clusters to process the data.
• D. Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Create an Amazon EMR
cluster to process the data.

Hide Answer
Suggested Answer: A

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 6:24 p.m.
Explanation:

Amazon EMR on AWS Outposts: This option involves migrating the Apache Hadoop and Apache Spark
applications to Amazon EMR clusters deployed on AWS Outposts. AWS Outposts extend AWS infrastructure to
on-premises environments, allowing you to run AWS services locally. By deploying EMR clusters on AWS
Outposts, you can leverage the scalability and managed services of EMR while keeping your data processing on
premises.

Scalability and Reduced Complexity: EMR clusters on AWS Outposts provide scalability without the need to
manage complex on-premises infrastructure. AWS manages the underlying infrastructure, including hardware
provisioning, cluster setup, and software updates, reducing operational complexity for your team.

Integration with On-Premises Data: This solution allows seamless integration with on-premises data sources, as
the EMR clusters can access data stored on local Hadoop Distributed File System (HDFS) clusters. It ensures that
data processing remains localized while benefiting from the scalability and managed services of AWS EMR.

Operational Efficiency: By migrating to EMR on AWS Outposts, you can achieve operational efficiency by
offloading infrastructure management tasks to AWS, allowing your team to focus on data processing tasks rather
than infrastructure maintenance.

In contrast, the other options involve either migrating data to the cloud (Option D) or using AWS services in the
cloud without keeping data processing on premises (Options A and B), which do not align with the requirement
to keep data processing on premises. Therefore, Option C is the most appropriate solution for the given scenario.
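
A rough sketch of launching such a cluster with boto3 follows; the only Outposts-specific detail is that the cluster subnet lives on the Outpost, so the Region, IDs, release label, and instance types below are placeholders and assumptions.

import boto3

emr = boto3.client("emr", region_name="us-east-1")  # Region that anchors the Outpost (assumption)

cluster = emr.run_job_flow(
    Name="on-prem-hadoop-spark",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        # A subnet that resides on the Outpost keeps processing on premises
        # while EMR manages provisioning, patching, and scaling.
        "Ec2SubnetId": "subnet-0123456789abcdef0",   # Outpost subnet (placeholder)
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 3},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(cluster["JobFlowId"])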


Question #: : 719

A company is migrating a large amount of data from on-premises storage to AWS. Windows, Mac, and Linux
based Amazon EC2 instances in the same AWS Region will access the data by using SMB and NFS storage
protocols. The company will access a portion of the data routinely. The company will access the remaining data
infrequently.

The company needs to design a solution to host the data.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an Amazon Elastic File System (Amazon EFS) volume that uses EFS Intelligent-Tiering. Use
AWS DataSync to migrate the data to the EFS volume.
• B. Create an Amazon FSx for ONTAP instance. Create an FSx for ONTAP file system with a root volume
that uses the auto tiering policy. Migrate the data to the FSx for ONTAP volume.
• C. Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by
using an AWS Storage Gateway Amazon S3 File Gateway.
• D. Create an Amazon FSx for OpenZFS file system. Migrate the data to the new volume.

Hide Answer
Suggested Answer: C

Community vote distribution


B (73%)
C (27%)
by Andy_09 at Feb. 5, 2024, 6:26 p.m.
Explain:

• Amazon FSx for ONTAP supports both SMB and NFS protocols, making it suitable for the mixed
environment of Windows, Mac, and Linux-based EC2 instances.
• Auto tiering policy in FSx for ONTAP automatically moves data between different storage tiers based on
access patterns, ensuring that frequently accessed data remains on faster storage, while infrequently accessed data
is moved to cost-effective storage, minimizing operational overhead.
• The solution keeps the data within the AWS ecosystem, making it easier to manage and integrate with
other AWS services.
Options A, C, and D do not fully meet the requirements:
• Option A (Amazon EFS with Intelligent-Tiering): While Amazon EFS supports NFS, it does not natively
support SMB. Additionally, Intelligent-Tiering may not provide optimal performance for data that requires
frequent access.
• Option C (Amazon S3 with S3 Intelligent-Tiering): While S3 supports both SMB and NFS access
through various methods, using an S3 bucket directly might introduce additional complexity for accessing the data
from EC2 instances, especially when both SMB and NFS access is required.
• Option D (Amazon FSx for OpenZFS): While FSx for OpenZFS supports NFS, it does not support SMB,
which is needed for Windows-based EC2 instances. Additionally, it might not offer the same level of optimization
for frequently and infrequently accessed data as FSx for ONTAP with its auto-tiering policy.
FSx for NetApp ONTAP is a fully managed file storage service with advanced features and support for multiple
protocols, while S3 with S3 File Gateway is a hybrid cloud storage solution that enables on-premises applications
to access data stored in Amazon S3.
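
As a small illustration of the auto tiering policy, the sketch below creates a volume on an existing FSx for ONTAP file system with TieringPolicy set to AUTO, so cold blocks move to the capacity pool automatically. The SVM ID, volume name, and size are placeholders.

import boto3

fsx = boto3.client("fsx")

volume = fsx.create_volume(
    VolumeType="ONTAP",
    Name="shared_data",
    OntapConfiguration={
        "StorageVirtualMachineId": "svm-0123456789abcdef0",  # placeholder SVM
        "JunctionPath": "/shared_data",
        "SizeInMegabytes": 1024 * 1024,           # 1 TiB (placeholder)
        "StorageEfficiencyEnabled": True,
        "TieringPolicy": {"Name": "AUTO"},        # move cold data to the capacity tier
    },
)
print(volume["Volume"]["VolumeId"])

The same volume is then exported over both NFS and SMB through the SVM, which is what lets the Windows, Mac, and Linux instances share it.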

Question #: : 720
A manufacturing company runs its report generation application on AWS. The application generates each report
in about 20 minutes. The application is built as a monolith that runs on a single Amazon EC2 instance. The
application requires frequent updates to its tightly coupled modules. The application becomes complex to
maintain as the company adds new features.

Each time the company patches a software module, the application experiences downtime. Report generation must
restart from the beginning after any interruptions. The company wants to redesign the application so that the
application can be flexible, scalable, and gradually improved. The company wants to minimize application
downtime.

Which solution will meet these requirements?


• A. Run the application on AWS Lambda as a single function with maximum provisioned concurrency.
• B. Run the application on Amazon EC2 Spot Instances as microservices with a Spot Fleet default
allocation strategy.
• C. Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service
auto scaling.
• D. Run the application on AWS Elastic Beanstalk as a single application environment with an all-at-once
deployment strategy.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 6:27 p.m.
Explain:
• Amazon ECS allows for the deployment and management of Docker containers at scale, providing
flexibility and scalability.
• By decomposing the monolithic application into microservices and running them on ECS, the company
can achieve a more modular and maintainable architecture.
• Service auto-scaling in ECS allows for automatic scaling of the application based on demand, ensuring
that the application can handle varying workloads efficiently without manual intervention.
• With microservices architecture, updates to individual modules can be deployed independently,
minimizing downtime and allowing for gradual improvement of the application.
• ECS provides built-in integrations with AWS services like Application Load Balancer (ALB) for traffic
distribution and CloudWatch for monitoring, enhancing the overall management and observability of the
application.
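
A minimal sketch of the service auto scaling piece, using Application Auto Scaling target tracking on one of the microservices; the cluster name, service name, and threshold are placeholders.

import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/report-cluster/report-renderer"  # placeholder ECS cluster/service

# Register the service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track 60% average CPU; ECS adds or removes tasks automatically.
autoscaling.put_scaling_policy(
    PolicyName="report-renderer-cpu60",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)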

Question #: : 721

A company wants to rearchitect a large-scale web application to a serverless microservices architecture. The
application uses Amazon EC2 instances and is written in Python.

The company selected one component of the web application to test as a microservice. The component supports
hundreds of requests each second. The company wants to create and test the microservice on an AWS solution
that supports Python. The solution must also scale automatically and require minimal infrastructure and minimal
operational support.

Which solution will meet these requirements?


• A. Use a Spot Fleet with auto scaling of EC2 instances that run the most recent Amazon Linux operating
system.
• B. Use an AWS Elastic Beanstalk web server environment that has high availability configured.
• C. Use Amazon Elastic Kubernetes Service (Amazon EKS). Launch Auto Scaling groups of self-managed
EC2 instances.
• D. Use an AWS Lambda function that runs custom developed code.

Hide Answer
Suggested Answer: C

Community vote distribution


D (100%)
by Andy_09 at Feb. 5, 2024, 6:29 p.m.
Explain:
• AWS Lambda is a serverless compute service that allows you to run code without provisioning or
managing servers.
• Lambda supports Python along with several other programming languages, making it suitable for the
company's Python-based application.
• With Lambda, you only pay for the compute time consumed by your function, eliminating the need to
manage infrastructure and reducing operational overhead.
• Lambda scales automatically to handle incoming requests, allowing the microservice to handle hundreds
of requests per second without manual intervention.
• Lambda integrates seamlessly with other AWS services, enabling easy integration with existing AWS
components and services used by the company's web application.
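
A minimal sketch of such a microservice as a Python Lambda handler behind an API Gateway REST API with Lambda proxy integration; the response follows the proxy format, and the business logic is a placeholder.

import json

def handler(event, context):
    # API Gateway passes the HTTP request in the event (proxy integration).
    body = json.loads(event.get("body") or "{}")

    result = {"componentResult": body}  # placeholder for the component's real logic

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }

Lambda runs as many concurrent copies of this handler as needed, which is how hundreds of requests per second are absorbed without managing servers.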

Question #: : 722

A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS
account has 30 different VPCs in the same AWS Region. The VPCs use private virtual interfaces (VIFs). Each
VPC has a CIDR block that does not overlap with other networks under the company's control.

The company wants to centrally manage the networking architecture while still allowing each VPC to
communicate with all other VPCs and on-premises networks.

Which solution will meet these requirements with the LEAST amount of operational overhead?
• A. Create a transit gateway, and associate the Direct Connect connection with a new transit VIF. Turn
on the transit gateway's route propagation feature.
• B. Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each
VPC by creating new virtual private gateways.
• C. Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering
connection between all other VPCs in the Region. Update the route tables.
• D. Create AWS Site-to-Site VPN connections from on premises to each VPC. Ensure that both VPN
tunnels are UP for each connection. Turn on the route propagation feature.

Hide Answer
Suggested Answer: D

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 6:32 p.m.
Explain:

AWS Transit Gateway is a highly scalable service that simplifies network management by acting as a hub for
connecting multiple VPCs and VPN connections.
By attaching all 30 VPCs to a single Transit Gateway, you can centrally manage the routing and connectivity
between these VPCs and on-premises networks.
Transit Gateway allows for efficient communication between attached VPCs without the need for individual VPC
peering connections.
With Transit Gateway, you can easily scale your network as your organization grows by adding new VPCs or VPN
connections without significant operational overhead.
It provides a simplified and centralized approach to managing network connectivity, reducing administrative
burden and operational complexity.
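
A rough sketch of the core steps with boto3 follows; the transit VIF itself is associated with the transit gateway through a Direct Connect gateway, which is omitted here, and all IDs and the ASN are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Create the transit gateway with default route table association and propagation.
tgw = ec2.create_transit_gateway(
    Description="central-hub",
    Options={
        "AmazonSideAsn": 64512,
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)["TransitGateway"]

# Attach one of the 30 VPCs; repeat per VPC (IDs are placeholders).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw["TransitGatewayId"],
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)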

Question #: : 723

A company has applications that run on Amazon EC2 instances. The EC2 instances connect to Amazon RDS
databases by using an IAM role that has associated policies. The company wants to use AWS Systems Manager to
patch the EC2 instances without disrupting the running applications.

Which solution will meet these requirements?


• A. Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role.
Attach the new IAM role to the EC2 instances and the existing IAM role.
• B. Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure
Systems Manager to use the IAM user to manage the EC2 instances.
• C. Enable Default Host Configuration Management in Systems Manager to manage the EC2 instances.
• D. Remove the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore
policy to the existing IAM role.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 7:28 p.m.

Explain:
AWS Systems Manager's Default Host Configuration Management feature allows you to manage EC2 instances
without needing to manually attach IAM policies to IAM roles.
By enabling Default Host Configuration Management, Systems Manager automatically manages the EC2
instances, including patching, without the need for manual IAM role configuration.
This approach reduces operational overhead by leveraging Systems Manager's built-in capabilities for managing
EC2 instances.
It simplifies the management process and ensures that Systems Manager can patch EC2 instances without
disrupting running applications.
This solution aligns with best practices for managing EC2 instances using Systems Manager and ensures efficient
and effective management of resources.
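
A rough sketch of enabling this programmatically is below. It assumes the account-level Systems Manager service setting and role name shown; both are assumptions for illustration rather than values taken from the question.

import boto3

ssm = boto3.client("ssm")

# Default Host Management Configuration is an account-level service setting that
# points Systems Manager at an IAM role it can use for every EC2 instance.
ssm.update_service_setting(
    SettingId="/ssm/managed-instance/default-ec2-instance-management-role",  # assumed setting ID
    SettingValue="AWSSystemsManagerDefaultEC2InstanceManagementRole",         # assumed role name
)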


Question #: : 724

A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the
Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect
notices that the number of nodes does not automatically scale out when the existing nodes have reached maximum
capacity in the cluster, which causes performance issues.

Which solution will resolve this issue with the LEAST administrative overhead?
• A. Scale out the nodes by tracking the memory usage.
• B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
• C. Use an AWS Lambda function to resize the EKS cluster automatically.
• D. Use an Amazon EC2 Auto Scaling group to distribute the workload.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 7:30 p.m.
Explain:
Cluster Autoscaler automatically adjusts the size of the Amazon EKS worker node group or Managed Node Group
based on the pending pods in the cluster. When there are pending pods in the cluster due to insufficient resources,
the Cluster Autoscaler will add additional nodes to meet the demand. Similarly, it can also remove nodes if they
are underutilized.
Enabling Cluster Autoscaler requires minimal administrative overhead as it automates the scaling process based
on workload demand without manual intervention.
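
If the worker nodes are backed by an EC2 Auto Scaling group, the Cluster Autoscaler finds the group it is allowed to scale through two well-known tags. A small sketch of adding them with boto3 follows; the group and cluster names are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

asg_name = "eks-workers-asg"   # placeholder node-group Auto Scaling group
cluster_name = "prod-eks"      # placeholder EKS cluster name

# Tags used by the Cluster Autoscaler's auto-discovery mode.
autoscaling.create_or_update_tags(
    Tags=[
        {"ResourceId": asg_name, "ResourceType": "auto-scaling-group",
         "Key": "k8s.io/cluster-autoscaler/enabled", "Value": "true",
         "PropagateAtLaunch": False},
        {"ResourceId": asg_name, "ResourceType": "auto-scaling-group",
         "Key": f"k8s.io/cluster-autoscaler/{cluster_name}", "Value": "owned",
         "PropagateAtLaunch": False},
    ]
)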

Question #: : 725

A company maintains about 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each
typically around 50 GB in size and are frequently replaced with multipart uploads by their global application. The
number and size of S3 objects remain constant, but the company's S3 storage costs are increasing each month.

How should a solutions architect reduce costs in this situation?


• A. Switch from multipart uploads to Amazon S3 Transfer Acceleration.
• B. Enable an S3 Lifecycle policy that deletes incomplete multipart uploads.
• C. Configure S3 inventory to prevent objects from being archived too quickly.
• D. Configure Amazon CloudFront to reduce the number of objects stored in Amazon S3.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 7:33 p.m.
Explain:

Given the scenario provided, where the company maintains a large amount of data in Amazon S3 Standard storage
and experiences increasing storage costs, the most effective approach to reduce costs would be:
B. Enable an S3 Lifecycle policy that deletes incomplete multipart uploads.
Explanation:
• Multipart uploads incur storage costs even if they are incomplete. Enabling a lifecycle policy to delete
incomplete multipart uploads will help prevent unnecessary storage costs for objects that are not fully uploaded.
• Since the objects are frequently replaced with multipart uploads, there might be instances where uploads
are started but not completed, leading to unnecessary storage costs.
• By configuring a lifecycle policy to delete incomplete multipart uploads, the company can avoid incurring
storage costs for data that is not fully uploaded or required.
Options A, C, and D are not directly relevant to the issue of reducing storage costs for the given scenario:
• Option A suggests switching to Amazon S3 Transfer Acceleration, which is a feature to accelerate data
transfers to and from Amazon S3, but it doesn't address the issue of reducing storage costs.
• Option C mentions configuring S3 inventory, which is used for generating reports on S3 object metadata,
but it doesn't directly impact storage costs.
• Option D suggests using Amazon CloudFront, which is a content delivery network service, but it's not
directly related to reducing storage costs in Amazon S3.
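
A minimal sketch of the lifecycle rule itself; the bucket name and the 7-day window are placeholders.

import boto3

s3 = boto3.client("s3")

# Abort (and stop paying for) multipart uploads that are still incomplete after 7 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-stale-multipart-uploads",
                "Status": "Enabled",
                "Filter": {},  # apply to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)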

Question #: : 726

A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players
based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations.

The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During
peak usage periods, the database is unable to maintain the performance that is needed for reading and writing
updates. The game's user base is increasing rapidly.

What should a solutions architect do to improve the performance of the data tier?
• A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
• B. Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards.
• C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game
to use DAX.
• D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the
game to use Redis.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Andy_09 at Feb. 5, 2024, 7:36 p.m.
Explain:
• Rapid Updates and Retrieval: Amazon ElastiCache for Redis is a highly performant in-memory data store
ideal for caching frequently accessed data like player locations. It allows for rapid reads and writes, improving
response times for location tracking.
• Scalability: ElastiCache for Redis scales horizontally, allowing you to add more nodes to the cluster as the
user base grows, ensuring performance remains consistent.
• Reduced Load on RDS: Caching frequently accessed location data in Redis reduces the load on the
Amazon RDS for PostgreSQL instance, improving overall database performance for write operations and other
database functionalities.
Let's analyze why other options might not be ideal:
• A. Multi-AZ with RDS: While enabling Multi-AZ provides disaster recovery benefits, it doesn't directly
address the performance bottleneck for reads and writes.
• B. Migrate to OpenSearch: OpenSearch is a search and analytics platform, not designed for real-time
data storage and retrieval like player locations.
• C. Deploy DAX: DAX is a caching solution specifically designed for Amazon DynamoDB, a NoSQL
database. It's not compatible with Amazon RDS for PostgreSQL.
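
A small sketch of how the game could keep locations in Redis using its geospatial commands, assuming redis-py 4.x against a Redis 6.2+ ElastiCache endpoint; the endpoint, key names, and coordinates are placeholders.

import redis

r = redis.Redis(host="game-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

# Write a player's latest position (Redis GEO takes longitude first).
r.geoadd("player:locations", (103.8198, 1.3521, "player:42"))

# Read back players within 5 km of the same point.
nearby = r.geosearch(
    "player:locations",
    longitude=103.8198,
    latitude=1.3521,
    radius=5,
    unit="km",
)
print(nearby)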

Question #: : 727
A company stores critical data in Amazon DynamoDB tables in the company's AWS account. An IT administrator
accidentally deleted a DynamoDB table. The deletion caused a significant loss of data and disrupted the company's
operations. The company wants to prevent this type of disruption in the future.

Which solution will meet this requirement with the LEAST operational overhead?
• A. Configure a trail in AWS CloudTrail. Create an Amazon EventBridge rule for delete actions. Create
an AWS Lambda function to automatically restore deleted DynamoDB tables.
• B. Create a backup and restore plan for the DynamoDB tables. Recover the DynamoDB tables manually.
• C. Configure deletion protection on the DynamoDB tables.
• D. Enable point-in-time recovery on the DynamoDB tables.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 7:39 p.m.
Explain:
C. Configure deletion protection on the DynamoDB tables.
Here's why this option is the most suitable:
• Prevents Accidental Deletion: Deletion protection acts as a safety net. It requires users to confirm the
deletion explicitly, preventing accidental deletion by the IT administrator or anyone else with the necessary
permissions.
• Minimal Operational Overhead: Enabling deletion protection is a simple configuration change. It doesn't
require setting up or managing additional resources, minimizing operational overhead for ongoing maintenance.
• Easy to Implement: This solution is straightforward to implement and doesn't require significant changes
to existing workflows.
Let's analyze why other options might create more overhead:
• A. CloudTrail, EventBridge, and Lambda for Auto-Restore: This option involves setting up multiple
services (CloudTrail, EventBridge, and Lambda) to automate table restoration. While it offers automation, it
requires initial setup, configuration management, and ongoing maintenance, increasing operational overhead
compared to the simpler deletion protection.
• B. Manual Backup and Restore Plan: Manual backups and restores are time-consuming, error-prone,
and require ongoing planning and execution. This approach creates significant operational overhead compared to
deletion protection.
• D. Point-in-Time Recovery: While point-in-time recovery allows restoring a table to a specific point in
time, it doesn't prevent accidental deletion in the first place. Additionally, configuring point-in-time recovery adds
complexity compared to deletion protection.
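
A one-call sketch of Option C; the table name is a placeholder.

import boto3

dynamodb = boto3.client("dynamodb")

# With deletion protection enabled, DeleteTable fails until protection is
# explicitly turned off again.
dynamodb.update_table(
    TableName="critical-orders",
    DeletionProtectionEnabled=True,
)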

Question #: : 728
A company has an on-premises data center that is running out of storage capacity. The company wants to migrate
its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate
retrieval of data at no additional cost.
How can these requirements be met?
• A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity
for the workload.
• B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon
S3 while retaining copies of frequently accessed data subsets locally.
• C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to
asynchronously back up point-in-time snapshots of the data to Amazon S3.
• D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage
Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data
to Amazon S3.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 7:53 p.m.
Explain:
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to
asynchronously back up point-in-time snapshots of the data to Amazon S3.
Here's why this option is the best fit:
1. AWS Storage Gateway with Stored Volumes: This option allows the company to store data locally on-
premises using Storage Gateway's stored volumes feature. It ensures low-latency access to frequently accessed
data while minimizing bandwidth usage and costs associated with transferring data to the cloud.
2. Asynchronous Backup to Amazon S3: By configuring Storage Gateway to asynchronously back up point-
in-time snapshots of the data to Amazon S3, the company can ensure that data is securely stored in the cloud for
durability and disaster recovery purposes. Since the backups are performed asynchronously, it minimizes the
impact on bandwidth and allows for immediate retrieval of data without additional cost.
Option A, deploying Amazon S3 Glacier Vault with expedited retrieval, is not suitable because it involves
additional costs for expedited retrieval and may not meet the requirement for immediate retrieval at no additional
cost.
Option B, using AWS Storage Gateway with cached volumes, is not ideal because it may result in higher bandwidth
costs due to frequent data transfers between the on-premises cache and Amazon S3.
Option D, deploying AWS Direct Connect and using Storage Gateway to store data locally while asynchronously
backing up snapshots to Amazon S3, is not necessary as it adds complexity and potentially higher costs with Direct
Connect usage. It also doesn't directly address the requirement for immediate retrieval of data at no additional
cost.

Question #: : 729
A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances
run in an Auto Scaling group for the application tier.

The company needs to make an automated scaling plan that will analyze each resource's daily and weekly historical
workload trends. The configuration must scale resources appropriately according to both the forecast and live
changes in utilization.

Which scaling strategy should a solutions architect recommend to meet these requirements?
• A. Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
• B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking
• C. Create an automated scheduled scaling action based on the traffic patterns of the web application.
• D. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.

Hide Answer
Suggested Answer: D

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 7:56 p.m.
Explain:
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.
Here's why this option is the best fit:
• Predictive Scaling: This option allows the company to use machine learning algorithms to forecast future
resource usage based on historical workload patterns. By enabling predictive scaling, the system can automatically
adjust the desired capacity of resources to match the predicted demand, ensuring proactive scaling before
significant changes occur.
• Dynamic Scaling with Target Tracking: By configuring dynamic scaling with target tracking, the system
can automatically adjust the number of EC2 instances in the Auto Scaling group to maintain a target metric (e.g.,
CPU utilization or request count per instance). This approach ensures that the application dynamically scales up
or down based on current demand, in addition to the predictions made by predictive scaling.
Option A, implementing dynamic scaling with step scaling based on average CPU utilization from the EC2
instances, relies solely on current CPU utilization metrics and does not incorporate predictive capabilities to
anticipate future demand.
Option C, creating an automated scheduled scaling action based on the traffic patterns of the web application,
may not be as effective as predictive scaling because it relies on predefined schedules rather than dynamically
adjusting to changing workload patterns.
Option D, setting up a simple scaling policy and increasing the cooldown period based on the EC2 instance startup
time, does not address the requirement for analyzing historical workload trends or forecasting future demand. It
also does not provide dynamic scaling based on live changes in utilization.
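
A rough sketch of attaching both policies to the application tier's Auto Scaling group; the group name, target values, and metric choices are placeholders and assumptions.

import boto3

autoscaling = boto3.client("autoscaling")
asg_name = "app-tier-asg"  # placeholder Auto Scaling group

# Predictive scaling: forecast from daily/weekly history and scale ahead of demand.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=asg_name,
    PolicyName="app-tier-predictive",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 60.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking: react to live changes in utilization between forecasts.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=asg_name,
    PolicyName="app-tier-cpu60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
    },
)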

Question #: : 730
A package delivery company has an application that uses Amazon EC2 instances and an Amazon Aurora MySQL
DB cluster. As the application becomes more popular, EC2 instance usage increases only slightly. DB cluster usage
increases at a much faster rate.

The company adds a read replica, which reduces the DB cluster usage for a short period of time. However, the
load continues to increase. The operations that cause the increase in DB cluster usage are all repeated read
statements that are related to delivery details. The company needs to alleviate the effect of repeated reads on the
DB cluster.

Which solution will meet these requirements MOST cost-effectively?


• A. Implement an Amazon ElastiCache for Redis cluster between the application and the DB cluster.
• B. Add an additional read replica to the DB cluster.
• C. Configure Aurora Auto Scaling for the Aurora read replicas.
• D. Modify the DB cluster to have multiple writer instances.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 8:01 p.m.
Explain:

A. Implement an Amazon ElastiCache for Redis cluster between the application and the DB cluster.
Here's why this option is the best fit:
• Caching Solution: Amazon ElastiCache for Redis provides an in-memory caching solution that can
significantly reduce the load on the DB cluster by caching frequently accessed data. Since the operations causing
the increase in DB cluster usage are repeated read statements related to delivery details, caching this data in Redis
can help alleviate the need for repeated reads from the DB cluster.
• Reduces Database Load: By caching frequently accessed data in ElastiCache for Redis, the number of
repeated reads hitting the DB cluster can be reduced, leading to lower database load and improved performance.
Option B, adding an additional read replica to the DB cluster, may help distribute the read workload, but it may
not effectively address the issue of repeated reads causing increased DB cluster usage. Additionally, adding more
read replicas may increase costs without directly addressing the root cause of the problem.
Option C, configuring Aurora Auto Scaling for the Aurora read replicas, helps with scaling the read capacity of
the DB cluster but does not directly address the issue of repeated reads.
Option D, modifying the DB cluster to have multiple writer instances, does not seem appropriate as the issue is
related to read operations, not write operations. Additionally, Aurora does not support multiple writer instances
for a single DB cluster.
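
A minimal cache-aside sketch for the repeated delivery-detail reads; the endpoint, key format, TTL, and the query_aurora_for_delivery helper are placeholders and assumptions.

import json
import redis

cache = redis.Redis(host="delivery-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

def get_delivery_details(delivery_id, ttl_seconds=300):
    # Cache-aside read: serve repeated reads from Redis, fall back to Aurora.
    key = f"delivery:{delivery_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    details = query_aurora_for_delivery(delivery_id)  # existing SELECT (placeholder)
    cache.setex(key, ttl_seconds, json.dumps(details))
    return details

def query_aurora_for_delivery(delivery_id):
    # Placeholder for the application's existing Aurora MySQL query.
    return {"delivery_id": delivery_id}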

Question #: : 731

A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers
that many requests to the table are not returning the latest data. The company's users have not reported any other
issues with database performance. Latency is in an acceptable range.

Which design change should the solutions architect recommend?


• A. Add read replicas to the table.
• B. Use a global secondary index (GSI).
• C. Request strongly consistent reads for the table.
• D. Request eventually consistent reads for the table.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 8:01 p.m.
Explain:
• Strongly consistent reads in Amazon DynamoDB ensure that read operations return the most up-to-date
data. This means that DynamoDB will read data that reflects all writes that were acknowledged prior to the read.
By requesting strongly consistent reads, you ensure that the application receives the latest data from the
DynamoDB table.
• In contrast, eventually consistent reads (option D) provide lower consistency guarantees, which means
that there might be a slight delay between a write operation and when the written data becomes available for read
operations. This might not meet the requirement of returning the latest data.
Options A and B, adding read replicas or using a global secondary index, may help distribute read traffic and
improve performance, but they do not directly address the issue of ensuring the latest data is returned.
Therefore, option C, requesting strongly consistent reads, is the most appropriate solution to ensure that all read
requests return the latest data from the DynamoDB table.
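
The change is a single flag on the read call; the table and key names below are placeholders.

import boto3

table = boto3.resource("dynamodb").Table("game-state")  # placeholder table name

# ConsistentRead=True returns the result of every write acknowledged before the
# read; the default (False) is an eventually consistent read.
response = table.get_item(
    Key={"player_id": "42"},
    ConsistentRead=True,
)
print(response.get("Item"))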


Question #: : 732
A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company
used the principle of least privilege to configure the database access credentials. The company's security team
wants to protect the application and the database from SQL injection and other web-based attacks.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use security groups and network ACLs to secure the database and application servers.
• B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security
settings.
• C. Use AWS Network Firewall to protect the application and the database.
• D. Use different database accounts in the application code for different functions. Avoid granting
excessive privileges to the database users.

Hide Answer
Suggested Answer: D

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 8:06 p.m.
Explain:

1. AWS WAF (Web Application Firewall): AWS WAF helps protect web applications from common web
exploits that could affect application availability, compromise security, or consume excessive resources. By
configuring AWS WAF, you can define rules to filter web traffic and block potentially harmful requests, including
those attempting SQL injection and other web-based attacks. AWS WAF integrates seamlessly with Amazon
CloudFront, Application Load Balancer (ALB), and API Gateway, making it easy to protect your applications
without significant changes to your architecture.
2. RDS Parameter Groups: RDS parameter groups allow you to configure database engine settings to meet
specific requirements, including security settings. While they may not directly prevent web-based attacks like SQL
injection, they enable you to configure security-related parameters such as enforcing SSL/TLS connections,
setting timeouts, and enabling encryption. These settings help enhance the security posture of your RDS database
with minimal operational overhead.
Option A (using security groups and network ACLs) provides network-level security controls but does not
specifically address web-based attacks like SQL injection. Additionally, managing security groups and network
ACLs might require more effort compared to configuring AWS WAF and RDS parameter groups.
Option C (using AWS Network Firewall) is more focused on network traffic filtering rather than protecting web
applications against specific web-based attacks like SQL injection. While it provides network-level protection, it
may involve more operational overhead compared to using AWS WAF and RDS parameter groups.
Option D (using different database accounts in the application code) is a good security practice to limit the scope
of privileges granted to database users. However, it alone does not protect the application from web-based attacks
like SQL injection. It's a complementary measure that should be used in conjunction with other security
mechanisms like AWS WAF and secure coding practices.
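
A rough sketch of the AWS WAF piece, creating a regional web ACL with the AWS managed SQL-injection rule group and associating it with the Application Load Balancer in front of the EC2 instances; names and the ALB ARN are placeholders.

import boto3

wafv2 = boto3.client("wafv2")

acl = wafv2.create_web_acl(
    Name="app-protection",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-sqli",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-sqli",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "app-protection",
    },
)

wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:region:account:loadbalancer/app/placeholder",  # placeholder ALB ARN
)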

Question #: : 733
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations.
The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to
prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.

Which solution will meet these requirements in the MOST operationally efficient way?
• A. Attach service control policies (SCPs) to the root of the organization to identity the failed login
attempts.
• B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the
organization.
• C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a
central Amazon S3 bucket.
• D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 8:17 p.m.
Explain:
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the
organization.
Here's why:
1. Amazon GuardDuty with RDS Protection: Amazon GuardDuty is a threat detection service that
continuously monitors for malicious activity and unauthorized behavior in AWS accounts. It offers a specific
feature called RDS Protection, which is designed to detect and alert on suspicious activity related to Amazon RDS
instances, including Amazon Aurora PostgreSQL databases. By enabling RDS Protection in GuardDuty for the
member accounts of the organization, the company can automatically detect abnormal failed and incomplete login
attempts to the databases without needing to configure custom monitoring solutions.
2. Operationally Efficient: Enabling RDS Protection in GuardDuty is a straightforward process that can be
done centrally for all member accounts of the organization. Once enabled, GuardDuty automatically analyzes
CloudTrail logs and VPC Flow Logs to identify potential threats, including unauthorized database access attempts.
This approach minimizes the operational overhead of setting up and managing custom monitoring solutions, such
as SCPs, CloudWatch Logs, or CloudTrail configurations.
Option A (using SCPs) does not provide specific capabilities for identifying abnormal login attempts to Aurora
PostgreSQL databases. SCPs are used for controlling access and permissions within AWS Organizations but are
not designed for threat detection or monitoring.
Option C (publishing Aurora logs to CloudWatch Logs) and Option D (publishing database events to CloudTrail)
are valid approaches for capturing database activity, but they require additional configuration and management
to extract meaningful insights and detect abnormal login attempts. These options may involve more operational
overhead compared to using GuardDuty's RDS Protection feature, which offers out-of-the-box threat detection
capabilities tailored for RDS instances.
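
A rough sketch of enabling the feature follows. The feature name RDS_LOGIN_EVENTS and the organization-level parameters are assumptions based on GuardDuty's detector-feature model, included for illustration only.

import boto3

guardduty = boto3.client("guardduty")

detector_id = guardduty.list_detectors()["DetectorIds"][0]

# Turn on RDS Protection (RDS login activity monitoring) for this detector.
guardduty.update_detector(
    DetectorId=detector_id,
    Features=[{"Name": "RDS_LOGIN_EVENTS", "Status": "ENABLED"}],
)

# From the delegated administrator account, auto-enable it for all member accounts.
guardduty.update_organization_configuration(
    DetectorId=detector_id,
    AutoEnableOrganizationMembers="ALL",
    Features=[{"Name": "RDS_LOGIN_EVENTS", "AutoEnable": "ALL"}],
)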

Question #: : 734

A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1
Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection
between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company
and the corporation do not overlap. The company requires connectivity between two Regions and the data centers.
The company needs a solution that is scalable while reducing operational overhead.

What should a solutions architect do to meet these requirements?


• A. Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu-west-2.
• B. Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu-
west-2.
• C. Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN
CloudHub to send and receive data between the data centers and each VPC.
• D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the
virtual private gateways of the VPCs in each Region to the Direct Connect gateway.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Andy_09 at Feb. 5, 2024, 8:34 p.m.

Question #: : 735

A company is developing a mobile game that streams score updates to a backend processor and then posts results
on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the
mobile game updates in order of receipt, and store the processed updates in a highly available database. The
company also wants to minimize the management overhead required to maintain the solution.

What should the solutions architect do to meet these requirements?


• A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams
with AWS Lambda. Store the processed updates in Amazon DynamoDB.
• B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2
instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
• C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an
AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database
running on Amazon EC2.
• D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon
EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an
Amazon RDS Multi-AZ DB instance.

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 8:40 p.m.

Question #: : 736

A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are
stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution
that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational
overhead.

Which solution meets these requirements and is MOST cost-effective?


• A. Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the
centralized S3 bucket.
• B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-
west-2. Use this S3 bucket for log analysis.
• C. Write a script that uses the PutObject API operation every day to copy the entire contents of the
buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
• D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the
S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for
log analysis.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 8:41 p.m.

Question #: : 737

A company has an application that delivers on-demand training videos to students around the world. The
application also allows authorized content developers to upload videos. The data is stored in an Amazon S3 bucket
in the us-east-2 Region.

The company has created an S3 bucket in the eu-west-2 Region and an S3 bucket in the ap-southeast-1 Region.
The company wants to replicate the data to the new S3 buckets. The company needs to minimize latency for
developers who upload videos and students who stream videos near eu-west-2 and ap-southeast-1.

Which combination of steps will meet these requirements with the FEWEST changes to the application? (Choose
two.)
• A. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure
one-way replication from the us-east-2 S3 bucket to the ap-southeast-1 S3 bucket.
• B. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure
one-way replication from the eu-west-2 S3 bucket to the ap-southeast-1 S3 bucket.
• C. Configure two-way (bidirectional) replication among the S3 buckets that are in all three Regions.
• D. Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name
(ARN) of the Multi-Region Access Point for video streaming. Do not modify the application for video uploads.
• E. Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name
(ARN) of the Multi-Region Access Point for video streaming and uploads.

Hide Answer
Suggested Answer: AB

Community vote distribution


CE (100%)
by Andy_09 at Feb. 5, 2024, 8:43 p.m.

Question #: : 738

A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users
also can post photos and videos from inside the app.

Users access content often in the first minutes after the content is posted. New content quickly replaces older
content, and then the older content disappears. The local nature of the news means that users consume 90% of
the content within the AWS Region where it is uploaded.

Which solution will optimize the user experience by providing the LOWEST latency for content uploads?
• A. Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
• B. Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
• C. Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to
Amazon S3.
• D. Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple
distributions of Amazon CloudFront.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 8:46 p.m.

Question #: : 739

A company is building a new application that uses serverless architecture. The architecture will consist of an
Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.

The company wants to add a service that can send messages received from the API Gateway REST API to multiple
target Lambda functions for processing. The service must offer message filtering that gives the target Lambda
functions the ability to receive only the messages the functions need.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Send the requests from the API Gateway REST API to an Amazon Simple Notification Service
(Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic.
Configure the target Lambda functions to poll the different SQS queues.
• B. Send the requests from the API Gateway REST API to Amazon EventBridge. Configure EventBridge
to invoke the target Lambda functions.
• C. Send the requests from the API Gateway REST API to Amazon Managed Streaming for Apache Kafka
(Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.
• D. Send the requests from the API Gateway REST API to multiple Amazon Simple Queue Service
(Amazon SQS) queues. Configure the target Lambda functions to poll the different SQS queues.

Hide Answer
Suggested Answer: D

Community vote distribution


B (67%)
A (33%)
by Andy_09 at Feb. 5, 2024, 8:53 p.m.

Question #: : 740

A company migrated millions of archival files to Amazon S3. A solutions architect needs to implement a solution
that will encrypt all the archival data by using a customer-provided key. The solution must encrypt existing
unencrypted objects and future objects.

Which solution will meet these requirements?


• A. Create a list of unencrypted objects by filtering an Amazon S3 Inventory report. Configure an S3 Batch Operations job to encrypt the objects from the list by using server-side encryption with customer-provided keys (SSE-C). Configure the S3 default encryption feature to use server-side encryption with customer-provided keys (SSE-C).
• B. Use S3 Storage Lens metrics to identify unencrypted S3 buckets. Configure the S3 default encryption feature to use server-side encryption with AWS KMS keys (SSE-KMS).
• C. Create a list of unencrypted objects by filtering the AWS usage report for Amazon S3. Configure an AWS Batch job to encrypt the objects from the list by using server-side encryption with AWS KMS keys (SSE-KMS). Configure the S3 default encryption feature to use server-side encryption with AWS KMS keys (SSE-KMS).
• D. Create a list of unencrypted objects by filtering the AWS usage report for Amazon S3. Configure the S3 default encryption feature to use server-side encryption with customer-provided keys (SSE-C).

Hide Answer
Suggested Answer: B
Community vote distribution
A (100%)
by Andy_09 at Feb. 5, 2024, 8:54 p.m.
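
For reference, the per-object operation that the S3 Batch Operations job in option A performs is essentially a copy-in-place with SSE-C headers. A minimal boto3 sketch with a hypothetical bucket, key, and locally generated 256-bit customer key:

import os
import boto3

s3 = boto3.client("s3")
customer_key = os.urandom(32)  # hypothetical 256-bit customer-provided key; in practice supplied by the customer

# Re-encrypt an existing object by copying it over itself with SSE-C
s3.copy_object(
    Bucket="archive-bucket",                 # hypothetical bucket
    Key="records/file-0001.dat",             # hypothetical key
    CopySource={"Bucket": "archive-bucket", "Key": "records/file-0001.dat"},
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,             # boto3 base64-encodes the key and adds the MD5 header
)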

Question #: : 741

The DNS provider that hosts a company's domain name records is experiencing outages that cause service
disruption for a website running on AWS. The company needs to migrate to a more resilient managed DNS service
and wants the service to run on AWS.

What should a solutions architect do to rapidly migrate the DNS hosting service?
• A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing
the domain records hosted by the previous provider.
• B. Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing
the domain records hosted by the previous provider.
• C. Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS
Directory Service for Microsoft Active Directory for the domain records.
• D. Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the
provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the
domain to the IP addresses that are specified in the inbound endpoint.

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 8:57 p.m.

Question #: : 742

A company is building an application on AWS that connects to an Amazon RDS database. The company wants to
manage the application configuration and to securely store and retrieve credentials for the database and other
services.

Which solution will meet these requirements with the LEAST administrative overhead?
• A. Use AWS AppConfig to store and manage the application configuration. Use AWS Secrets Manager
to store and retrieve the credentials.
• B. Use AWS Lambda to store and manage the application configuration. Use AWS Systems Manager
Parameter Store to store and retrieve the credentials.
• C. Use an encrypted application configuration file. Store the file in Amazon S3 for the application
configuration. Create another S3 file to store and retrieve the credentials.
• D. Use AWS AppConfig to store and manage the application configuration. Use Amazon RDS to store
and retrieve the credentials.

Hide Answer
Suggested Answer: B

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 8:58 p.m.
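
A minimal boto3 sketch of the credential-retrieval half of option A; the secret name and JSON fields are hypothetical.

import json
import boto3

secrets = boto3.client("secretsmanager")

response = secrets.get_secret_value(SecretId="prod/app/db-credentials")  # hypothetical secret name
creds = json.loads(response["SecretString"])

db_user = creds["username"]
db_password = creds["password"]
# Application configuration (feature flags, endpoints, and so on) would be fetched separately from AWS AppConfig.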

Question #: : 743

To meet security requirements, a company needs to encrypt all of its application data in transit while
communicating with an Amazon RDS MySQL DB instance. A recent security audit revealed that encryption at
rest is enabled using AWS Key Management Service (AWS KMS), but data in transit is not enabled.

What should a solutions architect do to satisfy the security requirements?


• A. Enable IAM database authentication on the database.
• B. Provide self-signed certificates. Use the certificates in all connections to the RDS instance.
• C. Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption enabled.
• D. Download AWS-provided root certificates. Provide the certificates in all connections to the RDS
instance.

Hide Answer
Suggested Answer: A

Community vote distribution


D (100%)
by Andy_09 at Feb. 5, 2024, 9:02 p.m.
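
A minimal sketch of option D, assuming PyMySQL as the client library; the endpoint, credentials, and certificate path are hypothetical, and the certificate bundle is the AWS-provided RDS root CA downloaded ahead of time.

import os
import pymysql

conn = pymysql.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",  # hypothetical RDS endpoint
    user="app_user",
    password=os.environ["DB_PASSWORD"],
    database="appdb",
    ssl={"ca": "/opt/certs/global-bundle.pem"},  # AWS-provided RDS root certificate bundle enables TLS verification
)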

Question #: : 744

A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load
Balancing (ELB) load balancer. However, many of the web service clients can only reach IP addresses authorized
on their firewalls.
What should a solutions architect recommend to meet the clients’ needs?
• A. A Network Load Balancer with an associated Elastic IP address.
• B. An Application Load Balancer with an associated Elastic IP address.
• C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address.
• D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.

Hide Answer
Suggested Answer: D

Community vote distribution


A (75%)
C (25%)
by Andy_09 at Feb. 5, 2024, 9:08 p.m.

Question #: : 745

A company has established a new AWS account. The account is newly provisioned and no changes have been
made to the default settings. The company is concerned about the security of the AWS account root user.

What should be done to secure the root user?


• A. Create IAM users for daily administrative tasks. Disable the root user.
• B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
• C. Generate an access key for the root user. Use the access key for daily administration tasks instead of
the AWS Management Console.
• D. Provide the root user credentials to the most senior solutions architect. Have the solutions architect
use the root user for daily administration tasks.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 9:09 p.m.

Question #: : 746
A company is deploying an application that processes streaming data in near-real time. The company plans to use
Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest
possible latency between nodes.

Which combination of network solutions will meet these requirements? (Choose two.)
• A. Enable and configure enhanced networking on each EC2 instance.
• B. Group the EC2 instances in separate accounts.
• C. Run the EC2 instances in a cluster placement group.
• D. Attach multiple elastic network interfaces to each EC2 instance.
• E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.

Hide Answer
Suggested Answer: BE

Community vote distribution


AC (100%)
by Andy_09 at Feb. 5, 2024, 9:11 p.m.
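
A minimal boto3 sketch of options A and C together: a cluster placement group plus an ENA-capable instance type (enhanced networking is typically enabled by default on current Amazon Linux AMIs). The AMI ID and group name are hypothetical.

import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="low-latency-group", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # hypothetical AMI
    InstanceType="c5n.9xlarge",         # ENA-capable type for enhanced networking
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-group"},  # keeps nodes close together for low latency
)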

Question #: : 747

A financial services company wants to shut down two data centers and migrate more than 100 TB of data to AWS.
The data has an intricate directory structure with millions of small files stored in deep hierarchies of subfolders.
Most of the data is unstructured, and the company’s file storage consists of SMB-based storage types from multiple
vendors. The company does not want to change its applications to access the data after migration.

What should a solutions architect do to meet these requirements with the LEAST operational overhead?
• A. Use AWS Direct Connect to migrate the data to Amazon S3.
• B. Use AWS DataSync to migrate the data to Amazon FSx for Lustre.
• C. Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.
• D. Use AWS Direct Connect to migrate the data on-premises file storage to an AWS Storage Gateway
volume gateway.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 9:13 p.m.

Question #: : 748

A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The
company sets up a dedicated monitoring member account in the organization. The company wants to query and
visualize observability data across the accounts by using Amazon CloudWatch.

Which solution will meet these requirements?


• A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS
CloudFormation template provided by the monitoring account in each AWS account to share the data with the
monitoring account.
• B. Set up service control policies (SCPs) to provide access to CloudWatch in the monitoring account
under the Organizations root organizational unit (OU).
• C. Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy
to have access to query and visualize the CloudWatch data in the account. Attach the new IAM policy to the new
IAM user.
• D. Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS
account. Attach the IAM policies to the new IAM user.

Hide Answer
Suggested Answer: C

Community vote distribution


A (67%)
C (33%)
by Andy_09 at Feb. 5, 2024, 9:17 p.m.

Question #: : 749

A company’s website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto
Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution,
and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront
distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from
accessing the website.

What should a solutions architect do to protect the application?


• A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP
address.
• B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
• C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the
malicious IP address.
• D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the
malicious IP address.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 9:20 p.m.
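
A minimal boto3 sketch of the IP-blocking mechanism in option B; the IP set name and address are hypothetical (a documentation address is used). The IP set is then referenced by a block rule in the web ACL attached to the CloudFront distribution.

import boto3

# CLOUDFRONT-scoped WAF resources must be created in us-east-1
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_ip_set(
    Name="blocked-ips",               # hypothetical name
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32"],    # the malicious IP address to block
)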

Question #: : 750

A company sets up an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect
must design a solution to provide access to the accounts for several thousand employees. The company has an
existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS.

Which solution will meet these requirements?


• A. Create IAM users for the employees in the required AWS accounts. Connect IAM users to the existing
IdP. Configure federated authentication for the IAM users.
• B. Set up AWS account root users with user email addresses and passwords that are synchronized from
the existing IdP.
• C. Configure AWS IAM Identity Center (AWS Single Sign-On). Connect IAM Identity Center to the
existing IdP. Provision users and groups from the existing IdP.
• D. Use AWS Resource Access Manager (AWS RAM) to share access to the AWS accounts with the users
in the existing IdP.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 9:22 p.m.

Question #: : 751
A solutions architect is designing an AWS Identity and Access Management (IAM) authorization model for a
company's AWS account. The company has designated five specific employees to have full access to AWS services
and resources in the AWS account.

The solutions architect has created an IAM user for each of the five designated employees and has created an IAM
user group.

Which solution will meet these requirements?


• A. Attach the AdministratorAccess resource-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
• B. Attach the SystemAdministrator identity-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
• C. Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
• D. Attach the SystemAdministrator resource-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Andy_09 at Feb. 6, 2024, 6:14 p.m.
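
A minimal boto3 sketch of option C; the group and user names are hypothetical.

import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="full-admins")  # hypothetical group

# AdministratorAccess is an AWS managed identity-based policy
iam.attach_group_policy(
    GroupName="full-admins",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

for user in ["admin-user-1", "admin-user-2"]:  # hypothetical user names; repeat for all five employees
    iam.add_user_to_group(GroupName="full-admins", UserName=user)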

Question #: : 752

A company has a multi-tier payment processing application that is based on virtual machines (VMs). The
communication between the tiers occurs asynchronously through a third-party middleware solution that
guarantees exactly-once delivery.

The company needs a solution that requires the least amount of infrastructure management. The solution must
guarantee exactly-once delivery for application messaging.

Which combination of actions will meet these requirements? (Choose two.)


• A. Use AWS Lambda for the compute layers in the architecture.
• B. Use Amazon EC2 instances for the compute layers in the architecture.
• C. Use Amazon Simple Notification Service (Amazon SNS) as the messaging component between the
compute layers.
• D. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component
between the compute layers.
• E. Use containers that are based on Amazon Elastic Kubernetes Service (Amazon EKS) for the compute
layers in the architecture.

Hide Answer
Suggested Answer: AD

Community vote distribution


AD (100%)
by hajra313 at Feb. 8, 2024, 11 p.m.
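
A minimal boto3 sketch of the exactly-once behavior behind option D; the queue name, group ID, and payload are hypothetical.

import boto3

sqs = boto3.client("sqs")

queue = sqs.create_queue(
    QueueName="payments.fifo",  # FIFO queue names must end in .fifo
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "false"},
)

sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"payment_id": "p-123", "amount": "42.50"}',
    MessageGroupId="customer-42",       # ordering is preserved within a message group
    MessageDeduplicationId="p-123",     # duplicates with the same ID are dropped within the deduplication window
)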

Question #: : 753

A company has a nightly batch processing routine that analyzes report files that an on-premises file system receives
daily through SFTP. The company wants to move the solution to the AWS Cloud. The solution must be highly
available and resilient. The solution also must minimize operational effort.

Which solution meets these requirements?


• A. Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for
storage. Use an Amazon EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch
operation.
• B. Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic Block
Store (Amazon EBS) volume for storage. Use an Auto Scaling group with the minimum number of instances and
desired number of instances set to 1.
• C. Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic File
System (Amazon EFS) file system for storage. Use an Auto Scaling group with the minimum number of instances
and desired number of instances set to 1.
• D. Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull
the batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto
Scaling group with a scheduled scaling policy to run the batch operation.

Hide Answer
Suggested Answer: B

Community vote distribution


D (80%)
A (20%)
by Andy_09 at Feb. 5, 2024, 9:30 p.m.

Question #: : 754

A company has users all around the world accessing its HTTP-based application deployed on Amazon EC2
instances in multiple AWS Regions. The company wants to improve the availability and performance of the
application. The company also wants to protect the application against common web exploits that may affect
availability, compromise security, or consume excessive resources. Static IP addresses are required.

What should a solutions architect recommend to accomplish this?


• A. Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF
on the NLBs. Create an accelerator using AWS Global Accelerator and register the NLBs as endpoints.
• B. Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Deploy AWS WAF
on the ALBs. Create an accelerator using AWS Global Accelerator and register the ALBs as endpoints.
• C. Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF
on the NLBs. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based
routing to route requests to the NLBs.
• D. Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an Amazon
CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the
ALBs. Deploy AWS WAF on the CloudFront distribution.

Hide Answer
Suggested Answer: C

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 9:32 p.m.

Question #: : 755

A company’s data platform uses an Amazon Aurora MySQL database. The database has multiple read replicas and
multiple DB instances across different Availability Zones. Users have recently reported errors from the database
that indicate that there are too many connections. The company wants to reduce the failover time by 20% when
a read replica is promoted to primary writer.

Which solution will meet this requirement?


• A. Switch from Aurora to Amazon RDS with Multi-AZ cluster deployment.
• B. Use Amazon RDS Proxy in front of the Aurora database.
• C. Switch to Amazon DynamoDB with DynamoDB Accelerator (DAX) for read connections.
• D. Switch to Amazon Redshift with relocation capability.
Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 9:37 p.m.

Question #: : 756

A company stores text files in Amazon S3. The text files include customer chat messages, date and time
information, and customer personally identifiable information (PII).

The company needs a solution to provide samples of the conversations to an external service provider for quality
control. The external service provider needs to randomly pick sample conversations up to the most recent
conversation. The company must not share the customer PII with the external service provider. The solution must
scale when the number of customer conversations increases.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when
the function reads the file. Instruct the external service provider to access the Object Lambda Access Point.
• B. Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII
from the files, and writes the redacted files to a different S3 bucket. Instruct the external service provider to access
the bucket that does not contain the PII.
• C. Create a web application on an Amazon EC2 instance that presents a list of the files, redacts the PII from the files, and allows the external service provider to download new versions of the files that have the PII redacted.
• D. Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only the data in the
files that does not contain PII. Configure the Lambda function to store the non-PII data in the DynamoDB table
when a new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.

Hide Answer
Suggested Answer: D

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 9:39 p.m.
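
A minimal sketch of the Lambda function behind an S3 Object Lambda Access Point (option A). The redaction logic here is a deliberately naive placeholder; a real implementation might use Amazon Comprehend PII detection.

import re
import boto3
import urllib3

http = urllib3.PoolManager()
s3 = boto3.client("s3")

def redact_pii(text):
    # naive illustration: mask anything that looks like an email address
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED]", text)

def handler(event, context):
    ctx = event["getObjectContext"]
    original = http.request("GET", ctx["inputS3Url"]).data.decode()  # presigned URL for the original object
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redact_pii(original),
    )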

Question #: : 757

A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified, and
the system cannot run on more than one instance. A solutions architect must design a resilient solution that can
improve the recovery time for the system.
What should the solutions architect recommend to meet these requirements?
• A. Enable termination protection for the EC2 instance.
• B. Configure the EC2 instance for Multi-AZ deployment.
• C. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
• D. Launch the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID
configurations for storage redundancy.

Hide Answer
Suggested Answer: A

Community vote distribution


D (80%)
C (20%)
by Andy_09 at Feb. 6, 2024, 6:21 p.m.

Question #: : 758

A company wants to deploy its containerized application workloads to a VPC across three Availability Zones. The
company needs a solution that is highly available across Availability Zones. The solution must require minimal
changes to the application.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS Service Auto Scaling
to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with
an Availability Zone attribute.
• B. Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure Application
Auto Scaling to use target tracking scaling. Set the minimum capacity to 3.
• C. Use Amazon EC2 Reserved Instances. Launch three EC2 instances in a spread placement group.
Configure an Auto Scaling group to use target tracking scaling. Set the minimum capacity to 3.
• D. Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure
Application Auto Scaling to use Lambda as a scalable target. Set the minimum capacity to 3.

Hide Answer
Suggested Answer: B

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 9:50 p.m.

Question #: : 759

A media company stores movies in Amazon S3. Each movie is stored in a single video file that ranges from 1 GB
to 10 GB in size.

The company must be able to provide the streaming content of a movie within 5 minutes of a user purchase. There
is higher demand for movies that are less than 20 years old than for movies that are more than 20 years old. The
company wants to minimize hosting service costs based on demand.

Which solution will meet these requirements?


• A. Store all media content in Amazon S3. Use S3 Lifecycle policies to move media data into the
Infrequent Access tier when the demand for a movie decreases.
• B. Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent
Access (S3 Standard-IA). When a user orders an older movie, retrieve the video file by using standard retrieval.
• C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier
Flexible Retrieval. When a user orders an older movie, retrieve the video file by using expedited retrieval.
• D. Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible
Retrieval. When a user orders an older movie, retrieve the video file by using bulk retrieval.

Hide Answer
Suggested Answer: A

Community vote distribution


B (57%)
C (29%)
14%
by Andy_09 at Feb. 5, 2024, 9:52 p.m.

Question #: : 760

A solutions architect needs to design the architecture for an application that a vendor provides as a Docker
container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be
serverless.

Which solution meets these requirements with the LEAST operational overhead?
• A. Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted
volume that has more than 50 GB of space.
• B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block
Store (Amazon EBS) volume that has more than 50 GB of space.
• C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch
type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume.
Create a service with that task definition.
• D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch
type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task
definition for the container image. Create a service with that task definition.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 9:54 p.m.

Question #: : 761

A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management
Console. The directory service is not compatible with Security Assertion Markup Language (SAML).

Which solution meets these requirements?


• A. Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP.
• B. Create an IAM policy that uses AWS credentials, and integrate the policy into LDAP.
• C. Set up a process that rotates the IAM credentials whenever LDAP credentials are updated.
• D. Develop an on-premises custom identity broker application or process that uses AWS Security Token
Service (AWS STS) to get short-lived credentials.

Hide Answer
Suggested Answer: C

Community vote distribution


D (100%)
by kempes at Feb. 7, 2024, 11:26 p.m.

Question #: : 762
A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2
instances. The AMIs contain critical data and configurations that are necessary for the company’s operations. The
company wants to implement a solution that will recover accidentally deleted AMIs quickly and efficiently.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a
separate AWS account.
• B. Copy all AMIs to another AWS account periodically.
• C. Create a retention rule in Recycle Bin.
• D. Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.

Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 9:59 p.m.

Question #: : 763

A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud
within the next month. The company’s current network connection allows up to 100 Mbps uploads for this purpose
during the night only.

What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
• A. Use AWS Snowmobile to ship the data to AWS.
• B. Order multiple AWS Snowball devices to ship the data to AWS.
• C. Enable Amazon S3 Transfer Acceleration and securely upload the data.
• D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 5, 2024, 10:02 p.m.
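
A rough back-of-the-envelope check of why the network options miss the one-month deadline, assuming uploads can run for 8 hours each night.

# 150 TB over a 100 Mbps link, usable 8 hours per night
data_bits = 150 * 10**12 * 8                 # ~1.2e15 bits
bits_per_night = 100 * 10**6 * 8 * 3600      # ~2.88e12 bits per night
print(round(data_bits / bits_per_night))     # ~417 nights, far beyond one month, so ship Snowball devices instead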

Question #: : 764
A company wants to migrate its three-tier application from on premises to AWS. The web tier and the application
tier are running on third-party virtual machines (VMs). The database tier is running on MySQL.

The company needs to migrate the application by making the fewest possible changes to the architecture. The
company also needs a database solution that can restore data to a specific point in time.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Migrate the web tier and the application tier to Amazon EC2 instances in private subnets. Migrate the
database tier to Amazon RDS for MySQL in private subnets.
• B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2
instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
• C. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2
instances in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets.
• D. Migrate the web tier and the application tier to Amazon EC2 instances in public subnets. Migrate the
database tier to Amazon Aurora MySQL in public subnets.

Hide Answer
Suggested Answer: A

Community vote distribution


B (86%)
14%
by Andy_09 at Feb. 5, 2024, 10:03 p.m.

Question #: : 765

A development team is collaborating with another company to create an integrated product. The other company
needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development
team's account. The other company wants to poll the queue without giving up its own account permissions to do
so.

How should a solutions architect provide access to the SQS queue?


• A. Create an instance profile that provides the other company access to the SQS queue.
• B. Create an IAM policy that provides the other company access to the SQS queue.
• C. Create an SQS access policy that provides the other company access to the SQS queue.
• D. Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other
company access to the SQS queue.
Hide Answer
Suggested Answer: A

Community vote distribution


C (100%)
by Andy_09 at Feb. 5, 2024, 10:05 p.m.
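
A minimal boto3 sketch of the SQS access policy (resource-based policy) in option C; the account IDs and queue name are hypothetical.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/shared-queue"  # hypothetical queue

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # the other company's account
        "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
        "Resource": "arn:aws:sqs:us-east-1:111122223333:shared-queue",
    }],
}

sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})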

Question #: : 766

A company’s developers want a secure way to gain SSH access on the company's Amazon EC2 instances that run
the latest version of Amazon Linux. The developers work remotely and in the corporate office.

The company wants to use AWS services as a part of the solution. The EC2 instances are hosted in a VPC private
subnet and access the internet through a NAT gateway that is deployed in a public subnet.

What should a solutions architect do to meet these requirements MOST cost-effectively?


• A. Create a bastion host in the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection
IAM permission to the developers. Install EC2 Instance Connect so that the developers can connect to the EC2
instances.
• B. Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct
the developers to use the Site-to-Site VPN connection to access the EC2 instances when the developers are on
the corporate network. Instruct the developers to set up another VPN connection for access when they work
remotely.
• C. Create a bastion host in the public subnet of the VPC. Configure the security groups and SSH keys of
the bastion host to only allow connections and SSH authentication from the developers’ corporate and remote
networks. Instruct the developers to connect through the bastion host by using SSH to reach the EC2 instances.
• D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the
EC2 instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2 instances.

Hide Answer
Suggested Answer: B

Community vote distribution


D (100%)
by Andy_09 at Feb. 5, 2024, 10:06 p.m

Question #: : 767
A pharmaceutical company is developing a new drug. The volume of data that the company generates has grown
exponentially over the past few months. The company's researchers regularly require a subset of the entire dataset
to be immediately available with minimal lag. However, the entire dataset does not need to be accessed on a daily
basis. All the data currently resides in on-premises storage arrays, and the company wants to reduce ongoing
capital expenses.

Which storage solution should a solutions architect recommend to meet these requirements?
• A. Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing
basis.
• B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate
the data to the Storage Gateway appliance.
• C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as
the target storage. Migrate the data to the Storage Gateway appliance.
• D. Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate
data to an Amazon Elastic File System (Amazon EFS) file system.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by Andy_09 at Feb. 6, 2024, 6:50 a.m.

Question #: : 768

A company has a business-critical application that runs on Amazon EC2 instances. The application stores data in
an Amazon DynamoDB table. The company must be able to revert the table to any point within the last 24 hours.

Which solution meets these requirements with the LEAST operational overhead?
• A. Configure point-in-time recovery for the table.
• B. Use AWS Backup for the table.
• C. Use an AWS Lambda function to make an on-demand backup of the table every hour.
• D. Turn on streams on the table to capture a log of all changes to the table in the last 24 hours. Store a
copy of the stream in an Amazon S3 bucket.

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by Andy_09 at Feb. 5, 2024, 10:11 p.m.
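
A minimal boto3 sketch of option A; the table name is hypothetical.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_continuous_backups(
    TableName="orders",  # hypothetical table
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
# restore_table_to_point_in_time can then restore the table to any second within the recovery window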

Question #: : 769

A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are
processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads varies
from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to
design a cost-effective architecture that will meet these requirements.

What should the solutions architect recommend?


• A. Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
• B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda
function to process the files.
• C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS
Lambda function to process the files.
• D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded
to Amazon S3. Invoke an AWS Lambda function to process the files.

Hide Answer
Suggested Answer: C

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 6:57 a.m.
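
A minimal boto3 sketch of the event notification in option B; the bucket name and function ARN are hypothetical. The Lambda function also needs a resource-based permission that allows s3.amazonaws.com to invoke it.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="upload-bucket",  # hypothetical bucket
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:extract-metadata",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)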

Question #: : 770

A company’s application is deployed on Amazon EC2 instances and uses AWS Lambda functions for an event-
driven architecture. The company uses nonproduction development environments in a different AWS account to
test new features before the company deploys the features to production.

The production instances show constant usage because of customers in different time zones. The company uses
nonproduction instances only during business hours on weekdays. The company does not use the nonproduction
instances on the weekends. The company wants to optimize the costs to run its application on AWS.

Which solution will meet these requirements MOST cost-effectively?


• A. Use On-Demand Instances for the production instances. Use Dedicated Hosts for the nonproduction
instances on weekends only.
• B. Use Reserved Instances for the production instances and the nonproduction instances. Shut down the
nonproduction instances when not in use.
• C. Use Compute Savings Plans for the production instances. Use On-Demand Instances for the
nonproduction instances. Shut down the nonproduction instances when not in use.
• D. Use Dedicated Hosts for the production instances. Use EC2 Instance Savings Plans for the
nonproduction instances.

Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by Andy_09 at Feb. 6, 2024, 7:03 a.m.

Question #: : 771

A company stores data in an on-premises Oracle relational database. The company needs to make the data
available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS Site-to-Site VPN connection to
connect its on-premises network to AWS.

The company must capture the changes that occur to the source database during the migration to Aurora
PostgreSQL.

Which solution will meet these requirements?


• A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora
PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full-load migration task to migrate
the data.
• B. Use AWS DataSync to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora
PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
• C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora
PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate
the ongoing changes.
• D. Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Import the S3 data to
Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.

Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by Andy_09 at Feb. 6, 2024, 7:04 a.m.
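
A minimal boto3 sketch of the change-data-capture setting in option C, assuming the DMS endpoints and replication instance already exist; all ARNs are hypothetical.

import json
import boto3

dms = boto3.client("dms")

table_mappings = {"rules": [{
    "rule-type": "selection", "rule-id": "1", "rule-name": "include-all",
    "object-locator": {"schema-name": "%", "table-name": "%"},
    "rule-action": "include",
}]}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-pg",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:ORACLESRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:AURORAPGTGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:REPLINSTANCE",
    MigrationType="full-load-and-cdc",  # migrate existing data, then replicate ongoing changes
    TableMappings=json.dumps(table_mappings),
)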

Question #: : 772

A company built an application with Docker containers and needs to run the application in the AWS Cloud. The
company wants to use a managed service to host the application.

The solution must scale in and out appropriately according to demand on the individual container services. The
solution also must not result in additional operational overhead or infrastructure to manage.

Which solutions will meet these requirements? (Choose two.)


• A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
• B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
• C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
• D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
• E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.

Hide Answer
Suggested Answer: AC

Community vote distribution


AB (100%)
by Andy_09 at Feb. 6, 2024, 7:24 a.m.

Question #: : 773

An ecommerce company is running a seasonal online sale. The company hosts its website on Amazon EC2
instances spanning multiple Availability Zones. The company wants its website to manage sudden traffic increases
during the sale.

Which solution will meet these requirements MOST cost-effectively?


• A. Create an Auto Scaling group that is large enough to handle peak traffic load. Stop half of the Amazon
EC2 instances. Configure the Auto Scaling group to use the stopped instances to scale out when traffic increases.
• B. Create an Auto Scaling group for the website. Set the minimum size of the Auto Scaling group so that
it can handle high traffic volumes without the need to scale out.
• C. Use Amazon CloudFront and Amazon ElastiCache to cache dynamic content with an Auto Scaling
group set as the origin. Configure the Auto Scaling group with the instances necessary to populate CloudFront
and ElastiCache. Scale in after the cache is fully populated.
• D. Configure an Auto Scaling group to scale out as traffic increases. Create a launch template to start
new instances from a preconfigured Amazon Machine Image (AMI).

Hide Answer
Suggested Answer: A

Community vote distribution


D (100%)
by Andy_09 at Feb. 6, 2024, 7:25 a.m.

Question #: : 774

A solutions architect must provide an automated solution for a company's compliance policy that states security
groups cannot include a rule that allows SSH from 0.0.0.0/0. The company needs to be notified if there is any
breach in the policy. A solution is needed as soon as possible.

What should the solutions architect do to meet these requirements with the LEAST operational overhead?
• A. Write an AWS Lambda script that monitors security groups for SSH being open to 0.0.0.0/0 addresses
and creates a notification every time it finds one.
• B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification
Service (Amazon SNS) notification when a noncompliant rule is created.
• C. Create an IAM role with permissions to globally open security groups and network ACLs. Create an
Amazon Simple Notification Service (Amazon SNS) topic to generate a notification every time the role is assumed
by a user.
• D. Configure a service control policy (SCP) that prevents non-administrative users from creating or
editing security groups. Create a notification in the ticketing system when a user requests a rule that needs
administrator permissions.

Hide Answer
Suggested Answer: C

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 7:28 a.m.
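
A minimal boto3 sketch of option B. The restricted-ssh managed rule uses the INCOMING_SSH_DISABLED source identifier (worth verifying against the current AWS Config managed rules list); compliance-change events can then be routed to an SNS topic through an EventBridge rule.

import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
    }
)
# An EventBridge rule on the "Config Rules Compliance Change" detail type can publish
# NON_COMPLIANT results to an Amazon SNS topic for notification.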

Question #: : 775

A company has deployed an application in an AWS account. The application consists of microservices that run on
AWS Lambda and Amazon Elastic Kubernetes Service (Amazon EKS). A separate team supports each
microservice. The company has multiple AWS accounts and wants to give each team its own account for its
microservices.

A solutions architect needs to design a solution that will provide service-to-service communication over HTTPS
(port 443). The solution also must provide a service registry for service discovery.

Which solution will meet these requirements with the LEAST administrative overhead?
• A. Create an inspection VPC. Deploy an AWS Network Firewall firewall to the inspection VPC. Attach
the inspection VPC to a new transit gateway. Route VPC-to-VPC traffic to the inspection VPC. Apply firewall
rules to allow only HTTPS communication.
• B. Create a VPC Lattice service network. Associate the microservices with the service network. Define
HTTPS listeners for each service. Register microservice compute resources as targets. Identify VPCs that need to
communicate with the services. Associate those VPCs with the service network.
• C. Create a Network Load Balancer (NLB) with an HTTPS listener and target groups for each
microservice. Create an AWS PrivateLink endpoint service for each microservice. Create an interface VPC
endpoint in each VPC that needs to consume that microservice.
• D. Create peering connections between VPCs that contain microservices. Create a prefix list for each
service that requires a connection to a client. Create route tables to route traffic to the appropriate VPC. Create
security groups to allow only HTTPS communication.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 7:33 a.m.

Question #: : 776

A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game
increased in popularity, developers noticed slowdowns related to the game's metadata load times. Performance
metrics indicate that simply scaling the database will not help. A solutions architect must explore all options that
include capabilities for snapshots, replication, and sub-millisecond response times.

What should the solutions architect recommend to solve these issues?


• A. Migrate the database to Amazon Aurora with Aurora Replicas.
• B. Migrate the database to Amazon DynamoDB with global tables.
• C. Add an Amazon ElastiCache for Redis layer in front of the database.
• D. Add an Amazon ElastiCache for Memcached layer in front of the database.

Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by Andy_09 at Feb. 6, 2024, 7:36 a.m.

Question #: : 777

A company uses AWS Organizations for its multi-account AWS setup. The security organizational unit (OU) of
the company needs to share approved Amazon Machine Images (AMIs) with the development OU. The AMIs are
created by using AWS Key Management Service (AWS KMS) encrypted snapshots.

Which solutions will meet these requirements? (Choose two.)


• A. Add the development team's OU Amazon Resource Name (ARN) to the launch permission list for the
AMIs.
• B. Add the Organizations root Amazon Resource Name (ARN) to the launch permission list for the AMIs.
• C. Update the key policy to allow the development team's OU to use the AWS KMS keys that are used
to decrypt the snapshots.
• D. Add the development team’s account Amazon Resource Name (ARN) to the launch permission list
for the AMIs.
• E. Recreate the AWS KMS key. Add a key policy to allow the Organizations root Amazon Resource Name
(ARN) to use the AWS KMS key.

Hide Answer
Suggested Answer: BC

Community vote distribution


AC (100%)
by Andy_09 at Feb. 6, 2024, 7:37 a.m.
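
A minimal boto3 sketch of the mechanisms behind options A and C; the AMI ID, OU ARN, KMS key ARN, and development account are all hypothetical, and the OU-level launch permission field should be confirmed against the current EC2 API reference.

import boto3

ec2 = boto3.client("ec2")
kms = boto3.client("kms")

# Option A: allow the development OU to launch instances from the AMI
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Add": [{
        "OrganizationalUnitArn": "arn:aws:organizations::111122223333:ou/o-example/ou-dev1"
    }]},
)

# Option C: let consuming principals use the KMS key that encrypted the snapshots
# (a grant is shown here; editing the key policy itself is the alternative)
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    GranteePrincipal="arn:aws:iam::444455556666:root",  # hypothetical development account
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)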

Question #: : 778
A data analytics company has 80 offices that are distributed globally. Each office hosts 1 PB of data and has
between 1 and 2 Gbps of internet bandwidth.

The company needs to perform a one-time migration of a large amount of data from its offices to Amazon S3. The
company must complete the migration within 4 weeks.

Which solution will meet these requirements MOST cost-effectively?


• A. Establish a new 10 Gbps AWS Direct Connect connection to each office. Transfer the data to Amazon
S3.
• B. Use multiple AWS Snowball Edge storage-optimized devices to store and transfer the data to Amazon
S3.
• C. Use an AWS Snowmobile to store and transfer the data to Amazon S3.
• D. Set up an AWS Storage Gateway Volume Gateway to transfer the data to Amazon S3.

Hide Answer
Suggested Answer: C

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 7:40 a.m.

Question #: : 779

A company has an Amazon Elastic File System (Amazon EFS) file system that contains a reference dataset. The
company has applications on Amazon EC2 instances that need to read the dataset. However, the applications must
not be able to change the dataset. The company wants to use IAM access control to prevent the applications from
being able to modify or delete the dataset.

Which solution will meet these requirements?


• A. Mount the EFS file system in read-only mode from within the EC2 instances.
• B. Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to
the IAM roles that are attached to the EC2 instances.
• C. Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action
on the EFS file system.
• D. Create an EFS access point for each application. Use Portable Operating System Interface (POSIX)
file permissions to allow read-only access to files in the root directory.

Hide Answer
Suggested Answer: A
Community vote distribution
C (67%)
D (17%)
B (17%)
by Andy_09 at Feb. 6, 2024, 7:41 a.m.
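
A minimal boto3 sketch of an EFS resource policy that denies writes from the instances' IAM role, which is the mechanism option B describes; the file system ID and role ARN are hypothetical.

import json
import boto3

efs = boto3.client("efs")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-instance-role"},  # hypothetical instance role
        "Action": "elasticfilesystem:ClientWrite",
        "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    }],
}

efs.put_file_system_policy(
    FileSystemId="fs-0123456789abcdef0",  # hypothetical file system
    Policy=json.dumps(policy),
)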

Question #: : 780

A company has hired an external vendor to perform work in the company’s AWS account. The vendor uses an
automated tool that is hosted in an AWS account that the vendor owns. The vendor does not have IAM access to
the company’s AWS account. The company needs to grant the vendor access to the company’s AWS account.

Which solution will meet these requirements MOST securely?


• A. Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the
appropriate IAM policies to the role for the permissions that the vendor requires.
• B. Create an IAM user in the company’s account with a password that meets the password complexity
requirements. Attach the appropriate IAM policies to the user for the permissions that the vendor requires.
• C. Create an IAM group in the company’s account. Add the automated tool’s IAM user from the vendor
account to the group. Attach the appropriate IAM policies to the group for the permissions that the vendor
requires.
• D. Create an IAM user in the company’s account that has a permission boundary that allows the vendor’s
account. Attach the appropriate IAM policies to the user for the permissions that the vendor requires.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Andy_09 at Feb. 6, 2024, 7:47 a.m.
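
A minimal boto3 sketch of the cross-account role in option A; the vendor account ID, external ID, and attached policy are hypothetical examples.

import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # vendor's AWS account
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "vendor-tool-42"}},  # optional confused-deputy protection
    }],
}

iam.create_role(RoleName="VendorAccessRole", AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(
    RoleName="VendorAccessRole",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # example scope; attach only what the vendor requires
)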

Question #: : 781

A company wants to run its experimental workloads in the AWS Cloud. The company has a budget for cloud
spending. The company's CFO is concerned about cloud spending accountability for each department. The CFO
wants to receive notification when the spending threshold reaches 60% of the budget.

Which solution will meet these requirements?


• A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets.
Add an alert threshold to receive notification when spending exceeds 60% of the budget.
• B. Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection
to create alert threshold notifications when spending exceeds 60% of the budget.
• C. Use cost allocation tags on AWS resources to label owners. Use AWS Support API on AWS Trusted
Advisor to create alert threshold notifications when spending exceeds 60% of the budget.
• D. Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS
Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Andy_09 at Feb. 6, 2024, 7:48 a.m.
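
A minimal boto3 sketch of the budget and 60% alert in option A; the account ID, budget amount, and email address are hypothetical. Per-department accountability comes from cost allocation tags, which can be added to the budget as cost filters.

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",  # hypothetical payer account
    Budget={
        "BudgetName": "experimental-workloads",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 60.0,               # notify at 60% of the budgeted amount
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "cfo@example.com"}],
    }],
)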

Question #: : 782

A company wants to deploy an internal web application on AWS. The web application must be accessible only
from the company's office. The company needs to download security patches for the web application from the
internet.

The company has created a VPC and has configured an AWS Site-to-Site VPN connection to the company's office.
A solutions architect must design a secure architecture for the web application.

Which solution will meet these requirements?


• A. Deploy the web application on Amazon EC2 instances in public subnets behind a public Application
Load Balancer (ALB). Attach an internet gateway to the VPC. Set the inbound source of the ALB's security group
to 0.0.0.0/0.
• B. Deploy the web application on Amazon EC2 instances in private subnets behind an internal
Application Load Balancer (ALB). Deploy NAT gateways in public subnets. Attach an internet gateway to the
VPC. Set the inbound source of the ALB's security group to the company's office network CIDR block.
• C. Deploy the web application on Amazon EC2 instances in public subnets behind an internal
Application Load Balancer (ALB). Deploy NAT gateways in private subnets. Attach an internet gateway to the
VPC. Set the outbound destination of the ALB’s security group to the company's office network CIDR block.
• D. Deploy the web application on Amazon EC2 instances in private subnets behind a public Application
Load Balancer (ALB). Attach an internet gateway to the VPC. Set the outbound destination of the ALB’s security
group to 0.0.0.0/0.
Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 7:57 a.m.

Question #: : 783

A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The
company needs to migrate the data to an AWS managed service for development and maintenance of the
application data. The solution must require minimal operational support and provide immutable,
cryptographically verifiable logs of data changes.

Which solution will meet these requirements MOST cost-effectively?


• A. Copy the records from the application into an Amazon Redshift cluster.
• B. Copy the records from the application into an Amazon Neptune cluster.
• C. Copy the records from the application into an Amazon Timestream database.
• D. Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB)
ledger.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Andy_09 at Feb. 6, 2024, 8:02 a.m.

Question #: : 784

A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket. A series of data
preparation jobs aggregate the data for reporting. The data preparation jobs need to run at regular intervals in
parallel. A few jobs need to run in a specific order later.

The company wants to remove the operational overhead of job error handling, retry logic, and state management.

Which solution will meet these requirements?


• A. Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket.
Invoke other Lambda functions at regularly scheduled intervals.
• B. Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a
regular interval.
• C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the
DataBrew data preparation jobs.
• D. Use AWS Data Pipeline to process the data. Schedule Data Pipeline to process the data once at
midnight.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Andy_09 at Feb. 6, 2024, 8:05 a.m.

Question #: : 785

A solutions architect is designing a payment processing application that runs on AWS Lambda in private subnets
across multiple Availability Zones. The application uses multiple Lambda functions and processes millions of
transactions each day.

The architecture must ensure that the application does not process duplicate payments.

Which solution will meet these requirements?


• A. Use Lambda to retrieve all due payments. Publish the due payments to an Amazon S3 bucket.
Configure the S3 bucket with an event notification to invoke another Lambda function to process the due
payments.
• B. Use Lambda to retrieve all due payments. Publish the due payments to an Amazon Simple Queue
Service (Amazon SQS) queue. Configure another Lambda function to poll the SQS queue and to process the due
payments.
• C. Use Lambda to retrieve all due payments. Publish the due payments to an Amazon Simple Queue
Service (Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and to process
the due payments.
• D. Use Lambda to retrieve all due payments. Store the due payments in an Amazon DynamoDB table.
Configure streams on the DynamoDB table to invoke another Lambda function to process the due payments.

Hide Answer
Suggested Answer: C
Community vote distribution
C (63%)
D (38%)
by Andy_09 at Feb. 6, 2024, 8:08 a.m.

Question #: : 786

A company runs multiple workloads in its on-premises data center. The company's data center cannot scale fast
enough to meet the company's expanding business needs. The company wants to collect usage and configuration
data about the on-premises servers and workloads to plan a migration to AWS.

Which solution will meet these requirements?


• A. Set the home AWS Region in AWS Migration Hub. Use AWS Systems Manager to collect data about
the on-premises servers.
• B. Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect
data about the on-premises servers.
• C. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS
Trusted Advisor to collect data about the on-premises servers.
• D. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS
Database Migration Service (AWS DMS) to collect data about the on-premises servers.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 8:09 a.m.

Question #: : 787

A company has an organization in AWS Organizations that has all features enabled. The company requires that
all API calls and logins in any existing or new AWS account must be audited. The company needs a managed
solution to prevent additional work and to minimize costs. The company also needs to know when any AWS
account is not compliant with the AWS Foundational Security Best Practices (FSBP) standard.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS
Security Hub and AWS Control Tower Account Factory in the environment.
• B. Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable
AWS Security Hub and AWS Control Tower Account Factory in the environment.
• C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ).
Submit an RFC to self-service provision Amazon GuardDuty in the MALZ.
• D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ).
Submit an RFC to self-service provision AWS Security Hub in the MALZ.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Andy_09 at Feb. 6, 2024, 8:11 a.m.

Question #: : 788

A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. The company
occasionally needs to use SQL to analyze the log files.

Which solution will meet these requirements MOST cost-effectively?


• A. Create an Amazon Aurora MySQL database. Migrate the data from the S3 bucket into Aurora by using
AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
• B. Create an Amazon Redshift cluster. Use Redshift Spectrum to run SQL statements directly on the
data in the S3 bucket.
• C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon
Athena to run SQL statements directly on the data in the S3 bucket.
• D. Create an Amazon EMR cluster. Use Apache Spark SQL to run SQL statements directly on the data
in the S3 bucket.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Andy_09 at Feb. 6, 2024, 8:11 a.m.

Question #: : 789
A company needs a solution to prevent AWS CloudFormation stacks from deploying AWS Identity and Access
Management (IAM) resources that include an inline policy or “*” in the statement. The solution must also prohibit
deployment of Amazon EC2 instances with public IP addresses. The company has AWS Control Tower enabled
in its organization in AWS Organizations.

Which solution will meet these requirements?


• A. Use AWS Control Tower proactive controls to block deployment of EC2 instances with public IP
addresses and inline policies with elevated access or “*”.
• B. Use AWS Control Tower detective controls to block deployment of EC2 instances with public IP
addresses and inline policies with elevated access or “*”.
• C. Use AWS Config to create rules for EC2 and IAM compliance. Configure the rules to run an AWS
Systems Manager Session Manager automation to delete a resource when it is not compliant.
• D. Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the
actions lead to noncompliance.

Hide Answer
Suggested Answer: D

Community vote distribution


A (67%)
D (33%)
by Andy_09 at Feb. 6, 2024, 8:12 a.m.

Question #: : 790

A company's web application that is hosted in the AWS Cloud recently increased in popularity. The web
application currently exists on a single Amazon EC2 instance in a single public subnet. The web application has
not been able to meet the demand of the increased web traffic.

The company needs a solution that will provide high availability and scalability to meet the increased user demand
without rewriting the web application.

Which combination of steps will meet these requirements? (Choose two.)


• A. Replace the EC2 instance with a larger compute optimized instance.
• B. Configure Amazon EC2 Auto Scaling with multiple Availability Zones in private subnets.
• C. Configure a NAT gateway in a public subnet to handle web requests.
• D. Replace the EC2 instance with a larger memory optimized instance.
• E. Configure an Application Load Balancer in a public subnet to distribute web traffic.
Hide Answer
Suggested Answer: BE

Community vote distribution


BE (100%)
by Andy_09 at Feb. 6, 2024, 8:13 a.m.

Question #: : 791

A company has AWS Lambda functions that use environment variables. The company does not want its developers
to see environment variables in plaintext.

Which solution will meet these requirements?


• A. Deploy code to Amazon EC2 instances instead of using Lambda functions.
• B. Configure SSL encryption on the Lambda functions to use AWS CloudHSM to store and encrypt the
environment variables.
• C. Create a certificate in AWS Certificate Manager (ACM). Configure the Lambda functions to use the
certificate to encrypt the environment variables.
• D. Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda
functions to use the KMS key to store and encrypt the environment variables.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Andy_09 at Feb. 6, 2024, 8:42 a.m.

Question #: : 792

An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs
to offer a web analytics service to millions of users. Users must be verified by using an authentication service to
access the APIs.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway
REST APIs with a Cognito authorizer.
• B. Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway
HTTP APIs with a Cognito authorizer.
• C. Configure an AWS Lambda function to handle user authentication. Implement Amazon API Gateway
REST APIs with a Lambda authorizer.
• D. Configure an IAM user to handle user authentication. Implement Amazon API Gateway HTTP APIs
with an IAM authorizer.

Hide Answer
Suggested Answer: D

Community vote distribution


A (80%)
B (20%)
by Andy_09 at Feb. 6, 2024, 8:52 a.m.

Question #: : 793

A company has a mobile app for customers. The app’s data is sensitive and must be encrypted at rest. The company
uses AWS Key Management Service (AWS KMS).

The company needs a solution that prevents the accidental deletion of KMS keys. The solution must use Amazon
Simple Notification Service (Amazon SNS) to send an email notification to administrators when a user attempts
to delete a KMS key.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an Amazon EventBridge rule that reacts when a user tries to delete a KMS key. Configure an
AWS Config rule that cancels any deletion of a KMS key. Add the AWS Config rule as a target of the EventBridge
rule. Create an SNS topic that notifies the administrators.
• B. Create an AWS Lambda function that has custom logic to prevent KMS key deletion. Create an
Amazon CloudWatch alarm that is activated when a user tries to delete a KMS key. Create an Amazon EventBridge
rule that invokes the Lambda function when the DeleteKey operation is performed. Create an SNS topic.
Configure the EventBridge rule to publish an SNS message that notifies the administrators.
• C. Create an Amazon EventBridge rule that reacts when the KMS DeleteKey operation is performed.
Configure the rule to initiate an AWS Systems Manager Automation runbook. Configure the runbook to cancel
the deletion of the KMS key. Create an SNS topic. Configure the EventBridge rule to publish an SNS message
that notifies the administrators.
• D. Create an AWS CloudTrail trail. Configure the trail to deliver logs to a new Amazon CloudWatch log
group. Create a CloudWatch alarm based on the metric filter for the CloudWatch log group. Configure the alarm
to use Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.
Hide Answer
Suggested Answer: D

Community vote distribution


C (100%)
by Andy_09 at Feb. 6, 2024, 8:55 a.m.

Question #: : 794

A company wants to analyze and generate reports to track the usage of its mobile app. The app is popular and has
a global user base. The company uses a custom report building program to analyze application usage.

The program generates multiple reports during the last week of each month. The program takes less than 10
minutes to produce each report. The company rarely uses the program to generate reports outside of the last week
of each month. The company wants to generate reports in the least amount of time when the reports are requested.

Which solution will meet these requirements MOST cost-effectively?


• A. Run the program by using Amazon EC2 On-Demand Instances. Create an Amazon EventBridge rule
to start the EC2 instances when reports are requested. Run the EC2 instances continuously during the last week
of each month.
• B. Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function
when reports are requested.
• C. Run the program in Amazon Elastic Container Service (Amazon ECS). Schedule Amazon ECS to run
the program when reports are requested.
• D. Run the program by using Amazon EC2 Spot Instances. Create an Amazon EventBridge rule to start
the EC2 instances when reports are requested. Run the EC2 instances continuously during the last week of each
month.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 9:04 a.m.

Question #: : 795

A company is designing a tightly coupled high performance computing (HPC) environment in the AWS Cloud.
The company needs to include features that will optimize the HPC environment for networking and storage.

Which combination of solutions will meet these requirements? (Choose two.)


• A. Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator.
• B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
• C. Create an Amazon CloudFront distribution. Configure the viewer protocol policy to be HTTP and
HTTPS.
• D. Launch Amazon EC2 instances. Attach an Elastic Fabric Adapter (EFA) to the instances.
• E. Create an AWS Elastic Beanstalk deployment to manage the environment.

Hide Answer
Suggested Answer: BD

Community vote distribution


BD (100%)
by Andy_09 at Feb. 6, 2024, 9:06 a.m.

Question #: : 796

A company needs a solution to prevent photos with unwanted content from being uploaded to the company's web
application. The solution must not involve training a machine learning (ML) model.

Which solution will meet these requirements?


• A. Create and deploy a model by using Amazon SageMaker Autopilot. Create a real-time endpoint that
the web application invokes when new photos are uploaded.
• B. Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create
a Lambda function URL that the web application invokes when new photos are uploaded.
• C. Create an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content.
Associate the function with the web application.
• D. Create an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content.
Create a Lambda function URL that the web application invokes when new photos are uploaded.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 9:11 a.m.

Question #: : 797

A company uses AWS to run its ecommerce platform. The platform is critical to the company's operations and has
a high volume of traffic and transactions. The company configures a multi-factor authentication (MFA) device to
secure its AWS account root user credentials. The company wants to ensure that it will not lose access to the root
user account if the MFA device is lost.

Which solution will meet these requirements?


• A. Set up a backup administrator account that the company can use to log in if the company loses the
MFA device.
• B. Add multiple MFA devices for the root user account to handle the disaster scenario.
• C. Create a new administrator account when the company cannot access the root account.
• D. Attach the administrator policy to another IAM user when the company cannot access the root
account.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 9:14 a.m.

Question #: : 798

A social media company is creating a rewards program website for its users. The company gives users points when
users create and upload videos to the website. Users redeem their points for gifts or discounts from the company's
affiliated partners. A unique ID identifies users. The partners refer to this ID to verify user eligibility for rewards.

The partners want to receive notification of user IDs through an HTTP endpoint when the company gives users
points. Hundreds of vendors are interested in becoming affiliated partners every day. The company wants to
design an architecture that gives the website the ability to add partners rapidly in a scalable way.

Which solution will meet these requirements with the LEAST implementation effort?
• A. Create an Amazon Timestream database to keep a list of affiliated partners. Implement an AWS
Lambda function to read the list. Configure the Lambda function to send user IDs to each partner when the
company gives users points.
• B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Choose an endpoint protocol.
Subscribe the partners to the topic. Publish user IDs to the topic when the company gives users points.
• C. Create an AWS Step Functions state machine. Create a task for every affiliated partner. Invoke the
state machine with user IDs as input when the company gives users points.
• D. Create a data stream in Amazon Kinesis Data Streams. Implement producer and consumer
applications. Store a list of affiliated partners in the data stream. Send user IDs when the company gives users
points.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Andy_09 at Feb. 6, 2024, 9:16 a.m.

Question #: : 799

A company needs to extract the names of ingredients from recipe records that are stored as text files in an Amazon
S3 bucket. A web application will use the ingredient names to query an Amazon DynamoDB table and determine
a nutrition score.

The application can handle non-food records and errors. The company does not have any employees who have
machine learning knowledge to develop this solution.

Which solution will meet these requirements MOST cost-effectively?


• A. Use S3 Event Notifications to invoke an AWS Lambda function when PutObject requests occur.
Program the Lambda function to analyze the object and extract the ingredient names by using Amazon
Comprehend. Store the Amazon Comprehend output in the DynamoDB table.
• B. Use an Amazon EventBridge rule to invoke an AWS Lambda function when PutObject requests occur.
Program the Lambda function to analyze the object by using Amazon Forecast to extract the ingredient names.
Store the Forecast output in the DynamoDB table.
• C. Use S3 Event Notifications to invoke an AWS Lambda function when PutObject requests occur. Use
Amazon Polly to create audio recordings of the recipe records. Save the audio files in the S3 bucket. Use Amazon
Simple Notification Service (Amazon SNS) to send a URL as a message to employees. Instruct the employees to
listen to the audio files and calculate the nutrition score. Store the ingredient names in the DynamoDB table.
• D. Use an Amazon EventBridge rule to invoke an AWS Lambda function when a PutObject request
occurs. Program the Lambda function to analyze the object and extract the ingredient names by using Amazon
SageMaker. Store the inference output from the SageMaker endpoint in the DynamoDB table.

Hide Answer
Suggested Answer: D
by asdfcdsxdfc at March 5, 2024, 10:24 p.m.
Community suggestion: A

Question #: : 804

A company has an Amazon S3 data lake. The company needs a solution that transforms the data from the data
lake and loads the data into a data warehouse every day. The data warehouse must have massively parallel
processing (MPP) capabilities.

Data analysts then need to create and train machine learning (ML) models by using SQL commands on the data.
The solution must use serverless AWS services wherever possible.

Which solution will meet these requirements?


• A. Run a daily Amazon EMR job to transform the data and load the data into Amazon Redshift. Use
Amazon Redshift ML to create and train the ML models.
• B. Run a daily Amazon EMR job to transform the data and load the data into Amazon Aurora Serverless.
Use Amazon Aurora ML to create and train the ML models.
• C. Run a daily AWS Glue job to transform the data and load the data into Amazon Redshift Serverless.
Use Amazon Redshift ML to create and train the ML models.
• D. Run a daily AWS Glue job to transform the data and load the data into Amazon Athena tables. Use
Amazon Athena ML to create and train the ML models.

Hide Answer
Suggested Answer: B

by asdfcdsxdfc at March 5, 2024, 10:40 p.m.


Community suggestion: C
Explain:
AWS Glue is a fully managed ETL service that makes it easy to move data between data stores. It can read data
from Amazon S3, Amazon Redshift, Amazon RDS, and other data stores, and transform and load it into a data
warehouse like Amazon Redshift. AWS Glue supports serverless execution, which means that you only pay for the
compute time you use.
By using AWS Glue and Amazon Redshift, the company can easily transform and load data from Amazon S3, and
perform SQL operations on the data warehouse, without having to manage infrastructure or worry about scaling.
This allows the company to focus on their data analysis and ML workloads, while leveraging the power and
scalability of AWS.
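As a rough illustration of this pattern only, the sketch below (Python with boto3) starts a nightly AWS Glue job and then issues a Redshift ML CREATE MODEL statement against Redshift Serverless through the Redshift Data API. The job name, workgroup, database, table, model definition, and IAM role ARN are placeholders, not values from the question.

```python
import boto3

glue = boto3.client("glue")
redshift_data = boto3.client("redshift-data")

# Kick off the daily transform-and-load job (the job itself is defined in Glue).
glue.start_job_run(JobName="daily-datalake-to-redshift")  # placeholder job name

# Train a model with Redshift ML using plain SQL once the data is loaded.
create_model_sql = """
CREATE MODEL sales_churn_model
FROM (SELECT * FROM analytics.daily_sales)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'redshift-ml-artifacts-example');
"""

redshift_data.execute_statement(
    WorkgroupName="analytics-serverless",  # Redshift Serverless workgroup (placeholder)
    Database="dev",
    Sql=create_model_sql,
)
```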

Question #: : 802

A company wants to run its payment application on AWS. The application receives payment notifications from
mobile devices. Payment notifications require a basic validation before they are sent for further processing.

The backend processing application is long running and requires compute and memory to be adjusted. The
company does not want to manage the infrastructure.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon
EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment
notifications and send the notifications to the backend application. Deploy the backend application on Amazon
Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
• B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine
to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications
and send the notifications to the backend application. Deploy the backend application on Amazon Elastic
Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
• C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon
EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment
notifications and send the notifications to the backend application. Deploy the backend application on Amazon
EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
• D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment
notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the
notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service
(Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.

Hide Answer
Suggested Answer: C

by asdfcdsxdfc at March 5, 2024, 10:32 p.m.


Community suggestion: D
Explain:
AWS Lambda is a serverless computing service that lets you run code without provisioning or managing any
servers. It automatically scales based on the demand, and you only pay for the compute time you use.
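A minimal sketch of the validation step, assuming the Lambda function sits behind API Gateway and hands validated notifications to the Fargate-based backend through an SQS queue. The queue URL and the required fields are assumptions for illustration, not part of the question.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/payment-notifications"  # placeholder

REQUIRED_FIELDS = {"payment_id", "amount", "currency"}  # assumed validation rules


def handler(event, context):
    body = json.loads(event.get("body") or "{}")

    # Basic validation only; the long-running processing happens in the ECS/Fargate backend.
    if not REQUIRED_FIELDS.issubset(body):
        return {"statusCode": 400, "body": json.dumps({"error": "missing fields"})}

    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}
```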

Question #: : 803

A solutions architect is designing a user authentication solution for a company. The solution must invoke two-
factor authentication for users that log in from inconsistent geographical locations, IP addresses, or devices. The
solution must also be able to scale up to accommodate millions of users.

Which solution will meet these requirements?


• A. Configure Amazon Cognito user pools for user authentication. Enable the risk-based adaptive
authentication feature with multifactor authentication (MFA).
• B. Configure Amazon Cognito identity pools for user authentication. Enable multi-factor authentication
(MFA).
• C. Configure AWS Identity and Access Management (IAM) users for user authentication. Attach an IAM
policy that allows the AllowManageOwnUserMFA action.
• D. Configure AWS IAM Identity Center (AWS Single Sign-On) authentication for user authentication.
Configure the permission sets to require multi-factor authentication (MFA).

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by xBUGx at March 8, 2024, 3:15 a.m.
Explain:
User pool use cases
Use a user pool in the following scenarios:
• Design sign-up and sign-in webpages for your app.
• Access and manage user data.
• Track your user device, location, and IP address, and adapt to sign-in requests of different risk levels.
• Use a custom authentication flow for your app.
Identity pool use cases
Use an identity pool in the following scenarios:
• Give your users access to AWS resources, such as an Amazon Simple Storage Service (Amazon S3) bucket
or an Amazon DynamoDB table.
• Generate temporary AWS credentials for unauthenticated users.
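The sketch below shows one way to turn on MFA and risk-based adaptive authentication for an existing user pool with boto3. The pool ID is a placeholder, advanced security is enabled through the user pool add-ons, and email/SMS notification settings are omitted, so all actions keep Notify set to False.

```python
import boto3

cognito = boto3.client("cognito-idp")
USER_POOL_ID = "us-east-1_EXAMPLE"  # placeholder

# Turn on advanced security so Cognito scores sign-in risk.
cognito.update_user_pool(
    UserPoolId=USER_POOL_ID,
    UserPoolAddOns={"AdvancedSecurityMode": "ENFORCED"},
)

# Allow TOTP MFA; adaptive authentication decides when to require it.
cognito.set_user_pool_mfa_config(
    UserPoolId=USER_POOL_ID,
    SoftwareTokenMfaConfiguration={"Enabled": True},
    MfaConfiguration="OPTIONAL",
)

# Require MFA when a sign-in looks risky (new device, location, or IP address).
cognito.set_risk_configuration(
    UserPoolId=USER_POOL_ID,
    AccountTakeoverRiskConfiguration={
        "Actions": {
            "LowAction": {"Notify": False, "EventAction": "NO_ACTION"},
            "MediumAction": {"Notify": False, "EventAction": "MFA_IF_CONFIGURED"},
            "HighAction": {"Notify": False, "EventAction": "MFA_REQUIRED"},
        }
    },
)
```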

Question #: : 801
A financial company needs to handle highly sensitive data. The company will store the data in an Amazon S3
bucket. The company needs to ensure that the data is encrypted in transit and at rest. The company must manage
the encryption keys outside the AWS Cloud.

Which solution will meet these requirements?


• A. Encrypt the data in the S3 bucket with server-side encryption (SSE) that uses an AWS Key
Management Service (AWS KMS) customer managed key.
• B. Encrypt the data in the S3 bucket with server-side encryption (SSE) that uses an AWS Key
Management Service (AWS KMS) AWS managed key.
• C. Encrypt the data in the S3 bucket with the default server-side encryption (SSE).
• D. Encrypt the data at the company's data center before storing the data in the S3 bucket.

Hide Answer
Suggested Answer: A

by asdfcdsxdfc at March 5, 2024, 10:28 p.m.


By using SSE with an AWS KMS customer managed key, the financial company can ensure that the data is encrypted at rest in the S3 bucket, while TLS on the S3 endpoints covers encryption in transit.
AWS KMS lets customers create and manage their own encryption keys, providing greater control over key management than AWS managed keys.
A customer managed key can also use imported key material or an external key store, which lets the company keep the origin and lifecycle of the key material under its own control, addressing the requirement to manage the encryption keys outside the AWS Cloud.
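A small sketch of writing an object with SSE-KMS and a customer managed key; the bucket name, object key, and KMS key ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

with open("positions.parquet", "rb") as data:
    s3.put_object(
        Bucket="sensitive-financial-data-example",   # placeholder bucket
        Key="reports/2024-03/positions.parquet",
        Body=data,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    )
```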

Question #: : 800

A company needs to create an AWS Lambda function that will run in a VPC in the company's primary AWS
account. The Lambda function needs to access files that the company stores in an Amazon Elastic File System
(Amazon EFS) file system. The EFS file system is located in a secondary AWS account. As the company adds files
to the file system, the solution must scale to meet the demand.

Which solution will meet these requirements MOST cost-effectively?


• A. Create a new EFS file system in the primary account. Use AWS DataSync to copy the contents of the
original EFS file system to the new EFS file system.
• B. Create a VPC peering connection between the VPCs that are in the primary account and the secondary
account.
• C. Create a second Lambda function in the secondary account that has a mount that is configured for the
file system. Use the primary account's Lambda function to invoke the secondary account's Lambda function.
• D. Move the contents of the file system to a Lambda layer. Configure the Lambda layer's permissions to
allow the company's secondary account to use the Lambda layer.

Hide Answer
Suggested Answer: A

by asdfcdsxdfc at March 5, 2024, 10:27 p.m.


Community vote distribution
B (100%)

Explain:
VPC peering allows communication between VPCs in different AWS accounts as if they were in the same network.
- By creating a VPC peering connection between the VPC containing the Lambda function in the primary account
and the VPC containing the EFS file system in the secondary account, the Lambda function can access the files
stored in the EFS file system.
- This solution ensures secure communication between the resources in different accounts without incurring data
transfer costs, as data transfer over VPC peering connections within the same AWS Region is not charged for.
- Additionally, VPC peering provides low latency and high bandwidth connectivity, which is suitable for accessing
files stored in an EFS file system.
- This solution also ensures scalability as the demand for accessing files stored in the EFS file system increases.
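To make the moving parts concrete, here is a hedged sketch: the peering request from the primary account's VPC to the secondary account's VPC, followed by attaching the shared EFS access point to the (already VPC-attached) Lambda function. All IDs, ARNs, and the mount path are placeholders, and the peering acceptance, route table, and security group updates that are also required are omitted.

```python
import boto3

ec2 = boto3.client("ec2")
lam = boto3.client("lambda")

# Request peering from the primary account's VPC to the secondary account's VPC.
ec2.create_vpc_peering_connection(
    VpcId="vpc-0primary000000000",        # primary account VPC (placeholder)
    PeerVpcId="vpc-0secondary0000000",    # secondary account VPC (placeholder)
    PeerOwnerId="222233334444",           # secondary AWS account ID (placeholder)
)

# After the peer accepts and routes/security groups are updated, mount the
# cross-account EFS access point in the Lambda function.
lam.update_function_configuration(
    FunctionName="file-indexer",          # placeholder function name
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:us-east-1:222233334444:access-point/fsap-0123456789abcdef0",
            "LocalMountPath": "/mnt/shared-files",
        }
    ],
)
```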

Question #: : 805
A company runs containers in a Kubernetes environment in the company's local data center. The company wants
to use Amazon Elastic Kubernetes Service (Amazon EKS) and other AWS managed services. Data must remain
locally in the company's data center and cannot be stored in any remote site or cloud to maintain compliance.

Which solution will meet these requirements?


• A. Deploy AWS Local Zones in the company's data center.
• B. Use an AWS Snowmobile in the company's data center.
• C. Install an AWS Outposts rack in the company's data center.
• D. Install an AWS Snowball Edge Storage Optimized node in the data center.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
Explain:
If the company has a requirement to keep their data locally while utilizing Amazon EKS and other AWS managed
services, the best solution would be to leverage AWS Outposts.
AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to virtually
any data center, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS compute,
storage, database, and other services run locally on Outposts.

Question #: : 806

A social media company has workloads that collect and process data. The workloads store the data in on-premises
NFS storage. The data store cannot scale fast enough to meet the company’s expanding business needs. The
company wants to migrate the current data store to AWS.

Which solution will meet these requirements MOST cost-effectively?


• A. Set up an AWS Storage Gateway Volume Gateway. Use an Amazon S3 Lifecycle policy to transition
the data to the appropriate storage class.
• B. Set up an AWS Storage Gateway Amazon S3 File Gateway. Use an Amazon S3 Lifecycle policy to
transition the data to the appropriate storage class.
• C. Use the Amazon Elastic File System (Amazon EFS) Standard-Infrequent Access (Standard-IA)
storage class. Activate the infrequent access lifecycle policy.
• D. Use the Amazon Elastic File System (Amazon EFS) One Zone-Infrequent Access (One Zone-IA)
storage class. Activate the infrequent access lifecycle policy.

Hide Answer
Suggested Answer: D

Community vote distribution


B (100%)
by asdfcdsxdfc at March 5, 2024, 10:48 p.m.
Explain:
1. On-Premises NFS Storage: The Amazon S3 File Gateway seamlessly integrates with on-premises NFS
storage, allowing the company to extend their existing storage infrastructure to AWS S3 without the need for
significant modifications to their applications.

2. Scalability: With Amazon S3 serving as the backend storage for the File Gateway, the company gains access
to virtually limitless scalability. They can scale their storage capacity in AWS S3 to meet their expanding business
needs without facing constraints.

3. Migration: The File Gateway simplifies the migration process by providing a bridge between the on-
premises NFS storage and AWS S3. It allows for a gradual migration strategy, minimizing disruption to operations
and ensuring a smooth transition of data to the cloud.

4. Cost-Effectiveness: Utilizing the Amazon S3 File Gateway offers a highly cost-effective solution. The
company pays only for the storage capacity they use in AWS S3, eliminating the need for upfront hardware
investments and reducing ongoing maintenance costs associated with managing on-premises storage
infrastructure.
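A rough sketch of the two configuration calls involved: creating an NFS file share on an existing S3 File Gateway and attaching a lifecycle rule that transitions objects to a cheaper storage class. The gateway ARN, role ARN, bucket name, and transition rule are assumptions for illustration.

```python
import boto3

sgw = boto3.client("storagegateway")
s3 = boto3.client("s3")

# Expose an S3 bucket to the on-premises workloads as an NFS share.
sgw.create_nfs_file_share(
    ClientToken="nfs-share-request-001",  # idempotency token (placeholder)
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678",
    Role="arn:aws:iam::123456789012:role/FileGatewayBucketAccess",
    LocationARN="arn:aws:s3:::social-media-datastore-example",
)

# Tier older data down automatically once it lands in S3.
s3.put_bucket_lifecycle_configuration(
    Bucket="social-media-datastore-example",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)
```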

Question #: : 807
A company uses high concurrency AWS Lambda functions to process a constantly increasing number of messages
in a message queue during marketing events. The Lambda functions use CPU intensive code to process the
messages. The company wants to reduce the compute costs and to maintain service latency for its customers.

Which solution will meet these requirements?


• A. Configure reserved concurrency for the Lambda functions. Decrease the memory allocated to the
Lambda functions.
• B. Configure reserved concurrency for the Lambda functions. Increase the memory according to AWS
Compute Optimizer recommendations.
• C. Configure provisioned concurrency for the Lambda functions. Decrease the memory allocated to the
Lambda functions.
• D. Configure provisioned concurrency for the Lambda functions. Increase the memory according to
AWS Compute Optimizer recommendations.

Hide Answer
Suggested Answer: C
Community vote distribution
D (67%)
A (33%)
by 1dd at March 9, 2024, 4:14 a.m.
Explain:
Provisioned concurrency is a feature in AWS Lambda that keeps functions initialized and hyper-ready to respond
in double-digit milliseconds. This could be useful for their high concurrency use case. By configuring provisioned
concurrency, the company can ensure that there are always a set number of instances ready to respond to the
requests, reducing the cold start latency.

Increasing the memory allocation for the Lambda functions also increases the CPU power available to them, because Lambda allocates CPU in proportion to configured memory. This helps process the CPU-intensive messages more quickly. AWS Compute Optimizer can recommend the right amount of memory to allocate for optimum performance.

Options A and C suggest decreasing the memory allocation, which may reduce costs but could also lead to increased
latency, especially if the functions are CPU intensive. Decreasing memory would also decrease the available CPU
capacity, which could adversely impact the function's performance.

Option B suggests using reserved concurrency which reserves a specific number of instances for a function. While
this can prevent other functions from using all the available concurrency, it does not help in keeping the functions
warm like provisioned concurrency does, which might be beneficial in a high concurrency use case to maintain
low latency.
• Reserved concurrency – This represents the maximum number of concurrent instances allocated to
your function. When a function has reserved concurrency, no other function can use that concurrency.
Configuring reserved concurrency for a function incurs no additional charges.
• Provisioned concurrency – This is the number of pre-initialized execution environments allocated to
your function. These execution environments are ready to respond immediately to incoming function requests.
Configuring provisioned concurrency incurs additional charges to your AWS account.
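To make option D concrete, here is a minimal boto3 sketch: raise the memory (which also raises CPU) roughly in line with a Compute Optimizer recommendation, publish a version, and put provisioned concurrency on an alias. The function name, alias, memory size, and concurrency value are placeholders.

```python
import boto3

lam = boto3.client("lambda")
FUNCTION = "message-processor"  # placeholder function name

# More memory means proportionally more CPU for the CPU-bound message handling.
lam.update_function_configuration(FunctionName=FUNCTION, MemorySize=3008)

# Provisioned concurrency applies to a version or alias, not $LATEST.
version = lam.publish_version(FunctionName=FUNCTION)["Version"]
lam.update_alias(FunctionName=FUNCTION, Name="live", FunctionVersion=version)

lam.put_provisioned_concurrency_config(
    FunctionName=FUNCTION,
    Qualifier="live",
    ProvisionedConcurrentExecutions=100,  # sized for the marketing-event peak (assumed)
)
```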

Question #: : 808

A company runs its workloads on Amazon Elastic Container Service (Amazon ECS). The container images that
the ECS task definition uses need to be scanned for Common Vulnerabilities and Exposures (CVEs). New
container images that are created also need to be scanned.

Which solution will meet these requirements with the FEWEST changes to the workloads?
• A. Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the
container images. Specify scan on push filters for the ECR basic scan.
• B. Store the container images in an Amazon S3 bucket. Use Amazon Macie to scan the images. Use an
S3 Event Notification to initiate a Macie scan for every event with an s3:ObjectCreated:Put event type.
• C. Deploy the workloads to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic
Container Registry (Amazon ECR) as a private image repository. Specify scan on push filters for the ECR
enhanced scan.
• D. Store the container images in an Amazon S3 bucket that has versioning enabled. Configure an S3
Event Notification for s3:ObjectCreated:* events to invoke an AWS Lambda function. Configure the Lambda
function to initiate an Amazon Inspector scan.

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by xBUGx at March 8, 2024, 3:39 a.m.
Explain:
Using Amazon ECR as the private image repository to store container images aligns with the existing setup of
running workloads on Amazon ECS.
By specifying scan on push filters for the ECR basic scan, the scanning process becomes automatic upon pushing
new container images to the registry. This process requires minimal configuration changes to the existing
workflow.
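For reference, a minimal sketch of enabling basic scan on push: either per repository at creation time or through a registry-level scanning rule. The repository name and filter are placeholders.

```python
import boto3

ecr = boto3.client("ecr")

# New repository with basic scanning triggered on every push.
ecr.create_repository(
    repositoryName="ecs-task-images",  # placeholder
    imageScanningConfiguration={"scanOnPush": True},
)

# Or apply scan-on-push across existing repositories with a registry rule.
ecr.put_registry_scanning_configuration(
    scanType="BASIC",
    rules=[
        {
            "scanFrequency": "SCAN_ON_PUSH",
            "repositoryFilters": [{"filter": "*", "filterType": "WILDCARD"}],
        }
    ],
)
```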

Question #: : 809
A company uses an AWS Batch job to run its end-of-day sales process. The company needs a serverless solution
that will invoke a third-party reporting application when the AWS Batch job is successful. The reporting
application has an HTTP API interface that uses username and password authentication.

Which solution will meet these requirements?


• A. Configure an Amazon EventBridge rule to match incoming AWS Batch job SUCCEEDED events.
Configure the third-party API as an EventBridge API destination with a username and password. Set the API
destination as the EventBridge rule target.
• B. Configure Amazon EventBridge Scheduler to match incoming AWS Batch job SUCCEEDED events.
Configure an AWS Lambda function to invoke the third-party API by using a username and password. Set the
Lambda function as the EventBridge rule target.
• C. Configure an AWS Batch job to publish job SUCCEEDED events to an Amazon API Gateway REST
API. Configure an HTTP proxy integration on the API Gateway REST API to invoke the third-party API by using
a username and password.
• D. Configure an AWS Batch job to publish job SUCCEEDED events to an Amazon API Gateway REST
API. Configure a proxy integration on the API Gateway REST API to an AWS Lambda function. Configure the
Lambda function to invoke the third-party API by using a username and password.

Hide Answer
Suggested Answer: D

Community vote distribution


A (33%)
D (33%)
B (33%)
by osmk at March 10, 2024, 9:41 p.m.
Explain:
• A. Configure an Amazon EventBridge rule to match incoming AWS Batch job SUCCEEDED events.
Configure the third-party API as an EventBridge API destination with a username and password. Set the API
destination as the EventBridge rule target.
The reason is that an Amazon EventBridge rule can be set up to match AWS Batch job SUCCEEDED events and trigger a target when the specific event occurs. The target in this case is an API destination: an HTTP/HTTPS endpoint configured in EventBridge to which matched events are routed. This feature offers a secure and direct way to connect events to applications outside AWS without going through the public internet.
Here, you can store the username and password as Basic Auth Parameters in the API Destination itself. This
authenticates your request directly from EventBridge to the third-party system.
The other options are not preferred because they require additional resources, such as Lambda functions or API
Gateway, which add unnecessary complexity and potential latency without providing additional value.
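A compressed sketch of the pieces: a connection holding the username and password, an API destination pointing at the reporting endpoint, and a rule that matches Batch job SUCCEEDED events. The endpoint URL, credentials, and role ARN are illustrative placeholders.

```python
import json
import boto3

events = boto3.client("events")

conn = events.create_connection(
    Name="reporting-api-basic-auth",
    AuthorizationType="BASIC",
    AuthParameters={"BasicAuthParameters": {"Username": "report-user", "Password": "example-secret"}},
)

dest = events.create_api_destination(
    Name="reporting-api",
    ConnectionArn=conn["ConnectionArn"],
    InvocationEndpoint="https://reports.example.com/api/run",  # placeholder endpoint
    HttpMethod="POST",
)

events.put_rule(
    Name="batch-job-succeeded",
    EventPattern=json.dumps({
        "source": ["aws.batch"],
        "detail-type": ["Batch Job State Change"],
        "detail": {"status": ["SUCCEEDED"]},
    }),
)

events.put_targets(
    Rule="batch-job-succeeded",
    Targets=[{
        "Id": "reporting-api-destination",
        "Arn": dest["ApiDestinationArn"],
        "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeInvokeApiDestination",
    }],
)
```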

Question #: : 810

A company collects and processes data from a vendor. The vendor stores its data in an Amazon RDS for MySQL
database in the vendor's own AWS account. The company’s VPC does not have an internet gateway, an AWS
Direct Connect connection, or an AWS Site-to-Site VPN connection. The company needs to access the data that
is in the vendor database.

Which solution will meet this requirement?


• A. Instruct the vendor to sign up for the AWS Hosted Connection Direct Connect Program. Use VPC
peering to connect the company's VPC and the vendor's VPC.
• B. Configure a client VPN connection between the company's VPC and the vendor's VPC. Use VPC
peering to connect the company's VPC and the vendor's VPC.
• C. Instruct the vendor to create a Network Load Balancer (NLB). Place the NLB in front of the Amazon
RDS for MySQL database. Use AWS PrivateLink to integrate the company's VPC and the vendor's VPC.
• D. Use AWS Transit Gateway to integrate the company's VPC and the vendor's VPC. Use VPC peering
to connect the company’s VPC and the vendor's VPC.

Hide Answer
Suggested Answer: A

Community vote distribution


C (80%)
A (20%)
by asdfcdsxdfc at March 5, 2024, 10:55 p.m.

Question #: : 811

A company wants to set up Amazon Managed Grafana as its visualization tool. The company wants to visualize
data from its Amazon RDS database as one data source. The company needs a secure solution that will not expose
the data over the internet.

Which solution will meet these requirements?


• A. Create an Amazon Managed Grafana workspace without a VPC. Create a public endpoint for the RDS
database. Configure the public endpoint as a data source in Amazon Managed Grafana.
• B. Create an Amazon Managed Grafana workspace in a VPC. Create a private endpoint for the RDS
database. Configure the private endpoint as a data source in Amazon Managed Grafana.
• C. Create an Amazon Managed Grafana workspace without a VPC. Create an AWS PrivateLink endpoint
to establish a connection between Amazon Managed Grafana and Amazon RDS. Set up Amazon RDS as a data
source in Amazon Managed Grafana.
• D. Create an Amazon Managed Grafana workspace in a VPC. Create a public endpoint for the RDS
database. Configure the public endpoint as a data source in Amazon Managed Grafana.

Hide Answer
Suggested Answer: B

Community vote distribution


C (60%)
B (40%)
by osmk at March 10, 2024, 10:08 p.m.

Question #: : 812
A company hosts a data lake on Amazon S3. The data lake ingests data in Apache Parquet format from various
data sources. The company uses multiple transformation steps to prepare the ingested data. The steps include
filtering of anomalies, normalizing of data to standard date and time values, and generation of aggregates for
analyses.

The company must store the transformed data in S3 buckets that data analysts access. The company needs a
prebuilt solution for data transformation that does not require code. The solution must provide data lineage and
data profiling. The company needs to share the data transformation steps with employees throughout the company.

Which solution will meet these requirements?


• A. Configure an AWS Glue Studio visual canvas to transform the data. Share the transformation steps
with employees by using AWS Glue jobs.
• B. Configure Amazon EMR Serverless to transform the data. Share the transformation steps with
employees by using EMR Serverless jobs.
• C. Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees
by using DataBrew recipes.
• D. Create Amazon Athena tables for the data. Write Athena SQL queries to transform the data. Share
the Athena SQL queries with employees.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by asdfcdsxdfc at March 5, 2024, 10:58 p.m.

Question #: : 813

A solutions architect runs a web application on multiple Amazon EC2 instances that are in individual target groups
behind an Application Load Balancer (ALB). Users can reach the application through a public website.

The solutions architect wants to allow engineers to use a development version of the website to access one specific
development EC2 instance to test new features for the application. The solutions architect wants to use an Amazon
Route 53 hosted zone to give the engineers access to the development instance. The solution must automatically
route to the development instance even if the development instance is replaced.

Which solution will meet these requirements?


• A. Create an A Record for the development website that has the value set to the ALB. Create a listener
rule on the ALB that forwards requests for the development website to the target group that contains the
development instance.
• B. Recreate the development instance with a public IP address. Create an A Record for the development
website that has the value set to the public IP address of the development instance.
• C. Create an A Record for the development website that has the value set to the ALB. Create a listener
rule on the ALB to redirect requests for the development website to the public IP address of the development
instance.
• D. Place all the instances in the same target group. Create an A Record for the development website. Set
the value to the ALB. Create a listener rule on the ALB that forwards requests for the development website to the
target group.

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by asdfcdsxdfc at March 11, 2024, 10:32 a.m.
Explain:
Option A involves having an A Record on Route 53 point to the Application Load Balancer (ALB) which then
routes requests specifically for the development website to a Target Group that includes the development
instance(s). This solution is smart because if the development EC2 instance is replaced, the new development
instance just needs to be added to the same target group and traffic will automatically be routed to it. The routing
is performed dynamically via listener rules on the ALB, not based on static IP addresses which may change (as in
option B).
The other options C and D do not properly isolate the development instances from the production instances,
making them undesirable for a testing environment.
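A hedged sketch of option A: an alias A record for the development hostname pointing at the ALB, plus a host-header listener rule that forwards that hostname to the development target group. The hosted zone ID, ALB values, ARNs, and hostname are placeholders.

```python
import boto3

route53 = boto3.client("route53")
elbv2 = boto3.client("elbv2")

# dev.example.com -> the existing ALB (alias record, no static IP addresses involved).
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "dev.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # ALB's canonical hosted zone (placeholder)
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)

# Forward only the development hostname to the development target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[{"Field": "host-header", "Values": ["dev.example.com"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/dev-tg/0123456789abcdef",
    }],
)
```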

Question #: : 814

A company runs a container application on a Kubernetes cluster in the company's data center. The application
uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center
cannot scale fast enough to meet the company’s expanding business needs. The company wants to migrate the
workloads to AWS.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon
Simple Queue Service (Amazon SQS) to retrieve the messages.
• B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon
MQ to retrieve the messages.
• C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the
messages.
• D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon
SQS) to retrieve the messages.
Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by asdfcdsxdfc at March 5, 2024, 11 p.m.
Explain:
• Amazon EKS is the managed Kubernetes service provided by AWS, which easily integrates with AWS
services and has lower operational overhead than handling EC2 instances manually. And since the application is
already run on Kubernetes cluster in the company's data center, it would be a straightforward process to migrate
to EKS. Amazon MQ is a managed message queue service for Apache ActiveMQ and RabbitMQ, both of which
support AMQP protocol, which is being used by the application. Thus, it would provide a near-seamless transition
for the app's messaging requirements.
Other options such as ECS, EC2 or AWS Lambda would require significant changes to the application, and SQS
doesn't support AMQP protocol, so they wouldn't be as seamless or effective as migrating to EKS and using
Amazon MQ.
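Because Amazon MQ for RabbitMQ speaks native AMQP, the consumer code in the containers can stay essentially unchanged. The sketch below uses the third-party pika library against a broker endpoint; the broker URL, credentials, and queue name are placeholders.

```python
import pika  # third-party AMQP client, unchanged from the on-premises code

# amqps endpoint of the Amazon MQ for RabbitMQ broker (placeholder URL and credentials).
params = pika.URLParameters(
    "amqps://app_user:app_password@b-1234-abcd.mq.us-east-1.amazonaws.com:5671/%2F"
)

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # assumed queue name


def on_message(ch, method, properties, body):
    print("processing", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue="orders", on_message_callback=on_message)
channel.start_consuming()
```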

Question #: : 815

An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLBs)
across multiple AWS Regions. The NLBs can route requests to targets over the internet. The company wants to
improve the customer playing experience by reducing end-to-end load time for its global customer base.

Which solution will meet these requirements?


• A. Create Application Load Balancers (ALBs) in each Region to replace the existing NLBs. Register the
existing EC2 instances as targets for the ALBs in each Region.
• B. Configure Amazon Route 53 to route equally weighted traffic to the NLBs in each Region.
• C. Create additional NLBs and EC2 instances in other Regions where the company has large customer
bases.
• D. Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as target
endpoints.

Hide Answer
Suggested Answer: A

Community vote distribution


D (100%)
by asdfcdsxdfc at March 5, 2024, 11:01 p.m.
Explain:
AWS Global Accelerator is a networking service that improves the availability and performance for your
applications with local or global users. It provides static IP addresses that act as a fixed entry point to your
application endpoints in a single or multiple AWS Regions. By using the AWS Global Accelerator, you can route
the users to the nearest healthy endpoint which reduces the end-to-end load time.
Other solutions do not provide the global routing features required to optimize the end-to-end load time for a
global customer base.
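A condensed sketch of option D with boto3: one accelerator, a TCP listener, and an endpoint group per Region that points at the existing NLB. The ARNs, Regions, and ports are placeholders.

```python
import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-platform", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# Repeat per Region; traffic is routed to the closest healthy endpoint group.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game-nlb/0123456789abcdef",
        "Weight": 128,
    }],
)
```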

Question #: : 816
A company has an on-premises application that uses SFTP to collect financial data from multiple vendors. The
company is migrating to the AWS Cloud. The company has created an application that uses Amazon S3 APIs to
upload files from vendors.

Some vendors run their systems on legacy applications that do not support S3 APIs. The vendors want to continue
to use SFTP-based applications to upload data. The company wants to use managed services for the needs of the
vendors that use legacy applications.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Create an AWS Database Migration Service (AWS DMS) instance to replicate data from the storage
of the vendors that use legacy applications to Amazon S3. Provide the vendors with the credentials to access the
AWS DMS instance.
• B. Create an AWS Transfer Family endpoint for vendors that use legacy applications.
• C. Configure an Amazon EC2 instance to run an SFTP server. Instruct the vendors that use legacy
applications to use the SFTP server to upload data.
• D. Configure an Amazon S3 File Gateway for vendors that use legacy applications to upload files to an
SMB file share.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by asdfcdsxdfc at March 5, 2024, 11:03 p.m.
Explain:

AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 using
SFTP, FTPS, and FTP. This enables vendors to continue to use their SFTP-based legacy applications to upload
data without having to modify their applications or manage the underlying servers. This solution would meet the
company's requirements with the least operational overhead. Other solutions would require significantly more
configuration and maintenance, thereby increasing operational overhead.

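As an illustration only, the two calls below stand up a service-managed SFTP endpoint backed by S3 and add one vendor user; the role ARN, bucket path, user name, and SSH key are placeholders.

```python
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    Domain="S3",
)

transfer.create_user(
    ServerId=server["ServerId"],
    UserName="vendor-alpha",                                      # placeholder vendor login
    Role="arn:aws:iam::123456789012:role/TransferVendorUpload",   # grants access to the bucket
    HomeDirectory="/vendor-uploads-example/vendor-alpha",         # bucket/prefix the vendor lands in
    SshPublicKeyBody="ssh-rsa AAAA...example",                    # vendor-supplied public key
)
```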

Question #: : 817

A marketing team wants to build a campaign for an upcoming multi-sport event. The team has news reports from
the past five years in PDF format. The team needs a solution to extract insights about the content and the
sentiment of the news reports. The solution must use Amazon Textract to process the news reports.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Provide the extracted insights to Amazon Athena for analysis. Store the extracted insights and analysis
in an Amazon S3 bucket.
• B. Store the extracted insights in an Amazon DynamoDB table. Use Amazon SageMaker to build a
sentiment model.
• C. Provide the extracted insights to Amazon Comprehend for analysis. Save the analysis to an Amazon
S3 bucket.
• D. Store the extracted insights in an Amazon S3 bucket. Use Amazon QuickSight to visualize and analyze
the data.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by asdfcdsxdfc at March 5, 2024, 11:07 p.m.
Explain:
Amazon Textract extracts text and data from scanned documents. The extracted text and data can then be analyzed
by Amazon Comprehend, which uses machine learning to find insights and relationships in text including
sentiment analysis, which is one of the requirements. The results can be stored on S3 bucket which requires least
operational overhead rather than using Amazon Athena, Amazon SageMaker, Amazon DynamoDB or Amazon
QuickSight.

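The pipeline can be only a few calls, sketched below with boto3: Textract's asynchronous text-detection API pulls the text out of a PDF stored in S3, Comprehend scores sentiment and key phrases, and the result is written back to S3. Bucket names and keys are placeholders, and the simple polling loop and text truncation are shortcuts for illustration.

```python
import json
import time
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")
s3 = boto3.client("s3")

BUCKET = "news-reports-example"        # placeholder bucket holding the PDFs
KEY = "reports/2023/final-match.pdf"   # placeholder object key

# PDFs are processed with Textract's asynchronous text-detection API.
job = textract.start_document_text_detection(
    DocumentLocation={"S3Object": {"Bucket": BUCKET, "Name": KEY}}
)
result = textract.get_document_text_detection(JobId=job["JobId"])
while result["JobStatus"] == "IN_PROGRESS":
    time.sleep(5)
    result = textract.get_document_text_detection(JobId=job["JobId"])

text = " ".join(b["Text"] for b in result.get("Blocks", []) if b["BlockType"] == "LINE")

# Comprehend supplies the sentiment and key phrases for the campaign analysis.
sentiment = comprehend.detect_sentiment(Text=text[:4500], LanguageCode="en")
phrases = comprehend.detect_key_phrases(Text=text[:4500], LanguageCode="en")

s3.put_object(
    Bucket="campaign-insights-example",  # placeholder output bucket
    Key=KEY.replace(".pdf", ".json"),
    Body=json.dumps({"sentiment": sentiment["Sentiment"], "phrases": phrases["KeyPhrases"]}, default=str),
)
```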

Question #: : 818

A company's application runs on Amazon EC2 instances that are in multiple Availability Zones. The application
needs to ingest real-time data from third-party applications.

The company needs a data ingestion solution that places the ingested raw data in an Amazon S3 bucket.

Which solution will meet these requirements?


• A. Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery
streams to consume the Kinesis data streams. Specify the S3 bucket as the destination of the delivery streams.
• B. Create database migration tasks in AWS Database Migration Service (AWS DMS). Specify replication
instances of the EC2 instances as the source endpoints. Specify the S3 bucket as the target endpoint. Set the
migration type to migrate existing data and replicate ongoing changes.
• C. Create and configure AWS DataSync agents on the EC2 instances. Configure DataSync tasks to
transfer data from the EC2 instances to the S3 bucket.
• D. Create an AWS Direct Connect connection to the application for data ingestion. Create Amazon
Kinesis Data Firehose delivery streams to consume direct PUT operations from the application. Specify the S3
bucket as the destination of the delivery streams.

Hide Answer
Suggested Answer: A

Community vote distribution


A (83%)
C (17%)
by asdfcdsxdfc at March 5, 2024, 11:08 p.m.
Explain:
Amazon Kinesis Data Streams can continuously capture gigabytes of data per second from hundreds of thousands
of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs,
and location-tracking events. The data collected is available in milliseconds to enable real-time analytics. Then,
with Amazon Kinesis Data Firehose, you can prepare and load the streaming data to Amazon S3, which is a durable,
secure, and scalable data storage.

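A shortened sketch of option A: a data stream for the real-time producers and a Firehose delivery stream that reads from it and lands raw data in S3. The names, ARNs, and buffering values are placeholders, and the IAM role is assumed to allow Firehose to read the stream and write to the bucket.

```python
import boto3

kinesis = boto3.client("kinesis")
firehose = boto3.client("firehose")

kinesis.create_stream(StreamName="thirdparty-events", ShardCount=2)  # placeholder sizing

firehose.create_delivery_stream(
    DeliveryStreamName="thirdparty-events-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/thirdparty-events",
        "RoleARN": "arn:aws:iam::123456789012:role/FirehoseIngestRole",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/FirehoseIngestRole",
        "BucketARN": "arn:aws:s3:::raw-ingest-example",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 5},
    },
)
```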

Question #: : 819

A company’s application is receiving data from multiple data sources. The size of the data varies and is expected
to increase over time. The current maximum size is 700 KB. The data volume and data size continue to grow as
more data sources are added.
The company decides to use Amazon DynamoDB as the primary database for the application. A solutions architect
needs to identify a solution that handles the large data sizes.

Which solution will meet these requirements in the MOST operationally efficient way?
• A. Create an AWS Lambda function to filter the data that exceeds DynamoDB item size limits. Store the
larger data in an Amazon DocumentDB (with MongoDB compatibility) database.
• B. Store the large data as objects in an Amazon S3 bucket. In a DynamoDB table, create an item that has
an attribute that points to the S3 URL of the data.
• C. Split all incoming large data into a collection of items that have the same partition key. Write the data
to a DynamoDB table in a single operation by using the BatchWriteItem API operation.
• D. Create an AWS Lambda function that uses gzip compression to compress the large objects as they are
written to a DynamoDB table.

Hide Answer
Suggested Answer: D

Community vote distribution


B (100%)
by Neung983 at March 6, 2024, 2:28 p.m.
Explain:
This approach keeps your DynamoDB database lean, without the need to handle high amounts of data in each
item. Large files get stored directly in S3 which is designed for large data objects storage. You only need to save a
pointer (the URL) to the S3 object in your DynamoDB item. This is both operationally efficient and cost-effective.
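A tiny sketch of the pointer pattern described above: the payload goes to S3 and the DynamoDB item stores its location plus any attributes that still need to be queryable. The table name, bucket name, and attribute names are placeholders.

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("ingested-records")  # placeholder table


def store_record(record_id: str, payload: bytes, source: str) -> None:
    key = f"payloads/{source}/{record_id}.json"
    s3.put_object(Bucket="large-record-payloads-example", Key=key, Body=payload)

    # The item stays small; the large payload is referenced, not embedded.
    table.put_item(Item={
        "record_id": record_id,  # partition key (assumed schema)
        "source": source,
        "payload_s3_url": f"s3://large-record-payloads-example/{key}",
        "payload_size_bytes": len(payload),
    })
```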

Question #: : 820

A company is migrating a legacy application from an on-premises data center to AWS. The application relies on
hundreds of cron jobs that run between 1 and 20 minutes on different recurring schedules throughout the day.

The company wants a solution to schedule and run the cron jobs on AWS with minimal refactoring. The solution
must support running the cron jobs in response to an event in the future.

Which solution will meet these requirements?


• A. Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring
schedule. Run the cron job tasks as AWS Lambda functions.
• B. Create a container image for the cron jobs. Use AWS Batch on Amazon Elastic Container Service
(Amazon ECS) with a scheduling policy to run the cron jobs.
• C. Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring
schedule. Run the cron job tasks on AWS Fargate.
• D. Create a container image for the cron jobs. Create a workflow in AWS Step Functions that uses a Wait
state to run the cron jobs at a specified time. Use the RunTask action to run the cron job tasks on AWS Fargate.
Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by asdfcdsxdfc at March 5, 2024, 11:22 p.m.
Explain:
Amazon EventBridge Scheduler makes it easier to schedule regular cron jobs or recurring tasks, and using AWS
Fargate allows you to run containers without managing the underlying servers. Creating a container image for the
existing cron jobs would enable migration of the tasks to AWS with minimal refactoring, and EventBridge would
handle the scheduling.

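One hedged way to express a single cron job as an EventBridge Scheduler schedule that runs a Fargate task is sketched below; the cluster, task definition, network settings, role, and cron expression are placeholders, and each legacy cron entry would get its own schedule.

```python
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="nightly-report-cron",
    ScheduleExpression="cron(15 2 * * ? *)",  # 02:15 UTC daily (placeholder)
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/cron-jobs",
        "RoleArn": "arn:aws:iam::123456789012:role/SchedulerRunTaskRole",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/nightly-report:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    },
)
```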

Question #: : 821

A company uses Salesforce. The company needs to load existing data and ongoing data changes from Salesforce
to Amazon Redshift for analysis. The company does not want the data to travel over the public internet.

Which solution will meet these requirements with the LEAST development effort?
• A. Establish a VPN connection from the VPC to Salesforce. Use AWS Glue DataBrew to transfer data.
• B. Establish an AWS Direct Connect connection from the VPC to Salesforce. Use AWS Glue DataBrew
to transfer data.
• C. Create an AWS PrivateLink connection in the VPC to Salesforce. Use Amazon AppFlow to transfer
data.
• D. Create a VPC peering connection to Salesforce. Use Amazon AppFlow to transfer data.

Hide Answer
Suggested Answer: C

Explain:

Amazon AppFlow is a fully managed integration service that enables data transfer in a secure and scalable manner
among Software as a Service (SaaS), AWS services, and on-premises applications. AppFlow supports Salesforce
as a data source and can transfer data directly to an Amazon Redshift cluster, greatly reducing the development
effort required.

AWS PrivateLink provides a connection between your VPC and an external service (in this case Salesforce) that
is secured and does not route traffic over the public internet.

Question #: : 822

A company recently migrated its application to AWS. The application runs on Amazon EC2 Linux instances in an
Auto Scaling group across multiple Availability Zones. The application stores data in an Amazon Elastic File
System (Amazon EFS) file system that uses EFS Standard-Infrequent Access storage. The application indexes the
company's files. The index is stored in an Amazon RDS database.

The company needs to optimize storage costs with some application and services changes.

Which solution will meet these requirements MOST cost-effectively?


• A. Create an Amazon S3 bucket that uses an Intelligent-Tiering lifecycle policy. Copy all files to the S3
bucket. Update the application to use Amazon S3 API to store and retrieve files.
• B. Deploy Amazon FSx for Windows File Server file shares. Update the application to use CIFS protocol
to store and retrieve files.
• C. Deploy Amazon FSx for OpenZFS file system shares. Update the application to use the new mount
point to store and retrieve files.
• D. Create an Amazon S3 bucket that uses S3 Glacier Flexible Retrieval. Copy all files to the S3 bucket.
Update the application to use Amazon S3 API to store and retrieve files as standard retrievals.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Kenneth99 at March 24, 2024, 9:05 a.m.
Explain:
This solution is the most cost-effective. Amazon S3 Intelligent-Tiering is designed to optimize costs by
automatically moving data between two access tiers (frequent and infrequent access) based on changing access
patterns. Also, Amazon S3 is typically cheaper than Amazon EFS for storage, so migrating the data can save costs,
and using S3 APIs for data retrieval and storage is straightforward.

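A minimal sketch of the lifecycle rule in Python (boto3); the bucket name is a placeholder:

import boto3

s3 = boto3.client("s3")

# Transition every object in the bucket to S3 Intelligent-Tiering, which then
# shifts objects between access tiers automatically as access patterns change.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-index-files",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "all-objects-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)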

Question #: : 823

A robotics company is designing a solution for medical surgery. The robots will use advanced sensors, cameras,
and AI algorithms to perceive their environment and to complete surgeries.

The company needs a public load balancer in the AWS Cloud that will ensure seamless communication with
backend services. The load balancer must be capable of routing traffic based on the query strings to different
target groups. The traffic must also be encrypted.

Which solution will meet these requirements?


• A. Use a Network Load Balancer with a certificate attached from AWS Certificate Manager (ACM). Use
query parameter-based routing.
• B. Use a Gateway Load Balancer. Import a generated certificate in AWS Identity and Access
Management (IAM). Attach the certificate to the load balancer. Use HTTP path-based routing.
• C. Use an Application Load Balancer with a certificate attached from AWS Certificate Manager (ACM).
Use query parameter-based routing.
• D. Use a Network Load Balancer. Import a generated certificate in AWS Identity and Access
Management (IAM). Attach the certificate to the load balancer. Use query parameter-based routing.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by alawada at March 23, 2024, 2:21 a.m.
Explain:
Application Load Balancers operate at the request level (layer 7), supporting routing based on the content of the
request including query strings. Additionally, they provide native SSL termination, so you can offload the work of
encrypting and decrypting traffic from your services. You can attach SSL certificates provided by AWS Certificate
Manager (ACM) to your load balancer, satisfying the encryption requirement. Network Load Balancer, Gateway
Load Balancer are not capable of routing traffic based on query strings.

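A hedged sketch of option C in Python (boto3); the ARNs and the query-string key/value pair are placeholders, not values from the question:

import boto3

elbv2 = boto3.client("elbv2")

# HTTPS listener that terminates TLS with an ACM certificate.
listener = elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/surgery-alb/abc123",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/1234abcd"}],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/default/def456"}],
)

# Rule that forwards requests whose query string contains procedure=imaging
# to a dedicated target group.
elbv2.create_rule(
    ListenerArn=listener["Listeners"][0]["ListenerArn"],
    Priority=10,
    Conditions=[{"Field": "query-string",
                 "QueryStringConfig": {"Values": [{"Key": "procedure", "Value": "imaging"}]}}],
    Actions=[{"Type": "forward",
              "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/imaging/ghi789"}],
)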

Question #: : 824
A company has an application that runs on a single Amazon EC2 instance. The application uses a MySQL database
that runs on the same EC2 instance. The company needs a highly available and automatically scalable solution to
handle increased traffic.

Which solution will meet these requirements?


• A. Deploy the application to EC2 instances that run in an Auto Scaling group behind an Application
Load Balancer. Create an Amazon Redshift cluster that has multiple MySQL-compatible nodes.
• B. Deploy the application to EC2 instances that are configured as a target group behind an Application
Load Balancer. Create an Amazon RDS for MySQL cluster that has multiple instances.
• C. Deploy the application to EC2 instances that run in an Auto Scaling group behind an Application
Load Balancer. Create an Amazon Aurora Serverless MySQL cluster for the database layer.
• D. Deploy the application to EC2 instances that are configured as a target group behind an Application
Load Balancer. Create an Amazon ElastiCache for Redis cluster that uses the MySQL connector.

Hide Answer
Suggested Answer: B

Community vote distribution


C (100%)
by haci at March 21, 2024, 8:10 a.m.
Explain:
• This solution is the most suitable as it will provide the automatic scaling needed at both the application
layer and the database layer. Amazon EC2 Auto Scaling will handle the scaling of the EC2 instances. And Aurora
Serverless scales the MySQL-compatible database automatically.
The other options are not appropriate because:
o Amazon Redshift (Option A) is a data warehousing service and is not suitable for transactional workloads
typical in application backends.
o A typical Amazon RDS for MySQL cluster (Option B) won't automatically scale.
o Amazon ElastiCache for Redis (Option D) is an in-memory data store and is not a replacement for a
relational database such as MySQL.
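
A sketch of option C in Python (boto3); identifiers and capacity limits are illustrative only:

import boto3

rds = boto3.client("rds")

# Aurora MySQL cluster with Serverless v2 scaling.
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # RDS keeps the password in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# Instances in a Serverless v2 cluster use the special "db.serverless" class.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-instance-1",
    DBClusterIdentifier="app-aurora-cluster",
    DBInstanceClass="db.serverless",
    Engine="aurora-mysql",
)
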
Question #: : 825

A company is planning to migrate data to an Amazon S3 bucket. The data must be encrypted at rest within the S3
bucket. The encryption key must be rotated automatically every year.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Migrate the data to the S3 bucket. Use server-side encryption with Amazon S3 managed keys (SSE-
S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.
• B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key
rotation. Set the S3 bucket's default encryption behavior to use the customer managed KMS key. Migrate the data
to the S3 bucket.
• C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket's
default encryption behavior to use the customer managed KMS key. Migrate the data to the S3 bucket. Manually
rotate the KMS key every year.
• D. Use customer key material to encrypt the data. Migrate the data to the S3 bucket. Create an AWS Key
Management Service (AWS KMS) key without key material. Import the customer key material into the KMS key.
Enable automatic key rotation.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by haci at March 21, 2024, 8:21 a.m.
Explain:

This solution meets all the requirements specified. AWS Key Management Service (KMS) allows you to create
and manage cryptographic keys and control their use across a wide range of AWS services and in your applications.
You can set up automatic key rotation for a customer managed key, which will automatically rotate the key every
year so you don't need to do this manually. Setting the S3 bucket's default encryption behavior to use this customer
managed KMS key will ensure data is automatically encrypted at rest with the key when it is loaded to the S3
bucket. This results in the least amount of operational overhead while meeting the key rotation and encryption
requirements.

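A minimal sketch of option B in Python (boto3); the bucket name is a placeholder:

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

key_id = kms.create_key(Description="Data-lake bucket key")["KeyMetadata"]["KeyId"]

# AWS KMS rotates the key material of this customer managed key once per year.
kms.enable_key_rotation(KeyId=key_id)

# Default bucket encryption with the customer managed key.
s3.put_bucket_encryption(
    Bucket="example-migration-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms",
                                                   "KMSMasterKeyID": key_id},
            "BucketKeyEnabled": True,
        }]
    },
)
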
Question #: : 826

A company is migrating applications from an on-premises Microsoft Active Directory that the company manages
to AWS. The company deploys the applications in multiple AWS accounts. The company uses AWS Organizations
to manage the accounts centrally.

The company's security team needs a single sign-on solution across all the company's AWS accounts. The
company must continue to manage users and groups that are in the on-premises Active Directory.

Which solution will meet these requirements?


• A. Create an Enterprise Edition Active Directory in AWS Directory Service for Microsoft Active
Directory. Configure the Active Directory to be the identity source for AWS IAM Identity Center.
• B. Enable AWS IAM Identity Center. Configure a two-way forest trust relationship to connect the
company's self-managed Active Directory with IAM Identity Center by using AWS Directory Service for Microsoft
Active Directory.
• C. Use AWS Directory Service and create a two-way trust relationship with the company's self-managed
Active Directory.
• D. Deploy an identity provider (IdP) on Amazon EC2. Link the IdP as an identity source within AWS
IAM Identity Center.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by haci at March 21, 2024, 8:26 a.m.
Explain:
The best solution to meet the company's requirements is:
B. Enable AWS IAM Identity Center (the successor to AWS Single Sign-On). Configure a two-way forest trust relationship to connect the company's self-managed Active Directory with IAM Identity Center by using AWS Directory Service for Microsoft Active Directory.
Here's why this option is the most suitable:
• Centralized Management: IAM Identity Center is the recommended service for managing user access
across multiple AWS accounts. It integrates well with AWS Organizations, allowing centralized control.
• On-Premises AD Integration: A two-way trust relationship lets users and groups from the company's on-
premises Active Directory seamlessly authenticate with IAM Identity Center. This eliminates the need to duplicate
user management in AWS.
• AWS Directory Service: AWS Directory Service for Microsoft Active Directory provides the managed directory in AWS that anchors the trust relationship with the on-premises forest, simplifying management compared to a fully self-managed solution.
Let's explore why the other options are not ideal:
• A. Enterprise Edition AD: This creates a separate Active Directory instance within AWS. While it can be
an identity source for IAM Identity Center, it doesn't directly integrate with the company's existing on-premises
AD, requiring duplicate user management.
• C. AWS Directory Service Trust: This option only establishes a trust with a managed Active Directory
within AWS Directory Service. It wouldn't connect to the existing on-premises AD, defeating the purpose of
central user management.
• D. IdP on EC2: Deploying and managing an IdP on EC2 adds complexity. IAM Identity Center offers a
built-in solution for integrating with various identity sources, including on-premises Active Directory through
trust relationships.
Therefore, option B provides a secure and efficient way to leverage the company's existing on-premises Active
Directory for user authentication across multiple AWS accounts while maintaining centralized user management.
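
A hedged sketch of the trust-creation step in Python (boto3), assuming an AWS Managed Microsoft AD directory already exists; the directory ID, domain name, DNS addresses, and password are placeholders, and the matching trust must also be configured on the on-premises side:

import boto3

ds = boto3.client("ds")

# Two-way forest trust from the AWS Managed Microsoft AD directory to the on-premises forest.
ds.create_trust(
    DirectoryId="d-1234567890",
    RemoteDomainName="corp.example.com",
    TrustPassword="REPLACE_WITH_TRUST_PASSWORD",
    TrustDirection="Two-Way",
    TrustType="Forest",
    ConditionalForwarderIpAddrs=["10.0.10.10", "10.0.10.11"],  # on-premises DNS servers
)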

Question #: : 827

A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2 cluster. The
application will receive large amounts of traffic. The company wants to optimize the storage performance of the
cluster as the load on the application increases.

Which solution will meet these requirements MOST cost-effectively?


• A. Configure the cluster to use the Aurora Standard storage configuration.
• B. Configure the cluster storage type as Provisioned IOPS.
• C. Configure the cluster storage type as General Purpose.
• D. Configure the cluster to use the Aurora I/O-Optimized storage configuration.

Hide Answer
Suggested Answer: C

Community vote distribution


D (67%)
C (33%)
by haci at March 21, 2024, 8:35 a.m.
Explain:
Amazon Aurora I/O-Optimized configuration is designed for high-performance, highly transactional (I/O-
intensive) database workloads. It automatically scales storage capacity without any impact on performance, which
is ideal for an application expecting large amounts of traffic. On the contrary, the Standard storage, Provisioned
IOPS, and General Purpose storage configurations may not be able to scale dynamically to meet the increased
load, and could be more expensive. Hence the most cost-effective and performance optimizing solution would be
option D.
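
A sketch of option D in Python (boto3); the identifiers, user name, and capacity values are illustrative only:

import boto3

rds = boto3.client("rds")

# New Aurora PostgreSQL Serverless v2 cluster created with the I/O-Optimized configuration.
rds.create_db_cluster(
    DBClusterIdentifier="orders-aurora-pg",
    Engine="aurora-postgresql",
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,
    StorageType="aurora-iopt1",  # Aurora I/O-Optimized; "aurora" is the Standard configuration
    ServerlessV2ScalingConfiguration={"MinCapacity": 1, "MaxCapacity": 32},
)

# An existing cluster can also be switched in place.
rds.modify_db_cluster(
    DBClusterIdentifier="orders-aurora-pg",
    StorageType="aurora-iopt1",
    ApplyImmediately=True,
)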

Question #: : 828

A financial services company that runs on AWS has designed its security controls to meet industry standards. The
industry standards include the National Institute of Standards and Technology (NIST) and the Payment Card
Industry Data Security Standard (PCI DSS).

The company's third-party auditors need proof that the designed controls have been implemented and are
functioning correctly. The company has hundreds of AWS accounts in a single organization in AWS Organizations.
The company needs to monitor the current state of the controls across accounts.

Which solution will meet these requirements?


• A. Designate one account as the Amazon Inspector delegated administrator account from the
Organizations management account. Integrate Inspector with Organizations to discover and scan resources across
all AWS accounts. Enable Inspector industry standards for NIST and PCI DSS.
• B. Designate one account as the Amazon GuardDuty delegated administrator account from the
Organizations management account. In the designated GuardDuty administrator account, enable GuardDuty to
protect all member accounts. Enable GuardDuty industry standards for NIST and PCI DSS.
• C. Configure an AWS CloudTrail organization trail in the Organizations management account.
Designate one account as the compliance account. Enable CloudTrail security standards for NIST and PCI DSS
in the compliance account.
• D. Designate one account as the AWS Security Hub delegated administrator account from the
Organizations management account. In the designated Security Hub administrator account, enable Security Hub
for all member accounts. Enable Security Hub standards for NIST and PCI DSS.

Hide Answer
Suggested Answer: D
Community vote distribution
D (100%)
by Kaula at March 23, 2024, 2:44 p.m.
Explain:
Security Hub is a central service that collects security findings from AWS Config and other AWS security services.
This aligns perfectly with the initial recommendation for the financial services company.

Each of the mentioned services has a different role within the AWS ecosystem:
1. Amazon Inspector is an automated security assessment service that helps improve the security and
compliance of applications deployed on AWS. It assesses applications for vulnerabilities or deviations from best
practices.
2. Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and
unauthorized behavior to protect AWS accounts and workloads.
3. AWS CloudTrail is a service that enables governance, compliance, operations auditing, and risk auditing
of your AWS account. It logs all activities that happen in your AWS environment.
4. AWS Security Hub gives you a comprehensive view of your high-priority security alerts and security
status across your AWS accounts. It aggregates, organizes, and prioritizes your security alerts, or findings, from
multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, and AWS CloudTrail.
AWS Security Hub is a central place where you can manage security and compliance across an AWS environment
so you can get a comprehensive understanding of your high-priority security alerts and compliance status across
AWS accounts.
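
A hedged sketch of option D in Python (boto3); the account ID is a placeholder and the standards ARNs are shown for us-east-1 as an illustration:

import boto3

# From the Organizations management account: delegate Security Hub administration.
boto3.client("securityhub").enable_organization_admin_account(AdminAccountId="111122223333")

# From the delegated administrator account: auto-enable member accounts and
# subscribe to the NIST SP 800-53 and PCI DSS standards.
securityhub = boto3.client("securityhub")
securityhub.update_organization_configuration(AutoEnable=True)
securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[
        {"StandardsArn": "arn:aws:securityhub:us-east-1::standards/nist-800-53/v/5.0.0"},
        {"StandardsArn": "arn:aws:securityhub:us-east-1::standards/pci-dss/v/3.2.1"},
    ]
)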

Question #: : 829
A company uses an Amazon S3 bucket as its data lake storage platform. The S3 bucket contains a massive amount
of data that is accessed randomly by multiple teams and hundreds of applications. The company wants to reduce
the S3 storage costs and provide immediate availability for frequently accessed objects.

What is the MOST operationally efficient solution that meets these requirements?
• A. Create an S3 Lifecycle rule to transition objects to the S3 Intelligent-Tiering storage class.
• B. Store objects in Amazon S3 Glacier. Use S3 Select to provide applications with access to the data.
• C. Use data from S3 storage class analysis to create S3 Lifecycle rules to automatically transition objects
to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class.
• D. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an
AWS Lambda function to transition objects to the S3 Standard storage class when they are accessed by an
application.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Kaula at March 23, 2024, 2:50 p.m.
Explain:
S3 Intelligent-Tiering automatically moves objects between two access tiers — frequent access and infrequent
access — when access patterns change, and is optimized for long-lived data with changing or unknown access
patterns. Its use will help reduce the S3 storage costs and provide immediate availability for frequently accessed
objects, as required.

Question #: : 830

A company has 5 TB of datasets. The datasets consist of 1 million user profiles and 10 million connections. The
user profiles have connections as many-to-many relationships. The company needs a performance efficient way
to find mutual connections up to five levels.

Which solution will meet these requirements?


• A. Use an Amazon S3 bucket to store the datasets. Use Amazon Athena to perform SQL JOIN queries
to find connections.
• B. Use Amazon Neptune to store the datasets with edges and vertices. Query the data to find connections.
• C. Use an Amazon S3 bucket to store the datasets. Use Amazon QuickSight to visualize connections.
• D. Use Amazon RDS to store the datasets with multiple tables. Perform SQL JOIN queries to find
connections.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by alawada at March 23, 2024, 2:39 a.m.
Explain:
The company should implement Amazon Neptune, which is a managed graph database service that's designed for
storing billions of relationships and querying the graph with milliseconds of latency. Neptune efficiently supports
highly connected datasets and graph processing paradigms which makes it ideal for the company needs of finding
mutual relationships up to five levels deep. This would provide a performance efficient way to meet the
requirements.
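
As an illustration only, a Gremlin traversal submitted from Python (gremlinpython) could walk the connection graph; the cluster endpoint, vertex IDs, and the connected_to edge label are assumptions about how the data might be modeled:

from gremlin_python.driver import client

gremlin = client.Client(
    "wss://my-neptune-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)

def profiles_within_five_hops(profile_id):
    # Walk the connection edges up to five levels deep and collect the visited profile IDs.
    query = (
        f"g.V('{profile_id}')"
        ".repeat(both('connected_to').simplePath()).emit().times(5)"
        ".dedup().id()"
    )
    return set(gremlin.submit(query).all().result())

# Mutual connections are the intersection of both users' five-level neighborhoods.
mutual = profiles_within_five_hops("profile-a") & profiles_within_five_hops("profile-b")
print(len(mutual))
gremlin.close()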

Question #: : 831
A company needs a secure connection between its on-premises environment and AWS. This connection does not
need high bandwidth and will handle a small amount of traffic. The connection should be set up quickly.

What is the MOST cost-effective method to establish this type of connection?


• A. Implement a client VPN.
• B. Implement AWS Direct Connect.
• C. Implement a bastion host on Amazon EC2.
• D. Implement an AWS Site-to-Site VPN connection.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Kaula at March 23, 2024, 3:04 p.m.
Explain:
AWS Site-to-Site VPN: suited to low-bandwidth requirements and small amounts of traffic, and it can be set up quickly over the existing internet connection.
AWS Direct Connect: used for high-volume, consistent network traffic, but it takes longer to provision and costs more, so it is not the most cost-effective option here.
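
A minimal sketch of option D in Python (boto3); the public IP, ASN, and VPC ID are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Customer gateway represents the on-premises VPN device.
cgw = ec2.create_customer_gateway(Type="ipsec.1", PublicIp="203.0.113.10", BgpAsn=65000)

# Virtual private gateway attached to the VPC that must be reachable.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"], VpcId="vpc-0abc1234")

# The Site-to-Site VPN connection itself; static routing keeps the setup simple.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
    Options={"StaticRoutesOnly": True},
)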

Question #: : 832

A company has an on-premises SFTP file transfer solution. The company is migrating to the AWS Cloud to scale
the file transfer solution and to optimize costs by using Amazon S3. The company's employees will use their
credentials for the on-premises Microsoft Active Directory (AD) to access the new solution. The company wants
to keep the current authentication and file access mechanisms.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Configure an S3 File Gateway. Create SMB file shares on the file gateway that use the existing Active
Directory to authenticate.
• B. Configure an Auto Scaling group with Amazon EC2 instances to run an SFTP solution. Configure the
group to scale up at 60% CPU utilization.
• C. Create an AWS Transfer Family server with SFTP endpoints. Choose the AWS Directory Service
option as the identity provider. Use AD Connector to connect the on-premises Active Directory.
• D. Create an AWS Transfer Family SFTP endpoint. Configure the endpoint to use the AWS Directory
Service option as the identity provider to connect to the existing Active Directory.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Kaula at March 23, 2024, 3:08 p.m.
Explain:
• AWS Transfer Family provides a fully managed SFTP endpoint that stores files in Amazon S3, so the company can scale its file transfers and optimize costs without running its own SFTP servers.
• Choosing the AWS Directory Service option as the identity provider and using AD Connector lets employees keep authenticating with their existing on-premises Active Directory credentials, eliminating the need for separate user management in AWS (see the sketch below). Option D omits AD Connector, so the Transfer Family endpoint has no supported path to the existing on-premises directory.
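
A hedged sketch of option C in Python (boto3); the directory ID, IAM role, bucket path, and AD group SID below are placeholders:

import boto3

transfer = boto3.client("transfer")

# SFTP endpoint that authenticates against a directory in AWS Directory Service
# (an AD Connector pointing at the on-premises Active Directory).
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    IdentityProviderType="AWS_DIRECTORY_SERVICE",
    IdentityProviderDetails={"DirectoryId": "d-1234567890"},
)

# Map an AD group to an S3 home directory so existing users keep their access model.
transfer.create_access(
    ServerId=server["ServerId"],
    ExternalId="S-1-5-21-1111111111-2222222222-3333333333-4444",  # AD group SID (placeholder)
    Role="arn:aws:iam::111122223333:role/transfer-s3-access",
    HomeDirectory="/example-sftp-bucket/shared",
)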

Question #: : 833
A company is designing an event-driven order processing system. Each order requires multiple validation steps
after the order is created. An idempotent AWS Lambda function performs each validation step. Each validation
step is independent from the other validation steps. Individual validation steps need only a subset of the order
event information.

The company wants to ensure that each validation step Lambda function has access to only the information from
the order event that the function requires. The components of the order processing system should be loosely
coupled to accommodate future business changes.

Which solution will meet these requirements?


• A. Create an Amazon Simple Queue Service (Amazon SQS) queue for each validation step. Create a new
Lambda function to transform the order data to the format that each validation step requires and to publish the
messages to the appropriate SQS queues. Subscribe each validation step Lambda function to its corresponding
SQS queue.
• B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the validation step
Lambda functions to the SNS topic. Use message body filtering to send only the required data to each subscribed
Lambda function.
• C. Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure
the input transformer to send only the required data to each target validation step Lambda function.
• D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a new Lambda function to
subscribe to the SQS queue and to transform the order data to the format that each validation step requires. Use
the new Lambda function to perform synchronous invocations of the validation step Lambda functions in parallel
on separate threads.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Kaula at March 23, 2024, 3:17 p.m.
Explain:
• Use EventBridge for event-driven architectures with complex routing rules and filtering needs; its rules and input transformers keep producers and consumers loosely coupled (a sketch of one such rule follows below).
• Use SQS for reliable, asynchronous message delivery between decoupled applications.
• Use SNS for broadcasting messages to a large number of diverse subscribers with filtering options.
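
A minimal sketch of option C in Python (boto3) for a single validation step; the event bus name, event pattern, Lambda ARN, and field names are assumptions about the order event shape:

import boto3

events = boto3.client("events")

# Rule on a custom event bus that matches order-created events.
events.put_rule(
    Name="order-address-validation",
    EventBusName="orders",
    EventPattern='{"source": ["orders.service"], "detail-type": ["OrderCreated"]}',
)

# The input transformer forwards only the fields this validation step needs:
# orderId is assumed to be a string and shippingAddress a JSON object.
events.put_targets(
    Rule="order-address-validation",
    EventBusName="orders",
    Targets=[{
        "Id": "address-validation-lambda",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:validate-address",
        "InputTransformer": {
            "InputPathsMap": {"orderId": "$.detail.orderId",
                              "address": "$.detail.shippingAddress"},
            "InputTemplate": '{"orderId": "<orderId>", "address": <address>}',
        },
    }],
)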

Question #: : 834

A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past,
the application users reported poor application performance when creating new entries. These performance issues
were caused by users generating different real-time reports from the application during working hours.
Which solution will improve the performance of the application when it is moved to AWS?
• A. Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application
to use DynamoDB for reports.
• B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed
the on-premises database.
• C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the
application to use the reader endpoint for reports.
• D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup
instance of the cluster as an endpoint for the reports.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by xBUGx at April 3, 2024, 10:18 p.m.
Explain:
Aurora is generally more cost-effective than running a MySQL database on an EC2 instance, especially as the
application scales.
Isolating read and write operations through read replicas improves overall application performance, especially for
report generation.
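
A minimal sketch of how the application might split traffic in Python (pymysql, or any MySQL-compatible driver); the endpoints, credentials, and query are placeholders:

import pymysql

WRITER_ENDPOINT = "app-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"     # placeholder
READER_ENDPOINT = "app-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"  # placeholder

def get_connection(for_reports: bool):
    # Report queries go to the reader endpoint so they never compete with writes.
    host = READER_ENDPOINT if for_reports else WRITER_ENDPOINT
    return pymysql.connect(host=host, user="app_user", password="REPLACE_ME", database="appdb")

# New entries use the writer; real-time reports fan out across the read replicas.
with get_connection(for_reports=True) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT status, COUNT(*) FROM orders GROUP BY status")
        print(cur.fetchall())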

Question #: : 835

A company is expanding a secure on-premises network to the AWS Cloud by using an AWS Direct Connect
connection. The on-premises network has no direct internet access. An application that runs on the on-premises
network needs to use an Amazon S3 bucket.

Which solution will meet these requirements MOST cost-effectively?


• A. Create a public virtual interface (VIF). Route the AWS traffic over the public VIF.
• B. Create a VPC and a NAT gateway. Route the AWS traffic from the on-premises network to the NAT
gateway.
• C. Create a VPC and an Amazon S3 interface endpoint. Route the AWS traffic from the on-premises
network to the S3 interface endpoint.
• D. Create a VPC peering connection between the on-premises network and Direct Connect. Route the
AWS traffic over the peering connection.

Hide Answer
Suggested Answer: C
Community vote distribution
C (100%)
by Kaula at March 23, 2024, 3:26 p.m.

Question #: : 836

A company serves its website by using an Auto Scaling group of Amazon EC2 instances in a single AWS Region.
The website does not require a database.

The company is expanding, and the company's engineering team deploys the website to a second Region. The
company wants to distribute traffic across both Regions to accommodate growth and for disaster recovery
purposes. The solution should not serve traffic from a Region in which the website is unhealthy.

Which policy or resource should the company use to meet these requirements?
• A. An Amazon Route 53 simple routing policy
• B. An Amazon Route 53 multivalue answer routing policy
• C. An Application Load Balancer in one Region with a target group that specifies the EC2 instance IDs
from both Regions
• D. An Application Load Balancer in one Region with a target group that specifies the IP addresses of the
EC2 instances from both Regions

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Kaula at March 23, 2024, 3:31 p.m.

Question #: : 837

A company runs its applications on Amazon EC2 instances that are backed by Amazon Elastic Block Store
(Amazon EBS). The EC2 instances run the most recent Amazon Linux release. The applications are experiencing
availability issues when the company's employees store and retrieve files that are 25 GB or larger. The company
needs a solution that does not require the company to transfer files between EC2 instances. The files must be
available across many EC2 instances and across multiple Availability Zones.

Which solution will meet these requirements?


• A. Migrate all the files to an Amazon S3 bucket. Instruct the employees to access the files from the S3
bucket.
• B. Take a snapshot of the existing EBS volume. Mount the snapshot as an EBS volume across the EC2
instances. Instruct the employees to access the files from the EC2 instances.
• C. Mount an Amazon Elastic File System (Amazon EFS) file system across all the EC2 instances. Instruct
the employees to access the files from the EC2 instances.
• D. Create an Amazon Machine Image (AMI) from the EC2 instances. Configure new EC2 instances from
the AMI that use an instance store volume. Instruct the employees to access the files from the EC2 instances.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by xBUGx at April 3, 2024, 10:22 p.m.

Question #: : 838

A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database.
Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest.

Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of
changes to the infrastructure?
• A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database
volume.
• B. Deploy AWS CloudHSM, generate encryption keys, and use the keys to encrypt database volumes.
• C. Configure SSL encryption using AWS Key Management Service (AWS KMS) keys to encrypt database
volumes.
• D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with
AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

Hide Answer
Suggested Answer: D

by zinabu at April 10, 2024, 7:47 a.m.


Explain:
This solution requires the least amount of changes to the infrastructure while ensuring that all personally
identifiable information (PII) is encrypted at rest in compliance with the regulations. Amazon RDS encryption
and Amazon EBS encryption both work seamlessly with AWS Key Management Service (KMS) to provide a high
level of data protection for sensitive information. All backups, snapshots, and replicas of the encrypted data are
also encrypted, adding an additional layer of protection.
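
A hedged sketch of option D in Python (boto3); identifiers are placeholders, and the snapshot copy/restore steps show one common way to move an existing unencrypted RDS instance onto encrypted storage:

import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")
rds = boto3.client("rds")

key_id = kms.create_key(Description="PII at-rest encryption key")["KeyMetadata"]["KeyId"]

# Every newly created EBS volume in this Region will be encrypted with the KMS key.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId=key_id)

# RDS encryption is set at creation time, so an existing unencrypted instance is
# migrated by restoring from an encrypted snapshot copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="app-db-snapshot",
    TargetDBSnapshotIdentifier="app-db-snapshot-encrypted",
    KmsKeyId=key_id,
)
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="app-db-encrypted",
    DBSnapshotIdentifier="app-db-snapshot-encrypted",
)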

Question #: : 839
A company runs an AWS Lambda function in private subnets in a VPC. The subnets have a default route to the
internet through an Amazon EC2 NAT instance. The Lambda function processes input data and saves its output
as an object to Amazon S3.

Intermittently, the Lambda function times out while trying to upload the object because of saturated traffic on the
NAT instance's network. The company wants to access Amazon S3 without traversing the internet.

Which solution will meet these requirements?


• A. Replace the EC2 NAT instance with an AWS managed NAT gateway.
• B. Increase the size of the EC2 NAT instance in the VPC to a network optimized instance type.
• C. Provision a gateway endpoint for Amazon S3 in the VPC. Update the route tables of the subnets accordingly.
• D. Provision a transit gateway. Place transit gateway attachments in the private subnets where the
Lambda function is running.

Hide Answer
Suggested Answer: C

Community vote distribution


C (67%)
A (33%)
by hpmargathia at April 1, 2024, 6:31 p.m.
Explain:
A gateway endpoint for S3 provides a route that directs traffic destined for S3 to the gateway endpoint, allowing
your Lambda function to connect to S3 directly, without needing to pass through a NAT instance or traverse the
public internet.
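
A minimal sketch of option C in Python (boto3); the VPC ID and route table ID are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: traffic from the Lambda function's private subnets to
# Amazon S3 then stays on the AWS network instead of going through the NAT instance.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)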

Question #: : 840
A news company that has reporters all over the world is hosting its broadcast system on AWS. The reporters send
live broadcasts to the broadcast system. The reporters use software on their phones to send live streams through
the Real Time Messaging Protocol (RTMP).

A solutions architect must design a solution that gives the reporters the ability to send the highest quality streams.
The solution must provide accelerated TCP connections back to the broadcast system.

What should the solutions architect use to meet these requirements?


• A. Amazon CloudFront
• B. AWS Global Accelerator
• C. AWS Client VPN
• D. Amazon EC2 instances and AWS Elastic IP addresses
Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by dds69 at March 21, 2024, 11:06 a.m.

Question #: : 841

A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) to run its self-managed
database. The company has 350 TB of data spread across all EBS volumes. The company takes daily EBS snapshots
and keeps the snapshots for 1 month. The daily change rate is 5% of the EBS volumes.

Because of new regulations, the company needs to keep the monthly snapshots for 7 years. The company needs
to change its backup strategy to comply with the new regulations and to ensure that data is available with minimal
administrative effort.

Which solution will meet these requirements MOST cost-effectively?


• A. Keep the daily snapshot in the EBS snapshot standard tier for 1 month. Copy the monthly snapshot
to Amazon S3 Glacier Deep Archive with a 7-year retention period.
• B. Continue with the current EBS snapshot policy. Add a new policy to move the monthly snapshot to
Amazon EBS Snapshots Archive with a 7-year retention period.
• C. Keep the daily snapshot in the EBS snapshot standard tier for 1 month. Keep the monthly snapshot
in the standard tier for 7 years. Use incremental snapshots.
• D. Keep the daily snapshot in the EBS snapshot standard tier. Use EBS direct APIs to take snapshots of
all the EBS volumes every month. Store the snapshots in an Amazon S3 bucket in the Infrequent Access tier for 7
years.

Hide Answer
Suggested Answer: A

Community vote distribution


A (38%)
B (38%)
D (25%)
by xBUGx at April 3, 2024, 10:32 p.m.


Question #: : 20
A company wants to improve its ability to clone large amounts of production data into a test environment in the
same AWS Region. The data is stored in Amazon EC2 instances on Amazon Elastic Block Store (Amazon EBS)
volumes. Modifications to the cloned data must not affect the production environment. The software that accesses
this data requires consistently high I/O performance.
A solutions architect needs to minimize the time that is required to clone the production data into the test
environment.
Which solution will meet these requirements?
• A. Take EBS snapshots of the production EBS volumes. Restore the snapshots onto EC2 instance store
volumes in the test environment.
• B. Configure the production EBS volumes to use the EBS Multi-Attach feature. Take EBS snapshots of
the production EBS volumes. Attach the production EBS volumes to the EC2 instances in the test environment.
• C. Take EBS snapshots of the production EBS volumes. Create and initialize new EBS volumes. Attach
the new EBS volumes to EC2 instances in the test environment before restoring the volumes from the production
EBS snapshots.
• D. Take EBS snapshots of the production EBS volumes. Turn on the EBS fast snapshot restore feature
on the EBS snapshots. Restore the snapshots into new EBS volumes. Attach the new EBS volumes to EC2
instances in the test environment.

Hide Answer
Suggested Answer: D

Community vote distribution


D (93%)
6%

Explain:
Amazon EBS fast snapshot restore (FSR) enables you to create a volume from a snapshot that is fully initialized
at creation. This eliminates the latency of I/O operations on a block when it is accessed for the first time. Volumes
that are created using fast snapshot restore instantly deliver all of their provisioned performance.
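
A minimal sketch of option D in Python (boto3); the snapshot ID and Availability Zone are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Enable fast snapshot restore, then create a fully initialized volume from the snapshot.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    SnapshotId="snap-0123456789abcdef0",
    VolumeType="gp3",
)
print(volume["VolumeId"])  # attach this volume to a test-environment EC2 instance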

Question #: : 842

A company runs an application on several Amazon EC2 instances that store persistent data on an Amazon Elastic
File System (Amazon EFS) file system. The company needs to replicate the data to another AWS Region by using
an AWS managed service solution.

Which solution will meet these requirements MOST cost-effectively?


• A. Use the EFS-to-EFS backup solution to replicate the data to an EFS file system in another Region.
• B. Run a nightly script to copy data from the EFS file system to an Amazon S3 bucket. Enable S3 Cross-
Region Replication on the S3 bucket.
• C. Create a VPC in another Region. Establish a cross-Region VPC peer. Run a nightly rsync to copy data
from the original Region to the new Region.
• D. Use AWS Backup to create a backup plan with a rule that takes a daily backup and replicates it to
another Region. Assign the EFS file system resource to the backup plan.

Hide Answer
Suggested Answer: D

Community vote distribution


A (100%)
by xBUGx at April 3, 2024, 10:38 p.m.

Question #: : 843

An ecommerce company is migrating its on-premises workload to the AWS Cloud. The workload currently consists
of a web application and a backend Microsoft SQL database for storage.

The company expects a high volume of customers during a promotional event. The new infrastructure in the AWS
Cloud must be highly available and scalable.

Which solution will meet these requirements with the LEAST administrative overhead?
• A. Migrate the web application to two Amazon EC2 instances across two Availability Zones behind an
Application Load Balancer. Migrate the database to Amazon RDS for Microsoft SQL Server with read replicas in
both Availability Zones.
• B. Migrate the web application to an Amazon EC2 instance that runs in an Auto Scaling group across
two Availability Zones behind an Application Load Balancer. Migrate the database to two EC2 instances across
separate AWS Regions with database replication.
• C. Migrate the web application to Amazon EC2 instances that run in an Auto Scaling group across two
Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS with Multi-AZ
deployment.
• D. Migrate the web application to three Amazon EC2 instances across three Availability Zones behind
an Application Load Balancer. Migrate the database to three EC2 instances across three Availability Zones.

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by Awsbeginner87 at April 8, 2024, 1:19 a.m.

Question #: : 844

A company has an on-premises business application that generates hundreds of files each day. These files are
stored on an SMB file share and require a low-latency connection to the application servers. A new company policy
states all application-generated files must be copied to AWS. There is already a VPN connection to AWS.

The application development team does not have time to make the necessary code modifications to move the
application to AWS.

Which service should a solutions architect recommend to allow the application to copy files to AWS?
• A. Amazon Elastic File System (Amazon EFS)
• B. Amazon FSx for Windows File Server
• C. AWS Snowball
• D. AWS Storage Gateway

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Kaula at April 7, 2024, 5:45 p.m.

Question #: : 845

A company has 15 employees. The company stores employee start dates in an Amazon DynamoDB table. The
company wants to send an email message to each employee on the day of the employee's work anniversary.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Create a script that scans the DynamoDB table and uses Amazon Simple Notification Service
(Amazon SNS) to send email messages to employees when necessary. Use a cron job to run this script every day
on an Amazon EC2 instance.
• B. Create a script that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon
SQS) to send email messages to employees when necessary. Use a cron job to run this script every day on an
Amazon EC2 instance.
• C. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple
Notification Service (Amazon SNS) to send email messages to employees when necessary. Schedule this Lambda
function to run every day.
• D. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Queue
Service (Amazon SQS) to send email messages to employees when necessary. Schedule this Lambda function to
run every day.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Mikado211 at April 19, 2024, 9:45 p.m.

Question #: : 846

A company’s application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic
Load Balancing (ELB) load balancer. Based on the application's history, the company anticipates a spike in traffic
during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group
proactively increases capacity to minimize any performance impact on application users.

Which solution will meet these requirements?


• A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds
90%.
• B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of
peak demand.
• C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the
peak demand period.
• D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when
there are autoscaling:EC2_INSTANCE_LAUNCH events.

Hide Answer
Suggested Answer: B

Community vote distribution


A (50%)
B (50%)
by Hkayne at April 18, 2024, 9:33 a.m.

Question #: : 847

A company uses Amazon RDS for PostgreSQL databases for its data tier. The company must implement password
rotation for the databases.

Which solution meets this requirement with the LEAST operational overhead?
• A. Store the password in AWS Secrets Manager. Enable automatic rotation on the secret.
• B. Store the password in AWS Systems Manager Parameter Store. Enable automatic rotation on the
parameter.
• C. Store the password in AWS Systems Manager Parameter Store. Write an AWS Lambda function that
rotates the password.
• D. Store the password in AWS Key Management Service (AWS KMS). Enable automatic rotation on the
AWS KMS key.

Hide Answer
Suggested Answer: A

Question #: : 848

A company runs its application on Oracle Database Enterprise Edition. The company needs to migrate the
application and the database to AWS. The company can use the Bring Your Own License (BYOL) model while
migrating to AWS. The application uses third-party database features that require privileged access.

A solutions architect must design a solution for the database migration.

Which solution will meet these requirements MOST cost-effectively?


• A. Migrate the database to Amazon RDS for Oracle by using native tools. Replace the third-party features
with AWS Lambda.
• B. Migrate the database to Amazon RDS Custom for Oracle by using native tools. Customize the new
database settings to support the third-party features.
• C. Migrate the database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).
Customize the new database settings to support the third-party features.
• D. Migrate the database to Amazon RDS for PostgreSQL by using AWS Database Migration Service
(AWS DMS). Rewrite the application code to remove the dependency on third-party features.

Hide Answer
Suggested Answer: B

by jcck202020 at April 13, 2024, 10:19 p.m.

Question #: : 849

A large international university has deployed all of its compute services in the AWS Cloud. These services include
Amazon EC2, Amazon RDS, and Amazon DynamoDB. The university currently relies on many custom scripts to
back up its infrastructure. However, the university wants to centralize management and automate data backups as
much as possible by using AWS native options.

Which solution will meet these requirements?


• A. Use third-party backup software with an AWS Storage Gateway tape gateway virtual tape library.
• B. Use AWS Backup to configure and monitor all backups for the services in use.
• C. Use AWS Config to set lifecycle management to take snapshots of all data sources on a schedule.
• D. Use AWS Systems Manager State Manager to manage the configuration and monitoring of backup
tasks.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Mikado211 at April 4, 2024, 11:27 p.m.

Question #: : 850

A company wants to build a map of its IT infrastructure to identify and enforce policies on resources that pose
security risks. The company's security team must be able to query data in the IT infrastructure map and quickly
identify security risks.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use Amazon RDS to store the data. Use SQL to query the data to identify security risks.
• B. Use Amazon Neptune to store the data. Use SPARQL to query the data to identify security risks.
• C. Use Amazon Redshift to store the data. Use SQL to query the data to identify security risks.
• D. Use Amazon DynamoDB to store the data. Use PartiQL to query the data to identify security risks.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by dds69 at April 4, 2024, 7:36 p.m.

Question #: : 851
A large company wants to provide its globally located developers separate, limited size, managed PostgreSQL
databases for development purposes. The databases will be low volume. The developers need the databases only
when they are actively working.

Which solution will meet these requirements MOST cost-effectively?


• A. Give the developers the ability to launch separate Amazon Aurora instances. Set up a process to shut
down Aurora instances at the end of the workday and to start Aurora instances at the beginning of the next workday.
• B. Develop an AWS Service Catalog product that enforces size restrictions for launching Amazon Aurora
instances. Give the developers access to launch the product when they need a development database.
• C. Create an Amazon Aurora Serverless cluster. Develop an AWS Service Catalog product to launch
databases in the cluster with the default capacity settings. Grant the developers access to the product.
• D. Monitor AWS Trusted Advisor checks for idle Amazon RDS databases. Create a process to terminate
identified idle RDS databases.

Hide Answer
Suggested Answer: C

Community vote distribution


B (100%)
by Hkayne at April 18, 2024, 10:06 a.m.

Question #: : 852

A company is building a web application that serves a content management system. The content management
system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an
Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and
other website assets in the content management system.

A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content
with the least possible lag time.

Which solution meets these requirements?


• A. Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from
the EC2 instance that was launched most recently. Configure the ALB to make changes to the website assets only
in the newest EC2 instance.
• B. Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each
EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the
website assets that are stored in the EFS file system.
• C. Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 instance downloads the
website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3
sync command once each hour to keep files up to date.
• D. Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the
EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting
application to reference the website assets that are stored in the secondary EBS volume.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Hkayne at April 18, 2024, 1:36 p.m.

Question #: : 853

A company's web application consists of multiple Amazon EC2 instances that run behind an Application Load
Balancer in a VPC. An Amazon RDS for MySQL DB instance contains the data. The company needs the ability
to automatically detect and respond to suspicious or unexpected behavior in its AWS environment. The company
already has added AWS WAF to its architecture.

What should a solutions architect do next to protect against threats?


• A. Use Amazon GuardDuty to perform threat detection. Configure Amazon EventBridge to filter for
GuardDuty findings and to invoke an AWS Lambda function to adjust the AWS WAF rules.
• B. Use AWS Firewall Manager to perform threat detection. Configure Amazon EventBridge to filter for
Firewall Manager findings and to invoke an AWS Lambda function to adjust the AWS WAF web ACL.
• C. Use Amazon Inspector to perform threat detection and to update the AWS WAF rules. Create a VPC
network ACL to limit access to the web application.
• D. Use Amazon Macie to perform threat detection and to update the AWS WAF rules. Create a VPC
network ACL to limit access to the web application.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Awsbeginner87 at April 4, 2024, 2:49 a.m.

Question #: : 854

A company is planning to run a group of Amazon EC2 instances that connect to an Amazon Aurora database. The
company has built an AWS CloudFormation template to deploy the EC2 instances and the Aurora DB cluster.
The company wants to allow the instances to authenticate to the database in a secure way. The company does not
want to maintain static database credentials.

Which solution meets these requirements with the LEAST operational effort?
• A. Create a database user with a user name and password. Add parameters for the database user name
and password to the CloudFormation template. Pass the parameters to the EC2 instances when the instances are
launched.
• B. Create a database user with a user name and password. Store the user name and password in AWS
Systems Manager Parameter Store. Configure the EC2 instances to retrieve the database credentials from
Parameter Store.
• C. Configure the DB cluster to use IAM database authentication. Create a database user to use with IAM
authentication. Associate a role with the EC2 instances to allow applications on the instances to access the
database.
• D. Configure the DB cluster to use IAM database authentication with an IAM user. Create a database
user that has a name that matches the IAM user. Associate the IAM user with the EC2 instances to allow
applications on the instances to access the database.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Hkayne at April 18, 2024, 1:43 p.m.
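
For reference, a minimal boto3 sketch of how an application on the EC2 instance could obtain a short-lived IAM database authentication token instead of a static password; the cluster endpoint, port, and database user name below are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Generate a temporary authentication token (valid for 15 minutes).
# The role attached to the EC2 instance must allow rds-db:connect for this database user.
token = rds.generate_db_auth_token(
    DBHostname="my-aurora-cluster.cluster-abc123xyz.us-east-1.rds.amazonaws.com",
    Port=3306,
    DBUsername="app_user",
)

# The token is then passed as the password when opening the MySQL connection;
# the connection must use SSL/TLS.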

Question #: : 855

A company wants to configure its Amazon CloudFront distribution to use SSL/TLS certificates. The company
does not want to use the default domain name for the distribution. Instead, the company wants to use a different
domain name for the distribution.

Which solution will deploy the certificate without incurring any additional costs?
• A. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-east-1
Region.
• B. Request an Amazon issued private certificate from AWS Certificate Manager (ACM) in the us-west-
1 Region.
• C. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-east-1
Region.
• D. Request an Amazon issued public certificate from AWS Certificate Manager (ACM) in the us-west-
1 Region.

Hide Answer
Suggested Answer: A

Community vote distribution


C (100%)
by cloudee at April 3, 2024, 2:59 p.m.

Question #: : 856

A company creates operations data and stores the data in an Amazon S3 bucket. For the company's annual audit,
an external consultant needs to access an annual report that is stored in the S3 bucket. The external consultant
needs to access the report for 7 days.

The company must implement a solution to allow the external consultant access to only the report.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Create a new S3 bucket that is configured to host a public static website. Migrate the operations data
to the new S3 bucket. Share the S3 website URL with the external consultant.
• B. Enable public access to the S3 bucket for 7 days. Remove access to the S3 bucket when the external
consultant completes the audit.
• C. Create a new IAM user that has access to the report in the S3 bucket. Provide the access keys to the
external consultant. Revoke the access keys after 7 days.
• D. Generate a presigned URL that has the required access to the location of the report on the S3 bucket.
Share the presigned URL with the external consultant.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Hkayne at April 18, 2024, 1:50 p.m.
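
As a rough illustration of option D, a presigned URL can be generated with boto3. The bucket and key names below are placeholders; 604,800 seconds (7 days) is the maximum expiration for SigV4 presigned URLs, which matches the consultant's access window.

import boto3

s3 = boto3.client("s3")

# Presigned URL that grants read access to a single object for 7 days.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "operations-data-bucket", "Key": "reports/annual-report.pdf"},
    ExpiresIn=604800,  # 7 days in seconds
)
print(url)  # share this URL with the external consultant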

Question #: : 857

A company plans to run a high performance computing (HPC) workload on Amazon EC2 Instances. The workload
requires low-latency network performance and high network throughput with tightly coupled node-to-node
communication.

Which solution will meet these requirements?


• A. Configure the EC2 instances to be part of a cluster placement group.
• B. Launch the EC2 instances with Dedicated Instance tenancy.
• C. Launch the EC2 instances as Spot Instances.
• D. Configure an On-Demand Capacity Reservation when the EC2 instances are launched.

Hide Answer
Suggested Answer: D

Community vote distribution


A (100%)
by AlvinC2024 at April 3, 2024, 5:55 p.m.

Question #: : 859

A company runs several Amazon RDS for Oracle On-Demand DB instances that have high utilization. The RDS
DB instances run in member accounts that are in an organization in AWS Organizations.

The company's finance team has access to the organization's management account and member accounts. The
finance team wants to find ways to optimize costs by using AWS Trusted Advisor.

Which combination of steps will meet these requirements? (Choose two.)


• A. Use the Trusted Advisor recommendations in the management account.
• B. Use the Trusted Advisor recommendations in the member accounts where the RDS DB instances are
running.
• C. Review the Trusted Advisor checks for Amazon RDS Reserved Instance Optimization.
• D. Review the Trusted Advisor checks for Amazon RDS Idle DB Instances.
• E. Review the Trusted Advisor checks for compute optimization. Crosscheck the results by using AWS
Compute Optimizer.

Hide Answer
Suggested Answer: BC

Community vote distribution


AE (33%)
AD (33%)
AC (33%)
by AlvinC2024 at April 3, 2024, 5:59 p.m.

Question #: : 860

A solutions architect is creating an application. The application will run on Amazon EC2 instances in private
subnets across multiple Availability Zones in a VPC. The EC2 instances will frequently access large files that
contain confidential information. These files are stored in Amazon S3 buckets for processing. The solutions
architect must optimize the network architecture to minimize data transfer costs.

What should the solutions architect do to meet these requirements?


• A. Create a gateway endpoint for Amazon S3 in the VPC. In the route tables for the private subnets, add
an entry for the gateway endpoint.
• B. Create a single NAT gateway in a public subnet. In the route tables for the private subnets, add a
default route that points to the NAT gateway.
• C. Create an AWS PrivateLink interface endpoint for Amazon S3 in the VPC. In the route tables for the
private subnets, add an entry for the interface endpoint.
• D. Create one NAT gateway for each Availability Zone in public subnets. In each of the route tables for
the private subnets, add a default route that points to the NAT gateway in the same Availability Zone.

Hide Answer
Suggested Answer: C

Community vote distribution


A (100%)
by AlvinC2024 at April 3, 2024, 6:07 p.m.

Question #: : 861

A company wants to relocate its on-premises MySQL database to AWS. The database accepts regular imports
from a client-facing application, which causes a high volume of write operations. The company is concerned that
the amount of traffic might be causing performance issues within the application.

How should a solutions architect design the architecture on AWS?


• A. Provision an Amazon RDS for MySQL DB instance with Provisioned IOPS SSD storage. Monitor
write operation metrics by using Amazon CloudWatch. Adjust the provisioned IOPS if necessary.
• B. Provision an Amazon RDS for MySQL DB instance with General Purpose SSD storage. Place an
Amazon ElastiCache cluster in front of the DB instance. Configure the application to query ElastiCache instead.
• C. Provision an Amazon DocumentDB (with MongoDB compatibility) instance with a memory
optimized instance type. Monitor Amazon CloudWatch for performance-related issues. Change the instance class
if necessary.
• D. Provision an Amazon Elastic File System (Amazon EFS) file system in General Purpose performance
mode. Monitor Amazon CloudWatch for IOPS bottlenecks. Change to Provisioned Throughput performance
mode if necessary.

Hide Answer
Suggested Answer: A
Community vote distribution
A (100%)
by Tanidanindo at April 9, 2024, 6:17 a.m.

Question #: : 862

A company runs an application in the AWS Cloud that generates sensitive archival data files. The company wants
to rearchitect the application's data storage. The company wants to encrypt the data files and to ensure that third
parties do not have access to the data before the data is encrypted and sent to AWS. The company has already
created an Amazon S3 bucket.

Which solution will meet these requirements?


• A. Configure the S3 bucket to use client-side encryption with an Amazon S3 managed encryption key.
Configure the application to use the S3 bucket to store the archival files.
• B. Configure the S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Configure
the application to use the S3 bucket to store the archival files.
• C. Configure the S3 bucket to use dual-layer server-side encryption with AWS KMS keys (SSE-KMS).
Configure the application to use the S3 bucket to store the archival files.
• D. Configure the application to use client-side encryption with a key stored in AWS Key Management
Service (AWS KMS). Configure the application to store the archival files in the S3 bucket.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by xBUGx at April 6, 2024, 4:11 a.m.

Question #: : 863

A company uses Amazon RDS with default backup settings for its database tier. The company needs to make a
daily backup of the database to meet regulatory requirements. The company must retain the backups for 30 days.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Write an AWS Lambda function to create an RDS snapshot every day.
• B. Modify the RDS database to have a retention period of 30 days for automated backups.
• C. Use AWS Systems Manager Maintenance Windows to modify the RDS backup retention period.
• D. Create a manual snapshot every day by using the AWS CLI. Modify the RDS backup retention period.
Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Hkayne at April 19, 2024, 1:38 p.m.
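
For illustration, raising the automated backup retention period to 30 days (option B) is a single API call; the DB instance identifier below is a placeholder.

import boto3

rds = boto3.client("rds")

# Increase the automated backup retention period to meet the 30-day requirement.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db-instance",
    BackupRetentionPeriod=30,
    ApplyImmediately=True,
)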

Question #: : 865

A company's near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the
data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of
incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.

Which combination of steps should the solutions architect take? (Choose two.)
• A. Use Amazon Kinesis Data Firehose to ingest the data.
• B. Use AWS Lambda with AWS Step Functions to process the data.
• C. Use AWS Database Migration Service (AWS DMS) to ingest the data.
• D. Use Amazon EC2 instances in an Auto Scaling group to process the data.
• E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.

Hide Answer
Suggested Answer: AB

Community vote distribution


AE (100%)
by AlvinC2024 at April 3, 2024, 6:19 p.m.

Question #: : 866

A company runs a web application on multiple Amazon EC2 instances in a VPC. The application needs to write
sensitive data to an Amazon S3 bucket. The data cannot be sent over the public internet.

Which solution will meet these requirements?


• A. Create a gateway VPC endpoint for Amazon S3. Create a route in the VPC route table to the endpoint.
• B. Create an internal Network Load Balancer that has the S3 bucket as the target.
• C. Deploy the S3 bucket inside the VPC. Create a route in the VPC route table to the bucket.
• D. Create an AWS Direct Connect connection between the VPC and an S3 regional endpoint.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Awsbeginner87 at April 4, 2024, 1:55 a.m.
Explain:
Gateway VPC endpoints allow instances in the VPC to use their private IP addresses to access Amazon S3 with
no exposure to the public Internet. They do not require an internet gateway, NAT device, VPN connection, or
AWS Direct Connect connection. Traffic between the VPC and the AWS service does not leave the Amazon
network.
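
A minimal sketch of creating the gateway endpoint and associating it with the private route tables; the Region, VPC ID, and route table ID are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3; associated route tables automatically receive
# a route that sends S3 traffic through the endpoint instead of the internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0aaa1111bbb22222c"],
)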

Question #: : 867

A company runs its production workload on Amazon EC2 instances with Amazon Elastic Block Store (Amazon
EBS) volumes. A solutions architect needs to analyze the current EBS volume cost and to recommend
optimizations. The recommendations need to include estimated monthly saving opportunities.

Which solution will meet these requirements?


• A. Use Amazon Inspector reporting to generate EBS volume recommendations for optimization.
• B. Use AWS Systems Manager reporting to determine EBS volume recommendations for optimization.
• C. Use Amazon CloudWatch metrics reporting to determine EBS volume recommendations for
optimization.
• D. Use AWS Compute Optimizer to generate EBS volume recommendations for optimization.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Awsbeginner87 at April 4, 2024, 1:55 a.m.
Explain:
AWS Compute Optimizer provides recommendations to optimize AWS resources to reduce costs and improve
performance by using machine learning. It analyses the configuration and resource utilization of workloads to
report where performance can be improved and provide an estimate of the cost that could be saved. For Amazon
EBS, it would check and recommend optimizations such as resizing or changing the volume type. It can also
recommend the optimal Amazon EC2 instance type for your workloads.
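
As a sketch, Compute Optimizer's EBS volume findings can also be pulled programmatically; the account must be opted in to Compute Optimizer, and the response fields referenced in the comments are simplified.

import boto3

co = boto3.client("compute-optimizer")

# Retrieve EBS volume recommendations for the account.
response = co.get_ebs_volume_recommendations()

for rec in response.get("volumeRecommendations", []):
    # Each recommendation includes the volume ARN, a finding such as
    # "Optimized" or "NotOptimized", and recommended configuration options.
    print(rec["volumeArn"], rec["finding"])
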
Question #: : 868

A global company runs its workloads on AWS. The company's application uses Amazon S3 buckets across AWS
Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets
daily. The company wants to identify all S3 buckets that are not versioning-enabled.

Which solution will meet these requirements?


• A. Set up an AWS CloudTrail event that has a rule to identify all S3 buckets that are not versioning-enabled across Regions.
• B. Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions.
• C. Enable IAM Access Analyzer for S3 to identify all S3 buckets that are not versioning-enabled across
Regions.
• D. Create an S3 Multi-Region Access Point to identify all S3 buckets that are not versioning-enabled
across Regions.

Hide Answer
Suggested Answer: B

by xBUGx at April 3, 2024, 11:50 p.m.


Explain:
Amazon S3 Storage Lens is a feature that provides organization-wide visibility into object storage usage and
activity trends across multiple AWS accounts and Regions. It offers actionable recommendations to optimize
storage usage, improve security posture, and reduce costs.
For the specific requirement of identifying S3 buckets that are not versioning-enabled across Regions, Storage
Lens can provide insights into the configuration of each S3 bucket, including whether versioning is enabled or not.
This allows the company to easily identify buckets lacking versioning across Regions.

Note: AWS Config also provides a managed rule that checks whether versioning is enabled for your S3 buckets and, optionally, whether MFA delete is enabled.
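
Separately from Storage Lens, a quick scripted spot check of versioning status is possible with the S3 API. This is only an illustration of an alternative check, not part of the suggested answer.

import boto3

s3 = boto3.client("s3")

# Print every bucket whose versioning is not enabled.
for bucket in s3.list_buckets()["Buckets"]:
    config = s3.get_bucket_versioning(Bucket=bucket["Name"])
    if config.get("Status") != "Enabled":
        print("Versioning not enabled:", bucket["Name"])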

Question #: : 869
A company wants to enhance its ecommerce order-processing application that is deployed on AWS. The
application must process each order exactly once without affecting the customer experience during unpredictable
traffic surges.
Which solution will meet these requirements?
• A. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Put all the orders in the SQS
queue. Configure an AWS Lambda function as the target to process the orders.
• B. Create an Amazon Simple Notification Service (Amazon SNS) standard topic. Publish all the orders
to the SNS standard topic. Configure the application as a notification target.
• C. Create a flow by using Amazon AppFlow. Send the orders to the flow. Configure an AWS Lambda
function as the target to process the orders.
• D. Configure AWS X-Ray in the application to track the order requests. Configure the application to
process the orders by pulling the orders from Amazon CloudWatch.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Mikado211 at April 7, 2024, 4:55 p.m.
Explain:
Amazon SQS FIFO (First-In-First-Out) queues are designed to ensure that the order of messages is preserved
and a message is delivered exactly once eliminating any duplicates. This approach can also handle traffic surges
because orders/messages will just be queued until they can be processed. AWS Lambda can process these
messages/orders easily and scales according to the incoming traffic. This solution will ensure orders are processed
exactly once and the application can smoothly handle traffic spikes without affecting customer experience.


Question #: : 870

A company has two AWS accounts: Production and Development. The company needs to push code changes in
the Development account to the Production account. In the alpha phase, only two senior developers on the
development team need access to the Production account. In the beta phase, more developers will need access to
perform testing.

Which solution will meet these requirements?


• A. Create two policy documents by using the AWS Management Console in each account. Assign the
policy to developers who need access.
• B. Create an IAM role in the Development account. Grant the IAM role access to the Production account.
Allow developers to assume the role.
• C. Create an IAM role in the Production account. Define a trust policy that specifies the Development
account. Allow developers to assume the role.
• D. Create an IAM group in the Production account. Add the group as a principal in a trust policy that
specifies the Production account. Add developers to the group.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
Explain:
1. IAM Role in Production Account: Creating an IAM role in the Production account allows for controlled
access to resources in that account. This role will define the permissions that developers can have when they
assume this role.
2. Trust Policy for Development Account: By defining a trust policy in the IAM role in the Production
account that specifies the Development account, access is restricted to only those authenticated in the
Development account. This ensures that only authorized users from the Development account can assume the
role in the Production account.
3. Gradual Access Expansion: Initially, only senior developers need access to the Production account. As
more developers are required during the beta phase, they can be added to the role's policy in the Development
account, allowing them to assume the role and gain access to the Production account. This allows for controlled
access expansion as needed.
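
For illustration, a sketch of the role and trust policy created in the Production account; the Development account ID and role name are placeholders, and in practice the trust policy would usually be narrowed to specific IAM principals rather than the account root.

import boto3
import json

iam = boto3.client("iam")

# Trust policy that lets principals from the Development account assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # Development account
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="DevToProdDeployRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
# Permissions policies attached to this role define what developers can do
# in the Production account once they assume it.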

Question #: : 871
A company wants to restrict access to the content of its web application. The company needs to protect the content
by using authorization techniques that are available on AWS. The company also wants to implement a serverless
architecture for authorization and authentication that has low login latency.

The solution must integrate with the web application and serve web content globally. The application currently
has a small user base, but the company expects the application's user base to increase.

Which solution will meet these requirements?


• A. Configure Amazon Cognito for authentication. Implement Lambda@Edge for authorization.
Configure Amazon CloudFront to serve the web application globally.
• B. Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement AWS
Lambda for authorization. Use an Application Load Balancer to serve the web application globally.
• C. Configure Amazon Cognito for authentication. Implement AWS Lambda for authorization. Use
Amazon S3 Transfer Acceleration to serve the web application globally.
• D. Configure AWS Directory Service for Microsoft Active Directory for authentication. Implement
Lambda@Edge for authorization. Use AWS Elastic Beanstalk to serve the web application globally.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by Danges at April 12, 2024, 5:39 p.m.
Explain:
Amazon Cognito offers robust user directory management and authentication features. Lambda@Edge is
integrated with CloudFront (AWS's content delivery network service), allowing you to execute the authorization
functions closer to the user and thus decreasing latency. Amazon CloudFront can serve the web application
globally and reliably, offering low latency and high transfer speeds.
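
A minimal sketch of a Lambda@Edge viewer-request handler that rejects requests without an Authorization header; a real implementation would validate the Cognito-issued JWT rather than only checking for its presence.

def handler(event, context):
    # CloudFront passes the incoming request in the Lambda@Edge event.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    if "authorization" not in headers:
        # No token: short-circuit at the edge with a 401 response.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "body": "Missing authorization token",
        }

    # Token present: forward the request to the origin.
    return request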


Question #: : 872

A development team uses multiple AWS accounts for its development, staging, and production environments.
Team members have been launching large Amazon EC2 instances that are underutilized. A solutions architect
must prevent large instances from being launched in all accounts.

How can the solutions architect meet this requirement with the LEAST operational overhead?
• A. Update the IAM policies to deny the launch of large EC2 instances. Apply the policies to all users.
• B. Define a resource in AWS Resource Access Manager that prevents the launch of large EC2 instances.
• C. Create an IAM role in each account that denies the launch of large EC2 instances. Grant the
developers IAM group access to the role.
• D. Create an organization in AWS Organizations in the management account with the default policy.
Create a service control policy (SCP) that denies the launch of large EC2 instances, and apply it to the AWS
accounts.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by Hkayne at April 19, 2024, 2:44 p.m.
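
A sketch of what the service control policy in option D could look like when created through the Organizations API; the instance-size pattern list is illustrative, not exhaustive.

import boto3
import json

org = boto3.client("organizations")

# Deny launching instances whose type matches the listed size patterns.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringLike": {
                "ec2:InstanceType": ["*.8xlarge", "*.12xlarge", "*.16xlarge", "*.24xlarge", "*.metal"]
            }
        },
    }],
}

org.create_policy(
    Name="DenyLargeEC2Instances",
    Description="Prevent launches of large EC2 instance types",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)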

Question #: : 873

A company has migrated a fleet of hundreds of on-premises virtual machines (VMs) to Amazon EC2 instances.
The instances run a diverse fleet of Windows Server versions along with several Linux distributions. The company
wants a solution that will automate inventory and updates of the operating systems. The company also needs a
summary of common vulnerabilities of each instance for regular monthly reviews.

What should a solutions architect recommend to meet these requirements?


• A. Set up AWS Systems Manager Patch Manager to manage all the EC2 instances. Configure AWS
Security Hub to produce monthly reports.
• B. Set up AWS Systems Manager Patch Manager to manage all the EC2 instances. Deploy Amazon
Inspector, and configure monthly reports.
• C. Set up AWS Shield Advanced, and configure monthly reports. Deploy AWS Config to automate patch
installations on the EC2 instances.
• D. Set up Amazon GuardDuty in the account to monitor all EC2 instances. Deploy AWS Config to
automate patch installations on the EC2 instances.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by Awsbeginner87 at April 4, 2024, 1:31 a.m.
Explain:
AWS Systems Manager Patch Manager will automate patching and updates of the operating systems while
Amazon Inspector will analyze EC2 instances for vulnerabilities and provide monthly reports. AWS Security Hub
(Option A) doesn’t provide operating system vulnerability reports, AWS Shield Advanced (Option C) is more
appropriate for DDoS protection, and Amazon GuardDuty (Option D) for threat detection, not OS patch
management or vulnerability management.
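
As a sketch of the Patch Manager side, a State Manager association can run the AWS-RunPatchBaseline document on a schedule against tagged instances; the tag value and schedule below are placeholders.

import boto3

ssm = boto3.client("ssm")

# Scan tagged instances daily for missing patches; change Operation to
# "Install" to apply patches during a maintenance window.
ssm.create_association(
    Name="AWS-RunPatchBaseline",
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],
    ScheduleExpression="rate(1 day)",
    Parameters={"Operation": ["Scan"]},
)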


Question #: : 874

A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances in an Auto
Scaling group behind an Elastic Load Balancing (ELB) load balancer. The application connects to an Amazon
DynamoDB table.

For disaster recovery (DR) purposes, the company wants to ensure that the application is available from another
AWS Region with minimal downtime.

Which solution will meet these requirements with the LEAST downtime?
• A. Create an Auto Scaling group and an ELB in the DR Region. Configure the DynamoDB table as a
global table. Configure DNS failover to point to the new DR Region's ELB.
• B. Create an AWS CloudFormation template to create EC2 instances, ELBs, and DynamoDB tables to
be launched when necessary. Configure DNS failover to point to the new DR Region's ELB.
• C. Create an AWS CloudFormation template to create EC2 instances and an ELB to be launched when
necessary. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new DR
Region's ELB.
• D. Create an Auto Scaling group and an ELB in the DR Region. Configure the DynamoDB table as a
global table. Create an Amazon CloudWatch alarm with an evaluation period of 10 minutes to invoke an AWS
Lambda function that updates Amazon Route 53 to point to the DR Region's ELB.

Hide Answer
Suggested Answer: A

Community vote distribution


C (100%)
by Awsbeginner87 at April 4, 2024, 1:28 a.m.
Explain:
A. Create an Auto Scaling group and an Elastic Load Balancing (ELB) load balancer in the disaster recovery (DR)
Region. Configure the DynamoDB table as a global table. Configure DNS failover to point to the new DR Region's
ELB.
This solution option will achieve the company's goal with the least possible downtime. In the DR region, it
duplicates the application setup and uses DynamoDB Global Tables for multi-region replication. In case of a
disaster, DNS failover will ensure that traffic is redirected to the DR region quickly.
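
For illustration, adding a replica Region to an existing table (global tables version 2019.11.21) is a single update call; the table name and Regions are placeholders.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Convert the table into a global table by adding a replica in the DR Region.
dynamodb.update_table(
    TableName="application-data",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)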

Question #: : 875

A company runs an application on Amazon EC2 instances in a private subnet. The application needs to store and
retrieve data in Amazon S3 buckets. According to regulatory requirements, the data must not travel across the
public internet.

What should a solutions architect do to meet these requirements MOST cost-effectively?


• A. Deploy a NAT gateway to access the S3 buckets.
• B. Deploy AWS Storage Gateway to access the S3 buckets.
• C. Deploy an S3 interface endpoint to access the S3 buckets.
• D. Deploy an S3 gateway endpoint to access the S3 buckets.

Hide Answer
Suggested Answer: D

Community vote distribution


D (50%)
C (50%)
by awsshare at April 8, 2024, 12:32 p.m.
Explain: D

https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html

Question #: : 876

A company hosts an application on Amazon EC2 instances that run in a single Availability Zone. The application
is accessible by using the transport layer of the Open Systems Interconnection (OSI) model. The company needs
the application architecture to have high availability.

Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
• A. Configure new EC2 instances in a different Availability Zone. Use Amazon Route 53 to route traffic
to all instances.
• B. Configure a Network Load Balancer in front of the EC2 instances.
• C. Configure a Network Load Balancer for TCP traffic to the instances. Configure an Application Load
Balancer for HTTP and HTTPS traffic to the instances.
• D. Create an Auto Scaling group for the EC2 instances. Configure the Auto Scaling group to use multiple
Availability Zones. Configure the Auto Scaling group to run application health checks on the instances.
• E. Create an Amazon CloudWatch alarm. Configure the alarm to restart EC2 instances that transition to
a stopped state.

Hide Answer
Suggested Answer: CD

Community vote distribution


BD (100%)
by xBUGx at April 4, 2024, 12:03 a.m.
Explain:
Option B involves setting up a Network Load Balancer (NLB). The Network Load Balancer operates at the
transport layer (Layer 4) of the OSI model, routing traffic to targets, such as Amazon EC2 instances, within
Amazon VPC based on IP protocol data.
Option D (using Auto Scaling across multiple Availability Zones), ensures the application has high availability
and fault tolerance. If an instance fails, AWS Auto Scaling automatically replaces the instance to maintain the
desired number of instances. Furthermore, by configuring the group across multiple AZs, if one availability zone
fails, instances in other AZs will continue handling the application's traffic.
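
A rough sketch of option D's Auto Scaling group spanning two Availability Zones and registered with the Network Load Balancer's target group; the launch template ID, subnet IDs, and target group ARN are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0", "Version": "$Latest"},
    MinSize=2,
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # subnets in two different AZs
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app-nlb-tg/abc123"
    ],
    HealthCheckType="ELB",  # replace instances that fail NLB health checks
    HealthCheckGracePeriod=120,
)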

Question #: : 877

A company uses Amazon S3 to host its static website. The company wants to add a contact form to the webpage.
The contact form will have dynamic server-side components for users to input their name, email address, phone
number, and user message.

The company expects fewer than 100 site visits each month. The contact form must notify the company by email
when a customer fills out the form.

Which solution will meet these requirements MOST cost-effectively?


• A. Host the dynamic contact form in Amazon Elastic Container Service (Amazon ECS). Set up Amazon
Simple Email Service (Amazon SES) to connect to a third-party email provider.
• B. Create an Amazon API Gateway endpoint that returns the contact form from an AWS Lambda
function. Configure another Lambda function on the API Gateway to publish a message to an Amazon Simple
Notification Service (Amazon SNS) topic.
• C. Host the website by using AWS Amplify Hosting for static content and dynamic content. Use server-
side scripting to build the contact form. Configure Amazon Simple Queue Service (Amazon SQS) to deliver the
message to the company.
• D. Migrate the website from Amazon S3 to Amazon EC2 instances that run Windows Server. Use
Internet Information Services (IIS) for Windows Server to host the webpage. Use client-side scripting to build the
contact form. Integrate the form with Amazon WorkMail.
Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Hkayne at April 20, 2024, 3:42 p.m.
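
A minimal sketch of the Lambda function behind the API Gateway endpoint in option B, publishing the submitted form to an SNS topic; the topic ARN and form field names are placeholders.

import boto3
import json

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:contact-form-notifications"

def handler(event, context):
    # API Gateway proxy integration delivers the form fields as a JSON body.
    form = json.loads(event["body"])
    message = (
        f"Name: {form.get('name')}\n"
        f"Email: {form.get('email')}\n"
        f"Phone: {form.get('phone')}\n"
        f"Message: {form.get('message')}"
    )
    sns.publish(TopicArn=TOPIC_ARN, Subject="New contact form submission", Message=message)
    return {"statusCode": 200, "body": json.dumps({"status": "sent"})}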

Question #: : 878

A company creates dedicated AWS accounts in AWS Organizations for its business units. Recently, an important
notification was sent to the root user email address of a business unit account instead of the assigned account
owner. The company wants to ensure that all future notifications can be sent to different employees based on the
notification categories of billing, operations, or security.

Which solution will meet these requirements MOST securely?


• A. Configure each AWS account to use a single email address that the company manages. Ensure that all
account owners can access the email account to receive notifications. Configure alternate contacts for each AWS
account with corresponding distribution lists for the billing team, the security team, and the operations team for
each business unit.
• B. Configure each AWS account to use a different email distribution list for each business unit that the
company manages. Configure each distribution list with administrator email addresses that can respond to alerts.
Configure alternate contacts for each AWS account with corresponding distribution lists for the billing team, the
security team, and the operations team for each business unit.
• C. Configure each AWS account root user email address to be the individual company managed email
address of one person from each business unit. Configure alternate contacts for each AWS account with
corresponding distribution lists for the billing team, the security team, and the operations team for each business
unit.
• D. Configure each AWS account root user to use email aliases that go to a centralized mailbox. Configure
alternate contacts for each account by using a single business managed email distribution list each for the billing
team, the security team, and the operations team.

Hide Answer
Suggested Answer: B

Community vote distribution


A (100%)
by d401c0d at April 29, 2024, 7:55 p.m.

Question #: : 879

A company runs an ecommerce application on AWS. Amazon EC2 instances process purchases and store the
purchase details in an Amazon Aurora PostgreSQL DB cluster.
Customers are experiencing application timeouts during times of peak usage. A solutions architect needs to
rearchitect the application so that the application can scale to meet peak usage demands.

Which combination of actions will meet these requirements MOST cost-effectively? (Choose two.)
• A. Configure an Auto Scaling group of new EC2 instances to retry the purchases until the processing is
complete. Update the applications to connect to the DB cluster by using Amazon RDS Proxy.
• B. Configure the application to use an Amazon ElastiCache cluster in front of the Aurora PostgreSQL
DB cluster.
• C. Update the application to send the purchase requests to an Amazon Simple Queue Service (Amazon
SQS) queue. Configure an Auto Scaling group of new EC2 instances that read from the SQS queue.
• D. Configure an AWS Lambda function to retry the ticket purchases until the processing is complete.
• E. Configure an Amazon API Gateway REST API with a usage plan.

Hide Answer
Suggested Answer: AC

Community vote distribution


AC (50%)
BC (50%)
by Abdullah_Cloud at April 26, 2024, 3:38 a.m.

Question #: : 880

A company that uses AWS Organizations runs 150 applications across 30 different AWS accounts. The company
used AWS Cost and Usage Report to create a new report in the management account. The report is delivered to
an Amazon S3 bucket that is replicated to a bucket in the data collection account.

The company’s senior leadership wants to view a custom dashboard that provides NAT gateway costs each day
starting at the beginning of the current month.

Which solution will meet these requirements?


• A. Share an Amazon QuickSight dashboard that includes the requested table visual. Configure
QuickSight to use AWS DataSync to query the new report.
• B. Share an Amazon QuickSight dashboard that includes the requested table visual. Configure
QuickSight to use Amazon Athena to query the new report.
• C. Share an Amazon CloudWatch dashboard that includes the requested table visual. Configure
CloudWatch to use AWS DataSync to query the new report.
• D. Share an Amazon CloudWatch dashboard that includes the requested table visual. Configure
CloudWatch to use Amazon Athena to query the new report.
Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by Mikado211 at April 4, 2024, 10:57 p.m.
Explanation:
Amazon QuickSight is a business intelligence tool from AWS that allows you to create and publish interactive
dashboards including table visuals. These dashboards can be accessed from any device, and can be embedded into
applications, portals, and websites.

AWS Athena allows you to query data that's stored in Amazon S3 using standard SQL syntax.

As the Cost and Usage Reports are stored in S3, Athena can be used as a direct querying layer for this reporting
data.

So, the data can be queried by Athena, and the results can be visualized by QuickSight and shared with the senior
leadership.
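
As a sketch of the Athena side, a query like the following could feed the QuickSight dataset. The database name, table name, and column names are assumptions based on a standard Cost and Usage Report schema, and the results bucket is a placeholder.

import boto3

athena = boto3.client("athena")

query = """
SELECT date_trunc('day', line_item_usage_start_date) AS usage_day,
       SUM(line_item_unblended_cost) AS nat_gateway_cost
FROM cur_database.cur_report
WHERE line_item_usage_type LIKE '%NatGateway%'
  AND line_item_usage_start_date >= date_trunc('month', current_date)
GROUP BY 1
ORDER BY 1
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "cur_database"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},
)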

Question #: : 881

A company is hosting a high-traffic static website on Amazon S3 with an Amazon CloudFront distribution that
has a default TTL of 0 seconds. The company wants to implement caching to improve performance for the website.
However, the company also wants to ensure that stale content is not served for more than a few minutes after a
deployment.

Which combination of caching methods should a solutions architect implement to meet these requirements?
(Choose two.)
• A. Set the CloudFront default TTL to 2 minutes.
• B. Set a default TTL of 2 minutes on the S3 bucket.
• C. Add a Cache-Control private directive to the objects in Amazon S3.
• D. Create an AWS Lambda@Edge function to add an Expires header to HTTP responses. Configure the
function to run on viewer response.
• E. Add a Cache-Control max-age directive of 24 hours to the objects in Amazon S3. On deployment,
create a CloudFront invalidation to clear any changed files from edge caches.

Hide Answer
Suggested Answer: AE

Community vote distribution


AC (100%)
by xBUGx at April 4, 2024, 12:13 a.m.
Explain:
AE
"Set the CloudFront default TTL to 2 minutes": This setting means that CloudFront will cache the
response from the origin (for example, an S3 bucket) for the designated amount of time, in this case 2 minutes.
During this time, all requests for the content will be served directly from CloudFront without checking the origin
for updates. When the TTL expires, CloudFront will request the content from the origin again, which helps ensure
the content's freshness.

"Set a default TTL of 2 minutes on the S3 bucket": Unfortunately, this statement is technically incorrect
because S3 does not utilize a TTL setting. Instead, S3 uses metadata (like Cache-Control) on the objects to
influence caching behavior. TTL is a concept associated with caching systems like CloudFront, not storage systems
like S3. Setting this metadata on S3 objects will not have S3 evict or delete objects after that time. It's actually a
hint for HTTP caches (like CloudFront) that tells them how long to retain a copy of the object before checking
for a new version.
The Cache-Control private directive and the Cache-Control max-age directive serve distinct purposes in
HTTP caching.
Cache-Control: private: The "private" directive in Cache-Control header signifies that the response
message is intended for a single user (usually a specific browser) and must not be stored by a shared cache (like a
CDN). This means that the response is unique to a specific user and should not be cached by intermediate shared
or public caches such as proxies or CDNs (like CloudFront). Consequently, the response is delivered directly from
the origin server (in this case, an Amazon S3 bucket) for every single request, and it could increase the load time
for the end user, especially if the user is geographically distant from the S3 bucket's region.
Cache-Control: max-age: The "max-age" directive tells all caches (private, public, or shared) how long
the response is considered fresh. For example, Cache-Control: max-age=86400 would mean that the file should
be considered 'fresh' for 24 hours. During this period, the file will be delivered from the cache (e.g., CloudFront)
without contacting the origin server (S3 in this case), reducing the load on the server and improving the
performance (especially for frequently accessed files). If the file changes at the origin server within this ‘freshness’
period, end users might still see the older version unless a cache invalidation is performed.
In the context of your question, using Cache-Control: max-age along with CloudFront invalidation on
deployment provides a better approach to balance between caching (for performance improvement) and content
freshness (avoid serving stale content).
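
For illustration, setting the max-age directive on an object and issuing a CloudFront invalidation after a deployment could look like the sketch below; the bucket name, object key, and distribution ID are placeholders.

import time
import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Rewrite the object metadata so caches may keep a copy for 24 hours.
s3.copy_object(
    Bucket="static-site-bucket",
    Key="index.html",
    CopySource={"Bucket": "static-site-bucket", "Key": "index.html"},
    CacheControl="max-age=86400",
    ContentType="text/html",
    MetadataDirective="REPLACE",
)

# After a deployment, invalidate changed paths so stale copies are evicted from edge caches.
cloudfront.create_invalidation(
    DistributionId="E1ABCDEFGHIJKL",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/index.html"]},
        "CallerReference": str(time.time()),
    },
)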

Question #: : 882

A company runs its application by using Amazon EC2 instances and AWS Lambda functions. The EC2 instances
run in private subnets of a VPC. The Lambda functions need direct network access to the EC2 instances for the
application to work.

The application will run for 1 year. The number of Lambda functions that the application uses will increase during
the 1-year period. The company must minimize costs on all application resources.
Which solution will meet these requirements?
• A. Purchase an EC2 Instance Savings Plan. Connect the Lambda functions to the private subnets that
contain the EC2 instances.
• B. Purchase an EC2 Instance Savings Plan. Connect the Lambda functions to new public subnets in the
same VPC where the EC2 instances run.
• C. Purchase a Compute Savings Plan. Connect the Lambda functions to the private subnets that contain
the EC2 instances.
• D. Purchase a Compute Savings Plan. Keep the Lambda functions in the Lambda service VPC.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by Guru4Cloud at April 11, 2024, 2:55 p.m.

Question #: : 883
A company has deployed a multi-account strategy on AWS by using AWS Control Tower. The company has
provided individual AWS accounts to each of its developers. The company wants to implement controls to limit
AWS resource costs that the developers incur.

Which solution will meet these requirements with the LEAST operational overhead?
• A. Instruct each developer to tag all their resources with a tag that has a key of CostCenter and a value
of the developer's name. Use the required-tags AWS Config managed rule to check for the tag. Create an AWS
Lambda function to terminate resources that do not have the tag. Configure AWS Cost Explorer to send a daily
report to each developer to monitor their spending.
• B. Use AWS Budgets to establish budgets for each developer account. Set up budget alerts for actual and
forecast values to notify developers when they exceed or expect to exceed their assigned budget. Use AWS Budgets
actions to apply a DenyAll policy to the developer's IAM role to prevent additional resources from being launched
when the assigned budget is reached.
• C. Use AWS Cost Explorer to monitor and report on costs for each developer account. Configure Cost
Explorer to send a daily report to each developer to monitor their spending. Use AWS Cost Anomaly Detection
to detect anomalous spending and provide alerts.
• D. Use AWS Service Catalog to allow developers to launch resources within a limited cost range. Create
AWS Lambda functions in each AWS account to stop running resources at the end of each work day. Configure
the Lambda functions to resume the resources at the start of each work day.

Hide Answer
Suggested Answer: B

Community vote distribution


C (67%)
B (33%)
by sandordini at April 30, 2024, 4:30 p.m.

Question #: : 884

A solutions architect is designing a three-tier web application. The architecture consists of an internet-facing
Application Load Balancer (ALB) and a web tier that is hosted on Amazon EC2 instances in private subnets. The
application tier with the business logic runs on EC2 instances in private subnets. The database tier consists of
Microsoft SQL Server that runs on EC2 instances in private subnets. Security is a high priority for the company.

Which combination of security group configurations should the solutions architect use? (Choose three.)
• A. Configure the security group for the web tier to allow inbound HTTPS traffic from the security group
for the ALB.
• B. Configure the security group for the web tier to allow outbound HTTPS traffic to 0.0.0.0/0.
• C. Configure the security group for the database tier to allow inbound Microsoft SQL Server traffic from
the security group for the application tier.
• D. Configure the security group for the database tier to allow outbound HTTPS traffic and Microsoft
SQL Server traffic to the security group for the web tier.
• E. Configure the security group for the application tier to allow inbound HTTPS traffic from the security
group for the web tier.
• F. Configure the security group for the application tier to allow outbound HTTPS traffic and Microsoft
SQL Server traffic to the security group for the web tier.

Hide Answer
Suggested Answer: ACE

Community vote distribution


ACE (100%)
by sandordini at April 30, 2024, 4:39 p.m.

Question #: : 885

A company has released a new version of its production application. The company's workload uses Amazon EC2,
AWS Lambda, AWS Fargate, and Amazon SageMaker.

The company wants to cost optimize the workload now that usage is at a steady state. The company wants to cover
the most services with the fewest savings plans.

Which combination of savings plans will meet these requirements? (Choose two.)
• A. Purchase an EC2 Instance Savings Plan for Amazon EC2 and SageMaker.
• B. Purchase a Compute Savings Plan for Amazon EC2, Lambda, and SageMaker.
• C. Purchase a SageMaker Savings Plan.
• D. Purchase a Compute Savings Plan for Lambda, Fargate, and Amazon EC2.
• E. Purchase an EC2 Instance Savings Plan for Amazon EC2 and Fargate.

Hide Answer
Suggested Answer: BD

Community vote distribution


CD (100%)
by sandordini at April 30, 2024, 4:43 p.m.

Question #: : 886

A company uses a Microsoft SQL Server database. The company's applications are connected to the database.
The company wants to migrate to an Amazon Aurora PostgreSQL database with minimal changes to the
application code.

Which combination of steps will meet these requirements? (Choose two.)


• A. Use the AWS Schema Conversion Tool (AWS SCT) to rewrite the SQL queries in the applications.
• B. Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the applications.
• C. Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and
AWS Database Migration Service (AWS DMS).
• D. Use Amazon RDS Proxy to connect the applications to Aurora PostgreSQL.
• E. Use AWS Database Migration Service (AWS DMS) to rewrite the SQL queries in the applications.

Hide Answer
Suggested Answer: CD

Community vote distribution


BC (100%)
by sandordini at April 30, 2024, 4:48 p.m.

Question #: : 889

A global company runs its workloads on AWS. The company's application uses Amazon S3 buckets across AWS
Regions for sensitive data storage and analysis. The company stores millions of objects in multiple S3 buckets
daily. The company wants to identify all S3 buckets that are not versioning-enabled.
Which solution will meet these requirements?
• A. Set up an AWS CloudTrail event that has a rule to identify all S3 buckets that are not versioning-
enabled across Regions.
• B. Use Amazon S3 Storage Lens to identify all S3 buckets that are not versioning-enabled across Regions.
• C. Enable IAM Access Analyzer for S3 to identify all S3 buckets that are not versioning-enabled across
Regions.
• D. Create an S3 Multi-Region Access Point to identify all S3 buckets that are not versioning-enabled
across Regions.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by sandordini at April 30, 2024, 5:31 p.m.

Question #: : 890

A company needs to optimize its Amazon S3 storage costs for an application that generates many files that cannot
be recreated. Each file is approximately 5 MB and is stored in Amazon S3 Standard storage.

The company must store the files for 4 years before the files can be deleted. The files must be immediately
accessible. The files are frequently accessed in the first 30 days of object creation, but they are rarely accessed
after the first 30 days.

Which solution will meet these requirements MOST cost-effectively?


• A. Create an S3 Lifecycle policy to move the files to S3 Glacier Instant Retrieval 30 days after object
creation. Delete the files 4 years after object creation.
• B. Create an S3 Lifecycle policy to move the files to S3 One Zone-Infrequent Access (S3 One Zone-IA)
30 days after object creation. Delete the files 4 years after object creation.
• C. Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30
days after object creation. Delete the files 4 years after object creation.
• D. Create an S3 Lifecycle policy to move the files to S3 Standard-Infrequent Access (S3 Standard-IA) 30
days after object creation. Move the files to S3 Glacier Flexible Retrieval 4 years after object creation.

Hide Answer
Suggested Answer: D

Community vote distribution


A (100%)
by sandordini at April 30, 2024, 5:48 p.m.
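
A sketch of the lifecycle rule from option A, transitioning objects to S3 Glacier Instant Retrieval after 30 days and expiring them after 4 years; the bucket name is a placeholder, and 1,460 days approximates 4 years.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="generated-files-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-delete",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER_IR"}],
            "Expiration": {"Days": 1460},
        }]
    },
)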

Question #: : 892

A company is migrating a data center from its on-premises location to AWS. The company has several legacy
applications that are hosted on individual virtual servers. Changes to the application designs cannot be made.

Each individual virtual server currently runs as its own EC2 instance. A solutions architect needs to ensure that
the applications are reliable and fault tolerant after migration to AWS. The applications will run on Amazon EC2
instances.

Which solution will meet these requirements?


• A. Create an Auto Scaling group that has a minimum of one and a maximum of one. Create an Amazon
Machine Image (AMI) of each application instance. Use the AMI to create EC2 instances in the Auto Scaling
group Configure an Application Load Balancer in front of the Auto Scaling group.
• B. Use AWS Backup to create an hourly backup of the EC2 instance that hosts each application. Store
the backup in Amazon S3 in a separate Availability Zone. Configure a disaster recovery process to restore the EC2
instance for each application from its most recent backup.
• C. Create an Amazon Machine Image (AMI) of each application instance. Launch two new EC2 instances
from the AMI. Place each EC2 instance in a separate Availability Zone. Configure a Network Load Balancer that
has the EC2 instances as targets.
• D. Use AWS Migration Hub Refactor Spaces to migrate each application off the EC2 instance. Break
down functionality from each application into individual components. Host each application on Amazon Elastic
Container Service (Amazon ECS) with an AWS Fargate launch type.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by sandordini at April 30, 2024, 6:06 p.m.

Question #: : 896

A company is designing its production application's disaster recovery (DR) strategy. The application is backed by
a MySQL database on an Amazon Aurora cluster in the us-east-1 Region. The company has chosen the us-west-
1 Region as its DR Region.

The company's target recovery point objective (RPO) is 5 minutes and the target recovery time objective (RTO)
is 20 minutes. The company wants to minimize configuration changes.

Which solution will meet these requirements with the MOST operational efficiency?
• A. Create an Aurora read replica in us-west-1 similar in size to the production application's Aurora
MySQL cluster writer instance.
• B. Convert the Aurora cluster to an Aurora global database. Configure managed failover.
• C. Create a new Aurora cluster in us-west-1 that has Cross-Region Replication.
• D. Create a new Aurora cluster in us-west-1. Use AWS Database Migration Service (AWS DMS) to sync
both clusters.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by sandordini at April 30, 2024, 6:31 p.m.
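
For reference, a sketch of converting the existing cluster into an Aurora global database; the identifiers and ARN are placeholders, and a secondary cluster still has to be created in us-west-1 and attached to the global cluster before managed failover can be used.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a global database using the existing production cluster as the primary.
rds.create_global_cluster(
    GlobalClusterIdentifier="prod-global-db",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:prod-aurora-mysql",
)
# A secondary cluster is then added in us-west-1 with the same
# GlobalClusterIdentifier; managed failover promotes it during a disaster.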

Question #: : 898
A company runs workloads in the AWS Cloud. The company wants to centrally collect security data to assess
security across the entire company and to improve workload protection.

Which solution will meet these requirements with the LEAST development effort?
• A. Configure a data lake in AWS Lake Formation. Use AWS Glue crawlers to ingest the security data into
the data lake.
• B. Configure an AWS Lambda function to collect the security data in .csv format. Upload the data to an
Amazon S3 bucket.
• C. Configure a data lake in Amazon Security Lake to collect the security data. Upload the data to an
Amazon S3 bucket.
• D. Configure an AWS Database Migration Service (AWS DMS) replication instance to load the security
data into an Amazon RDS cluster.

Hide Answer
Suggested Answer: C

Community vote distribution


C (100%)
by sandordini at April 30, 2024, 6:37 p.m.

Question #: : 895

A company is implementing a shared storage solution for a media application that the company hosts on AWS.
The company needs the ability to use SMB clients to access stored data.
Which solution will meet these requirements with the LEAST administrative overhead?
• A. Create an AWS Storage Gateway Volume Gateway. Create a file share that uses the required client
protocol. Connect the application server to the file share.
• B. Create an AWS Storage Gateway Tape Gateway. Configure tapes to use Amazon S3. Connect the
application server to the Tape Gateway.
• C. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the
instance. Connect the application server to the file share.
• D. Create an Amazon FSx for Windows File Server file system. Connect the application server to the file
system.

Hide Answer
Suggested Answer: D

Community vote distribution


D (100%)
by trinh_le at May 1, 2024, 6:49 a.m.
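
A brief boto3 sketch of option D. The subnet IDs, security group, and directory ID are placeholders; a self-managed Active Directory configuration could be supplied instead of a managed Microsoft AD:

    import boto3

    fsx = boto3.client("fsx", region_name="us-east-1")

    # Create a Multi-AZ SMB file system joined to a managed Microsoft AD (placeholders).
    response = fsx.create_file_system(
        FileSystemType="WINDOWS",
        StorageCapacity=1024,                # GiB
        StorageType="SSD",
        SubnetIds=["subnet-aaa111", "subnet-bbb222"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        WindowsConfiguration={
            "ActiveDirectoryId": "d-1234567890",
            "DeploymentType": "MULTI_AZ_1",
            "PreferredSubnetId": "subnet-aaa111",
            "ThroughputCapacity": 32,        # MB/s
        },
    )
    print(response["FileSystem"]["DNSName"])
    # SMB clients then map the share against that DNS name, e.g. net use Z: \\<DNSName>\share

FSx for Windows File Server exposes native SMB shares as a managed service, so there is no file server instance to patch or back up, which is the administrative-overhead argument for D over C.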

Question #: : 894

A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website
serves static content. Website traffic is increasing. The company wants to minimize the website hosting costs.

Which solution will meet these requirements?


• A. Move the website to an Amazon S3 bucket. Configure an Amazon CloudFront distribution for the S3
bucket.
• B. Move the website to an Amazon S3 bucket. Configure an Amazon ElastiCache cluster for the S3 bucket.
• C. Move the website to AWS Amplify. Configure an ALB to resolve to the Amplify website.
• D. Move the website to AWS Amplify. Configure EC2 instances to cache the website.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by trinh_le at May 1, 2024, 6:47 a.m.
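
A hedged boto3 sketch of option A. The bucket name is a placeholder, the cache policy ID shown is the managed "CachingOptimized" policy, and a production setup would also add an origin access control plus a bucket policy restricting reads to the distribution:

    import boto3
    import time

    cloudfront = boto3.client("cloudfront")

    BUCKET = "example-static-site"  # placeholder bucket already holding the static files

    distribution = cloudfront.create_distribution(
        DistributionConfig={
            "CallerReference": str(time.time()),
            "Comment": "Static website served from S3",
            "Enabled": True,
            "DefaultRootObject": "index.html",
            "Origins": {
                "Quantity": 1,
                "Items": [
                    {
                        "Id": "s3-origin",
                        "DomainName": f"{BUCKET}.s3.us-east-1.amazonaws.com",
                        "S3OriginConfig": {"OriginAccessIdentity": ""},
                    }
                ],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "s3-origin",
                "ViewerProtocolPolicy": "redirect-to-https",
                "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # CachingOptimized
            },
        }
    )
    print(distribution["Distribution"]["DomainName"])

Serving static content from S3 through CloudFront removes the EC2 instances and the ALB entirely, and edge caching absorbs the growing traffic, which is why A is the lowest-cost option.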

Question #: : 893
A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a
solution that centrally manages networking components for the workloads. The solution also must create accounts
with automatic security controls (guardrails).

Which solution will meet these requirements with the LEAST operational overhead?
• A. Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private
subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the
workload accounts.
• B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private
subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the
workload accounts.
• C. Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each
VPC to route through an inspection VPC by using a transit gateway attachment.
• D. Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each
VPC to route through an inspection VPC by using a transit gateway attachment.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by 1223d0e at April 29, 2024, 7:43 p.m.
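
Whichever account-vending mechanism is chosen, the shared-networking piece of options A and B looks the same. A short boto3 sketch run from the networking account; the subnet ARNs and organizational unit ARN are placeholders:

    import boto3

    ram = boto3.client("ram", region_name="us-east-1")

    # Share the centrally managed subnets with the workload accounts (or an entire OU).
    share = ram.create_resource_share(
        name="shared-network-subnets",
        resourceArns=[
            "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-aaa111",
            "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-bbb222",
        ],
        principals=[
            "arn:aws:organizations::111122223333:ou/o-example/ou-abcd-11111111"
        ],
        allowExternalPrincipals=False,  # keep sharing inside the organization
    )
    print(share["resourceShare"]["resourceShareArn"])

The difference between A and B is the account-creation layer: Control Tower applies preventive and detective guardrails automatically as it vends accounts, whereas plain Organizations leaves those controls to be built and maintained separately.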

Question #: : 901

A company is migrating its workloads to AWS. The company has sensitive and critical data in on-premises
relational databases that run on SQL Server instances.

The company wants to use the AWS Cloud to increase security and reduce operational overhead for the databases.

Which solution will meet these requirements?


• A. Migrate the databases to Amazon EC2 instances. Use an AWS Key Management Service (AWS KMS)
AWS managed key for encryption.
• B. Migrate the databases to a Multi-AZ Amazon RDS for SQL Server DB instance. Use an AWS Key
Management Service (AWS KMS) AWS managed key for encryption.
• C. Migrate the data to an Amazon S3 bucket. Use Amazon Macie to ensure data security.
• D. Migrate the databases to an Amazon DynamoDB table. Use Amazon CloudWatch Logs to ensure data
security.

Hide Answer
Suggested Answer: B

Community vote distribution


B (100%)
by trinh_le at May 1, 2024, 6:18 a.m.
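
A minimal boto3 sketch of option B. The instance identifier, class, storage size, and credentials are placeholders; alias/aws/rds refers to the AWS managed KMS key for RDS:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="prod-sqlserver",
        Engine="sqlserver-se",               # SQL Server Standard Edition
        DBInstanceClass="db.m6i.large",
        AllocatedStorage=200,
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",      # store and rotate in Secrets Manager in practice
        MultiAZ=True,                         # synchronous standby in a second Availability Zone
        StorageEncrypted=True,
        KmsKeyId="alias/aws/rds",             # AWS managed key
        LicenseModel="license-included",
    )

RDS handles patching, backups, and failover for the Multi-AZ pair, which is what reduces operational overhead relative to self-managing SQL Server on EC2, while KMS encryption at rest covers the security requirement.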

Question #: : 904

A company has an application that customers use to upload images to an Amazon S3 bucket. Each night, the
company launches an Amazon EC2 Spot Fleet that processes all the images that the company received that day.
The processing for each image takes 2 minutes and requires 512 MB of memory.

A solutions architect needs to change the application to process the images when the images are uploaded.

Which change will meet these requirements MOST cost-effectively?


• A. Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure an AWS Lambda function to read the messages from the queue and to process
the images.
• B. Use S3 Event Notifications to write a message with image details to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure an EC2 Reserved Instance to read the messages from the queue and to process
the images.
• C. Use S3 Event Notifications to publish a message with image details to an Amazon Simple Notification
Service (Amazon SNS) topic. Configure a container instance in Amazon Elastic Container Service (Amazon ECS)
to subscribe to the topic and to process the images.
• D. Use S3 Event Notifications to publish a message with image details to an Amazon Simple Notification
Service (Amazon SNS) topic. Configure an AWS Elastic Beanstalk application to subscribe to the topic and to
process the images.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by trinh_le at May 1, 2024, 6:33 a.m.
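
A sketch of the event-driven path in option A, assuming the SQS queue policy already allows s3.amazonaws.com to send messages and that the Lambda function has an SQS event source mapping. The bucket name, queue ARN, and process_image helper are placeholders:

    import boto3
    import json

    s3 = boto3.client("s3")

    # Emit an event to SQS whenever a new image lands in the bucket (placeholder ARNs).
    s3.put_bucket_notification_configuration(
        Bucket="customer-image-uploads",
        NotificationConfiguration={
            "QueueConfigurations": [
                {
                    "QueueArn": "arn:aws:sqs:us-east-1:111122223333:image-processing",
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )

    # Lambda handler (512 MB memory, a timeout comfortably above 2 minutes) that drains the queue.
    def handler(event, context):
        for record in event["Records"]:
            body = json.loads(record["body"])          # the S3 event notification
            for s3_event in body.get("Records", []):
                bucket = s3_event["s3"]["bucket"]["name"]
                key = s3_event["s3"]["object"]["key"]
                process_image(bucket, key)             # hypothetical processing step

    def process_image(bucket, key):
        ...

A 2-minute, 512 MB task fits easily within Lambda limits, and paying per invocation as uploads arrive is cheaper than keeping a Reserved Instance or container fleet idle between images.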

Question #: : 903

A company manages a data lake in an Amazon S3 bucket that numerous applications access. The S3 bucket
contains a unique prefix for each application. The company wants to restrict each application to its specific prefix
and to have granular control of the objects under each prefix.
Which solution will meet these requirements with the LEAST operational overhead?
• A. Create dedicated S3 access points and access point policies for each application.
• B. Create an S3 Batch Operations job to set the ACL permissions for each object in the S3 bucket.
• C. Replicate the objects in the S3 bucket to new S3 buckets for each application. Create replication rules
by prefix.
• D. Replicate the objects in the S3 bucket to new S3 buckets for each application. Create dedicated S3
access points for each application.

Hide Answer
Suggested Answer: A

Community vote distribution


B (100%)
by trinh_le at May 1, 2024, 6:28 a.m.
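
A short boto3 sketch of option A, creating one access point per application and scoping its policy to that application's prefix. The account ID, bucket, role ARN, and prefix are placeholders:

    import boto3
    import json

    s3control = boto3.client("s3control", region_name="us-east-1")

    ACCOUNT_ID = "111122223333"

    # Dedicated access point for the "analytics" application (placeholder names).
    s3control.create_access_point(
        AccountId=ACCOUNT_ID,
        Name="analytics-ap",
        Bucket="company-data-lake",
    )

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/analytics-app"},
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": (
                    f"arn:aws:s3:us-east-1:{ACCOUNT_ID}:accesspoint/analytics-ap"
                    "/object/analytics/*"   # restrict to the application's prefix
                ),
            }
        ],
    }

    s3control.put_access_point_policy(
        AccountId=ACCOUNT_ID, Name="analytics-ap", Policy=json.dumps(policy)
    )

In practice the bucket policy also delegates access control to access points so that each application's access point policy becomes its effective permission boundary, with no object-by-object ACL work and no data duplication.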

Question #: : 902

A company wants to migrate an application to AWS. The company wants to increase the application's current
availability. The company wants to use AWS WAF in the application's architecture.

Which solution will meet these requirements?


• A. Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the application
across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as
the target. Connect a WAF to the ALB.
• B. Create a cluster placement group that contains multiple Amazon EC2 instances that host the
application. Configure an Application Load Balancer and set the EC2 instances as the targets. Connect a WAF to
the placement group.
• C. Create two Amazon EC2 instances that host the application across two Availability Zones. Configure
the EC2 instances as the targets of an Application Load Balancer (ALB). Connect a WAF to the ALB.
• D. Create an Auto Scaling group that contains multiple Amazon EC2 instances that host the application
across two Availability Zones. Configure an Application Load Balancer (ALB) and set the Auto Scaling group as
the target. Connect a WAF to the Auto Scaling group.

Hide Answer
Suggested Answer: A

Community vote distribution


A (100%)
by trinh_le at May 1, 2024, 6:23 a.m.
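
A condensed boto3 sketch of the wiring in option A. It assumes the launch template, target group, ALB, and web ACL already exist, so the names and ARNs shown are placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/abc123"
    ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/app-alb/def456"
    WEB_ACL_ARN = "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/ghi789"

    # Auto Scaling group spanning two Availability Zones, registered with the ALB's target group.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        LaunchTemplate={"LaunchTemplateName": "app-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=6,
        VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
        TargetGroupARNs=[TARGET_GROUP_ARN],
    )

    # The web ACL attaches to the ALB; AWS WAF cannot be attached to an Auto Scaling group,
    # which is what rules out option D.
    wafv2.associate_web_acl(WebACLArn=WEB_ACL_ARN, ResourceArn=ALB_ARN)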