AWS SAA C03 121-160 Hung
A company has implemented a self-managed DNS solution on three Amazon EC2 instances behind a
Network Load Balancer (NLB) in the us-west-2 Region. Most of the company's users are located in the
United States and Europe. The company wants to improve the performance and availability of the
solution. The company launches and configures three EC2 instances in the eu-west-1 Region and adds
the EC2 instances as targets for a new NLB.
Which solution can the company use to route traffic to all the EC2 instances?
A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs.
Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution’s origin.
B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and
eu-west-1. Add the two NLBs as endpoints for the endpoint groups.
C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing
policy to route requests to one of the six EC2 instances. Create an Amazon CloudFront distribution. Use
the Route 53 record as the distribution's origin.
D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53
latency routing policy to route requests to one of the two ALBs. Create an Amazon CloudFront
distribution. Use the Route 53 record as the distribution’s origin.
Answer: B
AWS Global Accelerator is a networking service that helps you improve the availability and performance
of the applications that you offer to your global users. AWS Global Accelerator is easy to set up,
configure, and manage. It provides static IP addresses that provide a fixed entry point to your
applications and eliminate the complexity of managing specific IP addresses for different AWS Regions
and Availability Zones. AWS Global Accelerator always routes user traffic to the optimal endpoint based
on performance, reacting instantly to changes in application health, your user’s location, and policies
that you configure.
https://aws.amazon.com/global-accelerator/faqs/
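A minimal boto3 sketch of option B, with placeholder names and ARNs (note that the Global Accelerator API itself is served from us-west-2):

```python
import boto3

# Global Accelerator's API endpoint lives in us-west-2 regardless of where the endpoints are.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="dns-accelerator", IpAddressType="IPV4", Enabled=True)
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",  # DNS queries; a TCP listener can be added the same way
    PortRanges=[{"FromPort": 53, "ToPort": 53}],
)

# One endpoint group per Region, each pointing at that Region's NLB (placeholder ARNs).
nlbs = {
    "us-west-2": "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/dns-usw2/0123456789abcdef",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/dns-euw1/0123456789abcdef",
}
for region, nlb_arn in nlbs.items():
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 100}],
    )
```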
121. A company is running an online transaction processing (OLTP) workload on AWS. This workload
uses an unencrypted Amazon RDS DB instance in a Multi-AZ deployment. Daily database snapshots are
taken from this instance.
What should a solutions architect do to ensure the database and snapshots are always encrypted
moving forward?
A. Encrypt a copy of the latest DB snapshot. Replace the existing DB instance by restoring the encrypted
snapshot.
B. Create a new encrypted Amazon Elastic Block Store (Amazon EBS) volume and copy the snapshots to
it. Enable encryption on the DB instance.
C. Copy the snapshots and enable encryption using AWS Key Management Service (AWS KMS). Restore the
encrypted snapshot to an existing DB instance.
D. Copy the snapshots to an Amazon S3 bucket that is encrypted using server-side encryption with AWS
Key Management Service (AWS KMS) managed keys (SSE-KMS).
Answer: A
"You can enable encryption for an Amazon RDS DB instance when you create it, but not after it's
created. However, you can add encryption to an unencrypted DB instance by creating a snapshot of your
DB instance, and then creating an encrypted copy of that snapshot. You can then restore a DB instance
from the encrypted snapshot to get an encrypted copy of your original DB instance."
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html
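A hedged boto3 sketch of that pattern (snapshot identifiers and the KMS key are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Copying a snapshot with a KMS key produces an encrypted copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="rds:mydb-2024-01-01-00-00",
    TargetDBSnapshotIdentifier="mydb-encrypted",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-encrypted")

# Restore a new, encrypted instance from the encrypted snapshot, then cut over
# applications and retire the old unencrypted instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted-instance",
    DBSnapshotIdentifier="mydb-encrypted",
    MultiAZ=True,
)
```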
122. A company wants to build a scalable key management infrastructure to support developers who
need to encrypt data in their applications.
B. Use AWS Key Management Service (AWS KMS) to protect the encryption keys.
C. Use AWS Certificate Manager (ACM) to create, store, and assign the encryption keys.
D. Use an IAM policy to limit the scope of users who have access permissions to protect the encryption
keys.
Answer: B
If you are a developer who needs to digitally sign or verify data using asymmetric keys, you should use
the service to create and manage the private keys you’ll need. If you’re looking for a scalable key
management infrastructure to support your developers and their growing number of applications, you
should use it to reduce your licensing costs and operational burden...
https://aws.amazon.com/kms/faqs/
123. A company has a dynamic web application hosted on two Amazon EC2 instances. The company has
its own SSL certificate, which is installed on each instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL encryption
and decryption is causing the compute capacity of the web servers to reach their maximum limit.
What should a solutions architect do to increase the application's performance?
A. Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each
instance.
B. Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances
to reference the bucket for SSL termination.
C. Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and
configure it to direct connections to the existing EC2 instances.
D. Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer
with an HTTPS listener that uses the SSL certificate from ACM.
Answer: D
This issue is solved by SSL offloading, i.e., by moving the SSL termination task to the ALB.
https://aws.amazon.com/blogs/aws/elastic-load-balancer-support-for-ssl-termination/
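A minimal boto3 sketch of option D, assuming the certificate files are on disk and the ALB and target group already exist (file paths and ARNs are placeholders):

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Import the company's existing certificate into ACM.
with open("cert.pem", "rb") as c, open("key.pem", "rb") as k, open("chain.pem", "rb") as ch:
    cert_arn = acm.import_certificate(
        Certificate=c.read(), PrivateKey=k.read(), CertificateChain=ch.read()
    )["CertificateArn"]

# Terminate TLS at the ALB instead of on the instances.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/0123456789abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": cert_arn}],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/0123456789abcdef"}],
)
```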
124. A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to
complete it. The job is stateless in nature, can be started and stopped at any given time with no negative
impact, and typically takes upwards of 60 minutes total to complete. The company has asked a solutions
architect to design a scalable and cost-effective solution that meets the requirements of the job.
Answer: A
This cannot be implemented on Lambda because the Lambda timeout is 15 minutes and the job takes
upwards of 60 minutes to complete.
125. A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer
that sends traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2
instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances
require internet access to complete payment processing of orders through a third-party web service.
The application must be highly available.
Which combination of configuration options will meet these requirements? (Choose two.)
A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ
DB instance in private subnets.
B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones.
Deploy an Application Load Balancer in the private subnets.
C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones.
Deploy an RDS Multi-AZ DB instance in private subnets.
D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two
Availability Zones. Deploy an Application Load Balancer in the public subnet.
E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two
Availability Zones. Deploy an Application Load Balancer in the public subnets.
Answer: AE
https://www.examtopics.com/discussions/amazon/view/60023-exam-aws-certified-solutions-architect-associate-saa-c02/
The application has to be highly available while the instances and database are not exposed to the
public internet, but the instances still require outbound internet access. The NAT gateways therefore
have to be deployed in public subnets while the instances and database remain in private subnets in the
VPC, so the answer is (A) and (E).
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
If the instances did not require access to the internet, then the answer could have been (B): use a
private NAT gateway and keep it in the private subnets to communicate only with other VPCs.
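An illustrative boto3 sketch of the NAT piece of A + E, with placeholder IDs (repeat per Availability Zone):

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and needs an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(SubnetId="subnet-0aaa0aaa0aaa0aaa0",  # public subnet, AZ 1
                               AllocationId=eip["AllocationId"])
nat_id = natgw["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# The private subnet's route table sends internet-bound traffic through the NAT gateway.
ec2.create_route(RouteTableId="rtb-0bbb0bbb0bbb0bbb0",
                 DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat_id)
```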
126. A solutions architect needs to implement a solution to reduce a company's storage costs. All the
company's data is in the Amazon S3 Standard storage class. The company must keep all data for at least
25 years. Data from the most recent 2 years must be highly available and immediately retrievable.
B. Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years.
C. Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3 Glacier
Deep Archive.
D. Set up an S3 Lifecycle policy to transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA)
immediately and to S3 Glacier Deep Archive after 2 years.
Answer: B
https://aws.amazon.com/blogs/aws/s3-intelligent-tiering-adds-archive-access-tiers/
127. A media company is evaluating the possibility of moving its systems to the AWS Cloud. The
company needs at least 10 TB of storage with the maximum possible I/O performance for video
processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet
requirements for archival media that is not in use anymore.
Which set of services should a solutions architect recommend to meet these requirements?
A. Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier
for archival storage
B. Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier
for archival storage
C. Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and
Amazon S3 for archival storage
D. Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and
Amazon S3 Glacier for archival storage
Answer: D
References: The maximum instance store currently available is 30 TB of NVMe SSD, which has higher I/O
than EBS (for example, is4gen.8xlarge provides 4 x 7,500 GB = 30 TB of NVMe SSD).
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes
128. A company wants to run applications in containers in the AWS Cloud. These applications are
stateless and can tolerate disruptions within the underlying infrastructure. The company needs a
solution that minimizes cost and operational overhead.
A. Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.
B. Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
C. Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.
D. Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node
group.
Answer: B
References: https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html
129. A company is running a multi-tier web application on premises. The web application is
containerized and runs on a number of Linux hosts connected to a PostgreSQL database that contains
user records. The operational overhead of maintaining the infrastructure and capacity planning is
limiting the company's growth. A solutions architect must improve the application's infrastructure.
Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
D. Set up Amazon ElastiCache between the web application and the PostgreSQL database.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service
(Amazon ECS).
Answer: AE
References: https://www.examtopics.com/discussions/amazon/view/46457-exam-aws-certified-solutions-architect-associate-saa-c02/
130. An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run
in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs
best when the CPU utilization of the EC2 instances is at or near 40%.
What should a solutions architect do to maintain the desired performance across all instances in the
group?
A. Use a simple scaling policy to dynamically scale the Auto Scaling group.
B. Use a target tracking policy to dynamically scale the Auto Scaling group.
C. Use an AWS Lambda function to update the desired Auto Scaling group capacity.
D. Use scheduled scaling actions to scale up and scale down the Auto Scaling group.
Answer: B
References: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
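A minimal boto3 sketch of option B, assuming an existing group named web-asg: the policy adds or removes instances to hold average CPU at the 40% target.

```python
import boto3

boto3.client("autoscaling").put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="hold-cpu-at-40",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 40.0,  # the desired utilization from the question
    },
)
```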
131. A company is developing a file-sharing application that will use an Amazon S3 bucket for storage.
The company wants to serve all the files through an Amazon CloudFront distribution. The company does
not want the files to be accessible through direct navigation to the S3 URL.
B. Create an IAM user. Grant the user read permission to objects in the S3 bucket. Assign the user to
CloudFront.
C. Write an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and assigns the
target S3 bucket as the Amazon Resource Name (ARN).
D. Create an origin access identity (OAI). Assign the OAI to the CloudFront distribution. Configure the
S3 bucket permissions so that only the OAI has read permission.
Answer: D
References: I want to restrict access to my Amazon Simple Storage Service (Amazon S3) bucket so that
objects can be accessed only through my Amazon CloudFront distribution. How can I do that?
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/
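A hedged sketch of option D's bucket policy in boto3 (bucket name and OAI ID are placeholders); combined with Block Public Access, only CloudFront can read the objects.

```python
import boto3, json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        # The OAI is the only principal allowed to fetch objects.
        "Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLE1ABCDE"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-file-sharing-bucket/*",
    }],
}

boto3.client("s3").put_bucket_policy(Bucket="my-file-sharing-bucket",
                                     Policy=json.dumps(policy))
```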
132. A company’s website provides users with downloadable historical performance reports. The
website needs a solution that will scale to meet the company’s website demands globally. The solution
should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest
possible response time.
Answer: A
References: The solution should be cost-effective, limit the provisioning of infrastructure resources, and
provide the fastest possible response time.
https://www.examtopics.com/discussions/amazon/view/27935-exam-aws-certified-solutions-architect-associate-saa-c02/
133. A company runs an Oracle database on premises. As part of the company’s migration to AWS, the
company wants to upgrade the database to the most recent available version. The company also wants
to set up disaster recovery (DR) for the database. The company needs to minimize the operational
overhead for normal operations and DR setup. The company also needs to maintain access to the
database's underlying operating system.
Which solution will meet these requirements?
A. Migrate the Oracle database to an Amazon EC2 instance. Set up database replication to a different
AWS Region.
B. Migrate the Oracle database to Amazon RDS for Oracle. Activate Cross-Region automated backups to
replicate the snapshots to another AWS Region.
C. Migrate the Oracle database to Amazon RDS Custom for Oracle. Create a read replica for the
database in another AWS Region.
D. Migrate the Oracle database to Amazon RDS for Oracle. Create a standby database in another
Availability Zone.
Answer: C
References:
https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-rds-custom-oracle/ (announced in 2021)
134. A company wants to move its application to a serverless solution. The serverless solution needs to
analyze existing and new data by using SQL. The company stores the data in an Amazon S3 bucket. The
data requires encryption and must be replicated to a different AWS Region.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication
(CRR) to replicate encrypted objects to an S3 bucket in another Region. Use server-side encryption
with AWS KMS multi-Region keys (SSE-KMS). Use Amazon Athena to query the data.
B. Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR)
to replicate encrypted objects to an S3 bucket in another Region. Use server-side encryption with AWS
KMS multi-Region keys (SSE-KMS). Use Amazon RDS to query the data.
C. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate
encrypted objects to an S3 bucket in another Region. Use server-side encryption with Amazon S3
managed encryption keys (SSE-S3). Use Amazon Athena to query the data.
D. Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate
encrypted objects to an S3 bucket in another Region. Use server-side encryption with Amazon S3
managed encryption keys (SSE-S3). Use Amazon RDS to query the data.
Answer: A
References: Amazon S3 Bucket Keys reduce the cost of Amazon S3 server-side encryption using AWS Key
Management Service (SSE-KMS). This new bucket-level key for SSE can reduce AWS KMS request costs
by up to 99 percent by decreasing the request traffic from Amazon S3 to AWS KMS. With a few clicks in
the AWS Management Console, and without any changes to your client applications, you can configure
your bucket to use an S3 Bucket Key for AWS KMS-based encryption on new objects.
The existing S3 bucket might contain unencrypted data; enabling encryption applies only to new data
received afterward, which is why the data is loaded into a new bucket.
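A short boto3 sketch of the multi-Region key piece of option A (Regions and the alias are placeholders); the resulting key ARNs would then go into the buckets' encryption and replication settings:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")  # placeholder source Region

# A multi-Region primary key can be replicated so both Regions share key material.
key_id = kms.create_key(MultiRegion=True, Description="S3 data key")["KeyMetadata"]["KeyId"]
kms.create_alias(AliasName="alias/s3-data-key", TargetKeyId=key_id)

# Create the related replica key in the CRR destination Region.
kms.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-1")  # placeholder destination
```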
135. A company runs workloads on AWS. The company needs to connect to a service from an external
provider. The service is hosted in the provider's VPC. According to the company’s security team, the
connectivity must be private and must be restricted to the target service. The connection must be
initiated only from the company’s VPC.
A. Create a VPC peering connection between the company's VPC and the provider's VPC. Update the
route table to connect to the target service.
B. Ask the provider to create a virtual private gateway in its VPC. Use AWS PrivateLink to connect to the
target service.
C. Create a NAT gateway in a public subnet of the company’s VPC. Update the route table to connect to
the target service.
D. Ask the provider to create a VPC endpoint for the target service. Use AWS PrivateLink to connect to
the target service.
Answer: D
References: AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-
premises networks, without exposing your traffic to the public internet. AWS PrivateLink makes it
easy to connect services across different accounts and VPCs to significantly simplify your network
architecture.
Interface VPC endpoints, powered by AWS PrivateLink, connect you to services hosted by AWS
Partners and supported solutions available in AWS Marketplace.
https://aws.amazon.com/privatelink/
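A minimal boto3 sketch of option D from the consumer side, assuming the provider has shared its endpoint service name (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Interface endpoint (PrivateLink) in the company's VPC; traffic never traverses the
# public internet, and connections can only be initiated from this VPC toward the service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```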
136. A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The
on-premises database must remain online and accessible during the migration. The Aurora database
must remain synchronized with the on-premises database.
Which combination of actions must a solutions architect take to meet these requirements? (Choose
two.)
A. Create an ongoing replication task.
D. Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database
synchronization.
Answer: AC
References: AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly
and securely. The source database remains fully operational during the migration, minimizing downtime
to applications that rely on the database.
... With AWS Database Migration Service, you can also continuously replicate data with low latency from
any supported source to any supported target.
https://aws.amazon.com/dms/
137. A company uses AWS Organizations to create dedicated AWS accounts for each business unit to
manage each business unit's account independently upon request. The root email recipient missed a
notification that was sent to the root user email address of one account. The company wants to ensure
that no future notifications are missed. Future notifications must be limited to account
administrators.
A. Configure the company’s email server to forward notification email messages that are sent to the
AWS account root user email address to all users in the organization.
B. Configure all AWS account root user email addresses as distribution lists that go to a few
administrators who can respond to alerts. Configure AWS account alternate contacts in the AWS
Organizations console or programmatically.
C. Configure all AWS account root user email messages to be sent to one administrator who is
responsible for monitoring alerts and forwarding those alerts to the appropriate groups.
D. Configure all existing AWS accounts and all newly created accounts to use the same root user email
address. Configure AWS account alternate contacts in the AWS Organizations console or
programmatically.
Answer: B
References: Use a group email address for the management account's root user
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html#best-practices_mgmt-acct_email-address
138. A company runs its ecommerce application on AWS. Every new order is published as a message in a
RabbitMQ queue that runs on an Amazon EC2 instance in a single Availability Zone. These messages are
processed by a different application that runs on a separate EC2 instance. This application stores the
details in a PostgreSQL database on another EC2 instance. All the EC2 instances are in the same
Availability Zone.
The company needs to redesign its architecture to provide the highest availability with the least
operational overhead.
A. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ.
Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Create another Multi-
AZ Auto Scaling group for EC2 instances that host the PostgreSQL database.
B. Migrate the queue to a redundant pair (active/standby) of RabbitMQ instances on Amazon MQ.
Create a Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database
to run on a Multi-AZ deployment of Amazon RDS for PostgreSQL.
C. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create another
Multi-AZ Auto Scaling group for EC2 instances that host the application. Migrate the database to run on
a Multi-AZ deployment of Amazon RDS for PostgreSQL.
D. Create a Multi-AZ Auto Scaling group for EC2 instances that host the RabbitMQ queue. Create
another Multi-AZ Auto Scaling group for EC2 instances that host the application. Create a third Multi-AZ
Auto Scaling group for EC2 instances that host the PostgreSQL database
Answer: B
References: Migrating to Amazon MQ reduces the queue management overhead, so C and D are
dismissed.
Deciding between A and B means choosing between an Auto Scaling group of EC2 instances and Amazon
RDS for PostgreSQL (both Multi-AZ) for the database. The RDS option has less operational impact
because it provides the required tools and software as a managed service. Consider, for instance, the
effort needed to add an additional node, such as a read replica, to the DB.
https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/active-standby-broker-deployment.html
https://aws.amazon.com/rds/postgresql/
139. A reporting team receives files each day in an Amazon S3 bucket. The reporting team manually
reviews and copies the files from this initial S3 bucket to an analysis S3 bucket each day at the same
time to use with Amazon QuickSight. Additional teams are starting to send more files in larger sizes to
the initial S3 bucket.
The reporting team wants to move the files automatically to the analysis S3 bucket as the files enter the initial
S3 bucket. The reporting team also wants to use AWS Lambda functions to run pattern-matching code
on the copied data. In addition, the reporting team wants to send the data files to a pipeline in Amazon
SageMaker Pipelines.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?
A. Create a Lambda function to copy the files to the analysis S3 bucket. Create an S3 event notification
for the analysis S3 bucket. Configure Lambda and SageMaker Pipelines as destinations of the event
notification. Configure s3:ObjectCreated:Put as the event type.
B. Create a Lambda function to copy the files to the analysis S3 bucket. Configure the analysis S3 bucket
to send event notifications to Amazon EventBridge (Amazon CloudWatch Events). Configure an
ObjectCreated rule in EventBridge (CloudWatch Events). Configure Lambda and SageMaker Pipelines as
targets for the rule.
C. Configure S3 replication between the S3 buckets. Create an S3 event notification for the analysis S3
bucket. Configure Lambda and SageMaker Pipelines as destinations of the event notification. Configure
s3:ObjectCreated:Put as the event type.
D. Configure S3 replication between the S3 buckets. Configure the analysis S3 bucket to send event
notifications to Amazon EventBridge (Amazon CloudWatch Events). Configure an ObjectCreated rule
in EventBridge (CloudWatch Events). Configure Lambda and SageMaker Pipelines as targets for the
rule.
Answer: D
A and B copy the files to the other bucket with a Lambda function, while C and D just use S3 replication
to copy the files. Both approaches do the same thing, but C and D do not require setting up a Lambda
function, which is more efficient. The question says the team is manually copying the files; automatically
replicating the files is the lowest-overhead method versus copying manually or copying with Lambda.
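A hedged boto3 sketch of option D's event wiring (bucket name and target ARNs are placeholders): the analysis bucket emits events to EventBridge, and one rule fans out to both Lambda and the SageMaker pipeline.

```python
import boto3, json

# Turn on EventBridge notifications for the analysis bucket.
boto3.client("s3").put_bucket_notification_configuration(
    Bucket="analysis-bucket",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

events = boto3.client("events")
events.put_rule(
    Name="analysis-object-created",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {"bucket": {"name": ["analysis-bucket"]}},
    }),
)
events.put_targets(
    Rule="analysis-object-created",
    Targets=[
        {"Id": "pattern-matcher",
         "Arn": "arn:aws:lambda:us-east-1:111122223333:function:pattern-matcher"},
        {"Id": "reporting-pipeline",
         "Arn": "arn:aws:sagemaker:us-east-1:111122223333:pipeline/reporting",
         "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-to-sagemaker"},
    ],
)
```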
140. A solutions architect needs to help a company optimize the cost of running an application on AWS.
The application will use Amazon EC2 instances, AWS Fargate, and AWS Lambda for compute within the
architecture.
The EC2 instances will run the data ingestion layer of the application. EC2 usage will be sporadic and
unpredictable. Workloads that run on EC2 instances can be interrupted at any time. The application
front end will run on Fargate, and Lambda will serve the API layer. The front-end utilization and API layer
utilization will be predictable over the course of the next year.
Which combination of purchasing options will provide the MOST cost-effective solution for hosting this
application? (Choose two.)
C. Purchase a 1-year Compute Savings Plan for the front end and API layer.
D. Purchase 1-year All Upfront Reserved instances for the data ingestion layer.
E. Purchase a 1-year EC2 instance Savings Plan for the front end and API layer.
Answer: AC
References: https://www.densify.com/finops/aws-savings-plan
An EC2 Instance Savings Plan saves up to 72% while a Compute Savings Plan saves up to 66%. But
according to the link, "Compute Savings Plans provide the most flexibility and help to reduce your costs
by up to 66%. These plans automatically apply to EC2 instance usage regardless of instance family, size,
AZ, region, OS or tenancy, and also apply to Fargate and Lambda usage." EC2 Instance Savings Plans are
not applied to Fargate or Lambda, which the front end and API layer use.
141. A company runs a web-based portal that provides users with global breaking news, local alerts, and
weather updates. The portal delivers each user a personalized view by using mixture of static and
dynamic content. Content is served over HTTPS through an API server running on an Amazon EC2
instance behind an Application Load Balancer (ALB). The company wants the portal to provide this
content to its users across the world as quickly as possible.
How should a solutions architect design the application to ensure the LEAST amount of latency for all
users?
A. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve all static and
dynamic content by specifying the ALB as an origin.
B. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 latency routing policy to
serve all content from the ALB in the closest Region.
C. Deploy the application stack in a single AWS Region. Use Amazon CloudFront to serve the static
content. Serve the dynamic content directly from the ALB.
D. Deploy the application stack in two AWS Regions. Use an Amazon Route 53 geolocation routing policy
to serve all content from the ALB in the closest Region.
Answer: A
https://www.examtopics.com/discussions/amazon/view/81081-exam-aws-certified-solutions-architect-associate-saa-c02/
142. A gaming company is designing a highly available architecture. The application runs on a modified
Linux kernel and supports only UDP-based traffic. The company needs the front-end tier to provide the
best possible user experience. That tier must have low latency, route traffic to the nearest edge location,
and provide static IP addresses for entry into the application endpoints.
A. Configure Amazon Route 53 to forward requests to an Application Load Balancer. Use AWS Lambda
for the application in AWS Application Auto Scaling.
B. Configure Amazon CloudFront to forward requests to a Network Load Balancer. Use AWS Lambda for
the application in an AWS Application Auto Scaling group.
C. Configure AWS Global Accelerator to forward requests to a Network Load Balancer. Use Amazon
EC2 instances for the application in an EC2 Auto Scaling group.
D. Configure Amazon API Gateway to forward requests to an Application Load Balancer. Use Amazon
EC2 instances for the application in an EC2 Auto Scaling group.
Answer: C
References: AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS
global network and its edge locations around the world. CloudFront improves performance for both
cacheable content (such as images and videos) and dynamic content (such as API acceleration and
dynamic site delivery). Global Accelerator improves performance for a wide range of applications over
TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions. Global
Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as
well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional
failover. Both services integrate with AWS Shield for DDoS protection.
143. A company wants to migrate its existing on-premises monolithic application to AWS. The company
wants to keep as much of the front-end code and the backend code as possible. However, the company
wants to break the application into smaller applications. A different team will manage each application.
The company needs a highly scalable solution that minimizes operational overhead.
A. Host the application on AWS Lambda. Integrate the application with Amazon API Gateway.
B. Host the application with AWS Amplify. Connect the application to an Amazon API Gateway API that is
integrated with AWS Lambda.
C. Host the application on Amazon EC2 instances. Set up an Application Load Balancer with EC2
instances in an Auto Scaling group as targets.
D. Host the application on Amazon Elastic Container Service (Amazon ECS). Set up an Application Load
Balancer with Amazon ECS as the target.
Answer: D
References: The answer here is likely D because, when a question uses terms like "monolithic," the
intended solution usually involves breaking the application into microservices, which Amazon ECS
behind an Application Load Balancer supports with minimal operational overhead.
144. A company recently started using Amazon Aurora as the data store for its global ecommerce
application. When large reports are run, developers report that the ecommerce application is
performing poorly. After reviewing metrics in Amazon CloudWatch, a solutions architect finds that the
ReadIOPS and CPUUtilizalion metrics are spiking when monthly reports run.
Answer: B
References: The ReadIOPS spike points toward a read replica as the most cost-effective solution here.
145. A company hosts a website analytics application on a single Amazon EC2 On-Demand Instance. The
analytics software is written in PHP and uses a MySQL database. The analytics software, the web server
that provides PHP, and the database server are all hosted on the EC2 instance. The application is
showing signs of performance degradation during busy times and is presenting 5xx errors. The company
needs to make the application scale seamlessly.
A. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web
application. Use the AMI to launch a second EC2 On-Demand Instance. Use an Application Load Balancer
to distribute the load to each EC2 instance.
B. Migrate the database to an Amazon RDS for MySQL DB instance. Create an AMI of the web
application. Use the AMI to launch a second EC2 On-Demand Instance. Use Amazon Route 53 weighted
routing to distribute the load across the two EC2 instances.
C. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AWS Lambda function to
stop the EC2 instance and change the instance type. Create an Amazon CloudWatch alarm to invoke the
Lambda function when CPU utilization surpasses 75%.
D. Migrate the database to an Amazon Aurora MySQL DB instance. Create an AMI of the web
application. Apply the AMI to a launch template. Create an Auto Scaling group with the launch
template. Configure the launch template to use a Spot Fleet. Attach an Application Load Balancer to
the Auto Scaling group.
Answer: D
References: Agreed with D: a Spot Fleet can leverage both Spot and On-Demand Instances, so it should
be the most cost-effective option.
https://www.youtube.com/watch?v=rlYLbs33Ofs&ab_channel=AmazonWebServices
146. A company runs a stateless web application in production on a group of Amazon EC2 On-Demand
Instances behind an Application Load Balancer. The application experiences heavy usage during an 8-
hour period each business day. Application usage is moderate and steady overnight. Application usage is
low during weekends.
The company wants to minimize its EC2 costs without affecting the availability of the application.
B. Use Reserved Instances for the baseline level of usage. Use Spot instances for any additional
capacity that the application needs.
C. Use On-Demand Instances for the baseline level of usage. Use Spot Instances for any additional
capacity that the application needs.
D. Use Dedicated Instances for the baseline level of usage. Use On-Demand Instances for any additional
capacity that the application needs.
Answer: B
A uses Spot Instances, which are not recommended for the production baseline, and D uses Dedicated
Instances, which are expensive. So option B should be the one.
147. A company needs to retain application log files for a critical application for 10 years. The application
team regularly accesses logs from the past month for troubleshooting, but logs older than 1 month are
rarely accessed. The application generates more than 10 TB of logs per month.
A. Store the logs in Amazon S3. Use AWS Backup to move logs more than 1 month old to S3 Glacier Deep
Archive.
B. Store the logs in Amazon S3. Use S3 Lifecycle policies to move logs more than 1 month old to S3
Glacier Deep Archive.
C. Store the logs in Amazon CloudWatch Logs. Use AWS Backup to move logs more than 1 month old to
S3 Glacier Deep Archive.
D. Store the logs in Amazon CloudWatch Logs. Use Amazon S3 Lifecycle policies to move logs more than
1 month old to S3 Glacier Deep Archive.
Answer: B
https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html
AWS Backup allows you to back up S3 data stored in the following S3 storage classes:
• S3 Standard
• S3 One Zone-IA
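A minimal boto3 sketch of option B, assuming a bucket named app-logs (the 10-year expiration could be added to the same rule set):

```python
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="app-logs",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "logs-to-deep-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            # Rarely accessed after a month, so shift to the cheapest archive tier.
            "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```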
148. A company has a data ingestion workflow that includes the following components:
An Amazon Simple Notification Service (Amazon SNS) topic that receives notifications about new data
deliveries
The ingestion workflow occasionally fails because of network connectivity issues. When failure occurs,
the corresponding data is not ingested unless the company manually reruns the job.
What should a solutions architect do to ensure that all notifications are eventually processed?
A. Configure the Lambda function for deployment across multiple Availability Zones.
B. Modify the Lambda function's configuration to increase the CPU and memory allocations for the
function.
C. Configure the SNS topic’s retry strategy to increase both the number of retries and the wait time
between retries.
D. Configure an Amazon Simple Queue Service (Amazon SQS) queue as the on-failure destination.
Modify the Lambda function to process messages in the queue.
Answer: D
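A hedged boto3 sketch of option D (function name and queue ARN are placeholders): failed asynchronous invocations land in the queue instead of being lost.

```python
import boto3

boto3.client("lambda").put_function_event_invoke_config(
    FunctionName="ingest-notifications",
    MaximumRetryAttempts=2,
    DestinationConfig={
        # After retries are exhausted, the event is sent here for later reprocessing.
        "OnFailure": {"Destination": "arn:aws:sqs:us-east-1:111122223333:ingest-failures"}
    },
)
```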
149. A company has a service that produces event data. The company wants to use AWS to process the
event data as it is received. The data is written in a specific order that must be maintained throughout
processing. The company wants to implement a solution that minimizes operational overhead.
A. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue to hold messages. Set up an
AWS Lambda function to process messages from the queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing
payloads to process. Configure an AWS Lambda function as a subscriber.
C. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to hold messages. Set up an
AWS Lambda function to process messages from the queue independently.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic to deliver notifications containing
payloads to process. Configure an Amazon Simple Queue Service (Amazon SQS) queue as a subscriber.
Answer: A
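A minimal boto3 sketch of option A (queue and group names are placeholders); a FIFO queue name must end in .fifo, and messages that share a MessageGroupId are processed in the order they were sent.

```python
import boto3

sqs = boto3.client("sqs")

queue_url = sqs.create_queue(
    QueueName="event-data.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

# Ordering is preserved within the "events" message group.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"seq": 1}', MessageGroupId="events")
sqs.send_message(QueueUrl=queue_url, MessageBody='{"seq": 2}', MessageGroupId="events")
```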
B. Create Amazon CloudWatch dashboards to visualize the metrics and react to issues quickly.
C. Create Amazon CloudWatch Synthetics canaries to monitor the application and raise an alarm.
D. Create single Amazon CloudWatch metric alarms with multiple metric thresholds where possible.
Answer: A
References: Composite alarms determine their states by monitoring the states of other alarms. You can
**use composite alarms to reduce alarm noise**. For example, you can create a composite alarm where
the underlying metric alarms go into ALARM when they meet specific conditions. You then can set up
your composite alarm to go into ALARM and send you notifications when the underlying metric alarms
go into ALARM by configuring the underlying metric alarms never to take actions. Currently, composite
alarms can take the following actions:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_Composite_Alarm.html
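An illustrative boto3 sketch, assuming two existing metric alarms named high-cpu and high-5xx (the SNS topic is a placeholder): the composite alarm notifies once, only when both underlying alarms fire.

```python
import boto3

boto3.client("cloudwatch").put_composite_alarm(
    AlarmName="app-degraded",
    # Goes into ALARM only when both member alarms are in ALARM, reducing noise.
    AlarmRule='ALARM("high-cpu") AND ALARM("high-5xx")',
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:oncall"],
)
```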
151. A company wants to migrate its on-premises data center to AWS. According to the company's
compliance requirements, the company can use only the ap-northeast-3 Region. Company
administrators are not permitted to connect VPCs to the internet.
A. Use AWS Control Tower to implement data residency guardrails to deny internet access and deny
access to all AWS Regions except ap-northeast-3.
B. Use rules in AWS WAF to prevent internet access. Deny access to all AWS Regions except ap-
northeast-3 in the AWS account settings.
C. Use AWS Organizations to configure service control policies (SCPS) that prevent VPCs from gaining
internet access. Deny access to all AWS Regions except ap-northeast-3.
D. Create an outbound rule for the network ACL in each VPC to deny all traffic from 0.0.0.0/0. Create an
IAM policy for each user to prevent the use of any AWS Region other than ap-northeast-3.
E. Use AWS Config to activate managed rules to detect and alert for internet gateways and to detect and
alert for new resources deployed outside of ap-northeast-3.
Answer: AC
References: https://aws.amazon.com/blogs/aws/new-for-aws-control-tower-region-deny-and-guardrails-to-help-you-meet-data-residency-requirements/
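A condensed sketch of the SCP half of option C (the OU ID is a placeholder; a production guardrail would also exempt global services with a NotAction list):

```python
import boto3, json

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        # Deny any request made against a Region other than ap-northeast-3.
        "Condition": {"StringNotEquals": {"aws:RequestedRegion": "ap-northeast-3"}},
    }],
}

orgs = boto3.client("organizations")
policy = orgs.create_policy(
    Name="deny-outside-ap-northeast-3",
    Description="Region deny guardrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
orgs.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                   TargetId="ou-examplerootid111-exampleouid111")
```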
152. A company uses a three-tier web application to provide training to new employees. The application
is accessed for only 12 hours every day. The company is using an Amazon RDS for MySQL DB instance to
store information and wants to minimize costs.
A. Configure an IAM policy for AWS Systems Manager Session Manager. Create an IAM role for the
policy. Update the trust relationship of the role. Set up automatic start and stop for the DB instance.
B. Create an Amazon ElastiCache for Redis cache cluster that gives users the ability to access the data
from the cache when the DB instance is stopped. Invalidate the cache after the DB instance is started.
C. Launch an Amazon EC2 instance. Create an IAM role that grants access to Amazon RDS. Attach the
role to the EC2 instance. Configure a cron job to start and stop the EC2 instance on the desired schedule.
D. Create AWS Lambda functions to start and stop the DB instance. Create Amazon EventBridge
(Amazon CloudWatch Events) scheduled rules to invoke the Lambda functions. Configure the Lambda
functions as event targets for the rules.
Answer: D
References: https://aws.amazon.com/blogs/database/schedule-amazon-rds-stop-and-start-using-aws-lambda/
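A hedged boto3 sketch of option D (names, schedule, and ARN are placeholders): the handler stops the DB instance, and a scheduled EventBridge rule invokes it each evening; a mirror-image function and rule would start it each morning.

```python
import boto3

def handler(event, context):
    # Invoked by the scheduled rule; stops the training DB outside business hours.
    boto3.client("rds").stop_db_instance(DBInstanceIdentifier="training-db")

events = boto3.client("events")
events.put_rule(Name="stop-training-db", ScheduleExpression="cron(0 20 * * ? *)")
events.put_targets(
    Rule="stop-training-db",
    Targets=[{"Id": "stop-fn",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:stop-training-db"}],
)
```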
153. A company sells ringtones created from clips of popular songs. The files containing the ringtones
are stored in Amazon S3 Standard and are at least 128 KB in size. The company has millions of files, but
downloads are infrequent for ringtones older than 90 days. The company needs to save money on
storage while keeping the most accessed files readily available for its users.
Which action should the company take to meet these requirements MOST cost-effectively?
A. Configure S3 Standard-Infrequent Access (S3 Standard-IA) storage for the initial storage tier of the
objects.
B. Move the files to S3 Intelligent-Tiering and configure it to move objects to a less expensive storage
tier after 90 days.
C. Configure S3 inventory to manage objects and move them to S3 Standard-Infrequent Access (S3
Standard-IA) after 90 days.
D. Implement an S3 Lifecycle policy that moves the objects from S3 Standard to S3 Standard-Infrequent
Access (S3 Standard-IA) after 90 days.
Answer: B
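A minimal boto3 sketch of option B, assuming a bucket named ringtones: Intelligent-Tiering's optional archive tier picks up objects not accessed for 90 days (objects must already be stored in the Intelligent-Tiering storage class, for example at upload time or via a lifecycle transition).

```python
import boto3

boto3.client("s3").put_bucket_intelligent_tiering_configuration(
    Bucket="ringtones",
    Id="archive-after-90-days",
    IntelligentTieringConfiguration={
        "Id": "archive-after-90-days",
        "Status": "Enabled",
        # Objects with no access for 90 days move to the cheaper archive tier.
        "Tierings": [{"Days": 90, "AccessTier": "ARCHIVE_ACCESS"}],
    },
)
```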
154. A company needs to save the results from a medical trial to an Amazon S3 repository. The
repository must allow a few scientists to add new files and must restrict all other users to read-only
access. No users can have the ability to modify or delete any files in the repository. The company must
keep every file in the repository for a minimum of 1 year after its creation date.
B. Use S3 Object Lock in compliance mode with a retention period of 365 days.
C. Use an IAM role to restrict all users from deleting or changing objects in the S3 bucket. Use an S3
bucket policy to only allow the IAM role.
D. Configure the S3 bucket to invoke an AWS Lambda function every time an object is added. Configure
the function to track the hash of the saved object so that modified objects can be marked accordingly.
Answer: B
References: Compliance mode. The key difference between compliance mode and governance mode is
that in compliance mode there are NO users who can override the set retention periods or delete an
object, and that includes the AWS root account, which has the highest privileges.
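A hedged boto3 sketch of option B (bucket name is a placeholder; Object Lock must have been enabled when the bucket was created): every new object gets a 365-day compliance-mode retention by default.

```python
import boto3

boto3.client("s3").put_object_lock_configuration(
    Bucket="medical-trial-results",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        # Compliance mode: no user, including root, can shorten or remove retention.
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)
```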
155. A large media company hosts a web application on AWS. The company wants to start caching
confidential media files so that users around the world will have reliable access to the files. The content
is stored in Amazon S3 buckets. The company must deliver the content quickly, regardless of where the
requests originate geographically.
B. Deploy AWS Global Accelerator to connect the S3 buckets to the web application.
D. Use Amazon Simple Queue Service (Amazon SQS) to connect the S3 buckets to the web application.
Answer: C
156. A company produces batch data that comes from different databases. The company also produces
live stream data from network sensors and application APIs. The company needs to consolidate all the
data into one place for business analytics. The company needs to process the incoming data and then
stage the data in different Amazon S3 buckets. Teams will later run one-time queries and import the
data into a business intelligence tool to show key performance indicators (KPIs).
Which combination of steps will meet these requirements with the LEAST operational overhead?
(Choose two.)
A. Use Amazon Athena for one-time queries. Use Amazon QuickSight to create dashboards for KPIs.
B. Use Amazon Kinesis Data Analytics for one-time queries. Use Amazon QuickSight to create
dashboards for KPIs.
C. Create custom AWS Lambda functions to move the individual records from the databases to an
Amazon Redshift cluster.
D. Use an AWS Glue extract, transform, and load (ETL) job to convert the data into JSON format. Load
the data into multiple Amazon OpenSearch Service (Amazon Elasticsearch Service) clusters.
E. Use blueprints in AWS Lake Formation to identify the data that can be ingested into a data lake. Use
AWS Glue to crawl the source, extract the data, and load the data into Amazon S3 in Apache Parquet
format.
Answer: AC
References: AC is correct. Answer E also looks correct, but because the Apache Parquet format is used,
it is not the expected answer according to the AWS exam answer key.
157. A company stores data in an Amazon Aurora PostgreSQL DB cluster. The company must store all
the data for 5 years and must delete all the data after 5 years. The company also must indefinitely keep
audit logs of actions that are performed within the database. Currently, the company has automated
backups configured for Aurora.
Which combination of steps should a solutions architect take to meet these requirements? (Choose
two.)
E. Use AWS Backup to take the backups and to keep the backups for 5 years.
Answer: DE
References: https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-aurora-snapshots-can-be-managed-via-aws-backup/
AWS Backup adds Amazon Aurora database cluster snapshots as its latest protected resource
158. A solutions architect is optimizing a website for an upcoming musical event. Videos of the
performances will be streamed in real time and then will be available on demand. The event is expected
to attract a global online audience.
Which service will improve the performance of both the real-time and on-demand streaming?
A. Amazon CloudFront
C. Amazon Route 53
Answer: A
You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin.
Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over
IP, as well as for HTTP use cases that specifically require static IP addresses.
159. A company is running a publicly accessible serverless application that uses Amazon API Gateway
and AWS Lambda. The application’s traffic recently spiked due to fraudulent requests from botnets.
Which steps should a solutions architect take to block requests from unauthorized users? (Choose two.)
A. Create a usage plan with an API key that is shared with genuine users only.
B. Integrate logic within the Lambda function to ignore the requests from fraudulent IP addresses.
C. Implement an AWS WAF rule to target malicious requests and trigger actions to filter them out.
D. Convert the existing public API to a private API. Update the DNS records to redirect users to the new
API endpoint.
E. Create an IAM role for each user attempting to access the API. A user will assume the role when
making the API call.
Answer: AC
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
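A minimal boto3 sketch of option A (API ID, stage, and limits are placeholders): genuine users get an API key tied to a usage plan that throttles and caps requests. Option C's WAF rule would be configured separately on the API stage.

```python
import boto3

apigw = boto3.client("apigateway")

key = apigw.create_api_key(name="genuine-users", enabled=True)
plan = apigw.create_usage_plan(
    name="standard",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    throttle={"rateLimit": 100.0, "burstLimit": 200},
    quota={"limit": 100000, "period": "MONTH"},
)

# Bind the key to the plan; requests without a valid key are rejected.
apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")
```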
160. An ecommerce company hosts its analytics application in the AWS Cloud. The application generates
about 300 MB of data each month. The data is stored in JSON format. The company is evaluating a
disaster recovery solution to back up the data. The data must be accessible in milliseconds if it is
needed, and the data must be kept for 30 days.
B. Amazon S3 Glacier
C. Amazon S3 Standard
Answer: C
IMHO, normally Elasticsearch would be ideal here; however, the question asks for the most cost-effective
option, and S3 Standard provides millisecond access for the 30-day retention period.