
Option C is likely the most cost-effective solution given the large data size and limited internet bandwidth.

Physical data transfer and integration with the existing tape infrastructure provide efficiency benefits that keep the cost down.

Question: 584 CertyIQ


A company is deploying an application that processes large quantities of data in parallel. The company plans to
use Amazon EC2 instances for the workload. The network architecture must be configurable to prevent groups of
nodes from sharing the same underlying hardware.

Which networking solution meets these requirements?

A.Run the EC2 instances in a spread placement group.
B.Group the EC2 instances in separate accounts.
C.Configure the EC2 instances with dedicated tenancy.
D.Configure the EC2 instances with shared tenancy.

Answer: A

Explanation:

A spread placement group is a group of instances that are each placed on distinct hardware.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
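
For reference, a minimal boto3 sketch of creating and using a spread placement group (the group name, AMI ID, and instance type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Each instance in a spread placement group lands on distinct underlying hardware.
ec2.create_placement_group(GroupName="parallel-workers", Strategy="spread")

# Launch instances into the group (AMI ID and instance type are placeholders).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5.large",
    MinCount=7,
    MaxCount=7,
    Placement={"GroupName": "parallel-workers"},
)
```

Note that a spread placement group supports at most seven running instances per Availability Zone; partition placement groups cover larger groups of nodes that must not share hardware.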

Question: 585 CertyIQ


A solutions architect is designing a disaster recovery (DR) strategy to provide Amazon EC2 capacity in a failover
AWS Region. Business requirements state that the DR strategy must meet capacity in the failover Region.

Which solution will meet these requirements?

A.Purchase On-Demand Instances in the failover Region.
B.Purchase an EC2 Savings Plan in the failover Region.
C.Purchase regional Reserved Instances in the failover Region.
D.Purchase a Capacity Reservation in the failover Region.

Answer: D

Explanation:
1. A regional Reserved Instance does not reserve capacity: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-scope.html
2. Reserved Instances and Savings Plans provide a price discount only. To guarantee EC2 capacity in the failover Region, you need a Capacity Reservation.
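
A Capacity Reservation can be created ahead of time in the failover Region; a rough boto3 sketch (instance type, AZ, and count are placeholders):

```python
import boto3

# Reserve capacity in the failover Region (us-west-2 here is a placeholder).
ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.create_capacity_reservation(
    InstanceType="m5.xlarge",          # placeholder instance type
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-west-2a",
    InstanceCount=10,                  # capacity the DR plan requires
)
```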

Question: 586 CertyIQ


A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates
to the five businesses that the company owns. The company's research and development (R&D) business is
separating from the company and will need its own organization. A solutions architect creates a separate new
management account for this purpose.

What should the solutions architect do next in the new management account?
A.Have the R&D AWS account be part of both organizations during the transition.
B.Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior
organization.
C.Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account
to the new R&D AWS account.
D.Have the R&D AWS account join the new organization. Make the new management account a member of the
prior organization.

Answer: B

Explanation:
1. An account can belong to only one organization at a time, so the R&D account must first leave the prior organization and can then be invited to join the new one.
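
The flow can be scripted; a hedged sketch, assuming credentials for each account in turn (the account ID is a placeholder):

```python
import boto3

# Step 1: the R&D account leaves the prior organization
# (run with credentials from the R&D member account).
boto3.client("organizations").leave_organization()

# Step 2: invite the account from the new management account
# (run with credentials from the new management account).
new_org = boto3.client("organizations")
new_org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"}  # placeholder account ID
)

# Step 3: the R&D account accepts the invitation
# (handshake ID comes from the response to step 2):
# boto3.client("organizations").accept_handshake(HandshakeId="h-examplehandshake")
```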

Question: 587 CertyIQ


A company is designing a solution to capture customer activity in different web applications to process analytics
and make predictions. Customer activity in the web applications is unpredictable and can increase suddenly. The
company requires a solution that integrates with other web applications. The solution must include an
authorization step for security purposes.

Which solution will meet these requirements?

A.Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS)
container instance that stores the information that the company receives in an Amazon Elastic File System
(Amazon EFS) file system. Authorization is resolved at the GWLB.
B.Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores the
information that the company receives in an Amazon S3 bucket. Use an AWS Lambda function to resolve
authorization.
C.Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the
information that the company receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to
resolve authorization.
D.Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS)
container instance that stores the information that the company receives on an Amazon Elastic File System
(Amazon EFS) file system. Use an AWS Lambda function to resolve authorization.

Answer: C

Explanation:
1. https://docs.aws.amazon.com/lambda/latest/dg/services-kinesisfirehose.html
2. A Lambda authorizer on the API Gateway endpoint satisfies the authorization requirement, while Kinesis Data Firehose absorbs sudden spikes in activity and delivers the records to Amazon S3 without any stream management.
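
A minimal sketch of a REQUEST-type Lambda authorizer; the header check is a stand-in for real token validation:

```python
# Returns an IAM policy that allows or denies the execute-api:Invoke call.
def handler(event, context):
    token = event.get("headers", {}).get("authorization", "")
    effect = "Allow" if token == "expected-shared-secret" else "Deny"  # placeholder check
    return {
        "principalId": "web-app-client",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```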

Question: 588 CertyIQ


An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft
SQL Server Enterprise Edition. The company's current recovery point objective (RPO) and recovery time objective
(RTO) are 24 hours.

Which solution will meet these requirements MOST cost-effectively?

A.Create a cross-Region read replica and promote the read replica to the primary instance.
B.Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
C.Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
D.Copy automatic snapshots to another Region every 24 hours.
Answer: D

Explanation:
1. This is the most cost-effective solution because it needs no continuously running infrastructure in the second Region. Amazon RDS takes automated snapshots of your DB instances daily during the backup window; copying these snapshots to another Region every 24 hours satisfies the 24-hour RPO and RTO. The other options cost more: a cross-Region read replica or AWS DMS replication requires an always-on instance in the second Region.
2. Snapshots are always a cost-efficient way to implement a DR plan when the RPO and RTO are measured in hours.
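
A hedged sketch of the cross-Region snapshot copy (ARNs and identifiers are placeholders; an encrypted snapshot would also need a KmsKeyId for the destination Region):

```python
import boto3

# Run in the destination (DR) Region.
rds_west = boto3.client("rds", region_name="us-west-2")

rds_west.copy_db_snapshot(
    # Automated snapshots are referenced by ARN from the source Region.
    SourceDBSnapshotIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:snapshot:rds:prod-sql-2024-01-01-00-00"
    ),
    TargetDBSnapshotIdentifier="prod-sql-dr-copy",
    SourceRegion="us-east-1",
)
```

An EventBridge schedule (or an AWS Backup copy rule) would run this on a 24-hour cycle.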

Question: 589 CertyIQ


A company runs a web application on Amazon EC2 instances in an Auto Scaling group behind an Application Load
Balancer that has sticky sessions enabled. The web server currently hosts the user session state. The company
wants to ensure high availability and avoid user session state loss in the event of a web server outage.

Which solution will meet these requirements?

A.Use an Amazon ElastiCache for Memcached instance to store the session data. Update the application to use
ElastiCache for Memcached to store the session state.
B.Use Amazon ElastiCache for Redis to store the session state. Update the application to use ElastiCache for
Redis to store the session state.
C.Use an AWS Storage Gateway cached volume to store session data. Update the application to use AWS
Storage Gateway cached volume to store the session state.
D.Use Amazon RDS to store the session state. Update the application to use Amazon RDS to store the session
state.

Answer: B

Explanation:
1. Redis is correct because it provides high availability (replication with Multi-AZ failover) and data persistence, which Memcached does not.
2. B is the correct answer: use Amazon ElastiCache for Redis to store the session state and update the application accordingly. This solution is cost-effective and requires minimal development effort.
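
On the application side, session reads and writes go to Redis instead of local web server memory; a sketch, assuming the redis-py client and a placeholder ElastiCache endpoint:

```python
import json

import redis  # pip install redis

# Connect to the ElastiCache for Redis primary endpoint (hostname is a placeholder).
r = redis.Redis(host="sessions.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id: str, state: dict, ttl_seconds: int = 3600) -> None:
    # Session state now survives any single web server failure.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(state))

def load_session(session_id: str) -> dict | None:
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```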

Question: 590 CertyIQ


A company migrated a MySQL database from the company's on-premises data center to an Amazon RDS for
MySQL DB instance. The company sized the RDS DB instance to meet the company's average daily workload. Once
a month, the database performs slowly when the company runs queries for a report. The company wants to have
the ability to run reports and maintain the performance of the daily workloads.

Which solution will meet these requirements?

A.Create a read replica of the database. Direct the queries to the read replica.
B.Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the new
database.
C.Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
D.Resize the DB instance to accommodate the additional workload.

Answer: A

Explanation:
A read replica is a copy of the database that is kept in sync with the primary instance. Directing the monthly reporting queries to the read replica isolates them from the primary, so the daily workload keeps its performance and the DB instance does not have to be resized for a once-a-month peak.
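
A minimal sketch (instance identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Create a read replica dedicated to the monthly reporting queries.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mysql-prod-reporting",
    SourceDBInstanceIdentifier="mysql-prod",
)
# Point the reporting tool at the replica's endpoint; the primary
# continues to serve the daily workload undisturbed.
```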

Question: 591 CertyIQ


A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon EKS). The
application includes microservices that manage customers and place orders. The company needs to route
incoming requests to the appropriate microservices.

Which solution will meet this requirement MOST cost-effectively?

A.Use the AWS Load Balancer Controller to provision a Network Load Balancer.
B.Use the AWS Load Balancer Controller to provision an Application Load Balancer.
C.Use an AWS Lambda function to connect the requests to Amazon EKS.
D.Use Amazon API Gateway to connect the requests to Amazon EKS.

Answer: D

Explanation:
1. API Gateway is a fully managed service that makes it easy for you to create, publish, maintain, monitor, and
secure APIs at any scale. API Gateway provides an entry point to your microservices.
2. https://aws.amazon.com/blogs/containers/microservices-development-using-aws-controllers-for-kubernetes-ack-and-amazon-eks-blueprints/

Question: 592 CertyIQ


A company uses AWS and sells access to copyrighted images. The company’s global customer base needs to be
able to access these images quickly. The company must deny access to users from specific countries. The
company wants to minimize costs as much as possible.

Which solution will meet these requirements?

A.Use Amazon S3 to store the images. Turn on multi-factor authentication (MFA) and public bucket access.
Provide customers with a link to the S3 bucket.
B.Use Amazon S3 to store the images. Create an IAM user for each customer. Add the users to a group that has
permission to access the S3 bucket.
C.Use Amazon EC2 instances that are behind Application Load Balancers (ALBs) to store the images. Deploy
the instances only in the countries the company services. Provide customers with links to the ALBs for their
specific country's instances.
D.Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with geographic
restrictions. Provide a signed URL for each customer to access the data in CloudFront.

Answer: D

Explanation:
1. D. CloudFront geographic restrictions block viewers in specific countries, signed URLs limit access to paying customers, and edge caching serves the global customer base quickly at low cost.
2. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
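
Signed URLs can be generated with botocore's CloudFrontSigner; a sketch, assuming a CloudFront key pair whose ID, private key file, and distribution domain are placeholders:

```python
from datetime import datetime, timedelta

import rsa  # pip install rsa
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "K2JCJMDEHXQW5F"  # placeholder CloudFront public key ID

def rsa_signer(message: bytes) -> bytes:
    with open("private_key.pem", "rb") as f:
        return rsa.sign(message, rsa.PrivateKey.load_pkcs1(f.read()), "SHA-1")

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# URL a single customer can use until it expires; the distribution itself
# enforces the country blocklist via its geographic restriction settings.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/images/photo.jpg",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)
```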

Question: 593 CertyIQ


A solutions architect is designing a highly available Amazon ElastiCache for Redis based solution. The solutions
architect needs to ensure that failures do not result in performance degradation or loss of data locally and within
an AWS Region. The solution needs to provide high availability at the node level and at the Region level.

Which solution will meet these requirements?

A.Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
B.Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
C.Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D.Use Redis shards that contain multiple nodes with Auto Scaling turned on.

Answer: A

Explanation:

A. AOF alone cannot protect you from all failure scenarios. For example, if a node fails because of a hardware fault in an underlying physical server, ElastiCache provisions a new node on a different server; in that case the AOF is not available and cannot be used to recover the data. Multi-AZ replication groups whose shards contain multiple nodes provide high availability at both the node level and across Availability Zones within the Region.
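
A hedged sketch of such a replication group (identifiers and node type are placeholders):

```python
import boto3

elasticache = boto3.client("elasticache")

# Sharded (cluster mode enabled) replication group: multiple nodes per shard,
# replicas spread across AZs, automatic failover.
elasticache.create_replication_group(
    ReplicationGroupId="app-cache",
    ReplicationGroupDescription="HA Redis: multi-shard, Multi-AZ",
    Engine="redis",
    CacheNodeType="cache.r6g.large",
    NumNodeGroups=3,            # shards
    ReplicasPerNodeGroup=2,     # replica nodes per shard beyond the primary
    MultiAZEnabled=True,
    AutomaticFailoverEnabled=True,
)
```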

Question: 594 CertyIQ


A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its application. During the
migration testing phase, a technical team observes that the application takes a long time to launch and load
memory to become fully productive.

Which solution will reduce the launch time of the application during the next testing phase?

A.Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2 On-Demand
Instances available during the next testing phase.
B.Launch EC2 Spot Instances to support the application and to scale the application so it is available during the
next testing phase.
C.Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling warm pools
during the next testing phase.
D.Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances during the next
testing phase.

Answer: C

Explanation:

Launch the EC2 On-Demand Instances with hibernation turned on and configure EC2 Auto Scaling warm pools. Hibernation preserves the instance's memory contents on the EBS root volume, so a resumed instance skips the long application load phase, and a warm pool keeps pre-initialized instances ready for the next testing phase.
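
A sketch of the warm pool configuration, assuming an Auto Scaling group whose launch template enables hibernation (which requires an encrypted EBS root volume); names are placeholders:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Keep pre-initialized, hibernated instances ready ahead of the next test.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",
    PoolState="Hibernated",
    MinSize=4,
)
```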

Question: 595 CertyIQ


A company's applications run on Amazon EC2 instances in Auto Scaling groups. The company notices that its
applications experience sudden traffic increases on random days of the week. The company wants to maintain
application performance during sudden traffic increases.

Which solution will meet these requirements MOST cost-effectively?

A.Use manual scaling to change the size of the Auto Scaling group.
B.Use predictive scaling to change the size of the Auto Scaling group.
C.Use dynamic scaling to change the size of the Auto Scaling group.
D.Use schedule scaling to change the size of the Auto Scaling group.
Answer: C

Explanation:

Dynamic scaling changes the number of EC2 instances automatically based on the signals received, such as CPU utilization or request count. It is the right choice for a high volume of unpredictable traffic: capacity is added only while a sudden increase lasts, which makes it more cost-effective than manual, scheduled, or predictive scaling when spikes occur on random days.

https://www.developer.com/web-services/aws-auto-scaling-types-best-practices/
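
Target tracking is a common way to configure dynamic scaling; a sketch with placeholder names:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# The group grows and shrinks automatically to hold average CPU near the target.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```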

Question: 596 CertyIQ


An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly
sales event, database usage increases and causes database connection issues for the application. The traffic is
unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to
maintain performance when there is an unpredictable increase in traffic.

Which solution resolves this issue in the MOST cost-effective way?

A.Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B.Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
C.Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D.Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.

Answer: A

Explanation:
1. A. Aurora Serverless v2 scales database capacity automatically in fine-grained increments as load changes, so it absorbs the unpredictable monthly spikes without manual intervention.
2. The correct answer is A: capacity is billed as it is consumed, which costs less than permanently provisioning a larger instance for a once-a-month event.
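
A hedged sketch of provisioning such a cluster (identifiers and the capacity range are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Aurora Serverless v2 cluster: capacity floats between Min and Max ACUs.
rds.create_db_cluster(
    DBClusterIdentifier="shop-pg",
    Engine="aurora-postgresql",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,  # store the password in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 64},
)

# Instances in a Serverless v2 cluster use the special db.serverless class.
rds.create_db_instance(
    DBInstanceIdentifier="shop-pg-writer",
    DBClusterIdentifier="shop-pg",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```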

Question: 597 CertyIQ


A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The
company’s employees report issues with high latency when they begin using the application each day. The
company wants to reduce latency.

Which solution will meet these requirements?

A.Increase the API Gateway throttling limit.
B.Set up a scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the
application each day.
C.Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at the beginning
of each day.
D.Increase the Lambda function memory.

Answer: B

Explanation:

Provisioned Concurrency incurs additional costs, so it is cost-efficient to use it only when necessary. For
example, early in the morning when activity starts, or to handle recurring peak usage.
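
Provisioned concurrency is scheduled through Application Auto Scaling; a sketch, assuming a function alias my-api:live and placeholder capacities and times:

```python
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId="function:my-api:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=0,
    MaxCapacity=100,
)

# Warm the function shortly before the workday starts...
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="warm-up-morning",
    ResourceId="function:my-api:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(45 7 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 50},
)

# ...and release it in the evening to stop paying for idle concurrency.
aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="scale-down-evening",
    ResourceId="function:my-api:live",
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 19 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 0},
)
```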
Question: 598 CertyIQ
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS
Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share.
Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically
throughout the day.

Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)

A.Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
B.Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
C.Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D.Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3.
Provide access to analysts.
E.Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
F.Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.

Answer: ACF

Explanation:
1. https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-csv-home.html
   https://aws.amazon.com/blogs/aws/amazon-athena-interactive-sql-queries-for-data-in-amazon-s3/
   https://aws.amazon.com/storagegateway/faqs/
2. It should be ACF: the S3 File Gateway exposes the SMB share and lands the .csv files in Amazon S3, a Glue crawler catalogs them, and Athena gives the analysts serverless, pay-per-query SQL.
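
A sketch of the Glue crawler plus an Athena query (bucket, database, role, and SQL are placeholders):

```python
import boto3

# Crawl the .csv files that the S3 File Gateway lands in the bucket.
glue = boto3.client("glue")
glue.create_crawler(
    Name="device-data-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
    DatabaseName="device_data",
    Targets={"S3Targets": [{"Path": "s3://example-device-share/"}]},
)
glue.start_crawler(Name="device-data-crawler")

# Analysts then query the cataloged table with Athena.
athena = boto3.client("athena")
athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) FROM readings GROUP BY device_id",
    QueryExecutionContext={"Database": "device_data"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```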

Question: 599 CertyIQ


A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB instances
to build and run a payment processing application. The company will run the application in its on-premises data
center for compliance purposes.

A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with the
company's operational team to build the application.

Which activities are the responsibility of the company's operational team? (Choose three.)

A.Providing resilient power and network connectivity to the Outposts racks
B.Managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts
C.Physical security and access controls of the data center environment
D.Availability of the Outposts infrastructure including the power supplies, servers, and networking equipment
within the Outposts racks
E.Physical maintenance of Outposts components
F.Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events

Answer: ACE

Explanation:

A.Providing resilient power and network connectivity to the Outposts racks

C.Physical security and access controls of the data center environment

E.Physical maintenance of Outposts components


Question: 600 CertyIQ
A company is planning to migrate a TCP-based application into the company's VPC. The application is publicly
accessible on a nonstandard TCP port through a hardware appliance in the company's data center. This public
endpoint can process up to 3 million requests per second with low latency. The company requires the same level of
performance for the new public endpoint in AWS.

What should a solutions architect recommend to meet this requirement?

A.Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that
the application requires.
B.Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port
that the application requires.
C.Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an
Application Load Balancer as the origin.
D.Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires.
Configure AWS Lambda functions with provisioned concurrency to process the requests.

Answer: A

Explanation:

Deploy a Network Load Balancer (NLB) and configure it to be publicly accessible over the TCP port that the application requires. An NLB operates at layer 4, listens on arbitrary TCP ports, and scales to millions of requests per second with very low latency, matching the existing hardware appliance. ALBs and CloudFront handle HTTP/HTTPS only, and API Gateway does not expose raw TCP ports.
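
A sketch with placeholder subnets, VPC, and port:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing NLB listening on the application's nonstandard TCP port.
lb = elbv2.create_load_balancer(
    Name="tcp-app-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222"],
)
tg = elbv2.create_target_group(
    Name="tcp-app-targets",
    Protocol="TCP",
    Port=7443,                # placeholder nonstandard port
    VpcId="vpc-0abc1234",
    TargetType="instance",
)
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="TCP",
    Port=7443,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)
```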

Question: 601 CertyIQ


A company runs its critical database on an Amazon RDS for PostgreSQL DB instance. The company wants to
migrate to Amazon Aurora PostgreSQL with minimal downtime and data loss.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create a DB snapshot of the RDS for PostgreSQL DB instance to populate a new Aurora PostgreSQL DB
cluster.
B.Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read replica to a
new Aurora PostgreSQL DB cluster.
C.Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL DB cluster.
D.Use the pg_dump utility to back up the RDS for PostgreSQL database. Restore the backup to a new Aurora
PostgreSQL DB cluster.

Answer: B

Explanation:

Create an Aurora read replica of the RDS for PostgreSQL DB instance, wait for replication lag to reach zero, and then promote the read replica to a standalone Aurora PostgreSQL DB cluster. This minimizes downtime and data loss with the least operational overhead.
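
A hedged sketch of the migration path; ReplicationSourceIdentifier is the API-level counterpart of the console's "Create Aurora read replica" action, and all identifiers are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create an Aurora PostgreSQL cluster that replicates from the RDS instance
# (the engine version must be compatible with the source).
rds.create_db_cluster(
    DBClusterIdentifier="critical-pg-aurora",
    Engine="aurora-postgresql",
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:111122223333:db:critical-pg",
)

# Once replica lag reaches zero, promote the cluster to standalone and
# repoint the application at its writer endpoint.
rds.promote_read_replica_db_cluster(DBClusterIdentifier="critical-pg-aurora")
```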

Question: 602 CertyIQ


A company's infrastructure consists of hundreds of Amazon EC2 instances that use Amazon Elastic Block Store
(Amazon EBS) storage. A solutions architect must ensure that every EC2 instance can be recovered after a
disaster.
What should the solutions architect do to meet this requirement with the LEAST amount of effort?

A.Take a snapshot of the EBS storage that is attached to each EC2 instance. Create an AWS CloudFormation
template to launch new EC2 instances from the EBS storage.
B.Take a snapshot of the EBS storage that is attached to each EC2 instance. Use AWS Elastic Beanstalk to set
the environment based on the EC2 template and attach the EBS storage.
C.Use AWS Backup to set up a backup plan for the entire group of EC2 instances. Use the AWS Backup API or
the AWS CLI to speed up the restore process for multiple EC2 instances.
D.Create an AWS Lambda function to take a snapshot of the EBS storage that is attached to each EC2 instance
and copy the Amazon Machine Images (AMIs). Create another Lambda function to perform the restores with the
copied AMIs and attach the EBS storage.

Answer: C

Explanation:

The key reasons: AWS Backup automates backups of resources such as EBS volumes and lets you define backup policies for groups of resources, which removes the need to create backups for each instance manually. The AWS Backup API and CLI allow programmatic control of backup plans and restores, so hundreds of EC2 instances can be restored programmatically after a disaster instead of one by one. AWS Backup also cleans up old backups according to lifecycle policies, which minimizes storage costs.
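
A sketch of a tag-based backup plan (vault, role, and tag values are placeholders):

```python
import boto3

backup = boto3.client("backup")

# Daily plan covering every EC2 instance tagged backup=true.
plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "ec2-dr-plan",
    "Rules": [{
        "RuleName": "daily",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",
        "Lifecycle": {"DeleteAfterDays": 35},
    }],
})

backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-ec2",
        "IamRoleArn": "arn:aws:iam::111122223333:role/AWSBackupDefaultRole",
        "ListOfTags": [{
            "ConditionType": "STRINGEQUALS",
            "ConditionKey": "backup",
            "ConditionValue": "true",
        }],
    },
)
```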

Question: 603 CertyIQ


A company recently migrated to the AWS Cloud. The company wants a serverless solution for large-scale parallel
on-demand processing of a semistructured dataset. The data consists of logs, media files, sales transactions, and
IoT sensor data that is stored in Amazon S3. The company wants the solution to process thousands of items in the
dataset in parallel.

Which solution will meet these requirements with the MOST operational efficiency?

A.Use the AWS Step Functions Map state in Inline mode to process the data in parallel.
B.Use the AWS Step Functions Map state in Distributed mode to process the data in parallel.
C.Use AWS Glue to process the data in parallel.
D.Use several AWS Lambda functions to process the data in parallel.

Answer: B

Explanation:

With Step Functions, you can orchestrate large-scale parallel workloads to perform tasks, such as on-demand
processing of semi-structured data. These parallel workloads let you concurrently process large-scale data
sources stored in Amazon S3.

https://docs.aws.amazon.com/step-functions/latest/dg/concepts-orchestrate-large-scale-parallel-workloads.html
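
A sketch of a Distributed-mode Map state that fans out over the objects in a bucket (bucket, function, and role names are placeholders):

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "StartAt": "ProcessDataset",
    "States": {
        "ProcessDataset": {
            "Type": "Map",
            # Read the item list straight from S3.
            "ItemReader": {
                "Resource": "arn:aws:states:::s3:listObjectsV2",
                "Parameters": {"Bucket": "example-dataset-bucket"},
            },
            "ItemProcessor": {
                "ProcessorConfig": {"Mode": "DISTRIBUTED", "ExecutionType": "STANDARD"},
                "StartAt": "HandleItem",
                "States": {
                    "HandleItem": {
                        "Type": "Task",
                        "Resource": "arn:aws:states:::lambda:invoke",
                        "Parameters": {"FunctionName": "process-item", "Payload.$": "$"},
                        "End": True,
                    }
                },
            },
            "MaxConcurrency": 1000,  # thousands of items processed in parallel
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="parallel-dataset-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsMapRole",
)
```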

Question: 604 CertyIQ


A company will migrate 10 PB of data to Amazon S3 in 6 weeks. The current data center has a 500 Mbps uplink to
the internet. Other on-premises applications share the uplink. The company can use 80% of the internet bandwidth
for this one-time migration task.

Which solution will meet these requirements?


A.Configure AWS DataSync to migrate the data to Amazon S3 and to automatically verify the data.
B.Use rsync to transfer the data directly to Amazon S3.
C.Use the AWS CLI and multiple copy processes to send the data directly to Amazon S3.
D.Order multiple AWS Snowball devices. Copy the data to the devices. Send the devices to AWS to copy the
data to Amazon S3.

Answer: D

Explanation:
1. D. At 80% of the 500 Mbps uplink (400 Mbps), moving 10 PB (8 x 10^16 bits) would take about 2 x 10^8 seconds, or more than 6 years, so no online transfer can meet the 6-week deadline. Order multiple AWS Snowball devices, copy the data to them, and ship them to AWS to load into Amazon S3.
2. 10 PB = it's Snowballs.

Question: 605 CertyIQ


A company has several on-premises Internet Small Computer Systems Interface (ISCSI) network storage servers.
The company wants to reduce the number of these servers by moving to the AWS Cloud. A solutions architect
must provide low-latency access to frequently used data and reduce the dependency on on-premises servers with
a minimal number of infrastructure changes.

Which solution will meet these requirements?

A.Deploy an Amazon S3 File Gateway.
B.Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3.
C.Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes.
D.Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.

Answer: D

Explanation:
1. The key reasons: the Storage Gateway volume gateway provides iSCSI block storage, so it can replace the on-premises iSCSI servers with minimal infrastructure changes. Cached volumes keep frequently accessed data locally for low-latency access while storing the full data set in Amazon S3, which reduces the dependency on on-premises servers. Amazon EBS does not provide iSCSI access for on-premises clients, the S3 File Gateway serves file protocols rather than block storage, and stored volumes would keep all data on premises instead of in S3.
2. iSCSI = volume gateway; low-latency access to frequently used data = cached volumes.

Question: 606 CertyIQ


A solutions architect is designing an application that will allow business users to upload objects to Amazon S3. The
solution needs to maximize object durability. Objects also must be readily available at any time and for any length
of time. Users will access objects frequently within the first 30 days after the objects are uploaded, but users are
much less likely to access objects that are older than 30 days.

Which solution meets these requirements MOST cost-effectively?

A.Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Glacier after 30
days.
B.Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Standard-
Infrequent Access (S3 Standard-IA) after 30 days.
C.Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 One Zone-
Infrequent Access (S3 One Zone-IA) after 30 days.
D.Store all the objects in S3 Intelligent-Tiering with an S3 Lifecycle rule to transition the objects to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days.

Answer: B

Explanation:

Minimum Days for Transition to S3 Standard-IA or S3 One Zone-IA Before you transition objects to S3
Standard-IA or S3 One Zone-IA, you must store them for at least 30 days in Amazon S3. For example, you
cannot create a Lifecycle rule to transition objects to the S3 Standard-IA storage class one day after you
create them. Amazon S3 doesn't support this transition within the first 30 days because newer objects are
often accessed more frequently or deleted sooner than is suitable for S3 Standard-IA or S3 One Zone-IA
storage. Similarly, if you are transitioning noncurrent objects (in versioned buckets), you can transition only
objects that are at least 30 days noncurrent to S3 Standard-IA or S3 One Zone-IA storage.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
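
A sketch of the lifecycle rule (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to Standard-IA once they are 30 days old.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-uploads-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-standard-ia-after-30-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        }]
    },
)
```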

Question: 607 CertyIQ


A company has migrated a two-tier application from its on-premises data center to the AWS Cloud. The data tier is
a Multi-AZ deployment of Amazon RDS for Oracle with 12 TB of General Purpose SSD Amazon Elastic Block Store
(Amazon EBS) storage. The application is designed to process and store documents in the database as binary large
objects (blobs) with an average document size of 6 MB.

The database size has grown over time, reducing the performance and increasing the cost of storage. The company
must improve the database performance and needs a solution that is highly available and resilient.

Which solution will meet these requirements MOST cost-effectively?

A.Reduce the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to
Magnetic.
B.Increase the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to
Provisioned IOPS.
C.Create an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the object
metadata in the existing database.
D.Create an Amazon DynamoDB table. Update the application to use DynamoDB. Use AWS Database Migration
Service (AWS DMS) to migrate data from the Oracle database to DynamoDB.

Answer: C

Explanation:

C. Create an Amazon S3 bucket, update the application to store documents in it, and keep only the object metadata in the existing database. Moving the 6 MB blobs out of the database shrinks it dramatically, which restores query performance, and S3 object storage is far cheaper and more durable than growing EBS-backed database storage.
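
A hedged sketch of the store-blob-in-S3, metadata-in-database pattern; db_conn stands in for whatever Oracle client connection the application already uses, and the bucket name is a placeholder:

```python
import uuid

import boto3

s3 = boto3.client("s3")

def store_document(pdf_bytes: bytes, db_conn) -> str:
    """Put the blob in S3; keep only a pointer row in the database."""
    key = f"documents/{uuid.uuid4()}.pdf"
    s3.put_object(Bucket="example-docs-bucket", Key=key, Body=pdf_bytes)
    # The database row shrinks from ~6 MB to a few hundred bytes of metadata.
    with db_conn.cursor() as cur:
        cur.execute(
            "INSERT INTO documents (s3_key, size_bytes) VALUES (:1, :2)",
            (key, len(pdf_bytes)),
        )
    return key
```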

Question: 608 CertyIQ


A company has an application that serves clients that are deployed in more than 20.000 retail storefront locations
around the world. The application consists of backend web services that are exposed over HTTPS on port 443. The
application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The retail locations
communicate with the web application over the public internet. The company allows each retail location to register
the IP address that the retail location has been allocated by its local ISP.

The company's security team recommends to increase the security of the application endpoint by restricting
access to only the IP addresses registered by the retail locations.
What should a solutions architect do to meet these requirements?

A.Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP
addresses in the rule to include the registered IP addresses.
B.Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB.
Modify the firewall rules to include the registered IP addresses.
C.Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function on
the ALB to validate that incoming requests are from the registered IP addresses.
D.Configure the network ACL on the subnet that contains the public interface of the ALB. Update the ingress
rules on the network ACL with entries for each of the registered IP addresses.

Answer: A

Explanation:

A. Associate an AWS WAF web ACL with the ALB and use an IP set rule to allow only the registered IP addresses. WAF IP sets scale to thousands of addresses and can be updated as locations register, unlike network ACLs, which have low rule limits.
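
A hedged sketch of the WAF pieces (addresses and the ALB ARN are placeholders):

```python
import boto3

wafv2 = boto3.client("wafv2")

vis = {"SampledRequestsEnabled": True, "CloudWatchMetricsEnabled": True,
       "MetricName": "storefront-allowlist"}

# IP set holding the registered storefront addresses.
ip_set = wafv2.create_ip_set(
    Name="registered-storefronts",
    Scope="REGIONAL",                      # REGIONAL scope is used for ALBs
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32", "198.51.100.25/32"],
)["Summary"]

# Block by default; allow only traffic whose source IP is in the set.
acl = wafv2.create_web_acl(
    Name="storefront-only",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},
    VisibilityConfig=vis,
    Rules=[{
        "Name": "allow-registered-ips",
        "Priority": 0,
        "Action": {"Allow": {}},
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["ARN"]}},
        "VisibilityConfig": vis,
    }],
)["Summary"]

wafv2.associate_web_acl(
    WebACLArn=acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123",
)
```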

Question: 609 CertyIQ


A company is building a data analysis platform on AWS by using AWS Lake Formation. The platform will ingest
data from different sources such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent
access to portions of the data that contain sensitive information.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create an IAM role that includes permissions to access Lake Formation tables.
B.Create data filters to implement row-level security and cell-level security.
C.Create an AWS Lambda function that removes sensitive information before Lake Formation ingests the data.
D.Create an AWS Lambda function that periodically queries and removes sensitive information from Lake
Formation tables.

Answer: B

Explanation:
1. The key reasons: Lake Formation data filters restrict access to rows or cells in data tables based on conditions, which prevents access to the sensitive portions of the data. Data filters are implemented within Lake Formation and require no additional code, whereas Lambda functions that pre-process data or purge tables would need ongoing development and maintenance. IAM roles provide table-level permissions only, not row- or cell-level security, so data filters give granular access control with minimal configuration.
2. You can create data filters based on the values of columns in a Lake Formation table. Easy; lowest operational overhead.
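
A hedged sketch, assuming placeholder catalog, table, and filter values:

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Row- and cell-level filter: hide sensitive columns and restrict rows.
lakeformation.create_data_cells_filter(TableData={
    "TableCatalogId": "111122223333",
    "DatabaseName": "analytics",
    "TableName": "customers",
    "Name": "customers-no-pii",
    "RowFilter": {"FilterExpression": "region = 'public'"},
    "ColumnWildcard": {"ExcludedColumnNames": ["ssn", "email"]},
})
```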

Question: 610 CertyIQ


A company deploys Amazon EC2 instances that run in a VPC. The EC2 instances load source data into Amazon S3
buckets so that the data can be processed in the future. According to compliance laws, the data must not be
transmitted over the public internet. Servers in the company's on-premises data center will consume the output
from an application that runs on the EC2 instances.

Which solution will meet these requirements?

A.Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection between the
company and the VPC.
B.Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-
premises network and the VPC.
C.Set up an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN
connection between the company and the VPC.
D.Set up proxy EC2 instances that have routes to NAT gateways. Configure the proxy EC2 instances to fetch S3
data and feed the application instances.

Answer: B

Explanation:

Gateway VPC endpoint = access to Amazon S3 without traversing the public internet. AWS Direct Connect = private connectivity between the on-premises data center and the VPC, so the output data also stays off the internet.
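
A sketch of creating the gateway endpoint (VPC, route table, and Region are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint adds an S3 route to the chosen route tables, so the
# EC2 instances reach S3 privately.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0def5678"],
)
```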

Question: 611 CertyIQ


A company has an application with a REST-based interface that allows data to be received in near-real time from a
third-party vendor. Once received, the application processes and stores the data for further analysis. The
application is running on Amazon EC2 instances.

The third-party vendor has received many 503 Service Unavailable Errors when sending data to the application.
When the data volume spikes, the compute capacity reaches its maximum limit and the application is unable to
process all requests.

Which design should a solutions architect recommend to provide a more scalable solution?

A.Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
B.Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the
third-party vendor.
C.Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto
Scaling group behind an Application Load Balancer.
D.Repackage the application as a container. Deploy the application using Amazon Elastic Container Service
(Amazon ECS) using the EC2 launch type with an Auto Scaling group.

Answer: A

Explanation:

The key reasons: Kinesis Data Streams provides an auto-scaling ingestion stream that can absorb large spikes in streaming data, which removes the bottleneck at the point of ingestion, and AWS Lambda processes and stores the records in a scalable, serverless manner that avoids EC2 capacity limits. API Gateway with a quota would throttle the vendor rather than scale, SNS is for event notifications rather than high-volume data ingestion, and ECS on the EC2 launch type still depends on EC2 capacity.
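
A sketch of the Lambda consumer; records arrive base64-encoded via the Kinesis event source mapping, and process_and_store is a hypothetical downstream function:

```python
import base64
import json

def handler(event, context):
    # The event source mapping delivers records in batches.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        process_and_store(payload)  # hypothetical processing/storage step

def process_and_store(item: dict) -> None:
    ...
```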

Question: 612 CertyIQ


A company has an application that runs on Amazon EC2 instances in a private subnet. The application needs to
process sensitive information from an Amazon S3 bucket. The application must not use the internet to connect to
the S3 bucket.

Which solution will meet these requirements?

A.Configure an internet gateway. Update the S3 bucket policy to allow access from the internet gateway.
Update the application to use the new internet gateway.
B.Configure a VPN connection. Update the S3 bucket policy to allow access from the VPN connection. Update
the application to use the new VPN connection.
C.Configure a NAT gateway. Update the S3 bucket policy to allow access from the NAT gateway. Update the
application to use the new NAT gateway.
D.Configure a VPC endpoint. Update the S3 bucket policy to allow access from the VPC endpoint. Update the
application to use the new VPC endpoint.

Answer: D

Explanation:
1. Configure a VPC endpoint for Amazon S3, update the S3 bucket policy to allow access only from the VPC endpoint, and update the application to use the endpoint. VPC endpoints provide private connectivity from a VPC to AWS services such as S3 without an internet gateway, so the application in the private subnet reaches the bucket without ever touching the internet.
2. VPC endpoint for S3.
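
The "allow access from the VPC endpoint" part is typically a bucket policy condition on aws:SourceVpce; a sketch with placeholder names:

```python
import json

import boto3

s3 = boto3.client("s3")

# Deny any S3 access to the bucket that does not arrive through the endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-sensitive-bucket",
            "arn:aws:s3:::example-sensitive-bucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0abc1234"}},
    }],
}
s3.put_bucket_policy(Bucket="example-sensitive-bucket", Policy=json.dumps(policy))
```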

Question: 613 CertyIQ


A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to run a container application. The EKS cluster
stores sensitive information in the Kubernetes secrets object. The company wants to ensure that the information is
encrypted.

Which solution will meet these requirements with the LEAST operational overhead?

A.Use the container application to encrypt the information by using AWS Key Management Service (AWS KMS).
B.Enable secrets encryption in the EKS cluster by using AWS Key Management Service (AWS KMS).
C.Implement an AWS Lambda function to encrypt the information by using AWS Key Management Service
(AWS KMS).
D.Use AWS Systems Manager Parameter Store to encrypt the information by using AWS Key Management
Service (AWS KMS).

Answer: B

Explanation:

Enabling secrets encryption in the EKS cluster with AWS KMS is the approach with the least operational overhead. When secrets encryption is enabled, AWS KMS encrypts the Kubernetes secrets before they are stored in the cluster, and no changes to the container application and no additional Lambda functions are needed.
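
A sketch (cluster name and key ARN are placeholders):

```python
import boto3

eks = boto3.client("eks")

# Turn on envelope encryption of Kubernetes secrets with a KMS key.
eks.associate_encryption_config(
    clusterName="prod-cluster",
    encryptionConfig=[{
        "resources": ["secrets"],
        "provider": {"keyArn": "arn:aws:kms:us-east-1:111122223333:key/abcd-ef01"},
    }],
)
```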

Question: 614 CertyIQ


A company is designing a new multi-tier web application that consists of the following components:

•Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
•An Amazon RDS DB instance for data storage

A solutions architect needs to limit access to the application servers so that only the web servers can access them.

Which solution will meet these requirements?


A.Deploy AWS PrivateLink in front of the application servers. Configure the network ACL to allow only the web
servers to access the application servers.
B.Deploy a VPC endpoint in front of the application servers. Configure the security group to allow only the web
servers to access the application servers.
C.Deploy a Network Load Balancer with a target group that contains the application servers' Auto Scaling
group. Configure the network ACL to allow only the web servers to access the application servers.
D.Deploy an Application Load Balancer with a target group that contains the application servers' Auto Scaling
group. Configure the security group to allow only the web servers to access the application servers.

Answer: D

Explanation:
1. The key reasons: an Application Load Balancer (ALB) directs traffic to the application servers, and security groups provide instance-level access control, so the application tier's security group can allow ingress only from the web tier. Network ACLs operate at the subnet level and are less flexible than security groups for instance-level control. VPC endpoints provide private access to AWS services, not between EC2 instances, and AWS PrivateLink connects VPCs, which is not needed in this single-VPC scenario.
2. ALB with a security group is the simplest solution.
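
A sketch of the security group rule (group IDs and the application port are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# App-tier security group admits traffic only from the web-tier security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111",  # application tier
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{
            "GroupId": "sg-0bbb2222",  # web tier
            "Description": "web servers only",
        }],
    }],
)
```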

Question: 615 CertyIQ


A company runs a critical, customer-facing application on Amazon Elastic Kubernetes Service (Amazon EKS). The
application has a microservices architecture. The company needs to implement a solution that collects,
aggregates, and summarizes metrics and logs from the application in a centralized location.

Which solution meets these requirements?

A.Run the Amazon CloudWatch agent in the existing EKS cluster. View the metrics and logs in the CloudWatch
console.
B.Run AWS App Mesh in the existing EKS cluster. View the metrics and logs in the App Mesh console.
C.Configure AWS CloudTrail to capture data events. Query CloudTrail by using Amazon OpenSearch Service.
D.Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in
the CloudWatch console.

Answer: D

Explanation:
1. The key reasons: CloudWatch Container Insights automatically collects, aggregates, and summarizes metrics and logs from containers running in EKS clusters, giving visibility into resource utilization, application performance, and microservice interactions. The metrics and logs are stored in CloudWatch, and the CloudWatch console allows querying, filtering, and visualizing them in one centralized place.
2. This is exactly what CloudWatch Container Insights is for.
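
One low-overhead way to enable Container Insights is the Amazon CloudWatch Observability EKS add-on; a hedged sketch, assuming the cluster's node role already carries the CloudWatchAgentServerPolicy and the cluster name is a placeholder:

```python
import boto3

eks = boto3.client("eks")

# The add-on deploys the CloudWatch agent and Fluent Bit that feed
# Container Insights.
eks.create_addon(
    clusterName="prod-cluster",
    addonName="amazon-cloudwatch-observability",
)
```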

Question: 616 CertyIQ


A company has deployed its newest product on AWS. The product runs in an Auto Scaling group behind a Network
Load Balancer. The company stores the product’s objects in an Amazon S3 bucket.

The company recently experienced malicious attacks against its systems. The company needs a solution that
continuously monitors for malicious activity in the AWS account, workloads, and access patterns to the S3 bucket.
The solution must also report suspicious activity and display the information on a dashboard.
Which solution will meet these requirements?

A.Configure Amazon Macie to monitor and report findings to AWS Config.
B.Configure Amazon Inspector to monitor and report findings to AWS CloudTrail.
C.Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.
D.Configure AWS Config to monitor and report findings to Amazon EventBridge.

Answer: C

Explanation:

The key reasons: Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior by analyzing AWS CloudTrail events, VPC Flow Logs, and DNS logs. GuardDuty can detect threats such as EC2 instance or S3 bucket compromise, known-malicious IP addresses, and unusual API calls. Findings can be sent to AWS Security Hub, which provides a centralized security dashboard and alerting. Amazon Macie and Amazon Inspector do not monitor this breadth of activity; they focus on data security and application vulnerabilities respectively, and AWS Config monitors resource configuration changes, not malicious activity.
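
A sketch of enabling both services in an account and Region:

```python
import boto3

# Turn on GuardDuty, including S3 protection for the product's bucket activity.
guardduty = boto3.client("guardduty")
guardduty.create_detector(
    Enable=True,
    DataSources={"S3Logs": {"Enable": True}},
)

# Security Hub aggregates GuardDuty findings onto a central dashboard.
securityhub = boto3.client("securityhub")
securityhub.enable_security_hub(EnableDefaultStandards=True)
```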

Question: 617 CertyIQ


A company wants to migrate an on-premises data center to AWS. The data center hosts a storage server that
stores data in an NFS-based file system. The storage server holds 200 GB of data. The company needs to migrate
the data without interruption to existing services. Multiple resources in AWS must be able to access the data by
using the NFS protocol.

Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)

A.Create an Amazon FSx for Lustre file system.
B.Create an Amazon Elastic File System (Amazon EFS) file system.
C.Create an Amazon S3 bucket to receive the data.
D.Manually use an operating system copy command to push the data into the AWS destination.
E.Install an AWS DataSync agent in the on-premises data center. Use a DataSync task between the on-premises location and AWS.

Answer: BE

Explanation:
1. Amazon EFS provides a scalable, high-performance NFS file system that multiple AWS resources can access, and AWS DataSync can migrate the data from the on-premises NFS server to EFS without interrupting existing services because it syncs changed data incrementally. Together, EFS and DataSync are a cost-optimized approach compared with S3 or FSx for this NFS requirement, and manually copying 200 GB would be slow and risky by comparison.
2. NFS file system = EFS; use DataSync for the migration with NFS support.
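
A hedged sketch of the DataSync wiring (hostnames and ARNs are placeholders):

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export, reached through the deployed agent.
src = datasync.create_location_nfs(
    ServerHostname="storage01.corp.example.com",
    Subdirectory="/export/data",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc1234"
    ]},
)

# Destination: the EFS file system that AWS resources will mount over NFS.
dst = datasync.create_location_efs(
    EfsFilesystemArn="arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0abc",
    Ec2Config={
        "SubnetArn": "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc",
        "SecurityGroupArns": ["arn:aws:ec2:us-east-1:111122223333:security-group/sg-0abc"],
    },
)

datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="nfs-to-efs-migration",
)
```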

Question: 618 CertyIQ


A company wants to use Amazon FSx for Windows File Server for its Amazon EC2 instances that have an SMB file
share mounted as a volume in the us-east-1 Region. The company has a recovery point objective (RPO) of 5 minutes
for planned system maintenance or unplanned service disruptions. The company needs to replicate the file system
to the us-west-2 Region. The replicated data must not be deleted by any user for 5 years.
Which solution will meet these requirements?

A.Create an FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment type. Use
AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
B.Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use
AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in governance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
C.Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use
AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
D.Create an FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment type. Use
AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in governance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.

Answer: C

Explanation:

Create the FSx for Windows File Server file system in us-east-1 with a Multi-AZ deployment for resilience, and use AWS Backup with a daily backup plan that copies the backups to us-west-2. Configure AWS Backup Vault Lock in compliance mode on the target vault with a minimum retention of 5 years: compliance mode cannot be changed or bypassed by any user, including the root user, whereas governance mode can be overridden by users with sufficient permissions.
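
A hedged sketch of the vault lock step in us-west-2 (names and windows are placeholders; 1825 days is 5 years):

```python
import boto3

backup = boto3.client("backup", region_name="us-west-2")

backup.create_backup_vault(BackupVaultName="fsx-dr-vault")

# Compliance mode: after the cooling-off window expires, the lock is
# immutable, so no user can shorten retention below 5 years.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="fsx-dr-vault",
    MinRetentionDays=1825,     # 5 years
    ChangeableForDays=3,       # lock becomes immutable after this window
)
```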

Question: 619 CertyIQ


A solutions architect is designing a security solution for a company that wants to provide developers with
individual AWS accounts through AWS Organizations, while also maintaining standard security controls. Because
the individual developers will have AWS account root user-level access to their own accounts, the solutions
architect wants to ensure that the mandatory AWS CloudTrail configuration that is applied to new developer
accounts is not modified.

Which action meets these requirements?

A.Create an IAM policy that prohibits changes to CloudTrail. and attach it to the root user.
B.Create a new trail in CloudTrail from within the developer accounts with the organization trails option
enabled.
C.Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it the developer
accounts.
D.Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon
Resource Name (ARN) in the management account.

Answer: C

Explanation:

For Organizations to restrict users in member accounts, including the account root user, use an SCP. IAM policies cannot be attached to the root user, and a trail created inside a developer account could still be modified by that account's root user.
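
A sketch of such an SCP (the action list and OU ID are illustrative placeholders):

```python
import json

import boto3

organizations = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": [
            "cloudtrail:StopLogging",
            "cloudtrail:DeleteTrail",
            "cloudtrail:UpdateTrail",
            "cloudtrail:PutEventSelectors",
        ],
        "Resource": "*",
    }],
}

policy = organizations.create_policy(
    Name="DenyCloudTrailChanges",
    Description="Protect the mandatory CloudTrail configuration",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to the OU that holds the developer accounts.
organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-ab12-cdef3456",
)
```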

Question: 620 CertyIQ


A company is planning to deploy a business-critical application in the AWS Cloud. The application requires durable
storage with consistent, low-latency performance.

Which type of storage should a solutions architect recommend to meet these requirements?

A.Instance store volume
B.Amazon ElastiCache for Memcached cluster
C.Provisioned IOPS SSD Amazon Elastic Block Store (Amazon EBS) volume
D.Throughput Optimized HDD Amazon Elastic Block Store (Amazon EBS) volume

Answer: C

Explanation:

Durable storage excludes A and B. Low-latency excludes D. Choose C.
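
A sketch (AZ, size, and IOPS are placeholders sized to the workload):

```python
import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS volume: durable, with consistent low-latency performance.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,          # GiB
    VolumeType="io2",
    Iops=16000,
    Encrypted=True,
)
```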

Question: 621 CertyIQ


An online photo-sharing company stores its photos in an Amazon S3 bucket that exists in the us-west-1 Region.
The company needs to store a copy of all new photos in the us-east-1 Region.

Which solution will meet this requirement with the LEAST operational effort?

A.Create a second S3 bucket in us-east-1. Use S3 Cross-Region Replication to copy photos from the existing S3
bucket to the second S3 bucket.
B.Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in
the CORS rule's AllowedOrigin element.
C.Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle rule to save
photos into the second S3 bucket.
D.Create a second S3 bucket in us-east-1. Configure S3 event notifications on object creation and update
events to invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.

Answer: A

Explanation:

S3 Cross-Region Replication automatically copies new objects added to the source bucket to the destination bucket in a different Region. It continuously replicates new photos without manual copying or Lambda triggers. CORS only enables cross-origin browser access and does not copy objects, Lifecycle rules do not move data between Regions, and a Lambda-based copy pipeline requires custom code and logic to handle the copying. Cross-Region Replication therefore minimizes operational effort.
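
A sketch of the replication setup; versioning must be enabled on both buckets first (bucket names and the IAM role are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on both source and destination buckets.
for bucket in ("photos-usw1", "photos-use1"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate every new object to the us-east-1 bucket.
s3.put_bucket_replication(
    Bucket="photos-usw1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-crr-role",
        "Rules": [{
            "ID": "copy-to-us-east-1",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::photos-use1"},
        }],
    },
)
```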

Question: 622 CertyIQ


A company is creating a new web application for its subscribers. The application will consist of a static single page
and a persistent database layer. The application will have millions of users for 4 hours in the morning, but the
application will have only a few thousand users during the rest of the day. The company's data architects have
requested the ability to rapidly evolve their schema.

Which solutions will meet these requirements and provide the MOST scalability? (Choose two.)

A.Deploy Amazon DynamoDB as the database solution. Provision on-demand capacity.
B.Deploy Amazon Aurora as the database solution. Choose the serverless DB engine mode.
C.Deploy Amazon DynamoDB as the database solution. Ensure that DynamoDB auto scaling is enabled.
D.Deploy the static content into an Amazon S3 bucket. Provision an Amazon CloudFront distribution with the S3
bucket as the origin.
E.Deploy the web servers for static content across a fleet of Amazon EC2 instances in Auto Scaling groups.
Configure the instances to periodically refresh the content from an Amazon Elastic File System (Amazon EFS)
volume.

Answer: CD

Explanation:

The key reasons: DynamoDB auto scaling changes provisioned capacity dynamically based on traffic patterns, which handles the large morning spike and the low traffic for the rest of the day, and DynamoDB's flexible schema suits the data architects' need to evolve it rapidly. S3 combined with CloudFront provides highly scalable hosting for the static single page, with edge caching improving performance. Aurora Serverless could be an option but may not scale as seamlessly as DynamoDB to such a spike in users, and EC2 Auto Scaling groups add complexity compared with S3 and CloudFront for static content hosting.
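
A sketch of DynamoDB auto scaling for reads via Application Auto Scaling (table name and limits are placeholders; write capacity is configured the same way):

```python
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/subscribers",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=25,
    MaxCapacity=40000,
)

# Track 70% consumed-capacity utilization through the morning surge.
aas.put_scaling_policy(
    PolicyName="subscribers-read-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/subscribers",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```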

Question: 623 CertyIQ


A company uses Amazon API Gateway to manage its REST APIs that third-party service providers access. The
company must protect the REST APIs from SQL injection and cross-site scripting attacks.

What is the MOST operationally efficient solution that meets these requirements?

A.Configure AWS Shield.
B.Configure AWS WAF.
C.Set up API Gateway with an Amazon CloudFront distribution. Configure AWS Shield in CloudFront.
D.Set up API Gateway with an Amazon CloudFront distribution. Configure AWS WAF in CloudFront.

Answer: B

Explanation:
1. B. Configure AWS WAF.
2. SQL injection and cross-site scripting = AWS WAF, so either B or D. Both work, but the question gives no reason to add CloudFront, so associating a web ACL directly with the API Gateway stage is the most operationally efficient. The answer is B.

Question: 624 CertyIQ


A company wants to provide users with access to AWS resources. The company has 1,500 users and manages their
access to on-premises resources through Active Directory user groups on the corporate network. However, the
company does not want users to have to maintain another identity to access the resources. A solutions architect
must manage user access to the AWS resources while preserving access to the on-premises resources.

What should the solutions architect do to meet these requirements?

A.Create an IAM user for each user in the company. Attach the appropriate policies to each user.
B.Use Amazon Cognito with an Active Directory user pool. Create roles with the appropriate policies attached.
C.Define cross-account roles with the appropriate policies attached. Map the roles to the Active Directory
groups.
D.Configure Security Assertion Markup Language (SAML) 2.0-based federation. Create roles with the
appropriate policies attached. Map the roles to the Active Directory groups.

Answer: D
Explanation:

Configure SAML 2.0-based federation, create roles with the appropriate policies attached, and map the roles to the Active Directory groups. Federation lets employees sign in with their existing AD identities, so they do not have to maintain a second identity.

Question: 625 CertyIQ


A company is hosting a website behind multiple Application Load Balancers. The company has different
distribution rights for its content around the world. A solutions architect needs to ensure that users are served the
correct content without violating distribution rights.

Which configuration should the solutions architect choose to meet these requirements?

A.Configure Amazon CloudFront with AWS WAF.
B.Configure Application Load Balancers with AWS WAF.
C.Configure Amazon Route 53 with a geolocation policy.
D.Configure Amazon Route 53 with a geoproximity routing policy.

Answer: C

Explanation:

Geolocation routing returns resources based on the country the user queries from, which maps directly onto per-country distribution rights; geoproximity routing is based on distance and cannot enforce rights boundaries.

Reference:

https://aws.amazon.com/about-aws/whats-new/2014/07/31/amazon-route-53-announces-domain-name-registration-geo-routing-and-lower-pricing/
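
A sketch of geolocation records (zone IDs, domain, and ALB targets are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# One record per distribution territory plus a default ("*") catch-all.
route53.change_resource_record_sets(
    HostedZoneId="Z1D633PJN98FT9",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A", "SetIdentifier": "europe",
            "GeoLocation": {"ContinentCode": "EU"},
            "AliasTarget": {"HostedZoneId": "Z32O12XQLNTSW2",
                            "DNSName": "eu-alb-123.eu-west-1.elb.amazonaws.com",
                            "EvaluateTargetHealth": True}}},
        {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": "www.example.com", "Type": "A", "SetIdentifier": "default",
            "GeoLocation": {"CountryCode": "*"},
            "AliasTarget": {"HostedZoneId": "Z35SXDOTRQ7X7K",
                            "DNSName": "us-alb-456.us-east-1.elb.amazonaws.com",
                            "EvaluateTargetHealth": True}}},
    ]},
)
```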

Question: 626 CertyIQ


A company stores its data on premises. The amount of data is growing beyond the company's available capacity.

The company wants to migrate its data from the on-premises location to an Amazon S3 bucket. The company
needs a solution that will automatically validate the integrity of the data after the transfer.

Which solution will meet these requirements?

A.Order an AWS Snowball Edge device. Configure the Snowball Edge device to perform the online data transfer
to an S3 bucket.
B.Deploy an AWS DataSync agent on premises. Configure the DataSync agent to perform the online data
transfer to an S3 bucket.
C.Create an Amazon S3 File Gateway on premises. Configure the S3 File Gateway to perform the online data
transfer to an S3 bucket.
D.Configure an accelerator in Amazon S3 Transfer Acceleration on premises. Configure the accelerator to
perform the online data transfer to an S3 bucket.

Answer: B

Explanation:

Deploy an AWS DataSync agent on premises and configure a DataSync task to transfer the data to an S3 bucket. DataSync automatically verifies the integrity of the data during and after the transfer, which is the stated requirement.
Question: 627 CertyIQ
A company wants to migrate two DNS servers to AWS. The servers host a total of approximately 200 zones and
receive 1 million requests each day on average. The company wants to maximize availability while minimizing the
operational overhead that is related to the management of the two servers.

What should a solutions architect recommend to meet these requirements?

A.Create 200 new hosted zones in the Amazon Route 53 console. Import zone files.
B.Launch a single large Amazon EC2 instance. Import zone files. Configure Amazon CloudWatch alarms and
notifications to alert the company about any downtime.
C.Migrate the servers to AWS by using AWS Server Migration Service (AWS SMS). Configure Amazon
CloudWatch alarms and notifications to alert the company about any downtime.
D.Launch an Amazon EC2 instance in an Auto Scaling group across two Availability Zones. Import zone files. Set
the desired capacity to 1 and the maximum capacity to 3 for the Auto Scaling group. Configure scaling alarms
to scale based on CPU utilization.

Answer: A

Explanation:

Create 200 new hosted zones in the Amazon Route 53 console and import the zone files. Route 53 is a fully managed, highly available DNS service, so this maximizes availability with no servers to operate.

Reference:

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html

Question: 628 CertyIQ


A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's
applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The
company wants to report on incomplete multipart uploads for cost compliance purposes.

Which solution will meet these requirements with the LEAST operational overhead?

A.Configure AWS Config with a rule to report the incomplete multipart upload object count.
B.Create a service control policy (SCP) to report the incomplete multipart upload object count.
C.Configure S3 Storage Lens to report the incomplete multipart upload object count.
D.Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.

Answer: C

Explanation:

Configure S3 Storage Lens to report the incomplete multipart upload object count.

Reference:

https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-
multipart-uploads-to-lower-amazon-s3-costs/

Question: 629 CertyIQ


A company runs a production database on Amazon RDS for MySQL. The company wants to upgrade the database
version for security compliance reasons. Because the database contains critical data, the company wants a quick
solution to upgrade and test functionality without losing any data.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create an RDS manual snapshot. Upgrade to the new version of Amazon RDS for MySQL.
B.Use native backup and restore. Restore the data to the upgraded new version of Amazon RDS for MySQL.
C.Use AWS Database Migration Service (AWS DMS) to replicate the data to the upgraded new version of
Amazon RDS for MySQL.
D.Use Amazon RDS Blue/Green Deployments to deploy and test production changes.

Answer: D

Explanation:

Use Amazon RDS Blue/Green Deployments to deploy and test production changes.
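
As a rough sketch of how this could be staged with boto3 (the source ARN and target engine version are
hypothetical placeholders):

    import boto3

    rds = boto3.client("rds")

    # Stage a green (upgraded) copy of the production instance for testing.
    deployment = rds.create_blue_green_deployment(
        BlueGreenDeploymentName="mysql-upgrade-test",
        Source="arn:aws:rds:us-east-1:111122223333:db:prod-mysql",
        TargetEngineVersion="8.0.35",  # hypothetical target version
    )
    print(deployment["BlueGreenDeployment"]["Status"])

    # After testing the green environment, a switchover promotes it with
    # minimal downtime (switchover_blue_green_deployment).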

Question: 630 CertyIQ


A solutions architect is creating a data processing job that runs once daily and can take up to 2 hours to complete.
If the job is interrupted, it has to restart from the beginning.

How should the solutions architect address this issue in the MOST cost-effective manner?

A.Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.
B.Create an AWS Lambda function triggered by an Amazon EventBridge scheduled event.
C.Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge
scheduled event.
D.Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon
EventBridge scheduled event.

Answer: C

Explanation:

Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge
scheduled event.

Question: 631 CertyIQ


A social media company wants to store its database of user profiles, relationships, and interactions in the AWS
Cloud. The company needs an application to monitor any changes in the database. The application needs to
analyze the relationships between the data entities and to provide recommendations to users.

Which solution will meet these requirements with the LEAST operational overhead?

A.Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes in the
database.
B.Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the database.
C.Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Amazon Kinesis Data
Streams to process changes in the database.
D.Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Neptune Streams to
process changes in the database.
Answer: B

Explanation:

Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the database.

Question: 632 CertyIQ


A company is creating a new application that will store a large amount of data. The data will be analyzed hourly
and will be modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones.
The needed amount of storage space will continue to grow for the next 6 months.

Which storage solution should a solutions architect recommend to meet these requirements?

A.Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application
instances.
B.Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the
application instances.
C.Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the
application instances.
D.Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the
application instances.

Answer: C

Explanation:

Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the
application instances.

Question: 633 CertyIQ


A company manages an application that stores data on an Amazon RDS for PostgreSQL Multi-AZ DB instance.
Increases in traffic are causing performance problems. The company determines that database queries are the
primary reason for the slow performance.

What should a solutions architect do to improve the application's performance?

A.Serve read traffic from the Multi-AZ standby replica.


B.Configure the DB instance to use Transfer Acceleration.
C.Create a read replica from the source DB instance. Serve read traffic from the read replica.
D.Use Amazon Kinesis Data Firehose between the application and Amazon RDS to increase the concurrency of
database requests.

Answer: C

Explanation:

Create a read replica from the source DB instance. Serve read traffic from the read replica.

Reference:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
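
A minimal boto3 sketch of creating the replica; the instance identifiers are hypothetical placeholders:

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the primary PostgreSQL instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica-1",
        SourceDBInstanceIdentifier="app-db-primary",
    )
    # The application's read-only queries are then pointed at the replica's
    # endpoint, leaving the primary instance to handle writes.
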
Question: 634 CertyIQ
A company collects 10 GB of telemetry data daily from various machines. The company stores the data in an
Amazon S3 bucket in a source data account.

The company has hired several consulting agencies to use this data for analysis. Each agency needs read access to
the data for its analysts. The company must share the data from the source data account by choosing a solution
that maximizes security and operational efficiency.

Which solution will meet these requirements?

A.Configure S3 global tables to replicate data for each agency.


B.Make the S3 bucket public for a limited time. Inform only the agencies.
C.Configure cross-account access for the S3 bucket to the accounts that the agencies own.
D.Set up an IAM user for each analyst in the source data account. Grant each user access to the S3 bucket.

Answer: C

Explanation:

Configure cross-account access for the S3 bucket to the accounts that the agencies own.
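
A sketch of the kind of bucket policy this involves, applied from the source data account; the bucket name and
the agency account ID are hypothetical placeholders:

    import json

    import boto3

    s3 = boto3.client("s3")

    # Grant one agency's account read-only access to the bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AgencyReadOnly",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    "arn:aws:s3:::source-telemetry-data",
                    "arn:aws:s3:::source-telemetry-data/*",
                ],
            }
        ],
    }
    s3.put_bucket_policy(Bucket="source-telemetry-data", Policy=json.dumps(policy))

Each agency then grants its own analysts access through IAM in its own account, which keeps user management
out of the source data account.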

Question: 635 CertyIQ


A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares.
Applications that run on Amazon EC2 instances access the file shares. The company needs a storage disaster
recovery (DR) solution in a secondary Region. The data that is replicated in the secondary Region needs to be
accessed by using the same protocols as the primary Region.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the
secondary Region.
B.Create a backup of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the secondary
Region. Create a new FSx for ONTAP instance from the backup.
C.Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate data from
the primary Region to the secondary Region.
D.Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume.
Replicate the volume to the secondary Region.

Answer: C

Explanation:

Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate data from
the primary Region to the secondary Region.

Question: 636 CertyIQ


A development team is creating an event-based application that uses AWS Lambda functions. Events will be
generated when files are added to an Amazon S3 bucket. The development team currently has Amazon Simple
Notification Service (Amazon SNS) configured as the event target from Amazon S3.
What should a solutions architect do to process the events from Amazon S3 in a scalable way?

A.Create an SNS subscription that processes the event in Amazon Elastic Container Service (Amazon ECS)
before the event runs in Lambda.
B.Create an SNS subscription that processes the event in Amazon Elastic Kubernetes Service (Amazon EKS)
before the event runs in Lambda
C.Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure
the SQS queue to trigger a Lambda function.
D.Create an SNS subscription that sends the event to AWS Server Migration Service (AWS SMS). Configure the
Lambda function to poll from the SMS event.

Answer: C

Explanation:

Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure
the SQS queue to trigger a Lambda function.
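
A minimal boto3 sketch of wiring the pieces together; all ARNs and names are hypothetical placeholders, and
the SQS queue's access policy must also allow the SNS topic to send messages (not shown):

    import boto3

    sns = boto3.client("sns")
    lambda_client = boto3.client("lambda")

    # Fan the S3 event notifications from SNS into SQS.
    sns.subscribe(
        TopicArn="arn:aws:sns:us-east-1:111122223333:s3-events",
        Protocol="sqs",
        Endpoint="arn:aws:sqs:us-east-1:111122223333:s3-events-queue",
    )

    # Let the queue trigger the Lambda function in batches.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:111122223333:s3-events-queue",
        FunctionName="process-s3-event",
        BatchSize=10,
    )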

Question: 637 CertyIQ


A solutions architect is designing a new service behind Amazon API Gateway. The request patterns for the service
will be unpredictable and can change suddenly from 0 requests to over 500 per second. The total size of the data
that needs to be persisted in a backend database is currently less than 1 GB with unpredictable future growth.
Data can be queried using simple key-value requests.

Which combination of AWS services would meet these requirements? (Choose two.)

A.AWS Fargate
B.AWS Lambda
C.Amazon DynamoDB
D.Amazon EC2 Auto Scaling
E.MySQL-compatible Amazon Aurora

Answer: BC

Explanation:

B.AWS Lambda.

C. Amazon DynamoDB.
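
The combination is serverless end to end: API Gateway invokes Lambda, and DynamoDB in on-demand capacity
mode absorbs spikes from 0 to over 500 requests per second without capacity planning. A minimal boto3 sketch
of such a table (table and key names are hypothetical placeholders):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Simple key-value table billed per request (on-demand mode).
    dynamodb.create_table(
        TableName="service-items",
        AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )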

Question: 638 CertyIQ


A company collects and shares research data with the company's employees all over the world. The company
wants to collect and store the data in an Amazon S3 bucket and process the data in the AWS Cloud. The company
will share the data with the company's employees. The company needs a secure solution in the AWS Cloud that
minimizes operational overhead.

Which solution will meet these requirements?

A.Use an AWS Lambda function to create an S3 presigned URL. Instruct employees to use the URL.
B.Create an IAM user for each employee. Create an IAM policy for each employee to allow S3 access. Instruct
employees to use the AWS Management Console.
C.Create an S3 File Gateway. Create a share for uploading and a share for downloading. Allow employees to
mount shares on their local computers to use S3 File Gateway.
D.Configure AWS Transfer Family SFTP endpoints. Select the custom identity provider options. Use AWS
Secrets Manager to manage the user credentials Instruct employees to use Transfer Family.

Answer: A

Explanation:

Use an AWS Lambda function to create an S3 presigned URL. Instruct employees to use the URL.
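
A minimal sketch of the code such a Lambda function might run; the bucket and key names are hypothetical
placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Generate a time-limited download link for a single object.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "research-data-bucket", "Key": "datasets/survey.csv"},
        ExpiresIn=3600,  # the link expires after 1 hour
    )
    print(url)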

Question: 639 CertyIQ


A company is building a new furniture inventory application. The company has deployed the application on a fleet
of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load
Balancer (ALB) in their VPC.

A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for
some requests.

What should the solutions architect do to resolve this issue?

A.Disable session affinity (sticky sessions) on the ALB


B.Replace the ALB with a Network Load Balancer
C.Increase the number of EC2 instances in each Availability Zone
D.Adjust the frequency of the health checks on the ALB's target group

Answer: A

Explanation:

Disable session affinity (sticky sessions) on the ALB.

Reference:

https://repost.aws/knowledge-center/elb-fix-unequal-traffic-routing
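
A minimal boto3 sketch of disabling stickiness on the target group; the target group ARN is a hypothetical
placeholder:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Turn off sticky sessions so the ALB distributes requests evenly.
    elbv2.modify_target_group_attributes(
        TargetGroupArn=(
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "targetgroup/web/0123456789abcdef"
        ),
        Attributes=[{"Key": "stickiness.enabled", "Value": "false"}],
    )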

Question: 640 CertyIQ


A company has an application workflow that uses an AWS Lambda function to download and decrypt files from
Amazon S3. These files are encrypted using AWS Key Management Service (AWS KMS) keys. A solutions architect
needs to design a solution that will ensure the required permissions are set correctly.

Which combination of actions accomplish this? (Choose two.)

A.Attach the kms:decrypt permission to the Lambda function’s resource policy


B.Grant the decrypt permission for the Lambda IAM role in the KMS key's policy
C.Grant the decrypt permission for the Lambda resource policy in the KMS key's policy.
D.Create a new IAM policy with the kms:decrypt permission and attach the policy to the Lambda function.
E.Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda function.

Answer: BE

Explanation:
B. Grant the decrypt permission for the Lambda IAM role in the KMS key's policy.

E. Create a new IAM role with the kms:decrypt permission and attach the execution role to the Lambda
function.
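
A sketch of the two policy pieces involved; every ARN below is a hypothetical placeholder. The first statement
goes in the KMS key's policy, and the second goes in an IAM policy attached to the Lambda function's
execution role:

    # Statement added to the KMS key policy: trust the execution role.
    key_policy_statement = {
        "Sid": "AllowLambdaDecrypt",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/lambda-decrypt-role"},
        "Action": "kms:Decrypt",
        "Resource": "*",
    }

    # Statement in the execution role's IAM policy: allow decrypt on the key.
    role_policy_statement = {
        "Effect": "Allow",
        "Action": "kms:Decrypt",
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    }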

Question: 641 CertyIQ


A company wants to monitor its AWS costs for financial review. The cloud operations team is designing an
architecture in the AWS Organizations management account to query AWS Cost and Usage Reports for all
member accounts. The team must run this query once a month and provide a detailed analysis of the bill.

Which solution is the MOST scalable and cost-effective way to meet these requirements?

A.Enable Cost and Usage Reports in the management account. Deliver reports to Amazon Kinesis. Use Amazon
EMR for analysis.
B.Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon
Athena for analysis.
C.Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon S3. Use Amazon
Redshift for analysis.
D.Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon Kinesis. Use Amazon
QuickSight for analysis.

Answer: B

Explanation:

Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon
Athena for analysis.

Reference:

https://aws.amazon.com/blogs/big-data/analyze-amazon-s3-storage-costs-using-aws-cost-and-usage-
reports-amazon-s3-inventory-and-amazon-athena/

Question: 642 CertyIQ


A company wants to run a gaming application on Amazon EC2 instances that are part of an Auto Scaling group in
the AWS Cloud. The application will transmit data by using UDP packets. The company wants to ensure that the
application can scale out and in as traffic increases and decreases.

What should a solutions architect do to meet these requirements?

A.Attach a Network Load Balancer to the Auto Scaling group.


B.Attach an Application Load Balancer to the Auto Scaling group.
C.Deploy an Amazon Route 53 record set with a weighted policy to route traffic appropriately.
D.Deploy a NAT instance that is configured with port forwarding to the EC2 instances in the Auto Scaling group.

Answer: A

Explanation:

Attach a Network Load Balancer to the Auto Scaling group.

https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
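
A minimal boto3 sketch of the UDP target group the NLB would forward to; the VPC ID and port are
hypothetical placeholders:

    import boto3

    elbv2 = boto3.client("elbv2")

    # UDP target group for the game servers in the Auto Scaling group.
    group = elbv2.create_target_group(
        Name="game-udp",
        Protocol="UDP",
        Port=27015,
        VpcId="vpc-0123456789abcdef0",
        TargetType="instance",
        HealthCheckProtocol="TCP",  # UDP target groups health check over TCP
    )
    print(group["TargetGroups"][0]["TargetGroupArn"])

Attaching this target group to the Auto Scaling group lets scale-out and scale-in happen automatically as UDP
traffic changes.
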
Question: 643 CertyIQ
A company runs several websites on AWS for its different brands. Each website generates tens of gigabytes of
web traffic logs each day. A solutions architect needs to design a scalable solution to give the company's
developers the ability to analyze traffic patterns across all the company's websites. This analysis by the
developers will occur on demand once a week over the course of several months. The solution must support
queries with standard SQL.

Which solution will meet these requirements MOST cost-effectively?

A.Store the logs in Amazon S3. Use Amazon Athena for analysis.
B.Store the logs in Amazon RDS. Use a database client for analysis.
C.Store the logs in Amazon OpenSearch Service. Use OpenSearch Service for analysis.
D.Store the logs in an Amazon EMR cluster. Use a supported open-source framework for SQL-based analysis.

Answer: A

Explanation:

Store the logs in Amazon S3. Use Amazon Athena for analysis.

Question: 644 CertyIQ


An international company has a subdomain for each country that the company operates in. The subdomains are
formatted as example.com, country1.example.com, and country2.example.com. The company's workloads are
behind an Application Load Balancer. The company wants to encrypt the website data that is in transit.

Which combination of steps will meet these requirements? (Choose two.)

A.Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex top domain
example.com and a wildcard certificate for *.example.com.
B.Use the AWS Certificate Manager (ACM) console to request a private certificate for the apex top domain
example.com and a wildcard certificate for *.example.com.
C.Use the AWS Certificate Manager (ACM) console to request a public and private certificate for the apex top
domain example.com.
D.Validate domain ownership by email address. Switch to DNS validation by adding the required DNS records to
the DNS provider.
E.Validate domain ownership for the domain by adding the required DNS records to the DNS provider.

Answer: AE

Explanation:

A. Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex top domain
example.com and a wildcard certificate for *.example.com.

E. Validate domain ownership for the domain by adding the required DNS records to the DNS provider.
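
A minimal boto3 sketch of the certificate request; one public certificate can cover both the apex domain and
the wildcard:

    import boto3

    acm = boto3.client("acm")

    # Public certificate for the apex domain plus all subdomains.
    response = acm.request_certificate(
        DomainName="example.com",
        SubjectAlternativeNames=["*.example.com"],
        ValidationMethod="DNS",
    )
    print(response["CertificateArn"])
    # ACM then provides CNAME validation records (via DescribeCertificate);
    # adding them at the DNS provider proves ownership and enables automatic
    # renewal.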

Question: 645 CertyIQ


A company is required to use cryptographic keys in its on-premises key manager. The key manager is outside of
the AWS Cloud because of regulatory and compliance requirements. The company wants to manage encryption
and decryption by using cryptographic keys that are retained outside of the AWS Cloud and that support a variety
of external key managers from different vendors.

Which solution will meet these requirements with the LEAST operational overhead?

A.Use AWS CloudHSM key store backed by a CloudHSM cluster.


B.Use an AWS Key Management Service (AWS KMS) external key store backed by an external key manager.
C.Use the default AWS Key Management Service (AWS KMS) managed key store.
D.Use a custom key store backed by an AWS CloudHSM cluster.

Answer: B

Explanation:

Use an AWS Key Management Service (AWS KMS) external key store backed by an external key manager.

Reference:

https://docs.aws.amazon.com/kms/latest/developerguide/keystore-external.html

Question: 646 CertyIQ


A solutions architect needs to host a high performance computing (HPC) workload in the AWS Cloud. The workload
will run on hundreds of Amazon EC2 instances and will require parallel access to a shared file system to enable
distributed processing of large datasets. Datasets will be accessed across multiple instances simultaneously. The
workload requires access latency within 1 ms. After processing has completed, engineers will need access to the
dataset for manual postprocessing.

Which solution will meet these requirements?

A.Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS.
B.Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from the S3
bucket.
C.Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for
postprocessing.
D.Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all
instances for processing and postprocessing.

Answer: C

Explanation:

Amazon FSx for Lustre is a fully managed, high-performance file system optimized for HPC workloads. It is
designed to deliver sub-millisecond latencies and high throughput, making it ideal for applications that
require parallel access to shared storage, such as simulations and data analytics.

Question: 647 CertyIQ


A gaming company is building an application with Voice over IP capabilities. The application will serve traffic to
users across the world. The application needs to be highly available with an automated failover across AWS
Regions. The company wants to minimize the latency of users without relying on IP address caching on user
devices.

What should a solutions architect do to meet these requirements?


A.Use AWS Global Accelerator with health checks.
B.Use Amazon Route 53 with a geolocation routing policy.
C.Create an Amazon CloudFront distribution that includes multiple origins.
D.Create an Application Load Balancer that uses path-based routing.

Answer: C

Explanation:

Create an Amazon CloudFront distribution that includes multiple origins.

Question: 648 CertyIQ


A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The
company has a high performance computing (HPC) environment in its data center and wants to expand its
forecasting capabilities.

A solutions architect must identify a highly available cloud storage solution that can handle large amounts of
sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances
that will simultaneously access and process the entire dataset.

What should the solutions architect do to meet these requirements?

A.Use Amazon FSx for Lustre scratch file systems.


B.Use Amazon FSx for Lustre persistent file systems.
C.Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.
D.Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.

Answer: B

Explanation:

Use Amazon FSx for Lustre persistent file systems.

Question: 649 CertyIQ


An ecommerce company runs a PostgreSQL database on premises. The database stores data by using high IOPS
Amazon Elastic Block Store (Amazon EBS) block storage. The daily peak I/O transactions per second do not exceed
15,000 IOPS. The company wants to migrate the database to Amazon RDS for PostgreSQL and provision disk IOPS
performance independent of disk storage capacity.

Which solution will meet these requirements MOST cost-effectively?

A.Configure the General Purpose SSD (gp2) EBS volume storage type and provision 15,000 IOPS.
B.Configure the Provisioned IOPS SSD (io1) EBS volume storage type and provision 15,000 IOPS.
C.Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.
D.Configure the EBS magnetic volume type to achieve maximum IOPS.

Answer: C

Explanation:

Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.
Question: 650 CertyIQ
A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The
company's online application uses the database to process transactions. The data analysis team uses the same
production database to run reports for analytical processing. The company wants to reduce operational overhead
by moving to managed services wherever possible.

Which solution will meet these requirements with the LEAST operational overhead?

A.Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes
B.Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes
C.Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes
D.Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes

Answer: A

Explanation:

Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.

Question: 651 CertyIQ


A company stores a large volume of image files in an Amazon S3 bucket. The images need to be readily available
for the first 180 days. The images are infrequently accessed for the next 180 days. After 360 days, the images
need to be archived but must be available instantly upon request. After 5 years, only auditors can access the
images. The auditors must be able to retrieve the images within 12 hours. The images cannot be lost during this
process.

A developer will use S3 Standard storage for the first 180 days. The developer needs to configure an S3 Lifecycle
rule.

Which solution will meet these requirements MOST cost-effectively?

A.Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days. S3 Glacier Instant
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
B.Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days. S3 Glacier Flexible
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
C.Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
D.Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Flexible
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

Answer: C

Explanation:

Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.

Reference:

https://aws.amazon.com/s3/storage-classes/glacier/
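
A minimal boto3 sketch of the lifecycle rule; the bucket name is a hypothetical placeholder, and 5 years is
approximated as 1825 days:

    import boto3

    s3 = boto3.client("s3")

    # 180 days -> Standard-IA, 360 days -> Glacier Instant Retrieval,
    # 1825 days -> Glacier Deep Archive.
    s3.put_bucket_lifecycle_configuration(
        Bucket="image-archive-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "image-tiering",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to all objects
                    "Transitions": [
                        {"Days": 180, "StorageClass": "STANDARD_IA"},
                        {"Days": 360, "StorageClass": "GLACIER_IR"},
                        {"Days": 1825, "StorageClass": "DEEP_ARCHIVE"},
                    ],
                }
            ]
        },
    )
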
Question: 652 CertyIQ
A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the
process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical
data workload.

Which solution will meet these requirements MOST cost-effectively?

A.Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances and the
task nodes on Spot Instances.
B.Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the
task nodes on Spot Instances.
C.Configure a transient cluster that runs the primary node on an On-Demand Instance and the core nodes and
task nodes on Spot Instances.
D.Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core nodes on
Spot Instances, and the task nodes on Spot Instances.

Answer: B

Explanation:

A transient cluster runs only for the duration of the daily 6-hour workload, so the company pays for compute
only while the job runs. Keeping the primary node and core nodes on On-Demand Instances protects the data
that the core nodes store (HDFS) from Spot interruptions, while Spot task nodes reduce cost. Option C
(transient cluster with an On-Demand primary node and Spot core and task nodes) exposes the core nodes to
Spot Instance interruptions, which is not acceptable for a workload that cannot lose any data.

Question: 653 CertyIQ


A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an
organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a
specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user
who created the resource.

Which solution will meet these requirements?

A.Move the specific AWS account to a new organizational unit (OU) in Organizations from the management
account. Create a service control policy (SCP) that requires all existing resources to have the correct cost
center tag before the resources are created. Apply the SCP to the new OU.
B.Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate
cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail
events to invoke the Lambda function.
C.Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to
look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon
EventBridge scheduled rule to invoke the CloudFormation stack.
D.Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge
rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost
center tag.

Answer: B

Explanation:

This solution uses AWS Lambda and Amazon EventBridge to automate the tagging process based on
information from the RDS database and CloudTrail events.

AWS Lambda function: Create a Lambda function that looks up the cost center information from the RDS
database and tags resources accordingly.

Amazon EventBridge rule: Set up an EventBridge rule that reacts to AWS CloudTrail events. The rule triggers
the Lambda function whenever a resource is created, allowing dynamic tagging based on the cost center
associated with the user in the RDS database.

This solution ensures that resources are tagged with the cost center ID of the user who created the resource.
It also allows cost center information to be updated without modifying the infrastructure.

Question: 654 CertyIQ


A company recently migrated its web application to the AWS Cloud. The company uses an Amazon EC2 instance to
run multiple processes to host the application. The processes include an Apache web server that serves static
content. The Apache web server makes requests to a PHP application that uses a local Redis server for user
sessions.

The company wants to redesign the architecture to be highly available and to use AWS managed solutions.

Which solution will meet these requirements?

A.Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to
deploy its EC2 instance into a public subnet. Assign a public IP address.
B.Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API
to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain
name. Configure Amazon ElastiCache for Redis to handle session information.
C.Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-
AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to
Amazon S3. Configure the backend code to reference the EC2 instance.
D.Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured
to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container
Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP
application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.

Answer: D

Explanation:

Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured
to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container
Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP
application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones

Question: 655 CertyIQ


A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The
company designed the application to work with session affinity (sticky sessions) for a better user experience.

The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint
for additional security. Session affinity (sticky sessions) must be configured on the endpoint.

Which combination of steps will meet these requirements? (Choose two.)

A.Create a public Network Load Balancer. Specify the application target group.
B.Create a Gateway Load Balancer. Specify the application target group.
C.Create a public Application Load Balancer. Specify the application target group.
D.Create a second target group. Add Elastic IP addresses to the EC2 instances.
E.Create a web ACL in AWS WAF. Associate the web ACL with the endpoint

Answer: CE
Explanation:

C. Create a public Application Load Balancer. Specify the application target group.

E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.

Question: 656 CertyIQ


A company runs a website that stores images of historical events. Website users need the ability to search and
view images based on the year that the event in the image occurred. On average, users request each image only
once or twice a year. The company wants a highly available solution to store and deliver the images to users.

Which solution will meet these requirements MOST cost-effectively?

A.Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
B.Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
C.Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
D.Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly
deliver images by using a static website.

Answer: D

Explanation:

Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly
deliver images by using a static website.

Question: 657 CertyIQ


A company has multiple AWS accounts in an organization in AWS Organizations that different business units use.
The company has multiple offices around the world. The company needs to update security group rules to allow
new office CIDR ranges or to remove old CIDR ranges across the organization. The company wants to centralize
the management of security group rules to minimize the administrative overhead that updating CIDR ranges
requires.

Which solution will meet these requirements MOST cost-effectively?

A.Create VPC security groups in the organization's management account. Update the security groups when a
CIDR range update is necessary.
B.Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access
Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups
across the organization.
C.Create an AWS managed prefix list. Use an AWS Security Hub policy to enforce the security group update
across the organization. Use an AWS Lambda function to update the prefix list automatically when the CIDR
ranges change.
D.Create security groups in a central administrative AWS account. Create an AWS Firewall Manager common
security group policy for the whole organization. Select the previously created security groups as primary
groups in the policy.

Answer: B

Explanation:

Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access
Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups
across the organization.

Reference:

https://docs.aws.amazon.com/vpc/latest/userguide/managed-prefix-lists.html
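
A rough boto3 sketch of the central pieces; the CIDRs and the organization ARN are hypothetical placeholders:

    import boto3

    ec2 = boto3.client("ec2")
    ram = boto3.client("ram")

    # Central, versioned list of office CIDR ranges.
    pl = ec2.create_managed_prefix_list(
        PrefixListName="office-cidrs",
        AddressFamily="IPv4",
        MaxEntries=20,
        Entries=[
            {"Cidr": "203.0.113.0/24", "Description": "HQ"},
            {"Cidr": "198.51.100.0/24", "Description": "Branch office"},
        ],
    )

    # Share the prefix list with the whole organization through AWS RAM.
    ram.create_resource_share(
        name="office-cidrs-share",
        resourceArns=[pl["PrefixList"]["PrefixListArn"]],
        principals=["arn:aws:organizations::111122223333:organization/o-example"],
    )

Security groups across the accounts then reference the prefix list ID instead of raw CIDRs, so an office change
becomes a single prefix-list update.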

Question: 658 CertyIQ


A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high
performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and
its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the
file system.

Which solution will meet these requirements with the LEAST latency? (Choose two.)

A.Deploy compute optimized EC2 instances into a cluster placement group.


B.Deploy compute optimized EC2 instances into a partition placement group.
C.Attach the EC2 instances to an Amazon FSx for Lustre file system.
D.Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
E.Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.

Answer: AE

Explanation:

https://aws.amazon.com/fsx/netapp-ontap/features/

"Amazon FSx for NetApp ONTAP provides access to shared file storage over all versions of the Network File
System (NFS) and Server Message Block (SMB) protocols, and also supports multi-protocol access (i.e.
concurrent NFS and SMB access) to the same data."

Question: 659 CertyIQ


A company is relocating its data center and wants to securely transfer 50 TB of data to AWS within 2 weeks. The
existing data center has a Site-to-Site VPN connection to AWS that is 90% utilized.

Which AWS service should a solutions architect use to meet these requirements?

A.AWS DataSync with a VPC endpoint


B.AWS Direct Connect
C.AWS Snowball Edge Storage Optimized
D.AWS Storage Gateway

Answer: C

Explanation:

The Site-to-Site VPN is already 90% utilized, so transferring 50 TB online within 2 weeks is not practical. AWS
Snowball Edge Storage Optimized moves the data offline and securely within the required time frame.


Question: 660 CertyIQ
A company hosts an application on Amazon EC2 On-Demand Instances in an Auto Scaling group. Application peak
hours occur at the same time each day. Application users report slow application performance at the start of peak
hours. The application performs normally 2-3 hours after peak hours begin. The company wants to ensure that the
application works properly at the start of peak hours.

Which solution will meet these requirements?

A.Configure an Application Load Balancer to distribute traffic properly to the instances.


B.Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on memory
utilization.
C.Configure a dynamic scaling policy for the Auto Scaling group to launch new instances based on CPU
utilization.
D.Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.

Answer: D

Explanation:

Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
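
A minimal boto3 sketch of such a scheduled action; the group name, sizes, and cron expression (UTC) are
hypothetical placeholders:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Launch extra instances 30 minutes before the daily peak begins.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="web-asg",
        ScheduledActionName="pre-peak-scale-out",
        Recurrence="30 8 * * *",  # every day at 08:30 UTC
        MinSize=4,
        MaxSize=12,
        DesiredCapacity=8,
    )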

Question: 661 CertyIQ


A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale
on weekends and at peak times of the year. The company wants to scale the database more effectively for its
applications that connect to the database.

Which solution will meet these requirements with the LEAST operational overhead?

A.Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change
the applications to use the DynamoDB endpoint.
B.Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy
endpoint.
C.Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to
use the custom proxy endpoint.
D.Use an AWS Lambda function to provide connection pooling with a target group configuration for the
database. Change the applications to use the Lambda function.

Answer: B

Explanation:

Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database
Service (RDS) that makes applications more resilient to database failures. Many applications, including those
built on modern serverless architectures, can have a large number of open connections to the database server
and may open and close database connections at a high rate, exhausting database memory and compute
resources. Amazon RDS Proxy allows applications to pool and share connections established with the
database, improving database efficiency and application scalability. With RDS Proxy, failover times for Aurora
and RDS databases are reduced by up to 66%.

Question: 662 CertyIQ


A company uses AWS Cost Explorer to monitor its AWS costs. The company notices that Amazon Elastic Block
Store (Amazon EBS) storage and snapshot costs increase every month. However, the company does not purchase
additional EBS storage every month. The company wants to optimize monthly costs for its current storage usage.

Which solution will meet these requirements with the LEAST operational overhead?

A.Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS
Elastic Volumes to reduce the size of the EBS volumes.
B.Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS
volumes.
C.Delete all expired and unused snapshots to reduce snapshot costs.
D.Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots
according to the company's snapshot policy requirements.

Answer: D

Explanation:

This option manages snapshots efficiently to optimize costs with minimal operational overhead.

Delete all nonessential snapshots: This reduces costs by eliminating unnecessary snapshot storage.

Use Amazon Data Lifecycle Manager (DLM): DLM automates the creation and deletion of snapshots based on
defined policies. This reduces operational overhead by managing snapshots according to the company's
snapshot policy requirements.

Question: 663 CertyIQ


A company is developing a new application on AWS. The application consists of an Amazon Elastic Container
Service (Amazon ECS) cluster, an Amazon S3 bucket that contains assets for the application, and an Amazon RDS
for MySQL database that contains the dataset for the application. The dataset contains sensitive information. The
company wants to ensure that only the ECS cluster can access the data in the RDS for MySQL database and the
data in the S3 bucket.

Which solution will meet these requirements?

A.Create a new AWS Key Management Service (AWS KMS) customer managed key to encrypt both the S3
bucket and the RDS for MySQL database. Ensure that the KMS key policy includes encrypt and decrypt
permissions for the ECS task execution role.
B.Create an AWS Key Management Service (AWS KMS) AWS managed key to encrypt both the S3 bucket and
the RDS for MySQL database. Ensure that the S3 bucket policy specifies the ECS task execution role as a user.
C.Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC
endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the
subnets that the ECS cluster will generate tasks in.
D.Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow
access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3.
Update the S3 bucket policy to allow access from only the S3 VPC endpoint.

Answer: D

Explanation:
Option D is the most comprehensive solution because it uses VPC endpoints for both Amazon RDS and Amazon
S3, along with network-level controls that restrict access to only the ECS cluster.

Create a VPC endpoint for Amazon RDS for MySQL: This ensures that the ECS cluster can reach the RDS
database directly within the VPC, without going over the internet. Updating the security group to allow access
only from the subnets that the ECS cluster will generate tasks in limits access to authorized entities.

Create a VPC endpoint for Amazon S3: This allows the ECS cluster to access the S3 bucket directly within the
VPC. Updating the S3 bucket policy to allow access only from the S3 VPC endpoint ensures that only
authorized resources can access the S3 bucket.
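
A sketch of creating the S3 gateway endpoint that the bucket policy can then reference; all IDs are
hypothetical placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoint that keeps S3 traffic inside the VPC.
    endpoint = ec2.create_vpc_endpoint(
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )
    endpoint_id = endpoint["VpcEndpoint"]["VpcEndpointId"]
    # The S3 bucket policy can then deny any request whose aws:SourceVpce
    # value does not match endpoint_id.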

Question: 664 CertyIQ


A company has a web application that runs on premises. The application experiences latency issues during peak
hours. The latency issues occur twice each month. At the start of a latency issue, the application's CPU utilization
immediately increases to 10 times its normal amount.

The company wants to migrate the application to AWS to improve latency. The company also wants to scale the
application automatically when application demand increases. The company will use AWS Elastic Beanstalk for
application deployment.

Which solution will meet these requirements?

A.Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode.
Configure the environment to scale based on requests.
B.Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment
to scale based on requests.
C.Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment
to scale on a schedule.
D.Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode.
Configure the environment to scale on predictive metrics.

Answer: D

Explanation:

In this scenario, the application experiences latency issues during peak hours with a sudden increase in CPU
utilization. Using burstable performance instances in unlimited mode allows the application to burst beyond
the baseline performance when needed. Configuring the environment to scale on predictive metrics enables
proactive scaling based on anticipated increases in demand.

Question: 665 CertyIQ


A company has customers located across the world. The company wants to use automation to secure its systems
and network infrastructure. The company's security team must be able to track and audit all incremental changes
to the infrastructure.

Which solution will meet these requirements?

A.Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.
B.Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
C.Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
D.Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.

Answer: B

Explanation:

Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
Question: 666 CertyIQ
A startup company is hosting a website for its customers on an Amazon EC2 instance. The website consists of a
stateless Python application and a MySQL database. The website serves only a small amount of traffic. The
company is concerned about the reliability of the instance and needs to migrate to a highly available architecture.
The company cannot modify the application code.

Which combination of actions should a solutions architect take to achieve high availability for the website?
(Choose two.)

A.Provision an internet gateway in each Availability Zone in use.


B.Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.
C.Migrate the database to Amazon DynamoDB, and enable DynamoDB auto scaling.
D.Use AWS DataSync to synchronize the database data across multiple EC2 instances.
E.Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are
distributed across two Availability Zones.

Answer: BE

Explanation:

B. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB instance.

E. Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are
distributed across two Availability Zones.

Question: 667 CertyIQ


A company is moving its data and applications to AWS during a multiyear migration project. The company wants to
securely access data on Amazon S3 from the company's AWS Region and from the company's on-premises
location. The data must not traverse the internet. The company has established an AWS Direct Connect connection
between its Region and its on-premises location.

Which solution will meet these requirements?

A.Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the
Region and the on-premises location.
B.Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-
premises location.
C.Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the
Region and the on-premises location.
D.Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the
on-premises location.

Answer: C

Explanation:

Amazon VPC interface endpoints enable you to privately connect your VPC to supported AWS services without
requiring an internet gateway, NAT device, VPN, or Direct Connect connection. By creating interface endpoints
for Amazon S3, the company can securely access the data without traversing the internet.

Direct Connect connection: With the AWS Direct Connect connection established between the AWS Region and
the on-premises location, requests from on premises to the interface endpoints flow over the dedicated,
private connection rather than over the public internet.

Question: 668 CertyIQ


A company created a new organization in AWS Organizations. The organization has multiple accounts for the
company's development teams. The development team members use AWS IAM Identity Center (AWS Single Sign-
On) to access the accounts. For each of the company's applications, the development teams must use a predefined
application name to tag resources that are created.

A solutions architect needs to design a solution that gives the development team the ability to create resources
only if the application name tag has an approved value.

Which solution will meet these requirements?

A.Create an IAM group that has a conditional Allow policy that requires the application name tag to be specified
for resources to be created.
B.Create a cross-account role that has a Deny policy for any resource that has the application name tag.
C.Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all
accounts.
D.Create a tag policy in Organizations that has a list of allowed application names.

Answer: D

Explanation:

Create a tag policy in Organizations that has a list of allowed application names.

Reference:

https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html

Question: 669 CertyIQ


A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage
the master user password by rotating the password every 30 days.

Which solution will meet these requirements with the LEAST operational overhead?

A.Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B.Use the modify-db-instance command in the AWS CLI to change the password.
C.Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D.Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password
rotation.

Answer: C

Explanation:

Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
Reference:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html
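
A minimal boto3 sketch of turning on 30-day rotation for the secret that stores the master credentials; both
ARNs are hypothetical placeholders, and the rotation function is assumed to be a standard Secrets Manager
rotation Lambda function for PostgreSQL:

    import boto3

    secretsmanager = boto3.client("secretsmanager")

    # Rotate the database secret automatically every 30 days.
    secretsmanager.rotate_secret(
        SecretId="arn:aws:secretsmanager:us-east-1:111122223333:secret:rds-master",
        RotationLambdaARN=(
            "arn:aws:lambda:us-east-1:111122223333:function:rds-pg-rotation"
        ),
        RotationRules={"AutomaticallyAfterDays": 30},
    )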

Question: 670 CertyIQ


A company performs tests on an application that uses an Amazon DynamoDB table. The tests run for 4 hours once
a week. The company knows how many read and write operations the application performs to the table each
second during the tests. The company does not currently use DynamoDB for any other use case. A solutions
architect needs to optimize the costs for the table.

Which solution will meet these requirements?

A.Choose on-demand mode. Update the read and write capacity units appropriately.
B.Choose provisioned mode. Update the read and write capacity units appropriately.
C.Purchase DynamoDB reserved capacity for a 1-year term.
D.Purchase DynamoDB reserved capacity for a 3-year term.

Answer: B

Explanation:

Provisioned Mode (Option B): Provisioned mode allows you to specify the desired read and write capacity
units. Since the workload occurs once a week for 4 hours, you can provision the read and write capacity units
accordingly to handle the expected load during that time. This can be a more cost-effective option than on-
demand pricing for predictable workloads.

Question: 671 CertyIQ


A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments
of its AWS costs. The company recently identified unusual spending.

The company needs a solution to prevent unusual spending. The solution must monitor costs and notify
responsible stakeholders in the event of unusual spending.

Which solution will meet these requirements?

A.Use an AWS Budgets template to create a zero spend budget.


B.Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.
C.Create AWS Pricing Calculator estimates for the current running workload pricing details.
D.Use Amazon CloudWatch to monitor costs and to identify unusual spending.

Answer: B

Explanation:

AWS Cost Anomaly Detection (Option B): AWS Cost Anomaly Detection is designed to automatically detect
unusual spending patterns based on machine learning algorithms. It can identify anomalies and send
notifications when it detects unexpected changes in spending. This aligns well with the requirement to
prevent unusual spending and notify stakeholders.

Reference:
https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/

Question: 672 CertyIQ


A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign.
The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine
whether to process the data further in the data pipeline.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
B.Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
C.Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
D.Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to
query the data.

Answer: B

Explanation:

AWS Glue with Athena (Option B): AWS Glue is a fully managed extract, transform, and load (ETL) service, and
Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries.
By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use
Athena to query the data directly without the need to load it into a separate database. This minimizes
operational overhead.

Question: 673 CertyIQ


A company runs an SMB file server in its data center. The file server stores large files that the company frequently
accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the
files with a maximum retrieval time of 24 hours.

Which solution will meet these requirements?

A.Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B.Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to
transition the data to S3 Glacier Deep Archive after 7 days.
C.Create an Amazon FSx File Gateway to increase the company's storage space. Create an Amazon S3
Lifecycle policy to transition the data after 7 days.
D.Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3
Glacier Flexible Retrieval after 7 days.

Answer: B

Explanation:

Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to
transition the data to S3 Glacier Deep Archive after 7 days.

Reference:

https://docs.aws.amazon.com/filegateway/latest/files3/file-gateway-concepts.html
Question: 674 CertyIQ
A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a
database that runs on an Amazon RDS for PostgreSQL DB instance. The application performs slowly when traffic
increases. The database experiences a heavy read load during periods of high traffic.

Which actions should a solutions architect take to resolve these performance issues? (Choose two.)

A.Turn on auto scaling for the DB instance.


B.Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
C.Convert the DB instance to a Multi-AZ DB instance deployment. Configure the application to send read traffic
to the standby DB instance.
D.Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache
cluster.
E.Configure the Auto Scaling group subnets to ensure that the EC2 instances are provisioned in the same
Availability Zone as the DB instance.

Answer: BD

Explanation:

B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
By creating a read replica, you offload read traffic from the primary DB instance to the replica, distributing the
load and improving overall performance during periods of heavy read traffic.

D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache
cluster. Amazon ElastiCache can cache frequently accessed data, reducing the load on the database. This is
particularly effective for read-heavy workloads because the application retrieves data from the cache rather
than making repeated database queries.

Question: 675 CertyIQ


A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an
application. The company creates one snapshot of each EBS volume every day to meet compliance requirements.
The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots.
The solution must not change the administrative rights of the storage administrator user.

Which solution will meet these requirements with the LEAST administrative effort?

A.Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the
AWS CLI from the new EC2 instance to delete snapshots.
B.Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
C.Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
D.Lock the EBS snapshots to prevent deletion.

Answer: D

Explanation:

Locking EBS Snapshots (Option D): EBS snapshot lock prevents a snapshot from being deleted while the lock is in
effect, even by users who otherwise have delete permissions. The lock is set at the snapshot level, so it meets
the requirement without modifying the storage administrator's rights, and it requires less administrative effort
than tagging snapshots and maintaining Recycle Bin retention rules (Option C).
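
Assuming a recent boto3 version that includes the LockSnapshot API, a sketch of Option D might look like the
following; the snapshot ID is hypothetical, and governance mode is chosen so the lock itself can still be managed
without changing the storage administrator's permissions:

import boto3

ec2 = boto3.client("ec2")
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot ID
    LockMode="governance",                # lock is managed separately from delete permissions
    LockDuration=30,                      # days the snapshot cannot be deleted
)
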
Question: 676 CertyIQ
A company's application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and
databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and
from the network interfaces in near real time in its Amazon VPC. The company wants to send the information to
Amazon OpenSearch Service for analysis.

Which solution will meet these requirements?

A.Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log
group. Use Amazon Kinesis Data Streams to stream the logs from the log group to OpenSearch Service.
B.Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log
group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.
C.Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon
Kinesis Data Streams to stream the logs from the trail to OpenSearch Service.
D.Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon
Kinesis Data Firehose to stream the logs from the trail to OpenSearch Service.

Answer: B

Explanation:

Amazon CloudWatch Logs and VPC Flow Logs (Option B): VPC Flow Logs capture information about the IP
traffic going to and from network interfaces in a VPC. By configuring VPC Flow Logs to send the log data to a
log group in Amazon CloudWatch Logs, you can then use Amazon Kinesis Data Firehose to stream the logs
from the log group to Amazon OpenSearch Service for analysis. This approach provides near real-time
streaming of logs to the analytics service.
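
A minimal boto3 sketch of Option B; all IDs, names, and ARNs are hypothetical, and the IAM roles for log delivery
are assumed to exist already:

import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# Publish VPC Flow Logs to a CloudWatch Logs log group
ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0123456789abcdef0"],
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)

# A subscription filter streams the log group to a Firehose delivery stream
# whose destination is the OpenSearch Service domain
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch",
    filterPattern="",  # empty pattern forwards every event
    destinationArn="arn:aws:firehose:eu-central-1:111122223333:deliverystream/vpc-flow-to-opensearch",
    roleArn="arn:aws:iam::111122223333:role/cwl-to-firehose-role",
)
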

Question: 677 CertyIQ


A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon
EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances.

The company needs a dedicated EKS cluster for development work. The company will use the development cluster
infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.

Which solution will meet these requirements MOST cost-effectively?

A.Create a managed node group that contains only Spot Instances.


B.Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second
node group with Spot Instances.
C.Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user
data to add the nodes to the EKS cluster.
D.Create a managed node group that contains only On-Demand Instances.

Answer: A

Explanation:

Create a managed node group that contains only Spot Instances. The development cluster is used infrequently for
resiliency testing, so Spot interruptions are acceptable, Spot Instances cost significantly less than On-Demand
Instances, and a managed node group keeps Amazon EKS managing all the nodes as required.

Question: 678 CertyIQ


A company stores sensitive data in Amazon S3. A solutions architect needs to create an encryption solution. The
company needs to fully control the ability of users to create, rotate, and disable encryption keys with minimal
effort for any data that must be encrypted.

Which solution will meet these requirements?

A.Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive
data.
B.Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to
encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C.Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to
encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
D.Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys.
Upload the encrypted objects back into Amazon S3.

Answer: B

Explanation:

SSE-KMS with Customer Managed Key (Option B): This option allows you to create a customer managed key
using AWS KMS. With a customer managed key, you have full control over key lifecycle management,
including the ability to create, rotate, and disable keys with minimal effort. SSE-KMS also integrates with
AWS Identity and Access Management (IAM) for fine-grained access control.
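
A boto3 sketch of Option B (bucket and key names are hypothetical): create a customer managed key, turn on
rotation, and upload an object encrypted with that key:

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed key: the company controls creation, rotation, and disabling
key_id = kms.create_key(Description="S3 sensitive-data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)  # turn on automatic rotation

s3.put_object(
    Bucket="example-sensitive-bucket",  # hypothetical bucket
    Key="customers/report.csv",
    Body=b"...",
    ServerSideEncryption="aws:kms",     # SSE-KMS
    SSEKMSKeyId=key_id,
)
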

Question: 679 CertyIQ


A company wants to back up its on-premises virtual machines (VMs) to AWS. The company's backup solution
exports on-premises backups to an Amazon S3 bucket as objects. The S3 backups must be retained for 30 days
and must be automatically deleted after 30 days.

Which combination of steps will meet these requirements? (Choose three.)

A.Create an S3 bucket that has S3 Object Lock enabled.


B.Create an S3 bucket that has object versioning enabled.
C.Configure a default retention period of 30 days for the objects.
D.Configure an S3 Lifecycle policy to protect the objects for 30 days.
E.Configure an S3 Lifecycle policy to expire the objects after 30 days.
F.Configure the backup solution to tag the objects with a 30-day retention period

Answer: ACE

Explanation:

A. Create an S3 bucket that has S3 Object Lock enabled.

C. Configure a default retention period of 30 days for the objects.

E. Configure an S3 Lifecycle policy to expire the objects after 30 days.

Object Lock with a 30-day default retention period protects each backup object from deletion for 30 days, and the
lifecycle expiration rule then deletes the objects automatically once the retention period has passed.
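
A sketch of steps A, C, and E with boto3 (bucket name and Region are hypothetical). Object Lock requires
versioning, which S3 enables automatically when a bucket is created with Object Lock:

import boto3

s3 = boto3.client("s3")

# Step A: bucket with S3 Object Lock enabled (versioning is enabled implicitly)
s3.create_bucket(
    Bucket="example-vm-backups",
    ObjectLockEnabledForBucket=True,
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)

# Step C: default retention of 30 days for every new object version
s3.put_object_lock_configuration(
    Bucket="example-vm-backups",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)

# Step E: expire the objects automatically once the 30 days have passed
s3.put_bucket_lifecycle_configuration(
    Bucket="example-vm-backups",
    LifecycleConfiguration={
        "Rules": [{"ID": "expire-after-30-days", "Status": "Enabled",
                   "Filter": {"Prefix": ""}, "Expiration": {"Days": 30}}]
    },
)
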

Question: 680 CertyIQ


A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon
EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original
S3 bucket consistently. The copied files should be overwritten only if the source file changes.
Which solution will meet these requirements with the LEAST operational overhead?

A.Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task
for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has
changed.
B.Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to
invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to
the file system and the destination S3 bucket.
C.Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task
for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
D.Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to
routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the
mounted file system.

Answer: A

Explanation:

AWS DataSync (Option A): AWS DataSync is designed for efficient and reliable copying of data between different
storage services. With the transfer mode set to transfer only data that has changed, DataSync copies only the
data and metadata that differ between the source and the destination, so copied files are overwritten only when
the source file changes. The transfer-all-data mode, by contrast, copies everything in the source to the
destination without comparing differences, and Options B and D require building and operating custom copy logic.
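
A boto3 sketch of the task from Option A (the location ARNs are hypothetical and would come from previously
created DataSync locations); a second, identical task would cover the other destination, and a schedule keeps the
copies running on an ongoing basis:

import boto3

datasync = boto3.client("datasync")
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:eu-central-1:111122223333:location/loc-source-s3",
    DestinationLocationArn="arn:aws:datasync:eu-central-1:111122223333:location/loc-dest-efs",
    Name="s3-to-efs-changed-only",
    Options={
        "TransferMode": "CHANGED",  # copy only data that differs from the destination
        "OverwriteMode": "ALWAYS",  # overwrite destination files when the source changes
    },
    Schedule={"ScheduleExpression": "rate(1 hour)"},  # recurring runs
)
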

Question: 681 CertyIQ


A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes.
The company must ensure that all data is encrypted at rest by using AWS Key Management Service (AWS KMS).
The company must be able to control rotation of the encryption keys.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create a customer managed key. Use the key to encrypt the EBS volumes.
B.Use an AWS managed key to encrypt the EBS volumes. Use the key to configure automatic key rotation.
C.Create an external KMS key with imported key material. Use the key to encrypt the EBS volumes.
D.Use an AWS owned key to encrypt the EBS volumes.

Answer: A

Explanation:

Create a customer managed key. Use the key to encrypt the EBS volumes. Only customer managed KMS keys let the
company control rotation of the key; rotation of AWS managed and AWS owned keys is controlled by AWS, and
imported key material (Option C) adds operational overhead.

Question: 682 CertyIQ


A company needs a solution to enforce data encryption at rest on Amazon EC2 instances. The solution must
automatically identify noncompliant resources and enforce compliance policies on findings.

Which solution will meet these requirements with the LEAST administrative overhead?

A.Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS)
volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of
unencrypted EBS volumes.
B.Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block Store
(Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and remediation
of unencrypted EBS volumes.
C.Use Amazon Macie to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS
Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.
D.Use Amazon inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS
Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.

Answer: A

Explanation:

IAM Policy and AWS Config (Option A): By creating an IAM policy that allows users to create only encrypted
EBS volumes, you proactively prevent the creation of unencrypted volumes. Using AWS Config, you can set up
rules to detect noncompliant resources, and AWS Systems Manager Automation can be used for automated
remediation. This approach provides a proactive and automated solution.
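
A sketch of the preventive IAM policy from Option A, created with boto3; the policy name is hypothetical, and the
ec2:Encrypted condition key is the documented way to require encryption on ec2:CreateVolume:

import boto3
import json

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        # Deny volume creation whenever encryption is not requested
        "Condition": {"Bool": {"ec2:Encrypted": "false"}},
    }],
}
iam.create_policy(
    PolicyName="deny-unencrypted-ebs-volumes",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
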

Question: 683 CertyIQ


A company is migrating its multi-tier on-premises application to AWS. The application consists of a single-node
MySQL database and a multi-node web tier. The company must minimize changes to the application during the
migration. The company wants to improve application resiliency after the migration.

Which combination of steps will meet these requirements? (Choose two.)

A.Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer.
B.Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load Balancer.
C.Migrate the database to an Amazon RDS Multi-AZ deployment.
D.Migrate the web tier to an AWS Lambda function.
E.Migrate the database to an Amazon DynamoDB table.

Answer: AC

Explanation:

Web Tier Migration (Option A): Migrating the web tier to Amazon EC2 instances in an Auto Scaling group behind an
Application Load Balancer (ALB) provides horizontal scalability, automatic scaling, and improved resiliency. Auto
Scaling maintains the desired number of EC2 instances based on demand, and the ALB distributes incoming traffic
across multiple instances.

Database Migration to Amazon RDS Multi-AZ (Option C): Migrating the database to Amazon RDS in a Multi-AZ
deployment provides high availability and automatic failover. In a Multi-AZ deployment, Amazon RDS maintains a
standby replica in a different Availability Zone, and in the event of a failure, it automatically promotes the
replica to the primary instance. This enhances the resiliency of the database.

Question: 684 CertyIQ


A company wants to migrate its web applications from on premises to AWS. The company is located close to the
eu-central-1 Region. Because of regulations, the company cannot launch some of its applications in eu-central-1.
The company wants to achieve single-digit millisecond latency.

Which solution will meet these requirements?

A.Deploy the applications in eu-central-1. Extend the company’s VPC from eu-central-1 to an edge location in
Amazon CloudFront.
B.Deploy the applications in AWS Local Zones by extending the company's VPC from eu-central-1 to the chosen
Local Zone.
C.Deploy the applications in eu-central-1. Extend the company’s VPC from eu-central-1 to the regional edge
caches in Amazon CloudFront.
D.Deploy the applications in AWS Wavelength Zones by extending the company’s VPC from eu-central-1 to the
chosen Wavelength Zone.

Answer: B

Explanation:

Option B: AWS Local Zones place AWS compute, storage, database, and other select services closer to end users.
The company can extend its VPC from eu-central-1 into a nearby Local Zone and deploy the applications there,
staying outside the Region itself while achieving single-digit millisecond latency.

Option D: AWS Wavelength Zones are designed for applications that deliver single-digit millisecond latency to
mobile and connected devices over 5G networks, so they are not the right fit for hosting these web applications.
CloudFront edge locations and regional edge caches (Options A and C) cannot host a VPC's applications.

Question: 685 CertyIQ


A company’s ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a
private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database
performance and ensure that the Lambda invocations do not overload the database with too many connections.

What should a solutions architect do to meet these requirements?

A.Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
B.Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
C.Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
D.Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.

Answer: B

Explanation:

Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC. RDS Proxy pools and
shares database connections, so bursts of Lambda invocations do not overwhelm the PostgreSQL connection limit,
and the functions must run inside the VPC to reach the private DB instance through the proxy endpoint.

Question: 686 CertyIQ


A company is creating an application. The company stores data from tests of the application in multiple on-
premises locations.

The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The
number of accounts and VPCs will increase during the next year. The network architecture must simplify the
administration of new connections and must provide the ability to scale.

Which solution will meet these requirements with the LEAST administrative overhead?

A.Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the on-
premises locations.
B.Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection to
connect all VPCs and on-premises locations.
C.Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the
on-premises connections.
D.Create an AWS Direct Connect connection between the on-premises locations and a central VPC. Connect
the central VPC to other VPCs by using peering connections.

Answer: C

Explanation:

Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the
on-premises connections.
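
A boto3 sketch of Option C (all resource IDs are hypothetical). The key scaling property is that each new VPC or
on-premises location only needs one additional attachment to the same transit gateway:

import boto3

ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(Description="hub for all VPCs and on-premises sites")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# One attachment per VPC; new VPCs are added the same way
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],
)

# Site-to-Site VPN attached directly to the transit gateway for an on-premises site
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",
    TransitGatewayId=tgw_id,
)
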

Question: 687 CertyIQ


A company that uses AWS needs a solution to predict the resources needed for manufacturing processes each
month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has
no machine learning (ML) experience and wants to use a managed service for the training and predictions.

Which combination of steps will meet these requirements? (Choose two.)

A.Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.


B.Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
C.Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create
predictions based on the inputs.
D.Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a
prediction based on the inputs.
E.Train an Amazon Forecast predictor by using the historical data in the S3 bucket.

Answer: DE

Explanation:

D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a
prediction based on the inputs.

E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.

Amazon Forecast is a fully managed time-series forecasting service, so the company can train a predictor on the
historical S3 data and generate the monthly resource predictions without any ML experience. Training and hosting
a custom SageMaker model (Options A and B) would require ML expertise.

Question: 688 CertyIQ


A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and
AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions
across all the accounts.

The permissions will be used by multiple IAM users and must be split between the developer and administrator
teams. Each team requires different permissions. The company wants a solution that includes new users that are
hired on both teams.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create individual users in IAM Identity Center for each account. Create separate developer and administrator
groups in IAM Identity Center. Assign the users to the appropriate groups. Create a custom IAM policy for each
group to set fine-grained permissions.
B.Create individual users in IAM Identity Center for each account. Create separate developer and administrator
groups in IAM Identity Center. Assign the users to the appropriate groups. Attach AWS managed IAM policies
to each user as needed for fine-grained permissions.
C.Create individual users in IAM Identity Center. Create new developer and administrator groups in IAM Identity
Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new
groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are
hired, add them to the appropriate group.
D.Create individual users in IAM Identity Center. Create new permission sets that include the appropriate IAM
policies for each user. Assign the users to the appropriate accounts. Grant additional IAM permissions to the
users from within specific accounts. When new users are hired, add them to IAM Identity Center and assign
them to the accounts.

Answer: C

Explanation:

Reference:

https://docs.aws.amazon.com/controltower/latest/userguide/sso.html

Question: 689 CertyIQ


A company wants to standardize its Amazon Elastic Block Store (Amazon EBS) volume encryption strategy. The
company also wants to minimize the cost and configuration effort required to operate the volume encryption
check.

Which solution will meet these requirements?

A.Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use Amazon
EventBridge to schedule an AWS Lambda function to run the API calls.
B.Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run the API calls
on an AWS Fargate task.
C.Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes.
Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt the untagged resources
manually.
D.Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the volume if it is
not encrypted.

Answer: D

Explanation:

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS
resources. It can check whether your resources comply with certain conditions (such as being encrypted), and
it can flag or take action on resources that do not comply.
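
A boto3 sketch of Option D using the AWS managed Config rule ENCRYPTED_VOLUMES, which flags attached EBS volumes
that are not encrypted (the rule name is hypothetical):

import boto3

config = boto3.client("config")
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",    # hypothetical name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "ENCRYPTED_VOLUMES",  # AWS managed rule
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
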

Question: 690 CertyIQ


A company regularly uploads GB-sized files to Amazon S3. After the company uploads the files, the company uses
a fleet of Amazon EC2 Spot Instances to transcode the file format. The company needs to scale throughput when
the company uploads data from the on-premises data center to Amazon S3 and when the company downloads
data from Amazon S3 to the EC2 instances.

Which solutions will meet these requirements? (Choose two.)

A.Use the S3 bucket access point instead of accessing the S3 bucket directly.
B.Upload the files into multiple S3 buckets.
C.Use S3 multipart uploads.
D.Fetch multiple byte-ranges of an object in parallel.
E.Add a random prefix to each object when uploading the files.

Answer: CD

Explanation:

C.Use S3 multipart uploads.

D.Fetch multiple byte-ranges of an object in parallel.
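
A sketch of both techniques with boto3 (bucket, key, and sizes are hypothetical). Multipart uploads split a large
file into parts that are uploaded in parallel, and byte-range fetches download slices of an object in parallel:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# C: multipart upload with parallel parts (handled automatically by upload_file)
cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # use multipart above 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # 64 MB parts
    max_concurrency=10,                    # parts uploaded in parallel
)
s3.upload_file("survey-data.bin", "example-media-bucket", "survey-data.bin", Config=cfg)

# D: fetch one byte range of the object; running several such calls in parallel
# workers, each with a different Range, scales download throughput
part = s3.get_object(
    Bucket="example-media-bucket",
    Key="survey-data.bin",
    Range="bytes=0-67108863",  # first 64 MB
)
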

Question: 691 CertyIQ


A solutions architect is designing a shared storage solution for a web application that is deployed across multiple
Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The
company plans to make frequent changes to the content. The solution must have strong consistency in returning
the new content as soon as the changes occur.

Which solutions meet these requirements? (Choose two.)

A.Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block
storage that is mounted to the individual EC2 instances.
B.Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual
EC2 instances.
C.Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the individual
EC2 instances.
D.Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling
group.
E.Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-
cache. Use Amazon CloudFront to deliver the content.

Answer: BE

Explanation:

B. Create an Amazon Elastic File System (Amazon EFS) file system and mount it on the individual EC2 instances.
EFS is shared across Availability Zones and provides strong read-after-write consistency, so every instance sees
new content as soon as it is written.

E. Create an Amazon S3 bucket to store the web content, set the Cache-Control metadata to no-cache, and deliver
the content with Amazon CloudFront. The no-cache directive makes CloudFront revalidate objects with the origin,
so users receive the new content as soon as changes occur.

Question: 692 CertyIQ


A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route
53 will be used to distribute traffic between these Regions.

Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?

A.Create an A record with a latency policy.


B.Create an A record with a geolocation policy.
C.Create a CNAME record with a failover policy.
D.Create a CNAME record with a geoproximity policy.

Answer: A

Explanation:

Create an A record with a latency policy. Latency-based routing returns the record for the Region that gives each
user the lowest network latency, which provides the most high-performing experience for a multi-Region
deployment. A geolocation policy routes by the user's location, which does not necessarily minimize latency.
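
A sketch of one latency-based alias record with boto3; one such record is created per Region, each with its own
SetIdentifier. The hosted zone ID, ALB DNS name, and the ALB's canonical hosted zone ID are hypothetical values
that would come from the actual resources:

import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",  # hypothetical hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": "eu-central-1",  # one record set per Region
            "Region": "eu-central-1",         # latency-based routing
            "AliasTarget": {
                "HostedZoneId": "Z0ALBZONEID",  # the ALB's canonical hosted zone ID
                "DNSName": "app-alb-123456.eu-central-1.elb.amazonaws.com",
                "EvaluateTargetHealth": True,
            },
        },
    }]},
)
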

Question: 693 CertyIQ


A company has a web application that includes an embedded NoSQL database. The application runs on Amazon
EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group
in a single Availability Zone.

A recent increase in traffic requires the application to be highly available and for the database to be eventually
consistent.

Which solution will meet these requirements with the LEAST operational overhead?

A.Replace the ALB with a Network Load Balancer. Maintain the embedded NoSQL database with its replication
service on the EC2 instances.
B.Replace the ALB with a Network Load Balancer. Migrate the embedded NoSQL database to Amazon
DynamoDB by using AWS Database Migration Service (AWS DMS).
C.Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Maintain the embedded
NoSQL database with its replication service on the EC2 instances.
D.Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the embedded
NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).

Answer: D

Explanation:

Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the embedded NoSQL
database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Spanning three Availability Zones
makes the web tier highly available, and DynamoDB is a fully managed, Multi-AZ NoSQL database that is eventually
consistent by default, which removes the overhead of running the embedded database and its replication on the
instances.

Question: 694 CertyIQ


A company is building a shopping application on AWS. The application offers a catalog that changes once each
month and needs to scale with traffic volume. The company wants the lowest possible latency from the
application. Data from each user's shopping cart needs to be highly available. User session data must be available
even if the user is disconnected and reconnects.

What should a solutions architect do to ensure that the shopping cart data is preserved at all times?

A.Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for access to
the catalog in Amazon Aurora.
B.Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart
data from the user's session.
C.Configure Amazon OpenSearch Service to cache catalog data from Amazon DynamoDB and shopping cart
data from the user's session.
D.Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog
and shopping cart. Configure automated snapshots.

Answer: B

Explanation:

Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the
user's session. ElastiCache for Redis delivers in-memory, sub-millisecond latency for the catalog, and with
replication and Multi-AZ it keeps session and cart data highly available independently of any single connection,
so the cart survives user disconnects and reconnects.
Question: 695 CertyIQ
A company is building a microservices-based application that will be deployed on Amazon Elastic Kubernetes
Service (Amazon EKS). The microservices will interact with each other. The company wants to ensure that the
application is observable to identify performance issues in the future.

Which solution will meet these requirements?

A.Configure the application to use Amazon ElastiCache to reduce the number of requests that are sent to the
microservices.
B.Configure Amazon CloudWatch Container Insights to collect metrics from the EKS clusters. Configure AWS
X-Ray to trace the requests between the microservices.
C.Configure AWS CloudTrail to review the API calls. Build an Amazon QuickSight dashboard to observe the
microservice interactions.
D.Use AWS Trusted Advisor to understand the performance of the application.

Answer: B

Explanation:

Option B: Amazon CloudWatch Container Insights collects and aggregates metrics, logs, and events from Amazon EKS
clusters and containers, providing monitoring and troubleshooting for containerized applications. AWS X-Ray then
traces requests as they flow between the microservices, which makes it possible to pinpoint where performance
issues occur.

Question: 696 CertyIQ


A company needs to provide customers with secure access to its data. The company processes customer data and
stores the results in an Amazon S3 bucket.

All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each
customer must be able to access only their data from their AWS account. Company employees must not be able to
access the data.

Which solution will meet these requirements?

A.Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In
the private certificate policy, deny access to the certificate for all principals except an IAM role that the
customer provides.
B.Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data
server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the
customer provides.
C.Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data
server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the
customer provides.
D.Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In
the public certificate policy, deny access to the certificate for all principals except an IAM role that the
customer provides.

Answer: C

Explanation:

Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data
server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the
customer provides.
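
A sketch of one per-customer key policy from Option C. KMS key policies are deny-by-default, so allowing
kms:Decrypt only to the customer-provided IAM role effectively denies decryption to everyone else, including
company employees; all account IDs and role names are hypothetical:

import boto3
import json

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # company keeps key administration but NOT the ability to decrypt
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                       "kms:Put*", "kms:Disable*", "kms:ScheduleKeyDeletion"],
            "Resource": "*",
        },
        {   # only the customer's role may use the key for decryption
            "Sid": "CustomerUseOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:role/customer-access-role"},
            "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}
kms.create_key(Description="customer-a data key", Policy=json.dumps(key_policy))
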

Question: 697 CertyIQ


A solutions architect creates a VPC that includes two public subnets and two private subnets. A corporate security
mandate requires the solutions architect to launch all Amazon EC2 instances in a private subnet. However, when
the solutions architect launches an EC2 instance that runs a web server on ports 80 and 443 in a private subnet, no
external internet traffic can connect to the server.

What should the solutions architect do to resolve this issue?

A.Attach the EC2 instance to an Auto Scaling group in a private subnet. Ensure that the DNS record for the
website resolves to the Auto Scaling group identifier.
B.Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2 instance to the
target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.
C.Launch a NAT gateway in a private subnet. Update the route table for the private subnets to add a default
route to the NAT gateway. Attach a public Elastic IP address to the NAT gateway.
D.Ensure that the security group that is attached to the EC2 instance allows HTTP traffic on port 80 and HTTPS
traffic on port 443. Ensure that the DNS record for the website resolves to the public IP address of the EC2
instance.

Answer: B

Explanation:

Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2 instance to the
target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB. The
ALB accepts internet traffic in the public subnets and forwards it to the web server in the private subnet, so
the instance stays private as the security mandate requires.

Question: 698 CertyIQ


A company is deploying a new application to Amazon Elastic Kubernetes Service (Amazon EKS) with an AWS
Fargate cluster. The application needs a storage solution for data persistence. The solution must be highly
available and fault tolerant. The solution also must be shared between multiple application containers.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where EKS worker
nodes are placed. Register the volumes in a StorageClass object on an EKS cluster. Use EBS Multi-Attach to
share the data between containers.
B.Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass
object on an EKS cluster. Use the same file system for all containers.
C.Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume in a StorageClass object on
an EKS cluster. Use the same volume for all containers.
D.Create Amazon Elastic File System (Amazon EFS) file systems in the same Availability Zones where EKS
worker nodes are placed. Register the file systems in a StorageClass object on an EKS cluster. Create an AWS
Lambda function to synchronize the data between file systems.

Answer: B

Explanation:

Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass object
on the EKS cluster. Use the same file system for all containers. EFS is highly available and fault tolerant
across Availability Zones, can be mounted by many containers concurrently, and is the shared persistent storage
option that works with AWS Fargate; EBS volumes are tied to a single Availability Zone and do not meet the
shared, fault-tolerant requirement.
Question: 699 CertyIQ
A company has an application that uses Docker containers in its local data center. The application runs on a
container host that stores persistent data in a volume on the host. The container instances use the stored
persistent data.

The company wants to move the application to a fully managed service because the company does not want to
manage any servers or storage infrastructure.

Which solution will meet these requirements?

A.Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an Amazon Elastic
Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the EBS volume as a persistent
volume mounted in the containers.
B.Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon
Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the
containers.
C.Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon S3
bucket. Map the S3 bucket as a persistent storage volume mounted in the containers.
D.Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create an Amazon
Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the
containers.

Answer: B

Explanation:

Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon Elastic File
System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the containers. Fargate
removes the servers from the company's responsibility, and EFS is fully managed persistent storage that Fargate
tasks can mount; the EC2 launch type (Option D) would still require managing instances, and Amazon S3 cannot be
mounted as a task volume (Option C).

Question: 700 CertyIQ


A gaming company wants to launch a new internet-facing application in multiple AWS Regions. The application
will use the TCP and UDP protocols for communication. The company needs to provide high availability and
minimum latency for global users.

Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)

A.Create internal Network Load Balancers in front of the application in each Region.
B.Create external Application Load Balancers in front of the application in each Region.
C.Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.
D.Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic.
E.Configure Amazon CloudFront to handle the traffic and route requests to the application in each Region

Answer: AC

Explanation:

A.Create internal Network Load Balancers in front of the application in each Region.

C.Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.

Network Load Balancers support the TCP and UDP protocols, and AWS Global Accelerator provides static anycast IP
addresses that route each user over the AWS global network to the closest healthy Regional endpoint, which
delivers the required high availability and minimum latency. CloudFront and Application Load Balancers do not
handle arbitrary UDP traffic.
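
A boto3 sketch of the accelerator from Option C (the Global Accelerator API is homed in us-west-2; the port and
NLB ARN are hypothetical). One endpoint group is created per Region, and TCP would be a second listener:

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4")
listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 7000, "ToPort": 7000}],  # hypothetical game port
)
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-central-1",
    EndpointConfigurations=[{
        # hypothetical NLB in that Region
        "EndpointId": "arn:aws:elasticloadbalancing:eu-central-1:111122223333:"
                      "loadbalancer/net/game-nlb/0123456789abcdef",
    }],
)
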
Question: 701 CertyIQ
A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer
(ALB). The application's users have reported sporadic performance, which appears to be related to DDoS attacks
originating from random IP addresses. The city needs a solution that requires minimal configuration changes and
provides an audit trail for the DDoS sources.

Which solution meets these requirements?

A.Enable an AWS WAF web ACL on the ALB, and configure rules to block traffic from unknown sources.
B.Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate mitigating
controls into the service.
C.Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigating
controls into the service.
D.Create an Amazon CloudFront distribution for the application, and set the ALB as the origin. Enable an AWS
WAF web ACL on the distribution, and configure rules to block traffic from unknown sources

Answer: C

Explanation:

Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigating controls into
the service. Shield Advanced protects the ALB with minimal configuration changes and provides detailed attack
diagnostics and reporting that give the city an audit trail of the DDoS sources.

Question: 702 CertyIQ


A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized
devices. The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and
gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-
throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the
devices back to AWS.

Which solution will meet these requirements?

A.Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file
gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
B.Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre file
system, and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster
instances.
C.Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data
into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from
the HPC cluster instances.
D.Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system.
Access the FSx for Lustre file system from the HPC cluster instances.

Answer: B

Explanation:

Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre file system,
and integrate it with the S3 bucket. Data on returned Snowball Edge devices is imported into Amazon S3, not
directly into a file system, and FSx for Lustre linked to the S3 bucket presents the data with the consistent
sub-millisecond latency and high throughput that the HPC cluster requires.

Question: 703 CertyIQ


A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data
to Amazon S3.

Which solution meets these requirements and is MOST cost-effective?

A.Set up AWS Glue to copy the data from the on-premises servers to Amazon S3.
B.Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.
C.Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3.
D.Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the
data to Amazon S3.

Answer: B

Explanation:

Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3. DataSync reads from the
NFS servers and transfers directly to Amazon S3 over the internet, with pay-per-use pricing that suits periodic
transfers of small amounts of data; a Direct Connect link (Option D) would add a large fixed cost.

Question: 704 CertyIQ


An online video game company must maintain ultra-low latency for its game servers. The game servers run on
Amazon EC2 instances. The company needs a solution that can handle millions of UDP internet traffic requests
each second.

Which solution will meet these requirements MOST cost-effectively?

A.Configure an Application Load Balancer with the required protocol and ports for the internet traffic. Specify
the EC2 instances as the targets.
B.Configure a Gateway Load Balancer for the internet traffic. Specify the EC2 instances as the targets.
C.Configure a Network Load Balancer with the required protocol and ports for the internet traffic. Specify the
EC2 instances as the targets.
D.Launch an identical set of game servers on EC2 instances in separate AWS Regions. Route internet traffic to
both sets of EC2 instances.

Answer: C

Explanation:

Configure a Network Load Balancer with the required protocol and ports for the internet traffic. Specify the
EC2 instances as the targets.

Reference:

https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html

Question: 705 CertyIQ


A company runs a three-tier application in a VPC. The database tier uses an Amazon RDS for MySQL DB instance.

The company plans to migrate the RDS for MySQL DB instance to an Amazon Aurora PostgreSQL DB cluster. The
company needs a solution that replicates the data changes that happen during the migration to the new database.

Which combination of steps will meet these requirements? (Choose two.)

A.Use AWS Database Migration Service (AWS DMS) Schema Conversion to transform the database objects.
B.Use AWS Database Migration Service (AWS DMS) Schema Conversion to create an Aurora PostgreSQL read
replica on the RDS for MySQL DB instance.
C.Configure an Aurora MySQL read replica for the RDS for MySQL DB instance.
D.Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the
data.
E.Promote the Aurora PostgreSQL read replica to a standalone Aurora PostgreSQL DB cluster when the replica
lag is zero.

Answer: AD

Explanation:

A. Use AWS Database Migration Service (AWS DMS) Schema Conversion to transform the database objects. Schema
Conversion converts the MySQL schema and objects to their PostgreSQL equivalents.

D. Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the data. A
full-load-and-CDC task copies the existing data and then replicates the changes that occur during the migration.
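
A boto3 sketch of the full-load-plus-CDC task from Option D; all ARNs are hypothetical, and the table mapping
simply includes every table:

import boto3
import json

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:eu-central-1:111122223333:endpoint:SRCMYSQL",
    TargetEndpointArn="arn:aws:dms:eu-central-1:111122223333:endpoint:TGTAURORAPG",
    ReplicationInstanceArn="arn:aws:dms:eu-central-1:111122223333:rep:REPLINSTANCE",
    MigrationType="full-load-and-cdc",  # initial copy plus ongoing change capture
    TableMappings=json.dumps(table_mappings),
)
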

Question: 706 CertyIQ


A company hosts a database that runs on an Amazon RDS instance that is deployed to multiple Availability Zones.
The company periodically runs a script against the database to report new entries that are added to the database.
The script that runs against the database negatively affects the performance of a critical application. The company
needs to improve application performance with minimal costs.

Which solution will meet these requirements with the LEAST operational overhead?

A.Add functionality to the script to identify the instance that has the fewest active connections. Configure the
script to read from that instance to report the total new entries.
B.Create a read replica of the database. Configure the script to query only the read replica to report the total
new entries.
C.Instruct the development team to manually export the new entries for the day in the database at the end of
each day.
D.Use Amazon ElastiCache to cache the common queries that the script runs against the database.

Answer: B

Explanation:

Create a read replica of the database. Configure the script to query only the read replica to report the total
new entries. The replica isolates the reporting workload from the primary instance, so the script no longer
degrades the critical application's performance, and a single read replica adds this isolation at minimal cost
and operational overhead.

Question: 707 CertyIQ


A company is using an Application Load Balancer (ALB) to present its application to the internet. The company
finds abnormal traffic access patterns across the application. A solutions architect needs to improve visibility into
the infrastructure to help the company understand these abnormalities better.

What is the MOST operationally efficient solution that meets these requirements?

A.Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information.
B.Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
C.Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the
relevant information.
D.Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic access log
information.
Answer: B

Explanation:

Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs. ALB access logs
record details of every request (client IP, paths, latencies, response codes), and Athena queries the logs in
place with standard SQL and no infrastructure to manage, which is the most operationally efficient way to
investigate the abnormal traffic patterns.

Question: 708 CertyIQ


A company wants to use NAT gateways in its AWS environment. The company's Amazon EC2 instances in private
subnets must be able to connect to the public internet through the NAT gateways.

Which solution will meet these requirements?

A.Create public NAT gateways in the same private subnets as the EC2 instances.
B.Create private NAT gateways in the same private subnets as the EC2 instances.
C.Create public NAT gateways in public subnets in the same VPCs as the EC2 instances.
D.Create private NAT gateways in public subnets in the same VPCs as the EC2 instances.

Answer: C

Explanation:

A public NAT gateway must be created in a public subnet so that it can reach the internet through the VPC's
internet gateway; the private subnets' route tables then send internet-bound traffic to the NAT gateway.

Option D is incorrect because a private NAT gateway cannot provide internet access. Instances in private subnets
can use a private NAT gateway only to reach other VPCs or on-premises networks, you cannot associate an Elastic
IP address with a private NAT gateway, and if you route traffic from a private NAT gateway to an internet
gateway, the internet gateway drops the traffic.

https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
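
A boto3 sketch of Option C (subnet and route table IDs are hypothetical): allocate an Elastic IP, create the
public NAT gateway in a public subnet, and point the private route table's default route at it:

import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public123",    # hypothetical PUBLIC subnet
    AllocationId=eip["AllocationId"],
    ConnectivityType="public",
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before routing traffic to it
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Default route for the PRIVATE subnets goes to the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0private123",  # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
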

Question: 709 CertyIQ


A company has an organization in AWS Organizations. The company runs Amazon EC2 instances across four AWS
accounts in the root organizational unit (OU). There are three nonproduction accounts and one production account.
The company wants to prohibit users from launching EC2 instances of a certain size in the nonproduction accounts.
The company has created a service control policy (SCP) to deny access to launch instances that use the prohibited
types.

Which solutions to deploy the SCP will meet these requirements? (Choose two.)

A.Attach the SCP to the root OU for the organization.


B.Attach the SCP to the three nonproduction Organizations member accounts.
C.Attach the SCP to the Organizations management account.
D.Create an OU for the production account. Attach the SCP to the OU. Move the production member account
into the new OU.
E.Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member accounts
into the new OU.

Answer: BE

Explanation:
B. Attach the SCP to the three nonproduction Organizations member accounts.

E. Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member accounts into
the new OU.

Either approach scopes the restriction to only the nonproduction accounts. Attaching the SCP to the root OU
(Option A) would also restrict the production account, and SCPs have no effect on the management account
(Option C).

Question: 710 CertyIQ


A company’s website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to
security concerns, the company requires a private and secure connection between its EC2 resources and Amazon
S3.

Which solution meets these requirements?

A.Set up S3 bucket policies to allow access from a VPC endpoint.


B.Set up an IAM policy to grant read-write access to the S3 bucket.
C.Set up a NAT gateway to access resources outside the private subnet.
D.Set up an access key ID and a secret access key to access the S3 bucket.

Answer: A

Explanation:

Set up S3 bucket policies to allow access from a VPC endpoint. A gateway VPC endpoint for Amazon S3 keeps the
traffic between the EC2 instances and Amazon S3 on the AWS network rather than the public internet, and the
bucket policy can restrict access to requests that arrive through that endpoint.

Question: 711 CertyIQ


An ecommerce company runs its application on AWS. The application uses an Amazon Aurora PostgreSQL cluster
in Multi-AZ mode for the underlying database. During a recent promotional campaign, the application experienced
heavy read load and write load. Users experienced timeout issues when they attempted to access the application.

A solutions architect needs to make the application architecture more scalable and highly available.

Which solution will meet these requirements with the LEAST downtime?

A.Create an Amazon EventBridge rule that has the Aurora cluster as a source. Create an AWS Lambda function
to log the state change events of the Aurora cluster. Add the Lambda function as a target for the EventBridge
rule. Add additional reader nodes to fail over to.
B.Modify the Aurora cluster and activate the zero-downtime restart (ZDR) feature. Use Database Activity
Streams on the cluster to track the cluster status.
C.Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the
Aurora cluster.
D.Create an Amazon ElastiCache for Redis cache. Replicate data from the Aurora cluster to Redis by using AWS
Database Migration Service (AWS DMS) with a write-around approach.

Answer: C

Explanation:

Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the Aurora
cluster. Extra readers absorb the heavy read load, and RDS Proxy pools connections and routes around failovers,
which improves scalability and availability; both changes apply to the existing cluster with the least downtime.
Question: 712 CertyIQ
A company is designing a web application on AWS. The application will use a VPN connection between the
company’s existing data centers and the company's VPCs.

The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to
communicate with the on-premises services from a VPC.

Which solution will meet these requirements in the MOST secure manner?

A.Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with the
VPC.
B.Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with the
VPC.
C.Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
D.Create a Route 53 public hosted zone. Create a record for each service to allow service communication

Answer: A

Explanation:

Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with the
VPC.
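
A boto3 sketch of Option A; the subnet, security group, and VPC IDs, the on-premises domain, and the DNS server
IP are all hypothetical:

import boto3

r53r = boto3.client("route53resolver")

endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-endpoint-1",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-0aaa"}, {"SubnetId": "subnet-0bbb"}],
)

# Forward queries for the private on-premises zone to the data center DNS servers
rule = r53r.create_resolver_rule(
    CreatorRequestId="corp-forward-rule-1",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
    TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],
)

r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
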

Question: 713 CertyIQ


A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple
countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less
than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata to
determine which photos to display to each user.

Which solution provides the appropriate user access MOST cost-effectively?

A.Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed
items.
B.Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3
location in DynamoDB.
C.Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos
older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags
to keep track of metadata.
D.Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older
than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in
Amazon OpenSearch Service.

Answer: B

Explanation:

Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location
in DynamoDB. Intelligent-Tiering automatically moves each photo between access tiers based on its actual viewing
pattern, which fits the mix of long-lived popular photos and short-lived ones, and DynamoDB serves the metadata
lookups; the 20 MB photos themselves exceed DynamoDB's 400 KB item size limit, so Option A is not viable.

Question: 714 CertyIQ


A company runs a highly available web application on Amazon EC2 instances behind an Application Load Balancer.
The company uses Amazon CloudWatch metrics.
As the traffic to the web application increases, some EC2 instances become overloaded with many outstanding
requests. The CloudWatch metrics show that the number of requests processed and the time to receive the
responses from some EC2 instances are both higher compared to other EC2 instances. The company does not want
new requests to be forwarded to the EC2 instances that are already overloaded.

Which solution will meet these requirements?

A.Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount
CloudWatch metrics.
B.Use the least outstanding requests algorithm based on the RequestCountPerTarget and
ActiveConnectionCount CloudWatch metrics.
C.Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch
metrics.
D.Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime
CloudWatch metrics.

Answer: B

Explanation:

Use the least outstanding requests algorithm based on the RequestCountPerTarget and ActiveConnectionCount
CloudWatch metrics. The least outstanding requests algorithm sends each new request to the target with the fewest
in-flight requests, so the overloaded instances stop receiving new traffic; round robin would continue to send
them an equal share.

Question: 715 CertyIQ


A company uses Amazon EC2, AWS Fargate, and AWS Lambda to run multiple workloads in the company's AWS
account. The company wants to fully make use of its Compute Savings Plans. The company wants to receive
notification when coverage of the Compute Savings Plans drops.

Which solution will meet these requirements with the MOST operational efficiency?

A.Create a daily budget for the Savings Plans by using AWS Budgets. Configure the budget with a coverage
threshold to send notifications to the appropriate email message recipients.
B.Create a Lambda function that runs a coverage report against the Savings Plans. Use Amazon Simple Email
Service (Amazon SES) to email the report to the appropriate email message recipients.
C.Create an AWS Budgets report for the Savings Plans budget. Set the frequency to daily.
D.Create a Savings Plans alert subscription. Enable all notification options. Enter an email address to receive
notifications.

Answer: A

Explanation:

Create a daily budget for the Savings Plans by using AWS Budgets. Configure the budget with a coverage
threshold to send notifications to the appropriate email message recipients.

Reference:

https://docs.aws.amazon.com/savingsplans/latest/userguide/sp-usingBudgets.html
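
A rough boto3 sketch of the coverage budget from Option A; the account ID, threshold, and email address are
hypothetical, and the exact budget JSON shape should be checked against the referenced documentation:

import boto3

budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="111122223333",  # hypothetical management account
    Budget={
        "BudgetName": "compute-sp-coverage",
        "BudgetType": "SAVINGS_PLANS_COVERAGE",
        "TimeUnit": "DAILY",
        "BudgetLimit": {"Amount": "90", "Unit": "PERCENTAGE"},  # target coverage
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "LESS_THAN",  # alert when coverage drops below 90%
            "Threshold": 90,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
    }],
)
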

Question: 716 CertyIQ


A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of
Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private subnets
across three Availability Zones.

A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The
data in transit must also be encrypted.

Which solution will meet these requirements with the MOST operational efficiency?

A.Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK
cluster security settings to enable mutual TLS authentication.
B.Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the MSK
cluster security settings to enable mutual TLS authentication.
C.Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security group
inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
D.Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS
communication over the internet.

Answer: A

Explanation:

Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster
security settings to enable mutual TLS authentication. Reusing the existing VPC avoids the overhead of building
and operating a second VPC (Option B), and mutual TLS both authenticates clients and keeps the data encrypted in
transit once the cluster is reachable from the internet.

Question: 717 CertyIQ


A company wants to migrate an on-premises legacy application to AWS. The application ingests customer order
files from an on-premises enterprise resource planning (ERP) system. The application then uploads the files to an
SFTP server. The application uses a scheduled job that checks for order files every hour.

The company already has an AWS account that has connectivity to the on-premises network. The new application
on AWS must support integration with the existing ERP system. The new application must be secure and resilient
and must use the SFTP protocol to process orders from the ERP system immediately.

Which solution will meet these requirements?

A.Create an AWS Transfer Family SFTP internet-facing server in two Availability Zones. Use Amazon S3
storage. Create an AWS Lambda function to process order files. Use S3 Event Notifications to send
s3:ObjectCreated:* events to the Lambda function.
B.Create an AWS Transfer Family SFTP internet-facing server in one Availability Zone. Use Amazon Elastic File
System (Amazon EFS) storage. Create an AWS Lambda function to process order files. Use a Transfer Family
managed workflow to invoke the Lambda function.
C.Create an AWS Transfer Family SFTP internal server in two Availability Zones. Use Amazon Elastic File
System (Amazon EFS) storage. Create an AWS Step Functions state machine to process order files. Use
Amazon EventBridge Scheduler to invoke the state machine to periodically check Amazon EFS for order files.
D.Create an AWS Transfer Family SFTP internal server in two Availability Zones. Use Amazon S3 storage.
Create an AWS Lambda function to process order files. Use a Transfer Family managed workflow to invoke the
Lambda function.

Answer: D

Explanation:

An internal Transfer Family SFTP server is more secure because the ERP system reaches it over the existing
on-premises connectivity rather than the public internet. Deploying the endpoint in two Availability Zones makes
it resilient, Amazon S3 provides durable storage, and a Transfer Family managed workflow invokes the Lambda
function to process each order file as soon as it is uploaded, rather than on an hourly schedule.
Question: 718 CertyIQ
A company’s applications use Apache Hadoop and Apache Spark to process data on premises. The existing
infrastructure is not scalable and is complex to manage.

A solutions architect must design a scalable solution that reduces operational complexity. The solution must keep
the data processing on premises.

Which solution will meet these requirements?

A.Use AWS Site-to-Site VPN to access the on-premises Hadoop Distributed File System (HDFS) data and
application. Use an Amazon EMR cluster to process the data.
B.Use AWS DataSync to connect to the on-premises Hadoop Distributed File System (HDFS) cluster. Create an
Amazon EMR cluster to process the data.
C.Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters on AWS
Outposts. Use the EMR clusters to process the data.
D.Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Create an Amazon EMR cluster to
process the data.

Answer: C

Explanation:

Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters on AWS Outposts.
Use the EMR clusters to process the data. AWS Outposts brings managed EMR clusters into the company's own data
center, so the data processing stays on premises while AWS manages the undifferentiated infrastructure, which
provides the required scalability with less operational complexity.

Question: 719 CertyIQ


A company is migrating a large amount of data from on-premises storage to AWS. Windows, Mac, and Linux based
Amazon EC2 instances in the same AWS Region will access the data by using SMB and NFS storage protocols. The
company will access a portion of the data routinely. The company will access the remaining data infrequently.

The company needs to design a solution to host the data.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create an Amazon Elastic File System (Amazon EFS) volume that uses EFS Intelligent-Tiering. Use AWS
DataSync to migrate the data to the EFS volume.
B.Create an Amazon FSx for ONTAP instance. Create an FSx for ONTAP file system with a root volume that
uses the auto tiering policy. Migrate the data to the FSx for ONTAP volume.
C.Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by using an
AWS Storage Gateway Amazon S3 File Gateway.
D.Create an Amazon FSx for OpenZFS file system. Migrate the data to the new volume.

Answer: B

Explanation:

Amazon FSx for NetApp ONTAP is a fully managed file service that natively serves both SMB and NFS clients, so the Windows, Mac, and Linux EC2 instances need no protocol gateway. Its auto tiering policy automatically moves infrequently accessed data to lower-cost capacity pool storage, which matches the mixed access pattern with the least operational overhead. An S3 File Gateway (option C) would require running and maintaining a gateway appliance in front of Amazon S3.

Question: 720 CertyIQ


A manufacturing company runs its report generation application on AWS. The application generates each report in
about 20 minutes. The application is built as a monolith that runs on a single Amazon EC2 instance. The application
requires frequent updates to its tightly coupled modules. The application becomes complex to maintain as the
company adds new features.

Each time the company patches a software module, the application experiences downtime. Report generation
must restart from the beginning after any interruptions. The company wants to redesign the application so that the
application can be flexible, scalable, and gradually improved. The company wants to minimize application
downtime.

Which solution will meet these requirements?

A.Run the application on AWS Lambda as a single function with maximum provisioned concurrency.
B.Run the application on Amazon EC2 Spot Instances as microservices with a Spot Fleet default allocation
strategy.
C.Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto
scaling.
D.Run the application on AWS Elastic Beanstalk as a single application environment with an all-at-once
deployment strategy.

Answer: C

Explanation:

Decomposing the monolith into microservices on Amazon ECS lets each module be updated and deployed independently, eliminating the whole-application downtime that patching currently causes, and service auto scaling handles load as the company adds features. Rolling deployments of ECS services also allow gradual improvement without restarting in-progress report generation.
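
A minimal sketch of enabling target tracking auto scaling for one ECS service (the cluster and service names are hypothetical):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the ECS service's desired count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/report-cluster/report-service",  # hypothetical cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Scale out or in to keep average CPU utilization near 60%.
autoscaling.put_scaling_policy(
    PolicyName="report-service-cpu",
    ServiceNamespace="ecs",
    ResourceId="service/report-cluster/report-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```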

Question: 721 CertyIQ


A company wants to rearchitect a large-scale web application to a serverless microservices architecture. The
application uses Amazon EC2 instances and is written in Python.

The company selected one component of the web application to test as a microservice. The component supports
hundreds of requests each second. The company wants to create and test the microservice on an AWS solution
that supports Python. The solution must also scale automatically and require minimal infrastructure and minimal
operational support.

Which solution will meet these requirements?

A.Use a Spot Fleet with auto scaling of EC2 instances that run the most recent Amazon Linux operating system.
B.Use an AWS Elastic Beanstalk web server environment that has high availability configured.
C.Use Amazon Elastic Kubernetes Service (Amazon EKS). Launch Auto Scaling groups of self-managed EC2
instances.
D.Use an AWS Lambda function that runs custom developed code.

Answer: D

Explanation:

AWS Lambda natively supports Python, scales automatically with request volume, and requires no servers or clusters to manage, which matches the requirements for minimal infrastructure and minimal operational support. Hundreds of requests per second is well within Lambda's scaling capability, while Amazon EKS with self-managed EC2 nodes (option C) is the opposite of minimal infrastructure.

Question: 722 CertyIQ


A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS
account has 30 different VPCs in the same AWS Region. The VPCs use private virtual interfaces (VIFs). Each VPC
has a CIDR block that does not overlap with other networks under the company's control.
The company wants to centrally manage the networking architecture while still allowing each VPC to
communicate with all other VPCs and on-premises networks.

Which solution will meet these requirements with the LEAST amount of operational overhead?

A.Create a transit gateway, and associate the Direct Connect connection with a new transit VIF. Turn on the
transit gateway's route propagation feature.
B.Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each VPC by
creating new virtual private gateways.
C.Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering connection
between all other VPCs in the Region. Update the route tables.
D.Create AWS Site-to-Site VPN connections from on premises to each VPC. Ensure that both VPN tunnels are
UP for each connection. Turn on the route propagation feature.

Answer: A

Explanation:

A transit gateway acts as a central hub for all 30 VPCs and, through a transit VIF, for the Direct Connect connection as well. Turning on route propagation keeps the route tables current automatically, so full VPC-to-VPC and VPC-to-on-premises connectivity is centrally managed with far less overhead than per-VPC virtual interfaces (option B), a transit VPC peering mesh (option C), or dozens of VPN connections (option D).

Question: 723 CertyIQ


A company has applications that run on Amazon EC2 instances. The EC2 instances connect to Amazon RDS
databases by using an IAM role that has associated policies. The company wants to use AWS Systems Manager to
patch the EC2 instances without disrupting the running applications.

Which solution will meet these requirements?

A.Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the
new IAM role to the EC2 instances and the existing IAM role.
B.Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure Systems
Manager to use the IAM user to manage the EC2 instances.
C.Enable Default Host Configuration Management in Systems Manager to manage the EC2 instances.
D.Remove the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore policy to
the existing IAM role.

Answer: C

Explanation:

Default Host Management Configuration lets AWS Systems Manager manage the EC2 instances without modifying their existing instance profile, so the applications' IAM role for Amazon RDS access is untouched and nothing is disrupted. An EC2 instance can have only one instance profile, so attaching an additional role (option A) is not possible, and option D would break the applications by removing the policies they rely on.

Question: 724 CertyIQ


A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS) and the
Kubernetes Horizontal Pod Autoscaler. The workload is not consistent throughout the day. A solutions architect
notices that the number of nodes does not automatically scale out when the existing nodes have reached
maximum capacity in the cluster, which causes performance issues.

Which solution will resolve this issue with the LEAST administrative overhead?

A.Scale out the nodes by tracking the memory usage.


B.Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
C.Use an AWS Lambda function to resize the EKS cluster automatically.
D.Use an Amazon EC2 Auto Scaling group to distribute the workload.

Answer: B

Explanation:

The Kubernetes Cluster Autoscaler adds nodes automatically when pods cannot be scheduled because the existing nodes are full, and removes them when they are underutilized. The Horizontal Pod Autoscaler scales pods, not nodes, so pairing it with the Cluster Autoscaler resolves the scale-out issue with the least administrative overhead.

Question: 725 CertyIQ


A company maintains about 300 TB in Amazon S3 Standard storage month after month. The S3 objects are each
typically around 50 GB in size and are frequently replaced with multipart uploads by their global application. The
number and size of S3 objects remain constant, but the company's S3 storage costs are increasing each month.

How should a solutions architect reduce costs in this situation?

A.Switch from multipart uploads to Amazon S3 Transfer Acceleration.


B.Enable an S3 Lifecycle policy that deletes incomplete multipart uploads.
C.Configure S3 inventory to prevent objects from being archived too quickly.
D.Configure Amazon CloudFront to reduce the number of objects stored in Amazon S3.

Answer: B

Explanation:

Frequent multipart uploads that are interrupted or replaced leave incomplete multipart upload parts behind, and those parts are billed even though they never appear in normal object listings. A lifecycle rule that aborts incomplete multipart uploads after a set number of days removes this hidden storage, which explains rising costs while the object count and sizes stay constant.
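
A minimal sketch of such a rule with boto3 (the bucket name is hypothetical, and seven days is an arbitrary example):

```python
import boto3

s3 = boto3.client("s3")

# Abort multipart uploads that have not completed within 7 days,
# deleting their accumulated parts and stopping the storage charges.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-video-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {},  # apply to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```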

Question: 726 CertyIQ


A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of
players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of
locations.

The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During
peak usage periods, the database is unable to maintain the performance that is needed for reading and writing
updates. The game's user base is increasing rapidly.

What should a solutions architect do to improve the performance of the data tier?

A.Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B.Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards.
C.Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use
DAX.
D.Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use
Redis.

Answer: D

Explanation:

Amazon ElastiCache for Redis provides in-memory, sub-millisecond reads and writes and includes built-in geospatial data types, which suit live latitude/longitude tracking far better than relational storage. Placing the hot location data in Redis in front of the PostgreSQL instance offloads the rapid updates and reads that overwhelm the database at peak times.
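
As an illustrative sketch using the redis-py client (the endpoint, key, and member names are hypothetical), Redis geospatial commands can store and query player locations directly:

```python
import redis

r = redis.Redis(host="my-cluster.example.cache.amazonaws.com", port=6379)  # hypothetical endpoint

# Record (or update) a player's position: longitude, latitude, member name.
r.geoadd("player:locations", (-122.4194, 37.7749, "player-42"))

# Find all players within 5 km of a point, e.g. for a local view.
nearby = r.geosearch(
    "player:locations",
    longitude=-122.4194,
    latitude=37.7749,
    radius=5,
    unit="km",
)
print(nearby)
```
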
Question: 727 CertyIQ
A company stores critical data in Amazon DynamoDB tables in the company's AWS account. An IT administrator
accidentally deleted a DynamoDB table. The deletion caused a significant loss of data and disrupted the
company's operations. The company wants to prevent this type of disruption in the future.

Which solution will meet this requirement with the LEAST operational overhead?

A.Configure a trail in AWS CloudTrail. Create an Amazon EventBridge rule for delete actions. Create an AWS
Lambda function to automatically restore deleted DynamoDB tables.
B.Create a backup and restore plan for the DynamoDB tables. Recover the DynamoDB tables manually.
C.Configure deletion protection on the DynamoDB tables.
D.Enable point-in-time recovery on the DynamoDB tables.

Answer: C

Explanation:

Enabling deletion protection on the DynamoDB tables prevents any accidental DeleteTable call from succeeding, which addresses the root cause with a single table setting and no ongoing maintenance. Backups and point-in-time recovery (options B and D) only help after data has already been lost.
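
A minimal sketch (the table name is hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Block DeleteTable calls until protection is explicitly turned off.
dynamodb.update_table(
    TableName="critical-orders",  # hypothetical table name
    DeletionProtectionEnabled=True,
)
```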

Question: 728 CertyIQ


A company has an on-premises data center that is running out of storage capacity. The company wants to migrate
its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate
retrieval of data at no additional cost.

How can these requirements be met?

A.Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the
workload.
B.Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while
retaining copies of frequently accessed data subsets locally.
C.Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to
asynchronously back up point-in-time snapshots of the data to Amazon S3.
D.Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway
to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to
Amazon S3.

Answer: B

Explanation:

Cached volumes store the primary data in Amazon S3, which relieves the data center that is running out of capacity, while keeping frequently accessed data cached locally for immediate retrieval at no additional cost. Stored volumes (options C and D) keep the full dataset on premises, so they do not solve the capacity problem, and S3 Glacier expedited retrievals (option A) incur retrieval charges.

Question: 729 CertyIQ


A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances
run in an Auto Scaling group for the application tier.

The company needs to make an automated scaling plan that will analyze each resource's daily and weekly
historical workload trends. The configuration must scale resources appropriately according to both the forecast
and live changes in utilization.

Which scaling strategy should a solutions architect recommend to meet these requirements?

A.Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
B.Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking
C.Create an automated scheduled scaling action based on the traffic patterns of the web application.
D.Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.

Answer: B

Explanation:

Predictive scaling uses machine learning on daily and weekly historical workload patterns to provision capacity ahead of forecasted demand, and pairing it with target tracking dynamic scaling handles live deviations from the forecast. Together they satisfy both the forecast-based and the real-time parts of the requirement.
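
A minimal sketch of the two policies on one Auto Scaling group (the group name is hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Forecast-based scaling from daily/weekly CPU history.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",  # hypothetical group name
    PolicyName="forecast-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking reacts to live utilization between forecast updates.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-tier-asg",
    PolicyName="live-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
    },
)
```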

Question: 730 CertyIQ


A package delivery company has an application that uses Amazon EC2 instances and an Amazon Aurora MySQL
DB cluster. As the application becomes more popular, EC2 instance usage increases only slightly. DB cluster usage
increases at a much faster rate.

The company adds a read replica, which reduces the DB cluster usage for a short period of time. However, the load
continues to increase. The operations that cause the increase in DB cluster usage are all repeated read statements
that are related to delivery details. The company needs to alleviate the effect of repeated reads on the DB cluster.

Which solution will meet these requirements MOST cost-effectively?

A.Implement an Amazon ElastiCache for Redis cluster between the application and the DB cluster.
B.Add an additional read replica to the DB cluster.
C.Configure Aurora Auto Scaling for the Aurora read replicas.
D.Modify the DB cluster to have multiple writer instances.

Answer: A

Explanation:

The load comes from repeated read statements, which is exactly the access pattern a cache absorbs. An ElastiCache for Redis layer serves the repeated delivery-detail reads from memory, so DB cluster usage stops growing with read traffic; adding more read replicas (options B and C) keeps paying for database capacity to answer the same queries, which is less cost-effective.

Question: 731 CertyIQ


A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers
that many requests to the table are not returning the latest data. The company's users have not reported any other
issues with database performance. Latency is in an acceptable range.

Which design change should the solutions architect recommend?

A.Add read replicas to the table.


B.Use a global secondary index (GSI).
C.Request strongly consistent reads for the table.
D.Request eventually consistent reads for the table.

Answer: C

Explanation:

DynamoDB reads are eventually consistent by default, so a read shortly after a write can return stale data, which matches the reported symptom. Requesting strongly consistent reads makes DynamoDB return the most recent committed value, and the acceptable latency headroom means the slightly higher read cost and latency are tolerable.
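
A minimal sketch (the table name and key are hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# ConsistentRead=True returns the latest committed item state
# instead of the default eventually consistent read.
response = dynamodb.get_item(
    TableName="app-data",  # hypothetical table name
    Key={"pk": {"S": "user#123"}},  # hypothetical key
    ConsistentRead=True,
)
print(response.get("Item"))
```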

Question: 732 CertyIQ


A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company
used the principle of least privilege to configure the database access credentials. The company's security team
wants to protect the application and the database from SQL injection and other web-based attacks.

Which solution will meet these requirements with the LEAST operational overhead?

A.Use security groups and network ACLs to secure the database and application servers.
B.Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.
C.Use AWS Network Firewall to protect the application and the database.
D.Use different database accounts in the application code for different functions. Avoid granting excessive
privileges to the database users.

Answer: B

Explanation:

AWS WAF sits in front of the web application and blocks SQL injection and other common web exploits with managed rule groups, requiring no application changes. RDS parameter groups then harden the database-side settings, so the combination protects both layers with the least operational overhead.

Question: 733 CertyIQ


An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations.
The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to
prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.

Which solution will meet these requirements in the MOST operationally efficient way?

A.Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
B.Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the
organization.
C.Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central
Amazon S3 bucket.
D.Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.

Answer: B

Explanation:

GuardDuty RDS Protection analyzes login activity to Aurora databases and flags anomalous failed and incomplete login attempts without any agents, log pipelines, or custom analysis. Enabling the feature for the member accounts from the organization's GuardDuty administrator account makes this the most operationally efficient choice.
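
A minimal sketch of enabling the feature on an existing detector (the detector ID is hypothetical):

```python
import boto3

guardduty = boto3.client("guardduty")

# Turn on RDS login activity monitoring (RDS Protection) for this detector.
guardduty.update_detector(
    DetectorId="12abc34d567e8fa901bc2d34e56789f0",  # hypothetical detector ID
    Features=[
        {"Name": "RDS_LOGIN_EVENTS", "Status": "ENABLED"},
    ],
)
```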

Question: 734 CertyIQ


A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1
Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection
between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company and
the corporation do not overlap. The company requires connectivity between two Regions and the data centers. The
company needs a solution that is scalable while reducing operational overhead.

What should a solutions architect do to meet these requirements?

A.Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu-west-2.
B.Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu-west-2.
C.Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN CloudHub to
send and receive data between the data centers and each VPC.
D.Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual
private gateways of the VPCs in each Region to the Direct Connect gateway.

Answer: D

Explanation:

A Direct Connect gateway is a global resource, so each Direct Connect connection can reach virtual private gateways in any Region. Associating the VPCs' virtual private gateways in both us-east-1 and eu-west-2 with the gateway interconnects both data centers and both Regions through one managed construct, which scales with less operational overhead than per-pair VPC peering or a self-managed VPN mesh.

Question: 735 CertyIQ


A company is developing a mobile game that streams score updates to a backend processor and then posts results
on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the
mobile game updates in order of receipt, and store the processed updates in a highly available database. The
company also wants to minimize the management overhead required to maintain the solution.

What should the solutions architect do to meet these requirements?

A.Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS
Lambda. Store the processed updates in Amazon DynamoDB.
B.Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2
instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C.Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS
Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database
running on Amazon EC2.
D.Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2
instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an
Amazon RDS Multi-AZ DB instance.

Answer: A

Explanation:

Kinesis Data Streams preserves record order within a shard and absorbs large traffic spikes, Lambda consumes the stream with no servers to manage, and DynamoDB provides a highly available store. That covers ordering, spikes, and minimal management end to end; the EC2-based options add fleet management, and a standard SNS topic does not guarantee ordered delivery.

Question: 736 CertyIQ


A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are
stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution
that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational
overhead.

Which solution meets these requirements and is MOST cost-effective?

A.Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the
centralized S3 bucket.
B.Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2.
Use this S3 bucket for log analysis.
C.Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to
another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D.Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3
buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log
analysis.

Answer: B

Explanation:

S3 Same-Region Replication automatically copies new log objects from each account's bucket into the centralized bucket while keeping all data in us-west-2. Replication is a native, fully managed S3 feature, so it is cheaper and simpler to operate than scheduled copy scripts or per-object Lambda invocations.
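
A minimal sketch of one source bucket's replication configuration (the bucket names and role ARN are hypothetical; versioning must already be enabled on both buckets):

```python
import boto3

s3 = boto3.client("s3")

# Replicate every new object in the source log bucket to the central bucket.
s3.put_bucket_replication(
    Bucket="app-a-logs",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # hypothetical
        "Rules": [
            {
                "ID": "replicate-logs-to-central",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::central-log-analysis"},  # hypothetical
            }
        ],
    },
)
```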

Question: 737 CertyIQ


A company has an application that delivers on-demand training videos to students around the world. The
application also allows authorized content developers to upload videos. The data is stored in an Amazon S3 bucket
in the us-east-2 Region.

The company has created an S3 bucket in the eu-west-2 Region and an S3 bucket in the ap-southeast-1 Region.
The company wants to replicate the data to the new S3 buckets. The company needs to minimize latency for
developers who upload videos and students who stream videos near eu-west-2 and ap-southeast-1.

Which combination of steps will meet these requirements with the FEWEST changes to the application? (Choose
two.)

A.Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way
replication from the us-east-2 S3 bucket to the ap-southeast-1 S3 bucket.
B.Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way
replication from the eu-west-2 S3 bucket to the ap-southeast-1 S3 bucket.
C.Configure two-way (bidirectional) replication among the S3 buckets that are in all three Regions.
D.Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of
the Multi-Region Access Point for video streaming. Do not modify the application for video uploads.
E.Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of
the Multi-Region Access Point for video streaming and uploads.

Answer: CE

Explanation:

Two-way (bidirectional) replication keeps all three Regional buckets in sync regardless of where a video is uploaded, and an S3 Multi-Region Access Point gives the application a single global endpoint that routes each request to the nearest bucket. Using the Multi-Region Access Point ARN for both streaming and uploads (option E) minimizes latency for both operations with the fewest application changes.

Question: 738 CertyIQ


A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users
also can post photos and videos from inside the app.

Users access content often in the first minutes after the content is posted. New content quickly replaces older
content, and then the older content disappears. The local nature of the news means that users consume 90% of
the content within the AWS Region where it is uploaded.

Which solution will optimize the user experience by providing the LOWEST latency for content uploads?

A.Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
B.Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
C.Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to Amazon
S3.
D.Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple distributions of
Amazon CloudFront.

Answer: B

Explanation:

S3 Transfer Acceleration routes uploads through the nearest edge location and then over the AWS global network to the bucket, giving globally distributed users the lowest upload latency with a single bucket-level setting and no application redesign. CloudFront (option A) is optimized for content delivery, and options C and D add multi-Region complexity without a comparable upload benefit.
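
A minimal sketch of enabling acceleration and uploading through the accelerated endpoint (the bucket and file names are hypothetical):

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="global-news-uploads",  # hypothetical bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload via the s3-accelerate endpoint.
s3_accelerated = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accelerated.upload_file("clip.mp4", "global-news-uploads", "uploads/clip.mp4")
```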

Question: 739 CertyIQ


A company is building a new application that uses serverless architecture. The architecture will consist of an
Amazon API Gateway REST API and AWS Lambda functions to manage incoming requests.

The company wants to add a service that can send messages received from the API Gateway REST API to multiple
target Lambda functions for processing. The service must offer message filtering that gives the target Lambda
functions the ability to receive only the messages the functions need.

Which solution will meet these requirements with the LEAST operational overhead?

A.Send the requests from the API Gateway REST API to an Amazon Simple Notification Service (Amazon SNS)
topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target
Lambda functions to poll the different SQS queues.
B.Send the requests from the API Gateway REST API to Amazon EventBridge. Configure EventBridge to invoke
the target Lambda functions.
C.Send the requests from the API Gateway REST API to Amazon Managed Streaming for Apache Kafka
(Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.
D.Send the requests from the API Gateway REST API to multiple Amazon Simple Queue Service (Amazon SQS)
queues. Configure the target Lambda functions to poll the different SQS queues.

Answer: A

Explanation:

An SNS topic fans each API Gateway message out to multiple SQS queues, and SNS subscription filter policies deliver to each queue only the messages its Lambda function needs, all without custom routing code. The SQS queues also buffer traffic spikes for the Lambda consumers, which the other options do not provide as simply.
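
A minimal sketch of subscribing one queue with a filter policy (the ARNs and attribute values are hypothetical):

```python
import json
import boto3

sns = boto3.client("sns")

# Deliver only "order" messages to the queue that feeds the order Lambda function.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:111122223333:requests-topic",  # hypothetical
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:111122223333:orders-queue",  # hypothetical
    Attributes={
        "FilterPolicy": json.dumps({"eventType": ["order"]}),
    },
    ReturnSubscriptionArn=True,
)

# Publishers set the message attribute that the filter policy matches on.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:requests-topic",
    Message=json.dumps({"orderId": 42}),
    MessageAttributes={
        "eventType": {"DataType": "String", "StringValue": "order"}
    },
)
```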

Question: 740 CertyIQ


A company migrated millions of archival files to Amazon S3. A solutions architect needs to implement a solution
that will encrypt all the archival data by using a customer-provided key. The solution must encrypt existing
unencrypted objects and future objects.

Which solution will meet these requirements?

A.Create a list of unencrypted objects by filtering an Amazon S3 Inventory report. Configure an S3 Batch
Operations job to encrypt the objects from the list with a server-side encryption with a customer-provided key
(SSE-C). Configure the S3 default encryption feature to use a server-side encryption with a customer-provided
key (SSE-C).
B.Use S3 Storage Lens metrics to identify unencrypted S3 buckets. Configure the S3 default encryption
feature to use a server-side encryption with AWS KMS keys (SSE-KMS).
C.Create a list of unencrypted objects by filtering the AWS usage report for Amazon S3. Configure an AWS
Batch job to encrypt the objects from the list with a server-side encryption with AWS KMS keys (SSE-KMS).
Configure the S3 default encryption feature to use a server-side encryption with AWS KMS keys (SSE-KMS).
D.Create a list of unencrypted objects by filtering the AWS usage report for Amazon S3. Configure the S3
default encryption feature to use a server-side encryption with a customer-provided key (SSE-C).

Answer: A

Explanation:

Amazon S3 Inventory can report each object's encryption status, which yields the list of unencrypted objects, and S3 Batch Operations can run a copy job over that list to re-encrypt the existing objects with the customer-provided key (SSE-C). Configuring the bucket's default encryption behavior for the customer-provided key then covers future objects, so both existing and new data are handled.

Reference:

https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/
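
As an illustrative sketch, the Batch Operations copy job ultimately performs the equivalent of an in-place copy with SSE-C headers, shown here for a single object (the bucket, key, and key bytes are hypothetical):

```python
import os
import boto3

s3 = boto3.client("s3")

customer_key = os.urandom(32)  # hypothetical 256-bit customer-provided key; store it safely

# Copy the object onto itself, encrypting the new version with SSE-C.
s3.copy_object(
    Bucket="archive-bucket",  # hypothetical
    Key="records/file-0001.dat",  # hypothetical
    CopySource={"Bucket": "archive-bucket", "Key": "records/file-0001.dat"},
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# Subsequent reads must present the same key.
obj = s3.get_object(
    Bucket="archive-bucket",
    Key="records/file-0001.dat",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```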

Question: 741 CertyIQ


The DNS provider that hosts a company's domain name records is experiencing outages that cause service
disruption for a website running on AWS. The company needs to migrate to a more resilient managed DNS service
and wants the service to run on AWS.

What should a solutions architect do to rapidly migrate the DNS hosting service?

A.Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing the
domain records hosted by the previous provider.
B.Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing the
domain records hosted by the previous provider.
C.Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS Directory
Service for Microsoft Active Directory for the domain records.
D.Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the
provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the
domain to the IP addresses that are specified in the inbound endpoint.

Answer: A

Explanation:

The domain's records must be resolvable from the internet, so a public hosted zone is required, and importing the previous provider's zone file recreates all existing records in Route 53 in one step, making this the fastest migration path. A private hosted zone (option B) resolves only inside associated VPCs and cannot serve internet clients.

Question: 742 CertyIQ


A company is building an application on AWS that connects to an Amazon RDS database. The company wants to
manage the application configuration and to securely store and retrieve credentials for the database and other
services.
Which solution will meet these requirements with the LEAST administrative overhead?

A.Use AWS AppConfig to store and manage the application configuration. Use AWS Secrets Manager to store
and retrieve the credentials.
B.Use AWS Lambda to store and manage the application configuration. Use AWS Systems Manager Parameter
Store to store and retrieve the credentials.
C.Use an encrypted application configuration file. Store the file in Amazon S3 for the application configuration.
Create another S3 file to store and retrieve the credentials.
D.Use AWS AppConfig to store and manage the application configuration. Use Amazon RDS to store and
retrieve the credentials.

Answer: A

Explanation:

AWS AppConfig is purpose-built for managing and safely deploying application configuration, and AWS Secrets Manager stores database and service credentials with encryption, fine-grained access control, and optional automatic rotation. Both are fully managed, so this pairing has the least administrative overhead.
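
A minimal sketch of retrieving the database credentials at runtime (the secret name and its JSON keys are hypothetical):

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

# Fetch and parse the JSON secret that holds the RDS credentials.
response = secrets.get_secret_value(SecretId="prod/app/rds-credentials")  # hypothetical name
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]
```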

Question: 743 CertyIQ


To meet security requirements, a company needs to encrypt all of its application data in transit while
communicating with an Amazon RDS MySQL DB instance. A recent security audit revealed that encryption at rest
is enabled using AWS Key Management Service (AWS KMS), but data in transit is not enabled.

What should a solutions architect do to satisfy the security requirements?

A.Enable IAM database authentication on the database.


B.Provide self-signed certificates. Use the certificates in all connections to the RDS instance.
C.Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption enabled.
D.Download AWS-provided root certificates. Provide the certificates in all connections to the RDS instance.

Answer: D

Explanation:

Amazon RDS exposes TLS endpoints whose server certificates chain to AWS-provided root certificates. Downloading the AWS certificate bundle and requiring it in every client connection encrypts data in transit to the MySQL DB instance; self-signed certificates (option B) will not validate against the RDS endpoint, and option C only re-creates encryption at rest.
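
As an illustrative sketch using the PyMySQL client (the endpoint, credentials, and bundle path are hypothetical), the downloaded AWS certificate bundle is passed on every connection:

```python
import pymysql

# TLS-encrypted connection validated against the AWS-provided CA bundle,
# e.g. the global-bundle.pem file downloaded from AWS.
connection = pymysql.connect(
    host="mydb.abc123xyz.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    user="app_user",
    password="example-password",  # hypothetical; fetch from Secrets Manager in practice
    database="appdb",
    ssl={"ca": "/etc/ssl/certs/global-bundle.pem"},  # hypothetical local path
)
```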

Question: 744 CertyIQ


A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancing
(ELB) load balancer. However, many of the web service clients can only reach IP addresses authorized on their
firewalls.

What should a solutions architect recommend to meet the clients’ needs?

A.A Network Load Balancer with an associated Elastic IP address.


B.An Application Load Balancer with an associated Elastic IP address.
C.An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address.
D.An EC2 instance with a public IP address running as a proxy in front of the load balancer.

Answer: A

Explanation:

Clients that allow traffic only to firewall-authorized IP addresses need the service to present static IPs. A Network Load Balancer can be assigned one Elastic IP address per subnet, giving clients fixed addresses to allow-list. Application Load Balancers do not support Elastic IP addresses (option B), and an A record alone (option C) does not attach a static address to the load balancer.

Question: 745 CertyIQ


A company has established a new AWS account. The account is newly provisioned and no changes have been
made to the default settings. The company is concerned about the security of the AWS account root user.

What should be done to secure the root user?

A.Create IAM users for daily administrative tasks. Disable the root user.
B.Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
C.Generate an access key for the root user. Use the access key for daily administration tasks instead of the
AWS Management Console.
D.Provide the root user credentials to the most senior solutions architect. Have the solutions architect use the
root user for daily administration tasks.

Answer: B

Explanation:

Enabling multi-factor authentication protects the root user, and creating IAM users for daily administrative tasks keeps the root credentials out of routine use, which is the AWS-recommended baseline. The root user of a standalone account cannot simply be disabled (option A), and generating root access keys (option C) increases the attack surface.

Question: 746 CertyIQ


A company is deploying an application that processes streaming data in near-real time. The company plans to use
Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest
possible latency between nodes.

Which combination of network solutions will meet these requirements? (Choose two.)

A.Enable and configure enhanced networking on each EC2 instance.


B.Group the EC2 instances in separate accounts.
C.Run the EC2 instances in a cluster placement group.
D.Attach multiple elastic network interfaces to each EC2 instance.
E.Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.

Answer: AC

Explanation:

A. Enable and configure enhanced networking on each EC2 instance. Enhanced networking provides higher
bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies.

C. Run the EC2 instances in a cluster placement group. A cluster placement group is a logical grouping of
instances within a single Availability Zone. This configuration is recommended for applications that need low
network latency, high network throughput, or both.
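
A minimal sketch of both settings (the AMI, instance type, and names are hypothetical; enhanced networking is enabled by default on ENA-capable instance types with current Amazon Linux AMIs):

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together in one AZ
# for the lowest inter-node latency.
ec2.create_placement_group(GroupName="stream-cluster", Strategy="cluster")  # hypothetical name

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical ENA-enabled AMI
    InstanceType="c5n.9xlarge",  # ENA (enhanced networking) capable type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "stream-cluster"},
)
```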

Question: 747 CertyIQ


A financial services company wants to shut down two data centers and migrate more than 100 TB of data to AWS.
The data has an intricate directory structure with millions of small files stored in deep hierarchies of subfolders.
Most of the data is unstructured, and the company’s file storage consists of SMB-based storage types from
multiple vendors. The company does not want to change its applications to access the data after migration.

What should a solutions architect do to meet these requirements with the LEAST operational overhead?

A.Use AWS Direct Connect to migrate the data to Amazon S3.


B.Use AWS DataSync to migrate the data to Amazon FSx for Lustre.
C.Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.
D.Use AWS Direct Connect to migrate the data on-premises file storage to an AWS Storage Gateway volume
gateway.

Answer: C

Explanation:

Amazon FSx for Windows File Server is a fully managed SMB file system, so the applications keep accessing the data over SMB without changes, and AWS DataSync handles the bulk migration of the millions of small files, preserving the directory hierarchy and metadata. FSx for Lustre (option B) does not serve SMB clients.

Question: 748 CertyIQ


A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The
company sets up a dedicated monitoring member account in the organization. The company wants to query and
visualize observability data across the accounts by using Amazon CloudWatch.

Which solution will meet these requirements?

A.Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation
template provided by the monitoring account in each AWS account to share the data with the monitoring
account.
B.Set up service control policies (SCPs) to provide access to CloudWatch in the monitoring account under the
Organizations root organizational unit (OU).
C.Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to have
access to query and visualize the CloudWatch data in the account. Attach the new IAM policy to the new IAM
user.
D.Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account.
Attach the IAM policies to the new IAM user.

Answer: A

Explanation:

CloudWatch cross-account observability designates the monitoring account as a central account that can query and visualize metrics, logs, and traces from source accounts. Deploying the CloudFormation template that the monitoring account provides configures each source account to share its telemetry, which is the supported, low-overhead pattern; SCPs and IAM users (options B, C, and D) do not aggregate CloudWatch data.

Question: 749 CertyIQ


A company’s website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto
Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution, and
AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront
distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from
accessing the website.

What should a solutions architect do to protect the application?

A.Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
B.Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
C.Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP
address.
D.Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP
address.

Answer: B

Explanation:

AWS WAF is already deployed with the CloudFront distribution, so adding an IP set match rule with a Block action stops the malicious address at the edge before it reaches the ALB. Network ACLs and security groups on resources behind CloudFront (options A, C, and D) do not see the client's original source IP, and security groups cannot express deny rules anyway.
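
A minimal sketch of creating the IP set that a Block rule in the web ACL can then reference (the names are hypothetical, the address is a documentation example, and CLOUDFRONT-scope WAF resources are managed in us-east-1):

```python
import boto3

# CLOUDFRONT-scoped WAF resources must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

ip_set = wafv2.create_ip_set(
    Name="blocked-ips",  # hypothetical name
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.5/32"],  # the malicious IP, as a /32 CIDR
)

# The web ACL then gets a rule statement such as:
block_rule_statement = {
    "IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}
}
# ...attached with Action={"Block": {}} via wafv2.update_web_acl().
```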

Question: 750 CertyIQ


A company sets up an organization in AWS Organizations that contains 10 AWS accounts. A solutions architect
must design a solution to provide access to the accounts for several thousand employees. The company has an
existing identity provider (IdP). The company wants to use the existing IdP for authentication to AWS.

Which solution will meet these requirements?

A.Create IAM users for the employees in the required AWS accounts. Connect IAM users to the existing IdP.
Configure federated authentication for the IAM users.
B.Set up AWS account root users with user email addresses and passwords that are synchronized from the
existing IdP.
C.Configure AWS IAM Identity Center (AWS Single Sign-On). Connect IAM Identity Center to the existing IdP.
Provision users and groups from the existing IdP.
D.Use AWS Resource Access Manager (AWS RAM) to share access to the AWS accounts with the users in the
existing IdP.

Answer: C

Explanation:

AWS IAM Identity Center federates with the company's existing IdP (for example, over SAML 2.0 with SCIM provisioning) and provides single sign-on into all 10 accounts in the organization from one place. Per-account IAM users (option A) do not scale to thousands of employees, and root users and AWS RAM (options B and D) are not authentication mechanisms.

Question: 751 CertyIQ


A solutions architect is designing an AWS Identity and Access Management (IAM) authorization model for a
company's AWS account. The company has designated five specific employees to have full access to AWS
services and resources in the AWS account.

The solutions architect has created an IAM user for each of the five designated employees and has created an IAM
user group.

Which solution will meet these requirements?

A.Attach the AdministratorAccess resource-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
B.Attach the SystemAdministrator identity-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
C.Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
D.Attach the SystemAdministrator resource-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.

Answer: C

Explanation:

AdministratorAccess is an AWS managed identity-based policy that grants full access to AWS services and resources, so attaching it to the IAM user group and placing the five users in the group meets the requirement with standard group-based management. Resource-based policies (options A and D) attach to resources, not to user groups, and the SystemAdministrator job-function policy does not grant full access.
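
A minimal sketch (the group and user names are hypothetical; the policy ARN is the AWS managed AdministratorAccess policy):

```python
import boto3

iam = boto3.client("iam")

# Attach the AWS managed AdministratorAccess policy to the group.
iam.attach_group_policy(
    GroupName="full-admins",  # hypothetical group name
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# Place each designated employee's IAM user in the group.
for user in ["alice", "bob", "carol", "dan", "erin"]:  # hypothetical user names
    iam.add_user_to_group(GroupName="full-admins", UserName=user)
```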

Question: 752 CertyIQ


A company has a multi-tier payment processing application that is based on virtual machines (VMs). The
communication between the tiers occurs asynchronously through a third-party middleware solution that
guarantees exactly-once delivery.

The company needs a solution that requires the least amount of infrastructure management. The solution must
guarantee exactly-once delivery for application messaging.

Which combination of actions will meet these requirements? (Choose two.)

A.Use AWS Lambda for the compute layers in the architecture.


B.Use Amazon EC2 instances for the compute layers in the architecture.
C.Use Amazon Simple Notification Service (Amazon SNS) as the messaging component between the compute
layers.
D.Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component between the
compute layers.
E.Use containers that are based on Amazon Elastic Kubernetes Service (Amazon EKS) for the compute layers in
the architecture.

Answer: AD

Explanation:

AWS Lambda removes the VM infrastructure management, and Amazon SQS FIFO queues provide exactly-once processing through message deduplication and ordered delivery within a message group, replacing the third-party middleware's guarantee. Standard SNS topics (option C) offer at-least-once delivery, and EC2 or EKS (options B and E) reintroduce infrastructure to manage.
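
A minimal sketch of the FIFO messaging side (the queue name and payload are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queues require the .fifo suffix; content-based deduplication
# derives the deduplication ID from a hash of the message body.
queue = sqs.create_queue(
    QueueName="payments.fifo",  # hypothetical queue name
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
)

# Messages in the same group are delivered in order, exactly once.
sqs.send_message(
    QueueUrl=queue["QueueUrl"],
    MessageBody='{"paymentId": "p-1001", "amount": 125.00}',
    MessageGroupId="tenant-42",  # hypothetical ordering key
)
```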

Question: 753 CertyIQ


A company has a nightly batch processing routine that analyzes report files that an on-premises file system
receives daily through SFTP. The company wants to move the solution to the AWS Cloud. The solution must be
highly available and resilient. The solution also must minimize operational effort.

Which solution meets these requirements?

A.Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use
an Amazon EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
B.Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic Block Store
(Amazon EBS) volume for storage. Use an Auto Scaling group with the minimum number of instances and
desired number of instances set to 1.
C.Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic File System
(Amazon EFS) file system for storage. Use an Auto Scaling group with the minimum number of instances and
desired number of instances set to 1.
D.Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull the batch
files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto Scaling group
with a scheduled scaling policy to run the batch operation.

Answer: D

Explanation:

AWS Transfer Family provides a managed, highly available SFTP endpoint, and Amazon S3 provides durable, multi-AZ storage for the incoming report files, so neither component needs server maintenance. A scheduled Auto Scaling action then brings up EC2 capacity only for the nightly batch run, keeping operational effort low; the EC2-hosted SFTP options leave a single server to manage.

Question: 754 CertyIQ


A company has users all around the world accessing its HTTP-based application deployed on Amazon EC2
instances in multiple AWS Regions. The company wants to improve the availability and performance of the
application. The company also wants to protect the application against common web exploits that may affect
availability, compromise security, or consume excessive resources. Static IP addresses are required.

What should a solutions architect recommend to accomplish this?

A.Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF on the NLBs.
Create an accelerator using AWS Global Accelerator and register the NLBs as endpoints.
B.Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Deploy AWS WAF on the
ALBs. Create an accelerator using AWS Global Accelerator and register the ALBs as endpoints.
C.Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF on the NLBs.
Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to
route requests to the NLBs.
D.Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an Amazon
CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to
the ALBs. Deploy AWS WAF on the CloudFront distribution.

Answer: B

Explanation:

The requirement for static IP addresses points to AWS Global Accelerator, which provides two static anycast IP addresses and routes users over the AWS global network to the nearest healthy ALB, improving both availability and performance. AWS WAF on the ALBs protects against common web exploits. CloudFront (options C and D) does not offer static IP addresses, and AWS WAF cannot be attached to a Network Load Balancer (options A and C).

Question: 755 CertyIQ


A company’s data platform uses an Amazon Aurora MySQL database. The database has multiple read replicas and
multiple DB instances across different Availability Zones. Users have recently reported errors from the database
that indicate that there are too many connections. The company wants to reduce the failover time by 20% when a
read replica is promoted to primary writer.

Which solution will meet this requirement?

A.Switch from Aurora to Amazon RDS with Multi-AZ cluster deployment.


B.Use Amazon RDS Proxy in front of the Aurora database.
C.Switch to Amazon DynamoDB with DynamoDB Accelerator (DAX) for read connections.
D.Switch to Amazon Redshift with relocation capability.

Answer: B

Explanation:

Amazon RDS Proxy pools and shares database connections, which resolves the too-many-connections errors, and during failover it holds client connections and redirects them to the newly promoted writer. According to AWS, RDS Proxy can reduce failover times for Aurora by up to 66%, comfortably meeting the 20% improvement target.

Question: 756 CertyIQ


A company stores text files in Amazon S3. The text files include customer chat messages, date and time
information, and customer personally identifiable information (PII).

The company needs a solution to provide samples of the conversations to an external service provider for quality
control. The external service provider needs to randomly pick sample conversations up to the most recent
conversation. The company must not share the customer PII with the external service provider. The solution must
scale when the number of customer conversations increases.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when the
function reads the file. Instruct the external service provider to access the Object Lambda Access Point.
B.Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII from the
files, and writes the redacted files to a different S3 bucket. Instruct the external service provider to access the
bucket that does not contain the PII.
C.Create a web application on an Amazon EC2 instance that presents a list of the files, redacts the PII from the
files, and allows the external service provider to download new versions of the files that have the PII redacted.
D.Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only the data in the files that
does not contain PII. Configure the Lambda function to store the non-PII data in the DynamoDB table when a
new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.

Answer: A

Explanation:

An S3 Object Lambda Access Point transforms data on the fly as it is retrieved: the redaction Lambda function runs only when the provider actually reads a file, so no duplicate redacted copies are stored, the most recent conversations are always available, and the solution scales automatically with the number of files.
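
As an illustrative sketch of the transformation function (the redact_pii helper is a hypothetical placeholder), an Object Lambda handler fetches the original object through the presigned URL in the event and returns the redacted body:

```python
import urllib.request
import boto3

s3 = boto3.client("s3")

def redact_pii(text: str) -> str:
    # Hypothetical placeholder; real code might call Amazon Comprehend
    # PII detection or apply regex-based masking.
    return text

def handler(event, context):
    ctx = event["getObjectContext"]

    # Fetch the original, unredacted object via the presigned URL.
    with urllib.request.urlopen(ctx["inputS3Url"]) as response:
        original = response.read().decode("utf-8")

    # Return the transformed content to the requester.
    s3.write_get_object_response(
        Body=redact_pii(original).encode("utf-8"),
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}
```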

Question: 757 CertyIQ


A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified, and
the system cannot run on more than one instance. A solutions architect must design a resilient solution that can
improve the recovery time for the system.

What should the solutions architect recommend to meet these requirements?

A.Enable termination protection for the EC2 instance.


B.Configure the EC2 instance for Multi-AZ deployment.
C.Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
D.Launch the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID
configurations for storage redundancy.

Answer: C

Explanation:

A CloudWatch alarm on the EC2 system status check can trigger the built-in recover action, which automatically moves the instance to healthy underlying hardware while preserving its instance ID, private IP addresses, and metadata. This improves recovery time for a system that cannot be modified or scaled beyond one instance; there is no Multi-AZ deployment mode for a single EC2 instance (option B).
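
A minimal sketch (the instance ID and Region are hypothetical):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Recover the instance when the system status check fails.
cloudwatch.put_metric_alarm(
    AlarmName="recover-legacy-instance",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```
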
Question: 758 CertyIQ
A company wants to deploy its containerized application workloads to a VPC across three Availability Zones. The
company needs a solution that is highly available across Availability Zones. The solution must require minimal
changes to the application.

Which solution will meet these requirements with the LEAST operational overhead?

A.Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS Service Auto Scaling to use
target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an
Availability Zone attribute.
B.Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure Application Auto
Scaling to use target tracking scaling. Set the minimum capacity to 3.
C.Use Amazon EC2 Reserved Instances. Launch three EC2 instances in a spread placement group. Configure an
Auto Scaling group to use target tracking scaling. Set the minimum capacity to 3.
D.Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure Application
Auto Scaling to use Lambda as a scalable target. Set the minimum capacity to 3.

Answer: A

Explanation:

Amazon ECS with a minimum capacity of 3 and a spread task placement strategy on the Availability Zone attribute keeps one task in each of the three Availability Zones, giving AZ-level high availability with minimal changes to the already containerized application. Self-managed EKS nodes (option B) add node management, and options C and D change the compute model entirely.
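
A minimal sketch of the service definition (the cluster, service, and task definition names are hypothetical):

```python
import boto3

ecs = boto3.client("ecs")

# Keep at least three tasks, spread one per Availability Zone.
ecs.create_service(
    cluster="app-cluster",  # hypothetical
    serviceName="web-service",  # hypothetical
    taskDefinition="web-task:1",  # hypothetical
    desiredCount=3,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"}
    ],
)
```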

Question: 759 CertyIQ


A media company stores movies in Amazon S3. Each movie is stored in a single video file that ranges from 1 GB to
10 GB in size.

The company must be able to provide the streaming content of a movie within 5 minutes of a user purchase. There
is higher demand for movies that are less than 20 years old than for movies that are more than 20 years old. The
company wants to minimize hosting service costs based on demand.

Which solution will meet these requirements?

A.Store all media content in Amazon S3. Use S3 Lifecycle policies to move media data into the Infrequent
Access tier when the demand for a movie decreases.
B.Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access
(S3 Standard-IA). When a user orders an older movie, retrieve the video file by using standard retrieval.
C.Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible
Retrieval. When a user orders an older movie, retrieve the video file by using expedited retrieval.
D.Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval.
When a user orders an older movie, retrieve the video file by using bulk retrieval.

Answer: C

Explanation:

S3 Intelligent-Tiering automatically optimizes costs for the newer, unpredictably popular movies, and S3 Glacier Flexible Retrieval stores the rarely ordered older movies at archive prices. Expedited retrieval typically restores objects in 1-5 minutes, meeting the 5-minute streaming requirement, whereas bulk retrieval (option D) takes hours; option B meets the deadline but S3 Standard-IA costs more than Glacier for rarely watched titles.

Question: 760 CertyIQ


A solutions architect needs to design the architecture for an application that a vendor provides as a Docker
container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be
serverless.

Which solution meets these requirements with the LEAST operational overhead?

A.Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted volume
that has more than 50 GB of space.
B.Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store
(Amazon EBS) volume that has more than 50 GB of space.
C.Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type.
Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume.
Create a service with that task definition.
D.Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type
with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task
definition for the container image. Create a service with that task definition.

Answer: C

Explanation:

AWS Fargate provides serverless container compute, and mounting an Amazon EFS volume in the task definition supplies the 50 GB of temporary file space without managing any instances. Lambda (options A and B) cannot attach S3 or EBS as volumes and its ephemeral storage tops out at 10 GB, and the EC2 launch type (option D) is not serverless.

Question: 761 CertyIQ


A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management
Console. The directory service is not compatible with Security Assertion Markup Language (SAML).

Which solution meets these requirements?

A.Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP.
B.Create an IAM policy that uses AWS credentials, and integrate the policy into LDAP.
C.Set up a process that rotates the IAM credentials whenever LDAP credentials are updated.
D.Develop an on-premises custom identity broker application or process that uses AWS Security Token Service
(AWS STS) to get short-lived credentials.

Answer: D

Explanation:

Because the directory is not SAML-compatible, standard federation is unavailable; a custom identity broker authenticates users against LDAP and then calls AWS STS (for example, AssumeRole or GetFederationToken) to obtain short-lived credentials and a console sign-in URL. IAM Identity Center (option A) cannot connect directly to a non-SAML LDAP directory.

Question: 762 CertyIQ


A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2
instances. The AMIs contain critical data and configurations that are necessary for the company’s operations. The
company wants to implement a solution that will recover accidentally deleted AMIs quickly and efficiently.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a separate
AWS account.
B.Copy all AMIs to another AWS account periodically.
C.Create a retention rule in Recycle Bin.
D.Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.

Answer: C

Explanation:

Recycle Bin for Amazon EC2 supports Amazon Machine Images: a retention rule keeps deregistered AMIs recoverable for a defined period, so an accidentally deleted AMI can be restored in place quickly, with no copies, snapshots, or cross-account processes to maintain.

Reference:

https://aws.amazon.com/about-aws/whats-new/2022/02/amazon-ec2-recycle-bin-machine-images/
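
A minimal sketch of a retention rule that keeps deregistered AMIs recoverable for 7 days (the period is an arbitrary example):

```python
import boto3

rbin = boto3.client("rbin")

# Deregistered AMIs matching this rule stay in the Recycle Bin for 7 days.
rbin.create_rule(
    Description="Retain deleted AMIs for 7 days",
    ResourceType="EC2_IMAGE",
    RetentionPeriod={
        "RetentionPeriodValue": 7,
        "RetentionPeriodUnit": "DAYS",
    },
)
```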

Question: 763 CertyIQ


A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within
the next month. The company’s current network connection allows up to 100 Mbps uploads for this purpose during
the night only.

What is the MOST cost-effective mechanism to move this data and meet the migration deadline?

A.Use AWS Snowmobile to ship the data to AWS.


B.Order multiple AWS Snowball devices to ship the data to AWS.
C.Enable Amazon S3 Transfer Acceleration and securely upload the data.
D.Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.

Answer: B

Explanation:

The network cannot meet the deadline: 100 Mbps is roughly 45 GB per hour, so an 8-hour nightly window moves only about 360 GB per day, or roughly 11 TB in a month, far short of 150 TB. Multiple AWS Snowball devices (about 80 TB usable each) ship the data well within the month, and Snowmobile (option A) is intended for far larger, exabyte-scale migrations.

Question: 764 CertyIQ


A company wants to migrate its three-tier application from on premises to AWS. The web tier and the application
tier are running on third-party virtual machines (VMs). The database tier is running on MySQL.

The company needs to migrate the application by making the fewest possible changes to the architecture. The
company also needs a database solution that can restore data to a specific point in time.

Which solution will meet these requirements with the LEAST operational overhead?

A.Migrate the web tier and the application tier to Amazon EC2 instances in private subnets. Migrate the
database tier to Amazon RDS for MySQL in private subnets.
B.Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances
in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
C.Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances
in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets.
D.Migrate the web tier and the application tier to Amazon EC2 instances in public subnets. Migrate the
database tier to Amazon Aurora MySQL in public subnets.

Answer: C

Explanation:

Keeping the web tier on EC2 in public subnets, the application tier on EC2 in private subnets, and the database on Amazon RDS for MySQL mirrors the existing three-tier layout with the fewest architectural changes. Amazon RDS for MySQL supports automated backups with point-in-time restore, and staying on RDS for MySQL rather than converting to Aurora minimizes migration effort; placing the database in public subnets (option D) is insecure.

Question: 765 CertyIQ


A development team is collaborating with another company to create an integrated product. The other company
needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development
team's account. The other company wants to poll the queue without giving up its own account permissions to do
so.

How should a solutions architect provide access to the SQS queue?

A.Create an instance profile that provides the other company access to the SQS queue.
B.Create an IAM policy that provides the other company access to the SQS queue.
C.Create an SQS access policy that provides the other company access to the SQS queue.
D.Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company
access to the SQS queue.

Answer: C

Explanation:

An SQS access policy is a resource-based policy attached to the queue itself; it can grant the other company's AWS account permission to receive and delete messages without that company changing any of its own permissions and without creating identities in the development team's account.
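
A minimal sketch of such a policy (the account IDs, queue ARN, and URL are hypothetical):

```python
import json
import boto3

sqs = boto3.client("sqs")

# Resource-based policy letting the partner account poll the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # hypothetical partner account
            "Action": ["sqs:ReceiveMessage", "sqs:DeleteMessage", "sqs:GetQueueAttributes"],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:integration-queue",  # hypothetical
        }
    ],
}

sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/integration-queue",  # hypothetical
    Attributes={"Policy": json.dumps(policy)},
)
```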

Question: 766 CertyIQ


A company’s developers want a secure way to gain SSH access on the company's Amazon EC2 instances that run
the latest version of Amazon Linux. The developers work remotely and in the corporate office.

The company wants to use AWS services as a part of the solution. The EC2 instances are hosted in a VPC private
subnet and access the internet through a NAT gateway that is deployed in a public subnet.

What should a solutions architect do to meet these requirements MOST cost-effectively?

A.Create a bastion host in the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM
permission to the developers. Install EC2 Instance Connect so that the developers can connect to the EC2
instances.
B.Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct the
developers to use the Site-to-Site VPN connection to access the EC2 instances when the developers are on the
corporate network. Instruct the developers to set up another VPN connection for access when they work
remotely.
C.Create a bastion host in the public subnet of the VPC. Configure the security groups and SSH keys of the
bastion host to only allow connections and SSH authentication from the developers' corporate and remote
networks. Instruct the developers to connect through the bastion host by using SSH to reach the EC2
instances.
D.Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2
instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2 instances.

Answer: D

Explanation:

AWS Systems Manager Session Manager provides shell access to the instances through the SSM Agent over the instances' existing outbound path (the NAT gateway), so no bastion host, VPN, or inbound SSH ports are needed. Attaching the AmazonSSMManagedInstanceCore policy to the instances' IAM role is the only setup required, making this the most cost-effective option.

Question: 767 CertyIQ


A pharmaceutical company is developing a new drug. The volume of data that the company generates has grown
exponentially over the past few months. The company's researchers regularly require a subset of the entire
dataset to be immediately available with minimal lag. However, the entire dataset does not need to be accessed on
a daily basis. All the data currently resides in on-premises storage arrays, and the company wants to reduce
ongoing capital expenses.

Which storage solution should a solutions architect recommend to meet these requirements?

A.Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing basis.
B.Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate the
data to the Storage Gateway appliance.
C.Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the
target storage. Migrate the data to the Storage Gateway appliance.
D.Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate data to
an Amazon Elastic File System (Amazon EFS) file system.

Answer: C

Explanation:

Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the
target storage. Migrate the data to the Storage Gateway appliance. Cached volumes keep the entire dataset in Amazon S3 while caching the frequently accessed subset on premises for low-latency reads, which lets the company retire its storage arrays and reduce ongoing capital expenses.

Question: 768 CertyIQ


A company has a business-critical application that runs on Amazon EC2 instances. The application stores data in
an Amazon DynamoDB table. The company must be able to revert the table to any point within the last 24 hours.

Which solution meets these requirements with the LEAST operational overhead?

A.Configure point-in-time recovery for the table.
B.Use AWS Backup for the table.
C.Use an AWS Lambda function to make an on-demand backup of the table every hour.
D.Turn on streams on the table to capture a log of all changes to the table in the last 24 hours. Store a copy of
the stream in an Amazon S3 bucket.

Answer: A

Explanation:

Configure point-in-time recovery for the table. PITR is a single table setting that continuously backs up the table and can restore it to any second within the preceding 35 days, which covers the 24-hour requirement with no code or schedules to maintain.
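
As a sketch with boto3 (the table names and restore time are placeholders, not from the question):

    import boto3
    from datetime import datetime, timedelta, timezone

    dynamodb = boto3.client("dynamodb")

    # Turn on point-in-time recovery with a single table setting.
    dynamodb.update_continuous_backups(
        TableName="Payments",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )

    # Restore to a new table at any second in the retention window, e.g. 6 hours ago.
    dynamodb.restore_table_to_point_in_time(
        SourceTableName="Payments",
        TargetTableName="Payments-restored",
        RestoreDateTime=datetime.now(timezone.utc) - timedelta(hours=6),
    )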

Question: 769 CertyIQ


A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are
processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads varies
from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to
design a cost-effective architecture that will meet these requirements.

What should the solutions architect recommend?

A.Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
B.Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to
process the files.
C.Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda
function to process the files.
D.Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to
Amazon S3. Invoke an AWS Lambda function to process the files.

Answer: B

Explanation:

Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to
process the files. This event-driven design scales automatically from a few files each hour to hundreds of concurrent uploads, costs nothing while idle, and suits the short (under 5 seconds) processing time.
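
A sketch of the wiring with boto3 (bucket name and function ARN are placeholders; the function's resource policy must also allow s3.amazonaws.com to invoke it):

    import boto3

    s3 = boto3.client("s3")

    # Invoke the metadata-extraction Lambda function for every object-created event.
    s3.put_bucket_notification_configuration(
        Bucket="upload-bucket",
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [{
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111111111111:function:extract-metadata",
                "Events": ["s3:ObjectCreated:*"],
            }]
        },
    )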

Question: 770 CertyIQ


A company’s application is deployed on Amazon EC2 instances and uses AWS Lambda functions for an event-
driven architecture. The company uses nonproduction development environments in a different AWS account to
test new features before the company deploys the features to production.

The production instances show constant usage because of customers in different time zones. The company uses
nonproduction instances only during business hours on weekdays. The company does not use the nonproduction
instances on the weekends. The company wants to optimize the costs to run its application on AWS.

Which solution will meet these requirements MOST cost-effectively?

A.Use On-Demand Instances for the production instances. Use Dedicated Hosts for the nonproduction
instances on weekends only.
B.Use Reserved Instances for the production instances and the nonproduction instances. Shut down the
nonproduction instances when not in use.
C.Use Compute Savings Plans for the production instances. Use On-Demand Instances for the nonproduction
instances. Shut down the nonproduction instances when not in use.
D.Use Dedicated Hosts for the production instances. Use EC2 Instance Savings Plans for the nonproduction
instances.

Answer: C

Explanation:

Use Compute Savings Plans for the production instances. Use On-Demand Instances for the nonproduction
instances. Shut down the nonproduction instances when not in use. Savings Plans discount the steady 24/7 production usage, while On-Demand with shutdowns means the nonproduction environment is billed only for business hours; Reserved Instances (option B) would be paid for even while the nonproduction instances sit idle on nights and weekends.

Question: 771 CertyIQ


A company stores data in an on-premises Oracle relational database. The company needs to make the data
available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS Site-to-Site VPN connection to
connect its on-premises network to AWS.

The company must capture the changes that occur to the source database during the migration to Aurora
PostgreSQL.
Which solution will meet these requirements?

A.Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL
schema. Use the AWS Database Migration Service (AWS DMS) full-load migration task to migrate the data.
B.Use AWS DataSync to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL
by using the Aurora PostgreSQL aws_s3 extension.
C.Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL
schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the
ongoing changes.
D.Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora
PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.

Answer: C

Explanation:

Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL
schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the
ongoing changes. SCT handles the heterogeneous Oracle-to-PostgreSQL schema conversion, and a DMS full-load-and-CDC task copies the existing data and then uses change data capture to replicate changes made to the source during the migration.
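
A sketch of the DMS task with boto3 (all ARNs and the task name are placeholders; the endpoints and replication instance are assumed to exist already):

    import json
    import boto3

    dms = boto3.client("dms")

    # "full-load-and-cdc" migrates the existing data, then keeps replicating changes.
    dms.create_replication_task(
        ReplicationTaskIdentifier="oracle-to-aurora-pg",
        SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SRC",
        TargetEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:TGT",
        ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:INST",
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }]
        }),
    )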

Question: 772 CertyIQ


A company built an application with Docker containers and needs to run the application in the AWS Cloud. The
company wants to use a managed service to host the application.

The solution must scale in and out appropriately according to demand on the individual container services. The
solution also must not result in additional operational overhead or infrastructure to manage.

Which solutions will meet these requirements? (Choose two.)

A.Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B.Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C.Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D.Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E.Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.

Answer: AB

Explanation:

A.Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.

B.Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.

With either orchestrator, Fargate runs the containers serverlessly, so each service scales in and out with demand and there are no EC2 worker nodes to provision or patch, which rules out options D and E.

Question: 773 CertyIQ


An ecommerce company is running a seasonal online sale. The company hosts its website on Amazon EC2
instances spanning multiple Availability Zones. The company wants its website to manage sudden traffic increases
during the sale.

Which solution will meet these requirements MOST cost-effectively?

A.Create an Auto Scaling group that is large enough to handle peak traffic load. Stop half of the Amazon EC2
instances. Configure the Auto Scaling group to use the stopped instances to scale out when traffic increases.
B.Create an Auto Scaling group for the website. Set the minimum size of the Auto Scaling group so that it can
handle high traffic volumes without the need to scale out.
C.Use Amazon CloudFront and Amazon ElastiCache to cache dynamic content with an Auto Scaling group set
as the origin. Configure the Auto Scaling group with the instances necessary to populate CloudFront and
ElastiCache. Scale in after the cache is fully populated.
D.Configure an Auto Scaling group to scale out as traffic increases. Create a launch template to start new
instances from a preconfigured Amazon Machine Image (AMI).

Answer: D

Explanation:

Configure an Auto Scaling group to scale out as traffic increases. Create a launch template to start new
instances from a preconfigured Amazon Machine Image (AMI). Scaling out on demand means the company pays only for the capacity the sale actually needs, and launching from a prebaked AMI brings new instances into service quickly during sudden traffic spikes.
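
A sketch of this setup with boto3 (the group name, launch template, subnets, and CPU target are placeholder assumptions, not from the question):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Auto Scaling group that launches from a preconfigured AMI via a launch
    # template, spread across multiple Availability Zones.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="sale-web-asg",
        LaunchTemplate={"LaunchTemplateName": "sale-web-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=20,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    )

    # Target tracking keeps average CPU near 60%, scaling out as traffic increases
    # and back in when the spike subsides.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="sale-web-asg",
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 60.0,
        },
    )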

Question: 774 CertyIQ


A solutions architect must provide an automated solution for a company's compliance policy that states security
groups cannot include a rule that allows SSH from 0.0.0.0/0. The company needs to be notified if there is any
breach in the policy. A solution is needed as soon as possible.

What should the solutions architect do to meet these requirements with the LEAST operational overhead?

A.Write an AWS Lambda script that monitors security groups for SSH being open to 0.0.0.0/0 addresses and
creates a notification every time it finds one.
B.Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service
(Amazon SNS) notification when a noncompliant rule is created.
C.Create an IAM role with permissions to globally open security groups and network ACLs. Create an Amazon
Simple Notification Service (Amazon SNS) topic to generate a notification every time the role is assumed by a
user.
D.Configure a service control policy (SCP) that prevents non-administrative users from creating or editing
security groups. Create a notification in the ticketing system when a user requests a rule that needs
administrator permissions.

Answer: B

Explanation:

Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service
(Amazon SNS) notification when a noncompliant rule is created. The managed rule checks security groups for SSH open to 0.0.0.0/0 out of the box, so no custom detection code has to be written or maintained.
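
A sketch of enabling the managed rule with boto3; the notification side (an EventBridge rule on Config compliance changes targeting an SNS topic) is set up separately:

    import boto3

    config = boto3.client("config")

    # The managed rule "restricted-ssh" (source identifier INCOMING_SSH_DISABLED)
    # flags security groups that allow inbound SSH from 0.0.0.0/0.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "restricted-ssh",
            "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
        }
    )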

Question: 775 CertyIQ


A company has deployed an application in an AWS account. The application consists of microservices that run on
AWS Lambda and Amazon Elastic Kubernetes Service (Amazon EKS). A separate team supports each microservice.
The company has multiple AWS accounts and wants to give each team its own account for its microservices.

A solutions architect needs to design a solution that will provide service-to-service communication over HTTPS
(port 443). The solution also must provide a service registry for service discovery.

Which solution will meet these requirements with the LEAST administrative overhead?

A.Create an inspection VPC. Deploy an AWS Network Firewall firewall to the inspection VPC. Attach the
inspection VPC to a new transit gateway. Route VPC-to-VPC traffic to the inspection VPC. Apply firewall rules
to allow only HTTPS communication.
B.Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS
listeners for each service. Register microservice compute resources as targets. Identify VPCs that need to
communicate with the services. Associate those VPCs with the service network.
C.Create a Network Load Balancer (NLB) with an HTTPS listener and target groups for each microservice.
Create an AWS PrivateLink endpoint service for each microservice. Create an interface VPC endpoint in each
VPC that needs to consume that microservice.
D.Create peering connections between VPCs that contain microservices. Create a prefix list for each service
that requires a connection to a client. Create route tables to route traffic to the appropriate VPC. Create
security groups to allow only HTTPS communication.

Answer: B

Explanation:

Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS
listeners for each service. Register microservice compute resources as targets. Identify VPCs that need to
communicate with the services. Associate those VPCs with the service network. VPC Lattice is a managed service network with built-in service discovery and HTTPS support across VPCs and accounts, so it avoids the transit gateways, PrivateLink endpoints, and peering meshes that the other options would require teams to administer.

Question: 776 CertyIQ


A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game
increased in popularity, developers noticed slowdowns related to the game's metadata load times. Performance
metrics indicate that simply scaling the database will not help. A solutions architect must explore all options that
include capabilities for snapshots, replication, and sub-millisecond response times.

What should the solutions architect recommend to solve these issues?

A.Migrate the database to Amazon Aurora with Aurora Replicas.
B.Migrate the database to Amazon DynamoDB with global tables.
C.Add an Amazon ElastiCache for Redis layer in front of the database.
D.Add an Amazon ElastiCache for Memcached layer in front of the database.

Answer: C

Explanation:

Add an Amazon ElastiCache for Redis layer in front of the database. Redis delivers sub-millisecond response times and, unlike Memcached, supports both snapshots and replication, so it meets every stated requirement without migrating the database.

Question: 777 CertyIQ


A company uses AWS Organizations for its multi-account AWS setup. The security organizational unit (OU) of the
company needs to share approved Amazon Machine Images (AMIs) with the development OU. The AMIs are
created by using AWS Key Management Service (AWS KMS) encrypted snapshots.

Which solutions will meet these requirements? (Choose two.)

A.Add the development team's OU Amazon Resource Name (ARN) to the launch permission list for the AMIs.
B.Add the Organizations root Amazon Resource Name (ARN) to the launch permission list for the AMIs.
C.Update the key policy to allow the development team's OU to use the AWS KMS keys that are used to
decrypt the snapshots.
D.Add the development team’s account Amazon Resource Name (ARN) to the launch permission list for the
AMIs.
E.Recreate the AWS KMS key. Add a key policy to allow the Organizations root Amazon Resource Name (ARN)
to use the AWS KMS key.

Answer: AC

Explanation:

A.Add the development team's OU Amazon Resource Name (ARN) to the launch permission list for the AMIs.

C.Update the key policy to allow the development team's OU to use the AWS KMS keys that are used to
decrypt the snapshots. Sharing an AMI backed by encrypted snapshots requires both grants: launch permission on the AMI itself and a KMS key policy that lets the consuming OU use the keys that decrypt the snapshots.
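
A sketch of the launch-permission half with boto3 (the AMI ID and OU ARN are placeholders); the KMS key policy must separately allow the OU's principals to use the key, or launches from the shared AMI will fail:

    import boto3

    ec2 = boto3.client("ec2")

    # Grant launch permission on the AMI to the development OU.
    ec2.modify_image_attribute(
        ImageId="ami-0123456789abcdef0",
        LaunchPermission={
            "Add": [{
                "OrganizationalUnitArn": "arn:aws:organizations::111111111111:ou/o-example/ou-dev1"
            }]
        },
    )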

Question: 778 CertyIQ


A data analytics company has 80 offices that are distributed globally. Each office hosts 1 PB of data and has
between 1 and 2 Gbps of internet bandwidth.

The company needs to perform a one-time migration of a large amount of data from its offices to Amazon S3. The
company must complete the migration within 4 weeks.

Which solution will meet these requirements MOST cost-effectively?

A.Establish a new 10 Gbps AWS Direct Connect connection to each office. Transfer the data to Amazon S3.
B.Use multiple AWS Snowball Edge storage-optimized devices to store and transfer the data to Amazon S3.
C.Use an AWS Snowmobile to store and transfer the data to Amazon S3.
D.Set up an AWS Storage Gateway Volume Gateway to transfer the data to Amazon S3.

Answer: B

Explanation:

Use multiple AWS Snowball Edge storage-optimized devices to store and transfer the data to Amazon S3. Moving 1 PB over a 1-2 Gbps office link would take roughly 46 to 93 days, so online transfer cannot meet the 4-week deadline; shipping Snowball Edge devices to all 80 offices in parallel can, and it avoids the cost and lead time of provisioning 80 new Direct Connect circuits. Snowmobile targets exabyte-scale transfers from a single site, not 80 distributed offices.

Question: 779 CertyIQ


A company has an Amazon Elastic File System (Amazon EFS) file system that contains a reference dataset. The
company has applications on Amazon EC2 instances that need to read the dataset. However, the applications must
not be able to change the dataset. The company wants to use IAM access control to prevent the applications from
being able to modify or delete the dataset.

Which solution will meet these requirements?

A.Mount the EFS file system in read-only mode from within the EC2 instances.
B.Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to the
IAM roles that are attached to the EC2 instances.
C.Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the
EFS file system.
D.Create an EFS access point for each application. Use Portable Operating System Interface (POSIX) file
permissions to allow read-only access to files in the root directory.

Answer: B

Explanation:

Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to the
IAM roles that are attached to the EC2 instances. A file system policy enforces read-only access at the IAM layer as required; a read-only mount option (option A) or POSIX permissions (option D) are not IAM controls, and an identity policy (option C) is attached to principals, not to the file system.
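
A sketch of the file system policy with boto3 (the file system ID, account ID, and role name are placeholders):

    import json
    import boto3

    efs = boto3.client("efs")

    # Deny write operations to the application instances' role; reads stay allowed.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyWritesFromAppRole",
            "Effect": "Deny",
            "Principal": {"AWS": "arn:aws:iam::111111111111:role/AppInstanceRole"},
            "Action": "elasticfilesystem:ClientWrite",
            "Resource": "arn:aws:elasticfilesystem:us-east-1:111111111111:file-system/fs-0123456789abcdef0",
        }],
    }

    efs.put_file_system_policy(
        FileSystemId="fs-0123456789abcdef0",
        Policy=json.dumps(policy),
    )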

Question: 780 CertyIQ


A company has hired an external vendor to perform work in the company’s AWS account. The vendor uses an
automated tool that is hosted in an AWS account that the vendor owns. The vendor does not have IAM access to
the company’s AWS account. The company needs to grant the vendor access to the company’s AWS account.

Which solution will meet these requirements MOST securely?

A.Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the
appropriate IAM policies to the role for the permissions that the vendor requires.
B.Create an IAM user in the company’s account with a password that meets the password complexity
requirements. Attach the appropriate IAM policies to the user for the permissions that the vendor requires.
C.Create an IAM group in the company’s account. Add the automated tool’s IAM user from the vendor account
to the group. Attach the appropriate IAM policies to the group for the permissions that the vendor requires.
D.Create an IAM user in the company’s account that has a permission boundary that allows the vendor’s
account. Attach the appropriate IAM policies to the user for the permissions that the vendor requires.

Answer: A

Explanation:

Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the
appropriate IAM policies to the role for the permissions that the vendor requires. Cross-account role assumption issues temporary STS credentials scoped to exactly the required permissions, so no long-lived IAM users or passwords are created in the company's account.
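
A sketch of the role's trust policy with boto3 (the vendor account ID, role name, and external ID are placeholders; the external ID is an extra safeguard against the confused-deputy problem):

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy that lets the vendor's account assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "vendor-agreed-id"}},
        }],
    }

    iam.create_role(
        RoleName="VendorAccessRole",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )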

Question: 781 CertyIQ


A company wants to run its experimental workloads in the AWS Cloud. The company has a budget for cloud
spending. The company's CFO is concerned about cloud spending accountability for each department. The CFO
wants to receive notification when the spending threshold reaches 60% of the budget.

Which solution will meet these requirements?

A.Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an
alert threshold to receive notification when spending exceeds 60% of the budget.
B.Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create
alert threshold notifications when spending exceeds 60% of the budget.
C.Use cost allocation tags on AWS resources to label owners. Use AWS Support API on AWS Trusted Advisor to
create alert threshold notifications when spending exceeds 60% of the budget.
D.Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add
an alert threshold to receive notification when spending exceeds 60% of the budget.

Answer: A

Explanation:

Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an
alert threshold to receive notification when spending exceeds 60% of the budget. Cost allocation tags attribute spend to each department for accountability, and AWS Budgets natively supports alert thresholds expressed as a percentage of the budgeted amount.
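
A sketch of the budget and alert with boto3 (the account ID, budget amount, and email address are placeholders):

    import boto3

    budgets = boto3.client("budgets")

    # Monthly cost budget with an email alert at 60% of actual spend.
    budgets.create_budget(
        AccountId="111111111111",
        Budget={
            "BudgetName": "experiments-monthly",
            "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        NotificationsWithSubscribers=[{
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 60.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "[email protected]"}],
        }],
    )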

Question: 782 CertyIQ


A company wants to deploy an internal web application on AWS. The web application must be accessible only from
the company's office. The company needs to download security patches for the web application from the internet.

The company has created a VPC and has configured an AWS Site-to-Site VPN connection to the company's office.
A solutions architect must design a secure architecture for the web application.

Which solution will meet these requirements?

A.Deploy the web application on Amazon EC2 instances in public subnets behind a public Application Load
Balancer (ALB). Attach an internet gateway to the VPC. Set the inbound source of the ALB's security group to
0.0.0.0/0.
B.Deploy the web application on Amazon EC2 instances in private subnets behind an internal Application Load
Balancer (ALB). Deploy NAT gateways in public subnets. Attach an internet gateway to the VPC. Set the
inbound source of the ALB's security group to the company's office network CIDR block.
C.Deploy the web application on Amazon EC2 instances in public subnets behind an internal Application Load
Balancer (ALB). Deploy NAT gateways in private subnets. Attach an internet gateway to the VPC. Set the
outbound destination of the ALB’s security group to the company's office network CIDR block.
D.Deploy the web application on Amazon EC2 instances in private subnets behind a public Application Load
Balancer (ALB). Attach an internet gateway to the VPC. Set the outbound destination of the ALB’s security
group to 0.0.0.0/0.

Answer: B

Explanation:

Deploy the web application on Amazon EC2 instances in private subnets behind an internal Application Load
Balancer (ALB). Deploy NAT gateways in public subnets. Attach an internet gateway to the VPC. Set the
inbound source of the ALB's security group to the company's office network CIDR block. The internal ALB is reachable only over the Site-to-Site VPN from the office network, while the NAT gateways give the private instances outbound internet access to download patches without exposing them to inbound internet traffic.

Question: 783 CertyIQ


A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The
company needs to migrate the data to an AWS managed service for development and maintenance of the
application data. The solution must require minimal operational support and provide immutable, cryptographically
verifiable logs of data changes.

Which solution will meet these requirements MOST cost-effectively?

A.Copy the records from the application into an Amazon Redshift cluster.
B.Copy the records from the application into an Amazon Neptune cluster.
C.Copy the records from the application into an Amazon Timestream database.
D.Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.

Answer: D

Explanation:

Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger. QLDB is a fully managed ledger database whose append-only journal is immutable and cryptographically verifiable, which is exactly the audit property the accounting records require.
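
A sketch of the verifiability property with boto3 (the ledger name is a placeholder):

    import boto3

    qldb = boto3.client("qldb")

    # Create the ledger; QLDB maintains an immutable, append-only journal.
    qldb.create_ledger(Name="accounting", PermissionsMode="STANDARD")

    # At any time, fetch a cryptographic digest of the journal to verify that
    # historical records have not been altered.
    digest = qldb.get_digest(Name="accounting")
    print(digest["Digest"])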

Question: 784 CertyIQ


A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket. A series of data
preparation jobs aggregate the data for reporting. The data preparation jobs need to run at regular intervals in
parallel. A few jobs need to run in a specific order later.
The company wants to remove the operational overhead of job error handling, retry logic, and state management.

Which solution will meet these requirements?

A.Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invoke
other Lambda functions at regularly scheduled intervals.
B.Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a regular
interval.
C.Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the DataBrew
data preparation jobs.
D.Use AWS Data Pipeline to process the data. Schedule Data Pipeline to process the data once at midnight.

Answer: C

Explanation:

Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the DataBrew
data preparation jobs. Step Functions supplies error handling, retries, parallel branches, and sequential ordering as managed state machine features, and DataBrew provides the data preparation jobs without infrastructure to operate.

Question: 785 CertyIQ


A solutions architect is designing a payment processing application that runs on AWS Lambda in private subnets
across multiple Availability Zones. The application uses multiple Lambda functions and processes millions of
transactions each day.

The architecture must ensure that the application does not process duplicate payments.

Which solution will meet these requirements?

A.Use Lambda to retrieve all due payments. Publish the due payments to an Amazon S3 bucket. Configure the
S3 bucket with an event notification to invoke another Lambda function to process the due payments.
B.Use Lambda to retrieve all due payments. Publish the due payments to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure another Lambda function to poll the SQS queue and to process the due
payments.
C.Use Lambda to retrieve all due payments. Publish the due payments to an Amazon Simple Queue Service
(Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and to process the due
payments.
D.Use Lambda to retrieve all due payments. Store the due payments in an Amazon DynamoDB table. Configure
streams on the DynamoDB table to invoke another Lambda function to process the due payments.

Answer: C

Explanation:

Use Lambda to retrieve all due payments. Publish the due payments to an Amazon Simple Queue Service
(Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and to process the due
payments. A FIFO queue provides exactly-once processing with deduplication, so a payment published twice is delivered once; a standard queue (option B) offers only at-least-once delivery and could produce duplicates.
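
A sketch of the deduplication with boto3 (the queue URL, payment ID, and group ID are placeholders):

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/payments.fifo"  # hypothetical queue

    # Using the payment ID as the deduplication ID means a retried publish of the
    # same payment within the 5-minute deduplication window is accepted only once.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody='{"payment_id": "pay-1234", "amount": "19.99"}',
        MessageGroupId="customer-5678",        # ordering scope
        MessageDeduplicationId="pay-1234",     # deduplication key
    )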

Question: 786 CertyIQ


A company runs multiple workloads in its on-premises data center. The company's data center cannot scale fast
enough to meet the company's expanding business needs. The company wants to collect usage and configuration
data about the on-premises servers and workloads to plan a migration to AWS.

Which solution will meet these requirements?

A.Set the home AWS Region in AWS Migration Hub. Use AWS Systems Manager to collect data about the on-
premises servers.
B.Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data
about the on-premises servers.
C.Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS Trusted Advisor
to collect data about the on-premises servers.
D.Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS Database
Migration Service (AWS DMS) to collect data about the on-premises servers.

Answer: B

Explanation:

Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data
about the on-premises servers. Application Discovery Service collects server specifications, utilization, and configuration data from on-premises environments and feeds it into Migration Hub for migration planning.

Question: 787 CertyIQ


A company has an organization in AWS Organizations that has all features enabled. The company requires that all
API calls and logins in any existing or new AWS account must be audited. The company needs a managed solution
to prevent additional work and to minimize costs. The company also needs to know when any AWS account is not
compliant with the AWS Foundational Security Best Practices (FSBP) standard.

Which solution will meet these requirements with the LEAST operational overhead?

A.Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security
Hub and AWS Control Tower Account Factory in the environment.
B.Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable AWS
Security Hub and AWS Control Tower Account Factory in the environment.
C.Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC
to self-service provision Amazon GuardDuty in the MALZ.
D.Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC
to self-service provision AWS Security Hub in the MALZ.

Answer: A

Explanation:

Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security
Hub and AWS Control Tower Account Factory in the environment. Control Tower must be set up from the management account; it provides managed governance with CloudTrail logging of API calls and logins across accounts, Account Factory covers newly created accounts, and Security Hub reports compliance against the FSBP standard.

Question: 788 CertyIQ


A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. The company
occasionally needs to use SQL to analyze the log files.

Which solution will meet these requirements MOST cost-effectively?

A.Create an Amazon Aurora MySQL database. Migrate the data from the S3 bucket into Aurora by using AWS
Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B.Create an Amazon Redshift cluster. Use Redshift Spectrum to run SQL statements directly on the data in the
S3 bucket.
C.Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to
run SQL statements directly on the data in the S3 bucket.
D.Create an Amazon EMR cluster. Use Apache Spark SQL to run SQL statements directly on the data in the S3
bucket.

Answer: C

Explanation:

Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to
run SQL statements directly on the data in the S3 bucket. Athena is serverless and billed per query, so occasional analysis incurs no cluster costs, and Parquet's columnar format keeps the amount of data scanned small.
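
A sketch of running a query with boto3, once the crawler has cataloged the data (the database, table, and output location are placeholders):

    import boto3

    athena = boto3.client("athena")

    # Query the Glue-cataloged Parquet data in place.
    athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM logs GROUP BY status",
        QueryExecutionContext={"Database": "log_db"},
        ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
    )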

Question: 789 CertyIQ


A company needs a solution to prevent AWS CloudFormation stacks from deploying AWS Identity and Access
Management (IAM) resources that include an inline policy or “*” in the statement. The solution must also prohibit
deployment of Amazon EC2 instances with public IP addresses. The company has AWS Control Tower enabled in
its organization in AWS Organizations.

Which solution will meet these requirements?

A.Use AWS Control Tower proactive controls to block deployment of EC2 instances with public IP addresses
and inline policies with elevated access or “*”.
B.Use AWS Control Tower detective controls to block deployment of EC2 instances with public IP addresses
and inline policies with elevated access or “*”.
C.Use AWS Config to create rules for EC2 and IAM compliance. Configure the rules to run an AWS Systems
Manager Session Manager automation to delete a resource when it is not compliant.
D.Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead
to noncompliance.

Answer: D

Explanation:

Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead
to noncompliance. SCPs are organization-wide guardrails that deny the noncompliant API actions regardless of whether the resources are deployed through CloudFormation or by any other means.

Question: 790 CertyIQ


A company's web application that is hosted in the AWS Cloud recently increased in popularity. The web application
currently exists on a single Amazon EC2 instance in a single public subnet. The web application has not been able
to meet the demand of the increased web traffic.

The company needs a solution that will provide high availability and scalability to meet the increased user demand
without rewriting the web application.

Which combination of steps will meet these requirements? (Choose two.)

A.Replace the EC2 instance with a larger compute optimized instance.
B.Configure Amazon EC2 Auto Scaling with multiple Availability Zones in private subnets.
C.Configure a NAT gateway in a public subnet to handle web requests.
D.Replace the EC2 instance with a larger memory optimized instance.
E.Configure an Application Load Balancer in a public subnet to distribute web traffic.

Answer: BE

Explanation:

B.Configure Amazon EC2 Auto Scaling with multiple Availability Zones in private subnets.

E.Configure an Application Load Balancer in a public subnet to distribute web traffic.

An Auto Scaling group spanning multiple Availability Zones provides both the high availability and the elasticity, and the ALB in a public subnet distributes incoming web traffic to the instances in the private subnets, all without rewriting the application.

Question: 791 CertyIQ


A company has AWS Lambda functions that use environment variables. The company does not want its developers
to see environment variables in plaintext.

Which solution will meet these requirements?

A.Deploy code to Amazon EC2 instances instead of using Lambda functions.
B.Configure SSL encryption on the Lambda functions to use AWS CloudHSM to store and encrypt the
environment variables.
C.Create a certificate in AWS Certificate Manager (ACM). Configure the Lambda functions to use the certificate
to encrypt the environment variables.
D.Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda
functions to use the KMS key to store and encrypt the environment variables.

Answer: D

Explanation:

Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda
functions to use the KMS key to store and encrypt the environment variables. The encryption helpers encrypt the environment variables with the customer managed key, so developers who lack kms:Decrypt permission on that key cannot view the plaintext values.
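
A sketch of pointing a function at a customer managed key with boto3 (the function name and key ARN are placeholders):

    import boto3

    lam = boto3.client("lambda")

    # Encrypt the function's environment variables at rest with a customer managed
    # key instead of the default AWS managed key.
    lam.update_function_configuration(
        FunctionName="payments-worker",
        KMSKeyArn="arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    )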

Question: 792 CertyIQ


An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs to
offer a web analytics service to millions of users. Users must be verified by using an authentication service to
access the APIs.

Which solution will meet these requirements with the MOST operational efficiency?

A.Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs
with a Cognito authorizer.
B.Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway HTTP
APIs with a Cognito authorizer.
C.Configure an AWS Lambda function to handle user authentication. Implement Amazon API Gateway REST
APIs with a Lambda authorizer.
D.Configure an IAM user to handle user authentication. Implement Amazon API Gateway HTTP APIs with an IAM
authorizer.

Answer: A

Explanation:

Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs
with a Cognito authorizer. A Cognito user pool is a managed user directory that scales to millions of users, and API Gateway REST APIs support Cognito user pool authorizers natively, so no custom authentication code has to be built or operated.
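
A minimal sketch of wiring the authorizer with boto3 (the API ID, user pool ARN, and names are placeholders):

    import boto3

    apigateway = boto3.client("apigateway")

    # Attach a Cognito user pool authorizer to an existing REST API; API Gateway
    # then validates each caller's Cognito token before invoking the backend.
    apigateway.create_authorizer(
        restApiId="a1b2c3d4e5",
        name="cognito-authorizer",
        type="COGNITO_USER_POOLS",
        providerARNs=["arn:aws:cognito-idp:us-east-1:111111111111:userpool/us-east-1_EXAMPLE"],
        identitySource="method.request.header.Authorization",
    )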

Question: 793 CertyIQ

A company has a mobile app for customers. The app’s data is sensitive and must be encrypted at rest. The
company uses AWS Key Management Service (AWS KMS).

The company needs a solution that prevents the accidental deletion of KMS keys. The solution must use Amazon
Simple Notification Service (Amazon SNS) to send an email notification to administrators when a user attempts to
delete a KMS key.

Which solution will meet these requirements with the LEAST operational overhead?

A.Create an Amazon EventBridge rule that reacts when a user tries to delete a KMS key. Configure an AWS
Config rule that cancels any deletion of a KMS key. Add the AWS Config rule as a target of the EventBridge
rule. Create an SNS topic that notifies the administrators.
B.Create an AWS Lambda function that has custom logic to prevent KMS key deletion. Create an Amazon
CloudWatch alarm that is activated when a user tries to delete a KMS key. Create an Amazon EventBridge rule
that invokes the Lambda function when the DeleteKey operation is performed. Create an SNS topic. Configure
the EventBridge rule to publish an SNS message that notifies the administrators.
C.Create an Amazon EventBridge rule that reacts when the KMS DeleteKey operation is performed. Configure
the rule to initiate an AWS Systems Manager Automation runbook. Configure the runbook to cancel the
deletion of the KMS key. Create an SNS topic. Configure the EventBridge rule to publish an SNS message that
notifies the administrators.
D.Create an AWS CloudTrail trail. Configure the trail to deliver logs to a new Amazon CloudWatch log group.
Create a CloudWatch alarm based on the metric filter for the CloudWatch log group. Configure the alarm to use
Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.

Answer: C

Explanation:

An Amazon EventBridge rule that matches the KMS key-deletion API call can both start a Systems Manager Automation runbook to cancel the pending deletion and publish to an SNS topic that emails the administrators, using only managed building blocks.

Reference:

https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-and-remediate-scheduled-deletion-of-aws-kms-keys.html
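
A sketch of the EventBridge rule with boto3 (rule, topic, and runbook names are placeholders); note that in CloudTrail, KMS key deletion surfaces as the ScheduleKeyDeletion API call:

    import json
    import boto3

    events = boto3.client("events")

    # Match attempts to schedule deletion of any KMS key.
    events.put_rule(
        Name="kms-key-deletion-attempt",
        EventPattern=json.dumps({
            "source": ["aws.kms"],
            "detail-type": ["AWS API Call via CloudTrail"],
            "detail": {
                "eventSource": ["kms.amazonaws.com"],
                "eventName": ["ScheduleKeyDeletion"],
            },
        }),
    )

    # One target notifies the administrators; a second target (omitted here) would
    # be the Systems Manager Automation runbook that cancels the deletion.
    events.put_targets(
        Rule="kms-key-deletion-attempt",
        Targets=[{"Id": "notify-admins", "Arn": "arn:aws:sns:us-east-1:111111111111:kms-alerts"}],
    )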

Question: 794 CertyIQ


A company wants to analyze and generate reports to track the usage of its mobile app. The app is popular and has
a global user base. The company uses a custom report building program to analyze application usage.

The program generates multiple reports during the last week of each month. The program takes less than 10
minutes to produce each report. The company rarely uses the program to generate reports outside of the last
week of each month. The company wants to generate reports in the least amount of time when the reports are
requested.

Which solution will meet these requirements MOST cost-effectively?

A.Run the program by using Amazon EC2 On-Demand Instances. Create an Amazon EventBridge rule to start
the EC2 instances when reports are requested. Run the EC2 instances continuously during the last week of
each month.
B.Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports
are requested.
C.Run the program in Amazon Elastic Container Service (Amazon ECS). Schedule Amazon ECS to run the
program when reports are requested.
D.Run the program by using Amazon EC2 Spot Instances. Create an Amazon EventBridge rule to start the EC2
instances when reports are requested. Run the EC2 instances continuously during the last week of each month.

Answer: B

Explanation:

Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports
are requested. Each report takes less than 10 minutes, which fits within Lambda's 15-minute maximum run time, and Lambda bills only for the invocations during the last week of each month instead of for continuously running instances.

Question: 795 CertyIQ


A company is designing a tightly coupled high performance computing (HPC) environment in the AWS Cloud. The
company needs to include features that will optimize the HPC environment for networking and storage.

Which combination of solutions will meet these requirements? (Choose two.)

A.Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator.
B.Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
C.Create an Amazon CloudFront distribution. Configure the viewer protocol policy to be HTTP and HTTPS.
D.Launch Amazon EC2 instances. Attach an Elastic Fabric Adapter (EFA) to the instances.
E.Create an AWS Elastic Beanstalk deployment to manage the environment.

Answer: BD

Explanation:

B.Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.

D.Launch Amazon EC2 instances. Attach an Elastic Fabric Adapter (EFA) to the instances.

FSx for Lustre scratch file systems deliver the high-throughput, low-latency storage that HPC workloads need, and EFA gives tightly coupled inter-node applications OS-bypass networking between instances.

Question: 796 CertyIQ


A company needs a solution to prevent photos with unwanted content from being uploaded to the company's web
application. The solution must not involve training a machine learning (ML) model.

Which solution will meet these requirements?

A.Create and deploy a model by using Amazon SageMaker Autopilot. Create a real-time endpoint that the web
application invokes when new photos are uploaded.
B.Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a
Lambda function URL that the web application invokes when new photos are uploaded.
C.Create an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content.
Associate the function with the web application.
D.Create an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content. Create a
Lambda function URL that the web application invokes when new photos are uploaded.

Answer: B

Explanation:

Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a Lambda
function URL that the web application invokes when new photos are uploaded. Rekognition's image moderation uses a pre-trained model, so it detects unwanted content without the company training or hosting any ML model; Rekognition Video (option D) targets video, not photos.
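
A sketch of the moderation check inside the Lambda function with boto3 (the bucket, object key, and confidence threshold are placeholders):

    import boto3

    rekognition = boto3.client("rekognition")

    # Pre-trained moderation model; no training required.
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": "upload-bucket", "Name": "photo.jpg"}},
        MinConfidence=80,
    )

    # Reject the upload if any moderation label is returned.
    if response["ModerationLabels"]:
        print("Unwanted content detected:", [l["Name"] for l in response["ModerationLabels"]])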

Question: 797 CertyIQ


A company uses AWS to run its ecommerce platform. The platform is critical to the company's operations and has
a high volume of traffic and transactions. The company configures a multi-factor authentication (MFA) device to
secure its AWS account root user credentials. The company wants to ensure that it will not lose access to the root
user account if the MFA device is lost.

Which solution will meet these requirements?

A.Set up a backup administrator account that the company can use to log in if the company loses the MFA
device.
B.Add multiple MFA devices for the root user account to handle the disaster scenario.
C.Create a new administrator account when the company cannot access the root account.
D.Attach the administrator policy to another IAM user when the company cannot access the root account.

Answer: B

Explanation:

B. Add multiple MFA devices for the root user account to handle the disaster scenario.

By adding multiple MFA devices for the root user account, the company ensures that it can still access the
account even if one MFA device is lost. This approach provides a backup for authentication, addressing the
concern of losing access to the root user account if the MFA device is lost.

Question: 798 CertyIQ


A social media company is creating a rewards program website for its users. The company gives users points when
users create and upload videos to the website. Users redeem their points for gifts or discounts from the company's
affiliated partners. A unique ID identifies users. The partners refer to this ID to verify user eligibility for rewards.

The partners want to receive notification of user IDs through an HTTP endpoint when the company gives users
points. Hundreds of vendors are interested in becoming affiliated partners every day. The company wants to
design an architecture that gives the website the ability to add partners rapidly in a scalable way.

Which solution will meet these requirements with the LEAST implementation effort?

A.Create an Amazon Timestream database to keep a list of affiliated partners. Implement an AWS Lambda
function to read the list. Configure the Lambda function to send user IDs to each partner when the company
gives users points.
B.Create an Amazon Simple Notification Service (Amazon SNS) topic. Choose an endpoint protocol. Subscribe
the partners to the topic. Publish user IDs to the topic when the company gives users points.
C.Create an AWS Step Functions state machine. Create a task for every affiliated partner. Invoke the state
machine with user IDs as input when the company gives users points.
D.Create a data stream in Amazon Kinesis Data Streams. Implement producer and consumer applications. Store
a list of affiliated partners in the data stream. Send user IDs when the company gives users points.

Answer: B

Explanation:

SNS is designed for precisely this kind of use case. It allows you to publish messages to a topic, which can
then be delivered to multiple subscribers. Partners can subscribe to the SNS topic using an HTTP endpoint as
the protocol, which meets the requirement to notify partners via an HTTP endpoint. This approach is highly
scalable and requires the least implementation effort because it leverages managed services without the
need for custom logic to manage subscriptions or deliver notifications.
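
A sketch of the fan-out with boto3 (the topic name and partner endpoint are placeholders); each HTTPS subscriber must confirm its subscription before it receives messages:

    import boto3

    sns = boto3.client("sns")

    # One topic for the rewards events; adding a partner is just one more subscription.
    topic_arn = sns.create_topic(Name="reward-points")["TopicArn"]

    sns.subscribe(
        TopicArn=topic_arn,
        Protocol="https",
        Endpoint="https://partner.example.com/user-points",  # partner's HTTP endpoint
    )

    # Publish once; SNS delivers the user ID to every subscribed partner.
    sns.publish(TopicArn=topic_arn, Message='{"user_id": "u-12345"}')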
