AWS Associate Architect Part 1
The physical data transfer and integration with the existing tape infrastructure provide efficiency benefits
that can reduce cost.
Answer: A
Explanation:
A spread placement group is a group of instances that are each placed on distinct hardware.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Answer: D
Explanation:
1. A regional Reserved Instance does not reserve capacity.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/reserved-instances-scope.html
2. Reserved Instances provide only a price discount; a Capacity Reservation is needed to reserve capacity.
What should the solutions architect do next in the new management account?
A.Have the R&D AWS account be part of both organizations during the transition.
B.Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior
organization.
C.Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account
to the new R&D AWS account.
D.Have the R&D AWS account join the new organization. Make the new management account a member of the
prior organization.
Answer: B
Explanation:
1. An account can leave its current organization and then be invited to join the new organization.
A.Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS)
container instance that stores the information that the company receives in an Amazon Elastic File System
(Amazon EFS) file system. Authorization is resolved at the GWLB.
B.Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores the
information that the company receives in an Amazon S3 bucket. Use an AWS Lambda function to resolve
authorization.
C.Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the
information that the company receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to
resolve authorization.
D.Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS)
container instance that stores the information that the company receives on an Amazon Elastic File System
(Amazon EFS) file system. Use an AWS Lambda function to resolve authorization.
Answer: C
Explanation:
1. https://docs.aws.amazon.com/lambda/latest/dg/services-kinesisfirehose.html
2. A Lambda authorizer is the logical way to resolve authorization at the API Gateway endpoint.
A.Create a cross-Region read replica and promote the read replica to the primary instance.
B.Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
C.Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
D.Copy automatic snapshots to another Region every 24 hours.
Answer: D
Explanation:
1. This is the most cost-effective solution because it does not require any additional AWS services. Amazon
RDS creates automated snapshots of your DB instances daily during the backup window. You can copy these
snapshots to another Region every 24 hours to meet your RPO and RTO requirements. The other solutions are
more expensive because they require additional AWS services; for example, AWS DMS adds cost on top of
Amazon RDS.
2. Snapshots are always a cost-efficient way to implement a DR plan.
A.Use an Amazon ElastiCache for Memcached instance to store the session data. Update the application to use
ElastiCache for Memcached to store the session state.
B.Use Amazon ElastiCache for Redis to store the session state. Update the application to use ElastiCache for
Redis to store the session state.
C.Use an AWS Storage Gateway cached volume to store session data. Update the application to use AWS
Storage Gateway cached volume to store the session state.
D.Use Amazon RDS to store the session state. Update the application to use Amazon RDS to store the session
state.
Answer: B
Explanation:
1. Redis is correct because it provides high availability and data persistence.
2. B is the correct answer: use Amazon ElastiCache for Redis to store the session state and update the
application to use ElastiCache for Redis. This solution is cost-effective and requires minimal development
effort.
A.Create a read replica of the database. Direct the queries to the read replica.
B.Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the new
database.
C.Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
D.Resize the DB instance to accommodate the additional workload.
Answer: A
Explanation:
This is the most cost-effective solution because it does not require any additional AWS services. A read
replica is a copy of a database that is synchronized with the primary database. You can direct the queries for
the report to the read replica, which will not affect the performance of the daily workloads.
A.Use the AWS Load Balancer Controller to provision a Network Load Balancer.
B.Use the AWS Load Balancer Controller to provision an Application Load Balancer.
C.Use an AWS Lambda function to connect the requests to Amazon EKS.
D.Use Amazon API Gateway to connect the requests to Amazon EKS.
Answer: D
Explanation:
1. API Gateway is a fully managed service that makes it easy for you to create, publish, maintain, monitor, and
secure APIs at any scale. API Gateway provides an entry point to your microservices.
2. https://aws.amazon.com/blogs/containers/microservices-development-using-aws-controllers-for-kubernetes-ack-and-amazon-eks-blueprints/
A.Use Amazon S3 to store the images. Turn on multi-factor authentication (MFA) and public bucket access.
Provide customers with a link to the S3 bucket.
B.Use Amazon S3 to store the images. Create an IAM user for each customer. Add the users to a group that has
permission to access the S3 bucket.
C.Use Amazon EC2 instances that are behind Application Load Balancers (ALBs) to store the images. Deploy
the instances only in the countries the company services. Provide customers with links to the ALBs for their
specific country's instances.
D.Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with geographic
restrictions. Provide a signed URL for each customer to access the data in CloudFront.
Answer: D
Explanation:
1. The answer is D: CloudFront geographic restrictions with signed URLs meet both requirements.
2. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
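As a rough sketch of the signed-URL part of answer D (the key pair ID, private key file, and distribution domain below are hypothetical placeholders), botocore's CloudFrontSigner can generate a time-limited URL for a customer:

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the CloudFront policy with the private key that matches the public key
    # registered with the distribution (hypothetical file name).
    with open("cloudfront_private_key.pem", "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None)
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

# Key pair ID and URL are placeholders for illustration only.
signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/images/photo.jpg",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)

Geographic restrictions are configured separately on the distribution; the signed URL only controls who can request the object and for how long.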
A.Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
B.Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
C.Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D.Use Redis shards that contain multiple nodes with Auto Scaling turned on.
Answer: A
Explanation:
A. I would go with A. Using AOF can't protect you from all failure scenarios. For example, if a node fails due
to a hardware fault in an underlying physical server, ElastiCache will provision a new node on a different
server. In this case, the AOF is not available and can't be used to recover the data.
Which solution will reduce the launch time of the application during the next testing phase?
A.Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2 On-Demand
Instances available during the next testing phase.
B.Launch EC2 Spot Instances to support the application and to scale the application so it is available during the
next testing phase.
C.Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling warm pools
during the next testing phase.
D.Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances during the next
testing phase.
Answer: C
Explanation:
Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling warm pools
during the next testing phase.
A.Use manual scaling to change the size of the Auto Scaling group.
B.Use predictive scaling to change the size of the Auto Scaling group.
C.Use dynamic scaling to change the size of the Auto Scaling group.
D.Use schedule scaling to change the size of the Auto Scaling group.
Answer: C
Explanation:
Dynamic Scaling – This is yet another type of Auto Scaling in which the number of EC2 instances is changed
automatically depending on the signals received. Dynamic Scaling is a good choice when there is a high
volume of unpredictable traffic.
https://www.developer.com/web-services/aws-auto-scaling-types-best-practices/#:~:text=Dynamic%20Scaling%20%E2%80%93%20This%20is%20yet,high%20volume%20of%20unpredicta
Answer: A
Explanation:
1. A to autoscaling
2. The correct answer is A
Answer: B
Explanation:
Provisioned Concurrency incurs additional costs, so it is cost-efficient to use it only when necessary. For
example, early in the morning when activity starts, or to handle recurring peak usage.
Question: 598 CertyIQ
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS
Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share.
Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically
throughout the day.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
Answer: ACF
Explanation:
1. https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-csv-home.html
https://aws.amazon.com/blogs/aws/amazon-athena-interactive-sql-queries-for-data-in-amazon-s3/
https://aws.amazon.com/storagegateway/faqs/
2. It should be ACF
A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with the
company's operational team to build the application.
Which activities are the responsibility of the company's operational team? (Choose three.)
Answer: ACE
Explanation:
A.Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that
the application requires.
B.Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port
that the application requires.
C.Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an
Application Load Balancer as the origin.
D.Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires.
Configure AWS Lambda functions with provisioned concurrency to process the requests.
Answer: A
Explanation:
Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that
the application requires.
Which solution will meet these requirements with the LEAST operational overhead?
A.Create a DB snapshot of the RDS for PostgreSQL DB instance to populate a new Aurora PostgreSQL DB
cluster.
B.Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read replica to a
new Aurora PostgreSQL DB cluster.
C.Use data import from Amazon S3 to migrate the database to an Aurora PostgreSQL DB cluster.
D.Use the pg_dump utility to back up the RDS for PostgreSQL database. Restore the backup to a new Aurora
PostgreSQL DB cluster.
Answer: B
Explanation:
Create an Aurora read replica of the RDS for PostgreSQL DB instance. Promote the Aurora read replica to a
new Aurora PostgreSQL DB cluster.
A.Take a snapshot of the EBS storage that is attached to each EC2 instance. Create an AWS CloudFormation
template to launch new EC2 instances from the EBS storage.
B.Take a snapshot of the EBS storage that is attached to each EC2 instance. Use AWS Elastic Beanstalk to set
the environment based on the EC2 template and attach the EBS storage.
C.Use AWS Backup to set up a backup plan for the entire group of EC2 instances. Use the AWS Backup API or
the AWS CLI to speed up the restore process for multiple EC2 instances.
D.Create an AWS Lambda function to take a snapshot of the EBS storage that is attached to each EC2 instance
and copy the Amazon Machine Images (AMIs). Create another Lambda function to perform the restores with the
copied AMIs and attach the EBS storage.
Answer: C
Explanation:
The key reasons are: AWS Backup automates backup of resources like EBS volumes. It allows defining
backup policies for groups of resources. This removes the need to manually create backups for each resource.
The AWS Backup API and CLI allow programmatic control of backup plans and restores. This enables
restoring hundreds of EC2 instances programmatically after a disaster instead of manually. AWS Backup
handles clean up of old backups based on policies to minimize storage costs.
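As a minimal sketch of driving AWS Backup programmatically (plan name, vault, role ARN, and tag key are hypothetical), a tag-based backup plan for the EC2/EBS fleet could be created with boto3:

import boto3

backup = boto3.client("backup")

# Hypothetical daily plan; old recovery points are expired automatically.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-ec2-plan",
        "Rules": [
            {
                "RuleName": "daily",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 30},
            }
        ],
    }
)

# Select every resource tagged Backup=true instead of listing instances one by one.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-instances",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {"ConditionType": "STRINGEQUALS", "ConditionKey": "Backup", "ConditionValue": "true"}
        ],
    },
)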
Which solution will meet these requirements with the MOST operational efficiency?
A.Use the AWS Step Functions Map state in Inline mode to process the data in parallel.
B.Use the AWS Step Functions Map state in Distributed mode to process the data in parallel.
C.Use AWS Glue to process the data in parallel.
D.Use several AWS Lambda functions to process the data in parallel.
Answer: B
Explanation:
With Step Functions, you can orchestrate large-scale parallel workloads to perform tasks, such as on-demand
processing of semi-structured data. These parallel workloads let you concurrently process large-scale data
sources stored in Amazon S3.
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-orchestrate-large-scale-parallel-workloads.html
Answer: D
Explanation:
1. D. Order multiple AWS Snowball devices. Copy the data to the devices. Send the devices to AWS to copy the
data to Amazon S3.
2. 10 PB = It's Snowballs.
Answer: D
Explanation:
1. The key reasons are: The Storage Gateway volume gateway provides iSCSI block storage using cached
volumes. This allows replacing the on-premises iSCSI servers with minimal changes. Cached volumes store
frequently accessed data locally for low-latency access, while storing less frequently accessed data in S3.
This reduces the number of on-premises servers while still providing low-latency access to hot data. EBS
does not provide iSCSI support to replace the existing servers. S3 File Gateway is for file storage, not block
storage. Stored volumes would store all data on premises, not in S3.
2. iSCSI = Volume Gateway; low-latency access to frequently used data = cached volumes.
A.Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Glacier after 30
days.
B.Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 Standard-
Infrequent Access (S3 Standard-IA) after 30 days.
C.Store all the objects in S3 Standard with an S3 Lifecycle rule to transition the objects to S3 One Zone-
Infrequent Access (S3 One Zone-IA) after 30 days.
D.Store all the objects in S3 Intelligent-Tiering with an S3 Lifecycle rule to transition the objects to S3
Standard-Infrequent Access (S3 Standard-IA) after 30 days.
Answer: B
Explanation:
Minimum Days for Transition to S3 Standard-IA or S3 One Zone-IA Before you transition objects to S3
Standard-IA or S3 One Zone-IA, you must store them for at least 30 days in Amazon S3. For example, you
cannot create a Lifecycle rule to transition objects to the S3 Standard-IA storage class one day after you
create them. Amazon S3 doesn't support this transition within the first 30 days because newer objects are
often accessed more frequently or deleted sooner than is suitable for S3 Standard-IA or S3 One Zone-IA
storage. Similarly, if you are transitioning noncurrent objects (in versioned buckets), you can transition only
objects that are at least 30 days noncurrent to S3 Standard-IA or S3 One Zone-IA storage.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
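As a minimal sketch of the winning lifecycle rule (the bucket name is hypothetical), the 30-day transition to S3 Standard-IA can be applied with boto3:

import boto3

s3 = boto3.client("s3")

# The rule moves every object to S3 Standard-IA 30 days after creation,
# which is the earliest transition Amazon S3 allows for that storage class.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)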
The database size has grown over time, reducing the performance and increasing the cost of storage. The company
must improve the database performance and needs a solution that is highly available and resilient.
A.Reduce the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to
Magnetic.
B.Increase the RDS DB instance size. Increase the storage capacity to 24 TiB. Change the storage type to
Provisioned IOPS.
C.Create an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the object
metadata in the existing database.
D.Create an Amazon DynamoDB table. Update the application to use DynamoDB. Use AWS Database Migration
Service (AWS DMS) to migrate data from the Oracle database to DynamoDB.
Answer: C
Explanation:
C. Create an Amazon S3 bucket. Update the application to store documents in the S3 bucket. Store the object
metadata in the existing database.
The company's security team recommends to increase the security of the application endpoint by restricting
access to only the IP addresses registered by the retail locations.
What should a solutions architect do to meet these requirements?
A.Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP
addresses in the rule to include the registered IP addresses.
B.Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB.
Modify the firewall rules to include the registered IP addresses.
C.Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function on
the ALB to validate that incoming requests are from the registered IP addresses.
D.Configure the network ACL on the subnet that contains the public interface of the ALB. Update the ingress
rules on the network ACL with entries for each of the registered IP addresses.
Answer: A
Explanation:
A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP
addresses in the rule to include the registered IP addresses.
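As a rough illustration of answer A (the CIDRs, ALB ARN, and names below are hypothetical), an IP set, a web ACL that allows only that set, and the ALB association can be created with boto3:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Hypothetical CIDRs registered by the retail locations.
ip_set = wafv2.create_ip_set(
    Name="registered-store-ips",
    Scope="REGIONAL",                       # REGIONAL scope is required for an ALB
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24", "198.51.100.10/32"],
)["Summary"]

web_acl = wafv2.create_web_acl(
    Name="restrict-to-store-ips",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},            # block everything that does not match
    Rules=[
        {
            "Name": "allow-registered-ips",
            "Priority": 0,
            "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["ARN"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "AllowRegisteredIps",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RestrictToStoreIps",
    },
)["Summary"]

# Hypothetical ALB ARN; association is what actually enforces the ACL on the endpoint.
wafv2.associate_web_acl(
    WebACLArn=web_acl["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/retail-alb/abc123",
)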
Which solution will meet these requirements with the LEAST operational overhead?
A.Create an IAM role that includes permissions to access Lake Formation tables.
B.Create data filters to implement row-level security and cell-level security.
C.Create an AWS Lambda function that removes sensitive information before Lake Formation ingests the data.
D.Create an AWS Lambda function that periodically queries and removes sensitive information from Lake
Formation tables.
Answer: B
Explanation:
1. The key reasons are: Lake Formation data filters allow restricting access to rows or cells in data tables
based on conditions. This allows preventing access to sensitive data. Data filters are implemented within Lake
Formation and do not require additional coding or Lambda functions. Lambda functions to pre-process data or
purge tables would require ongoing development and maintenance. IAM roles only provide user-level
permissions, not row- or cell-level security. Data filters give granular access control over Lake Formation data
with minimal configuration, avoiding complex custom code.
2. You can create data filters based on the values of columns in a Lake Formation table. Easy. Lowest
operational overhead.
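A minimal sketch of such a data filter (catalog ID, database, table, and column names are hypothetical) combines a row filter with a column wildcard for cell-level security:

import boto3

lakeformation = boto3.client("lakeformation")

# The filter exposes only US rows and hides sensitive columns from grantees.
lakeformation.create_data_cells_filter(
    TableData={
        "TableCatalogId": "123456789012",
        "DatabaseName": "sales_db",
        "TableName": "orders",
        "Name": "orders_us_no_pii",
        "RowFilter": {"FilterExpression": "country = 'US'"},
        "ColumnWildcard": {"ExcludedColumnNames": ["ssn", "credit_card_number"]},
    }
)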
A.Deploy an interface VPC endpoint for Amazon EC2. Create an AWS Site-to-Site VPN connection between the
company and the VPC.
B.Deploy a gateway VPC endpoint for Amazon S3. Set up an AWS Direct Connect connection between the on-
premises network and the VPC.
C.Set up an AWS Transit Gateway connection from the VPC to the S3 buckets. Create an AWS Site-to-Site VPN
connection between the company and the VPC.
D.Set up proxy EC2 instances that have routes to NAT gateways. Configure the proxy EC2 instances to fetch S3
data and feed the application instances.
Answer: B
Explanation:
Gateway VPC Endpoint = no internet to access S3. Direct Connect = secure access to VPC.
The third-party vendor has received many 503 Service Unavailable Errors when sending data to the application.
When the data volume spikes, the compute capacity reaches its maximum limit and the application is unable to
process all requests.
Which design should a solutions architect recommend to provide a more scalable solution?
A.Use Amazon Kinesis Data Streams to ingest the data. Process the data using AWS Lambda functions.
B.Use Amazon API Gateway on top of the existing application. Create a usage plan with a quota limit for the
third-party vendor.
C.Use Amazon Simple Notification Service (Amazon SNS) to ingest the data. Put the EC2 instances in an Auto
Scaling group behind an Application Load Balancer.
D.Repackage the application as a container. Deploy the application using Amazon Elastic Container Service
(Amazon ECS) using the EC2 launch type with an Auto Scaling group.
Answer: A
Explanation:
The key reasons are: Kinesis Data Streams provides an auto-scaling stream that can handle large amounts of
streaming data ingestion and throughput. This removes the bottlenecks around receiving the data. AWS
Lambda can process and store the data in a scalable serverless manner, avoiding EC2 capacity limits. API
Gateway adds API management capabilities but does not improve the underlying scalability of the EC2
application. SNS is for event publishing/notifications, not large-scale data ingestion. ECS still relies on EC2
capacity.
A.Configure an internet gateway. Update the S3 bucket policy to allow access from the internet gateway.
Update the application to use the new internet gateway.
B.Configure a VPN connection. Update the S3 bucket policy to allow access from the VPN connection. Update
the application to use the new VPN connection.
C.Configure a NAT gateway. Update the S3 bucket policy to allow access from the NAT gateway. Update the
application to use the new NAT gateway.
D.Configure a VPC endpoint. Update the S3 bucket policy to allow access from the VPC endpoint. Update the
application to use the new VPC endpoint.
Answer: D
Explanation:
1. The solution that will meet these requirements is to configure a VPC endpoint for Amazon S3, update the S3
bucket policy to allow access from the VPC endpoint, and update the application to use the new VPC endpoint.
The key reasons are: VPC endpoints allow private connectivity from VPCs to AWS services like S3 without
using an internet gateway. The application can connect to S3 through the VPC endpoint while remaining in the
private subnet, without internet access.
2. VPC Endpoint for S3.
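As a rough sketch (VPC, route table, and bucket identifiers are hypothetical), the gateway endpoint and the matching bucket-policy condition look like this:

import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3")

# Gateway endpoint for S3 attached to the private subnet's route table.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)["VpcEndpoint"]

# Deny any request that does not arrive through the endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyFromVpcEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-private-bucket",
                "arn:aws:s3:::example-private-bucket/*",
            ],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": endpoint["VpcEndpointId"]}},
        }
    ],
}
s3.put_bucket_policy(Bucket="example-private-bucket", Policy=json.dumps(policy))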
Which solution will meet these requirements with the LEAST operational overhead?
A.Use the container application to encrypt the information by using AWS Key Management Service (AWS KMS).
B.Enable secrets encryption in the EKS cluster by using AWS Key Management Service (AWS KMS).
C.Implement an AWS Lambda function to encrypt the information by using AWS Key Management Service
(AWS KMS).
D.Use AWS Systems Manager Parameter Store to encrypt the information by using AWS Key Management
Service (AWS KMS).
Answer: B
Explanation:
Enabling secrets encryption in the EKS cluster by using AWS Key Management Service (AWS KMS) is the
lowest-overhead way to encrypt the sensitive information in the Kubernetes Secrets objects. When you enable
secrets encryption in the EKS cluster, AWS KMS encrypts the secrets before they are stored in the EKS
cluster. You do not need to make any changes to your container application or implement any additional
Lambda functions.
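A minimal sketch of turning this on for an existing cluster (cluster name and KMS key ARN are hypothetical):

import boto3

eks = boto3.client("eks")

# Enable envelope encryption of Kubernetes Secrets with a customer managed KMS key.
eks.associate_encryption_config(
    clusterName="prod-cluster",
    encryptionConfig=[
        {
            "resources": ["secrets"],
            "provider": {
                "keyArn": "arn:aws:kms:us-east-1:123456789012:key/1111aaaa-22bb-33cc-44dd-5555eeee6666"
            },
        }
    ],
)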
•Web and application servers that run on Amazon EC2 instances as part of Auto Scaling groups
•An Amazon RDS DB instance for data storage
A solutions architect needs to limit access to the application servers so that only the web servers can access them.
Answer: D
Explanation:
1. The key reasons are: An Application Load Balancer (ALB) allows directing traffic to the application servers
and provides access control via security groups. Security groups act as a firewall at the instance level and can
control access to the application servers from the web servers. Network ACLs work at the subnet level and are
less flexible than security groups for instance-level access control. VPC endpoints are used to provide private
access to AWS services, not for access between EC2 instances. AWS PrivateLink provides private connectivity
between VPCs, which is not required in this single-VPC scenario.
2. ALB with Security Group is simplest solution.
A.Run the Amazon CloudWatch agent in the existing EKS cluster. View the metrics and logs in the CloudWatch
console.
B.Run AWS App Mesh in the existing EKS cluster. View the metrics and logs in the App Mesh console.
C.Configure AWS CloudTrail to capture data events. Query CloudTrail by using Amazon OpenSearch Service.
D.Configure Amazon CloudWatch Container Insights in the existing EKS cluster. View the metrics and logs in
the CloudWatch console.
Answer: D
Explanation:
1. The key reasons are: CloudWatch Container Insights automatically collects metrics and logs from containers
running in EKS clusters. This provides visibility into resource utilization, application performance, and
microservice interactions. The metrics and logs are stored in CloudWatch Logs and CloudWatch metrics for
central access. The CloudWatch console allows querying, filtering, and visualizing the metrics and logs in one
centralized place.
2. What Cloudwatch Container Insights is for.
The company recently experienced malicious attacks against its systems. The company needs a solution that
continuously monitors for malicious activity in the AWS account, workloads, and access patterns to the S3 bucket.
The solution must also report suspicious activity and display the information on a dashboard.
Which solution will meet these requirements?
Answer: C
Explanation:
The key reasons are: Amazon GuardDuty is a threat detection service that continuously monitors for malicious
activity and unauthorized behavior. It analyzes AWS CloudTrail, VPC Flow Logs, and DNS logs. GuardDuty
can detect threats like instance or S3 bucket compromise, malicious IP addresses, or unusual API calls.
Findings can be sent to AWS Security Hub, which provides a centralized security dashboard and alerts.
Amazon Macie and Amazon Inspector do not monitor the breadth of activity that GuardDuty does; they focus
more on data security and application vulnerabilities, respectively. AWS Config monitors for resource
configuration changes, not malicious activity.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
Answer: BE
Explanation:
1. Amazon EFS provides a scalable, high-performance NFS file system that can be accessed from multiple
resources in AWS. AWS DataSync can perform the migration from the on-premises NFS server to EFS without
interruption to existing services. This avoids having to manually move the data, which could cause downtime.
DataSync incrementally syncs changed data. EFS and DataSync together provide a cost-optimized approach
compared with using S3 or FSx, while still meeting the requirements. Manually copying 200 GB of data to AWS
would be slow and risky compared with using DataSync.
2. NFS file system = EFS. Use DataSync for the migration with NFS support.
A.Create an FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment type. Use
AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
B.Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use
AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in governance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
C.Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use
AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
D.Create an FSx for Windows File Server file system in us-east-1 that has a Single-AZ 2 deployment type. Use
AWS Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in governance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
Answer: C
Explanation:
Create an FSx for Windows File Server file system in us-east-1 that has a Multi-AZ deployment type. Use AWS
Backup to create a daily backup plan that includes a backup rule that copies the backup to us-west-2.
Configure AWS Backup Vault Lock in compliance mode for a target vault in us-west-2. Configure a minimum
duration of 5 years.
A.Create an IAM policy that prohibits changes to CloudTrail, and attach it to the root user.
B.Create a new trail in CloudTrail from within the developer accounts with the organization trails option
enabled.
C.Create a service control policy (SCP) that prohibits changes to CloudTrail, and attach it to the developer
accounts.
D.Create a service-linked role for CloudTrail with a policy condition that allows changes only from an Amazon
Resource Name (ARN) in the management account.
Answer: C
Explanation:
Which type of storage should a solutions architect recommend to meet these requirements?
Answer: C
Explanation:
Which solution will meet this requirement with the LEAST operational effort?
A.Create a second S3 bucket in us-east-1. Use S3 Cross-Region Replication to copy photos from the existing S3
bucket to the second S3 bucket.
B.Create a cross-origin resource sharing (CORS) configuration of the existing S3 bucket. Specify us-east-1 in
the CORS rule's AllowedOrigin element.
C.Create a second S3 bucket in us-east-1 across multiple Availability Zones. Create an S3 Lifecycle rule to save
photos into the second S3 bucket.
D.Create a second S3 bucket in us-east-1. Configure S3 event notifications on object creation and update
events to invoke an AWS Lambda function to copy photos from the existing S3 bucket to the second S3 bucket.
Answer: A
Explanation:
S3 Cross-Region Replication handles automatically copying new objects added to the source bucket to the
destination bucket in a different region. It continuously replicates new photos without needing to manually
copy files or set up Lambda triggers. CORS only enables cross-origin access; it does not copy objects. Using
Lifecycle rules or Lambda functions requires custom code and logic to handle the copying. S3 Cross-Region
Replication provides automated replication that minimizes operational overhead.
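A minimal sketch of such a replication rule (bucket names and role ARN are hypothetical; both buckets must be versioned first):

import boto3

s3 = boto3.client("s3")

# Versioning is a prerequisite for Cross-Region Replication on both buckets.
for bucket in ("photos-source-eu-west-1", "photos-replica-us-east-1"):
    s3.put_bucket_versioning(Bucket=bucket, VersioningConfiguration={"Status": "Enabled"})

s3.put_bucket_replication(
    Bucket="photos-source-eu-west-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [
            {
                "ID": "replicate-all-photos",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # replicate every new object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::photos-replica-us-east-1"},
            }
        ],
    },
)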
Which solutions will meet these requirements and provide the MOST scalability? (Choose two.)
Answer: CD
Explanation:
The key reasons are: DynamoDB auto scaling allows the database to scale up and down dynamically based on
traffic patterns. This handles the large spike in traffic in the mornings and lower traffic later in the day. S3
combined with CloudFront provides a highly scalable infrastructure for the static content, and CloudFront
caching improves performance. Aurora Serverless could be an option but may not scale as seamlessly as
DynamoDB to the very high spike in users. EC2 Auto Scaling groups add complexity compared with
S3/CloudFront for static content hosting.
What is the MOST operationally efficient solution that meets these requirements?
Answer: B
Explanation:
1. B. Configure AWS WAF.
2. SQL injection and cross-site scripting = WAF, so either B or D. Both B and D are valid options, but the
question doesn't indicate a real need for CloudFront, so just use WAF with the API Gateway. The answer is B.
A.Create an IAM user for each user in the company. Attach the appropriate policies to each user.
B.Use Amazon Cognito with an Active Directory user pool. Create roles with the appropriate policies attached.
C.Define cross-account roles with the appropriate policies attached. Map the roles to the Active Directory
groups.
D.Configure Security Assertion Markup Language (SAML) 2.0-based federation. Create roles with the
appropriate policies attached. Map the roles to the Active Directory groups.
Answer: D
Explanation:
Configure Security Assertion Markup Language (SAML) 2.0-based federation. Create roles with the
appropriate policies attached. Map the roles to the Active Directory groups.
Which configuration should the solutions architect choose to meet these requirements?
Answer: C
Explanation:
Reference:
https://aws.amazon.com/about-aws/whats-new/2014/07/31/amazon-route-53-announces-domain-name-registration-geo-routing-and-lower-pricing/
The company wants to migrate its data from the on-premises location to an Amazon S3 bucket. The company
needs a solution that will automatically validate the integrity of the data after the transfer.
A.Order an AWS Snowball Edge device. Configure the Snowball Edge device to perform the online data transfer
to an S3 bucket
B.Deploy an AWS DataSync agent on premises. Configure the DataSync agent to perform the online data
transfer to an S3 bucket.
C.Create an Amazon S3 File Gateway on premises. Configure the S3 File Gateway to perform the online data
transfer to an S3 bucket.
D.Configure an accelerator in Amazon S3 Transfer Acceleration on premises. Configure the accelerator to
perform the online data transfer to an S3 bucket.
Answer: B
Explanation:
Deploy an AWS DataSync agent on premises. Configure the DataSync agent to perform the online data
transfer to an S3 bucket.
Question: 627 CertyIQ
A company wants to migrate two DNS servers to AWS. The servers host a total of approximately 200 zones and
receive 1 million requests each day on average. The company wants to maximize availability while minimizing the
operational overhead that is related to the management of the two servers.
A.Create 200 new hosted zones in the Amazon Route 53 console. Import zone files.
B.Launch a single large Amazon EC2 instance. Import zone files. Configure Amazon CloudWatch alarms and
notifications to alert the company about any downtime.
C.Migrate the servers to AWS by using AWS Server Migration Service (AWS SMS). Configure Amazon
CloudWatch alarms and notifications to alert the company about any downtime.
D.Launch an Amazon EC2 instance in an Auto Scaling group across two Availability Zones. Import zone files. Set
the desired capacity to 1 and the maximum capacity to 3 for the Auto Scaling group. Configure scaling alarms
to scale based on CPU utilization.
Answer: A
Explanation:
Create 200 new hosted zones in the Amazon Route 53 console. Import zone files.
Reference:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/migrate-dns-domain-in-use.html
Which solution will meet these requirements with the LEAST operational overhead?
A.Configure AWS Config with a rule to report the incomplete multipart upload object count.
B.Create a service control policy (SCP) to report the incomplete multipart upload object count.
C.Configure S3 Storage Lens to report the incomplete multipart upload object count.
D.Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.
Answer: C
Explanation:
Configure S3 Storage Lens to report the incomplete multipart upload object count.
Reference:
https://aws.amazon.com/blogs/aws-cloud-financial-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/
Which solution will meet these requirements with the LEAST operational overhead?
A.Create an RDS manual snapshot. Upgrade to the new version of Amazon RDS for MySQL.
B.Use native backup and restore. Restore the data to the upgraded new version of Amazon RDS for MySQL.
C.Use AWS Database Migration Service (AWS DMS) to replicate the data to the upgraded new version of
Amazon RDS for MySQL.
D.Use Amazon RDS Blue/Green Deployments to deploy and test production changes.
Answer: D
Explanation:
Use Amazon RDS Blue/Green Deployments to deploy and test production changes.
How should the solutions architect address this issue in the MOST cost-effective manner?
A.Create a script that runs locally on an Amazon EC2 Reserved Instance that is triggered by a cron job.
B.Create an AWS Lambda function triggered by an Amazon EventBridge scheduled event.
C.Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge
scheduled event.
D.Use an Amazon Elastic Container Service (Amazon ECS) task running on Amazon EC2 triggered by an Amazon
EventBridge scheduled event.
Answer: C
Explanation:
Use an Amazon Elastic Container Service (Amazon ECS) Fargate task triggered by an Amazon EventBridge
scheduled event.
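As a rough sketch of that pattern (rule name, cluster, task definition, role, and subnet are hypothetical), an EventBridge schedule can launch a Fargate task directly:

import boto3

events = boto3.client("events")

# Nightly schedule that launches one Fargate task, no always-on instance needed.
events.put_rule(
    Name="nightly-batch-job",
    ScheduleExpression="cron(0 2 * * ? *)",  # 02:00 UTC every day
    State="ENABLED",
)

events.put_targets(
    Rule="nightly-batch-job",
    Targets=[
        {
            "Id": "run-fargate-task",
            "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/batch-cluster",
            "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-ecs-run-task",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/batch-job:1",
                "TaskCount": 1,
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-0123456789abcdef0"],
                        "AssignPublicIp": "DISABLED",
                    }
                },
            },
        }
    ],
)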
Which solution will meet these requirements with the LEAST operational overhead?
A.Use Amazon Neptune to store the information. Use Amazon Kinesis Data Streams to process changes in the
database.
B.Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the database.
C.Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Amazon Kinesis Data
Streams to process changes in the database.
D.Use Amazon Quantum Ledger Database (Amazon QLDB) to store the information. Use Neptune Streams to
process changes in the database.
Answer: B
Explanation:
Use Amazon Neptune to store the information. Use Neptune Streams to process changes in the database.
Which storage solution should a solutions architect recommend to meet these requirements?
A.Store the data in Amazon S3 Glacier. Update the S3 Glacier vault policy to allow access to the application
instances.
B.Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the
application instances.
C.Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the
application instances.
D.Store the data in an Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS volume shared between the
application instances.
Answer: C
Explanation:
Store the data in an Amazon Elastic File System (Amazon EFS) file system. Mount the file system on the
application instances.
Answer: C
Explanation:
Create a read replica from the source DB instance. Serve read traffic from the read replica.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
Question: 634 CertyIQ
A company collects 10 GB of telemetry data daily from various machines. The company stores the data in an
Amazon S3 bucket in a source data account.
The company has hired several consulting agencies to use this data for analysis. Each agency needs read access to
the data for its analysts. The company must share the data from the source data account by choosing a solution
that maximizes security and operational efficiency.
Answer: C
Explanation:
Configure cross-account access for the S3 bucket to the accounts that the agencies own.
Which solution will meet these requirements with the LEAST operational overhead?
A.Create an AWS Lambda function to copy the data to an Amazon S3 bucket. Replicate the S3 bucket to the
secondary Region.
B.Create a backup of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the secondary
Region. Create a new FSx for ONTAP instance from the backup.
C.Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate data from
the primary Region to the secondary Region.
D.Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume.
Replicate the volume to the secondary Region.
Answer: C
Explanation:
Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate data from
the primary Region to the secondary Region.
A.Create an SNS subscription that processes the event in Amazon Elastic Container Service (Amazon ECS)
before the event runs in Lambda.
B.Create an SNS subscription that processes the event in Amazon Elastic Kubernetes Service (Amazon EKS)
before the event runs in Lambda
C.Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure
the SQS queue to trigger a Lambda function.
D.Create an SNS subscription that sends the event to AWS Server Migration Service (AWS SMS). Configure the
Lambda function to poll from the SMS event.
Answer: C
Explanation:
Create an SNS subscription that sends the event to Amazon Simple Queue Service (Amazon SQS). Configure
the SQS queue to trigger a Lambda function.
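A minimal sketch of wiring this up (topic, queue, and function names are hypothetical; the queue's access policy granting sqs:SendMessage to the topic is omitted):

import boto3

sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:123456789012:order-events"
queue_arn = "arn:aws:sqs:us-east-1:123456789012:order-events-queue"

# Fan the SNS events into SQS so they are buffered durably.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=queue_arn,
    Attributes={"RawMessageDelivery": "true"},
)

# Let Lambda poll the queue and process messages in batches.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-order-events",
    BatchSize=10,
)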
Which combination of AWS services would meet these requirements? (Choose two.)
A.AWS Fargate
B.AWS Lambda
C.Amazon DynamoDB
D.Amazon EC2 Auto Scaling
E.MySQL-compatible Amazon Aurora
Answer: BC
Explanation:
B.AWS Lambda.
C. Amazon DynamoDB.
A.Use an AWS Lambda function to create an S3 presigned URL. Instruct employees to use the URL.
B.Create an IAM user for each employee. Create an IAM policy for each employee to allow S3 access. Instruct
employees to use the AWS Management Console.
C.Create an S3 File Gateway. Create a share for uploading and a share for downloading. Allow employees to
mount shares on their local computers to use S3 File Gateway.
D.Configure AWS Transfer Family SFTP endpoints. Select the custom identity provider options. Use AWS
Secrets Manager to manage the user credentials Instruct employees to use Transfer Family.
Answer: A
Explanation:
Use an AWS Lambda function to create an S3 presigned URL. Instruct employees to use the URL.
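A minimal sketch of such a Lambda function (bucket name and event shape are hypothetical); it returns a short-lived upload URL so employees never need their own IAM credentials:

import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Hypothetical request: the desired object key arrives as a query parameter.
    key = event["queryStringParameters"]["filename"]
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "employee-uploads", "Key": key},
        ExpiresIn=900,  # URL is valid for 15 minutes
    )
    return {"statusCode": 200, "body": json.dumps({"uploadUrl": url})}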
A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for
some requests.
Answer: A
Explanation:
Reference:
https://repost.aws/knowledge-center/elb-fix-unequal-traffic-routing
Answer: BE
Explanation:
B. Grant the decrypt permission for the Lambda IAM role in the KMS key's policy.
E. Create a new IAM role with the kms:Decrypt permission and attach the execution role to the Lambda
function.
Which solution is the MOST scalable and cost-effective way to meet these requirements?
A.Enable Cost and Usage Reports in the management account. Deliver reports to Amazon Kinesis. Use Amazon
EMR for analysis.
B.Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon
Athena for analysis.
C.Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon S3. Use Amazon
Redshift for analysis.
D.Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon Kinesis. Use Amazon
QuickSight for analysis.
Answer: B
Explanation:
Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon
Athena for analysis.
Reference:
https://aws.amazon.com/blogs/big-data/analyze-amazon-s3-storage-costs-using-aws-cost-and-usage-reports-amazon-s3-inventory-and-amazon-athena/
Answer: A
Explanation:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
Question: 643 CertyIQ
A company runs several websites on AWS for its different brands. Each website generates tens of gigabytes of
web traffic logs each day. A solutions architect needs to design a scalable solution to give the company's
developers the ability to analyze traffic patterns across all the company's websites. This analysis by the
developers will occur on demand once a week over the course of several months. The solution must support
queries with standard SQL.
A.Store the logs in Amazon S3. Use Amazon Athena for analysis.
B.Store the logs in Amazon RDS. Use a database client for analysis.
C.Store the logs in Amazon OpenSearch Service. Use OpenSearch Service for analysis.
D.Store the logs in an Amazon EMR cluster. Use a supported open-source framework for SQL-based analysis.
Answer: A
Explanation:
Store the logs in Amazon S3. Use Amazon Athena for analysis.
A.Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex top domain
example.com and a wildcard certificate for *.example.com.
B.Use the AWS Certificate Manager (ACM) console to request a private certificate for the apex top domain
example.com and a wildcard certificate for *.example.com.
C.Use the AWS Certificate Manager (ACM) console to request a public and private certificate for the apex top
domain example.com.
D.Validate domain ownership by email address. Switch to DNS validation by adding the required DNS records to
the DNS provider.
E.Validate domain ownership for the domain by adding the required DNS records to the DNS provider.
Answer: AE
Explanation:
A. Use the AWS Certificate Manager (ACM) console to request a public certificate for the apex top domain
example.com and a wildcard certificate for *.example.com.
E. Validate domain ownership for the domain by adding the required DNS records to the DNS provider.
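A minimal sketch of requesting the certificate and retrieving the DNS validation records (the domain is the question's example.com; the region is an assumption):

import boto3

acm = boto3.client("acm", region_name="us-east-1")

# One public certificate covering the apex domain and all subdomains.
response = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["*.example.com"],
    ValidationMethod="DNS",
)

# The CNAME records to add at the DNS provider appear in the certificate details.
cert = acm.describe_certificate(CertificateArn=response["CertificateArn"])
for option in cert["Certificate"]["DomainValidationOptions"]:
    record = option.get("ResourceRecord")
    if record:
        print(record["Name"], record["Type"], record["Value"])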
Which solution will meet these requirements with the LEAST operational overhead?
Answer: B
Explanation:
Use an AWS Key Management Service (AWS KMS) external key store backed by an external key manager.
Reference:
https://docs.aws.amazon.com/kms/latest/developerguide/keystore-external.html
A.Use Amazon Elastic File System (Amazon EFS) as a shared file system. Access the dataset from Amazon EFS.
B.Mount an Amazon S3 bucket to serve as the shared file system. Perform postprocessing directly from the S3
bucket.
C.Use Amazon FSx for Lustre as a shared file system. Link the file system to an Amazon S3 bucket for
postprocessing.
D.Configure AWS Resource Access Manager to share an Amazon S3 bucket so that it can be mounted to all
instances for processing and postprocessing.
Answer: C
Explanation:
Amazon FSx for Lustre is a fully managed, high-performance file system optimized for HPC workloads. It is
designed to deliver sub-millisecond latencies and high throughput, making it ideal for applications that
require parallel access to shared storage, such as simulations and data analytics.
Answer: C
Explanation:
A solutions architect must identify a highly available cloud storage solution that can handle large amounts of
sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances
that will simultaneously access and process the entire dataset.
Answer: B
Explanation:
A.Configure the General Purpose SSD (gp2) EBS volume storage type and provision 15,000 IOPS.
B.Configure the Provisioned IOPS SSD (io1) EBS volume storage type and provision 15,000 IOPS.
C.Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.
D.Configure the EBS magnetic volume type to achieve maximum IOPS.
Answer: C
Explanation:
Configure the General Purpose SSD (gp3) EBS volume storage type and provision 15,000 IOPS.
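A minimal sketch of provisioning such a volume (Availability Zone, size, and throughput are illustrative assumptions):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# gp3 decouples IOPS and throughput from volume size, so 15,000 IOPS can be
# provisioned directly without over-sizing the volume.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,          # GiB
    VolumeType="gp3",
    Iops=15000,        # gp3 supports up to 16,000 IOPS per volume
    Throughput=500,    # MiB/s
)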
Question: 650 CertyIQ
A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The
company's online application uses the database to process transactions. The data analysis team uses the same
production database to run reports for analytical processing. The company wants to reduce operational overhead
by moving to managed services wherever possible.
Which solution will meet these requirements with the LEAST operational overhead?
A.Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes
B.Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes
C.Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes
D.Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes
Answer: A
Explanation:
Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.
A developer will use S3 Standard storage for the first 180 days. The developer needs to configure an S3 Lifecycle
rule.
A.Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days, S3 Glacier Instant
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
B.Transition the objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 180 days, S3 Glacier Flexible
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
C.Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
D.Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Flexible
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
Answer: C
Explanation:
Transition the objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 180 days, S3 Glacier Instant
Retrieval after 360 days, and S3 Glacier Deep Archive after 5 years.
Reference:
https://aws.amazon.com/s3/storage-classes/glacier/
Question: 652 CertyIQ
A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the
process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical
data workload.
A.Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances and the
task nodes on Spot Instances.
B.Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the
task nodes on Spot Instances.
C.Configure a transient cluster that runs the primary node on an On-Demand Instance and the core nodes and
task nodes on Spot Instances.
D.Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core nodes on
Spot Instances, and the task nodes on Spot Instances.
Answer: B
Explanation:
A transient cluster provides cost savings because it runs only during the computation time, and it provides
scalability and flexibility in a cloud environment. Option C (transient cluster with On-Demand primary node
and Spot core and task nodes) exposes the core nodes to Spot Instance interruptions, which may not be
acceptable for a workload that cannot lose any data.
A.Move the specific AWS account to a new organizational unit (OU) in Organizations from the management
account. Create a service control policy (SCP) that requires all existing resources to have the correct cost
center tag before the resources are created. Apply the SCP to the new OU.
B.Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate
cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail
events to invoke the Lambda function.
C.Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to
look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon
EventBridge scheduled rule to invoke the CloudFormation stack.
D.Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge
rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost
center tag.
Answer: B
Explanation:
This solution utilizes AWS Lambda and Amazon EventBridge to automate the tagging process based on
information from the RDS database and CloudTrail events. AWS Lambda function: create a Lambda function
that can look up the cost center information from the RDS database and tag resources accordingly. Amazon
EventBridge rule: set up an EventBridge rule to react to AWS CloudTrail events. The rule triggers the Lambda
function whenever a resource is created, allowing dynamic tagging based on the cost center associated with
the user in the RDS database. This solution provides automation, ensuring that resources are tagged
appropriately with the cost center ID of the user who created the resource. It also allows for flexibility in
updating cost center information without modifying the infrastructure.
The company wants to redesign the architecture to be highly available and to use AWS managed solutions.
A.Use AWS Elastic Beanstalk to host the static content and the PHP application. Configure Elastic Beanstalk to
deploy its EC2 instance into a public subnet. Assign a public IP address.
B.Use AWS Lambda to host the static content and the PHP application. Use an Amazon API Gateway REST API
to proxy requests to the Lambda function. Set the API Gateway CORS configuration to respond to the domain
name. Configure Amazon ElastiCache for Redis to handle session information.
C.Keep the backend code on the EC2 instance. Create an Amazon ElastiCache for Redis cluster that has Multi-
AZ enabled. Configure the ElastiCache for Redis cluster in cluster mode. Copy the frontend resources to
Amazon S3. Configure the backend code to reference the EC2 instance.
D.Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured
to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container
Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP
application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
Answer: D
Explanation:
Configure an Amazon CloudFront distribution with an Amazon S3 endpoint to an S3 bucket that is configured
to host the static content. Configure an Application Load Balancer that targets an Amazon Elastic Container
Service (Amazon ECS) service that runs AWS Fargate tasks for the PHP application. Configure the PHP
application to use an Amazon ElastiCache for Redis cluster that runs in multiple Availability Zones.
The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint
for additional security. Session affinity (sticky sessions) must be configured on the endpoint.
A.Create a public Network Load Balancer. Specify the application target group.
B.Create a Gateway Load Balancer. Specify the application target group.
C.Create a public Application Load Balancer. Specify the application target group.
D.Create a second target group. Add Elastic IP addresses to the EC2 instances.
E.Create a web ACL in AWS WAF. Associate the web ACL with the endpoint
Answer: CE
Explanation:
C. Create a public Application Load Balancer. Specify the application target group.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.
A.Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
B.Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
C.Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
D.Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly
deliver images by using a static website.
Answer: D
Explanation:
Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly
deliver images by using a static website.
A.Create VPC security groups in the organization's management account. Update the security groups when a
CIDR range update is necessary.
B.Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access
Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups
across the organization.
C.Create an AWS managed prefix list. Use an AWS Security Hub policy to enforce the security group update
across the organization. Use an AWS Lambda function to update the prefix list automatically when the CIDR
ranges change.
D.Create security groups in a central administrative AWS account. Create an AWS Firewall Manager common
security group policy for the whole organization. Select the previously created security groups as primary
groups in the policy.
Answer: B
Explanation:
Create a VPC customer managed prefix list that contains the list of CIDRs. Use AWS Resource Access
Manager (AWS RAM) to share the prefix list across the organization. Use the prefix list in the security groups
across the organization.
Reference:
https://docs.aws.amazon.com/vpc/latest/userguide/managed-prefix-lists.html
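A rough sketch of this pattern (CIDRs, organization ARN, and security group ID are hypothetical): create the prefix list, share it through AWS RAM, and reference it from a security group rule:

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Central account: one prefix list holds the corporate CIDRs.
prefix_list = ec2.create_managed_prefix_list(
    PrefixListName="corporate-cidrs",
    AddressFamily="IPv4",
    MaxEntries=20,
    Entries=[
        {"Cidr": "10.10.0.0/16", "Description": "HQ"},
        {"Cidr": "192.168.100.0/24", "Description": "Branch office"},
    ],
)["PrefixList"]

# Share the prefix list with the whole organization via AWS RAM.
ram.create_resource_share(
    name="corporate-cidrs-share",
    resourceArns=[prefix_list["PrefixListArn"]],
    principals=["arn:aws:organizations::123456789012:organization/o-exampleorgid"],
    allowExternalPrincipals=False,
)

# Member accounts reference the prefix list in their security group rules, so a
# CIDR change only has to be made once in the central list.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "PrefixListIds": [{"PrefixListId": prefix_list["PrefixListId"]}],
        }
    ],
)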
Which solution will meet these requirements with the LEAST latency? (Choose two.)
Answer: AE
Explanation:
https://aws.amazon.com/fsx/netapp-ontap/features/#:~:text=Amazon%20FSx%20for%20NetApp%20ONTAP%20provides%20access%20to%20shared%20
"Amazon FSx for NetApp ONTAP provides access to shared file storage over all versions of the Network File
System (NFS) and Server Message Block (SMB) protocols, and also supports multi-protocol access (i.e.
concurrent NFS and SMB access) to the same data."
Which AWS service should a solutions architect use to meet these requirements?
Answer: C
Explanation:
Answer: D
Explanation:
Configure a scheduled scaling policy for the Auto Scaling group to launch new instances before peak hours.
Which solution will meet these requirements with the LEAST operational overhead?
A.Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change
the applications to use the DynamoDB endpoint.
B.Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy
endpoint.
C.Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to
use the custom proxy endpoint.
D.Use an AWS Lambda function to provide connection pooling with a target group configuration for the
database. Change the applications to use the Lambda function.
Answer: B
Explanation:
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon Relational Database
Service (RDS) that makes applications more resilient to database failures. Many applications, including those
built on modern serverless architectures, can have a large number of open connections to the database server
and may open and close database connections at a high rate, exhausting database memory and compute
resources. Amazon RDS Proxy allows applications to pool and share connections established with the
database, improving database efficiency and application scalability. With RDS Proxy, failover times for Aurora
and RDS databases are reduced by up to 66%.
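To make this concrete, here is a minimal boto3 (Python) sketch of creating an RDS Proxy and registering an
existing DB instance with it. The proxy name, secret ARN, IAM role ARN, subnet IDs, and instance identifier
are illustrative placeholders, not values from the question.

import boto3

rds = boto3.client("rds")

# Create the proxy; database credentials are read from a Secrets Manager secret.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",                                   # placeholder
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:app-db",  # placeholder
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",  # placeholder
    VpcSubnetIds=["subnet-aaa111", "subnet-bbb222"],              # placeholders
    RequireTLS=True,
)

# Register the existing DB instance in the proxy's default target group.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBInstanceIdentifiers=["app-db-instance"],                    # placeholder
)

The applications are then reconfigured to connect to the proxy endpoint instead of the database endpoint,
which is the only application change required.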
Which solution will meet these requirements with the LEAST operational overhead?
A.Use logs in Amazon CloudWatch Logs to monitor the storage utilization of Amazon EBS. Use Amazon EBS
Elastic Volumes to reduce the size of the EBS volumes.
B.Use a custom script to monitor space usage. Use Amazon EBS Elastic Volumes to reduce the size of the EBS
volumes.
C.Delete all expired and unused snapshots to reduce snapshot costs.
D.Delete all nonessential snapshots. Use Amazon Data Lifecycle Manager to create and manage the snapshots
according to the company's snapshot policy requirements.
Answer: D
Explanation:
This option involves managing snapshots efficiently to optimize costs with minimal operational overhead.
Delete all nonessential snapshots: this reduces costs by eliminating unnecessary snapshot storage. Use
Amazon Data Lifecycle Manager (DLM): DLM can automate the creation and deletion of snapshots based on
defined policies, which reduces operational overhead by automating snapshot management according to the
company's snapshot policy requirements.
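As a hedged illustration, a minimal boto3 (Python) sketch of a Data Lifecycle Manager policy that snapshots
tagged volumes every 24 hours and retains only the last seven snapshots. The execution role ARN, target tag,
and schedule values are assumptions, not details from the question.

import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::111122223333:role/AWSDataLifecycleManagerDefaultRole",  # placeholder
    Description="Daily EBS snapshots, retain 7",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],       # assumed tagging convention
        "Schedules": [{
            "Name": "daily",
            "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
            "RetainRule": {"Count": 7},    # older snapshots are deleted automatically
            "CopyTags": True,
        }],
    },
)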
A.Create a new AWS Key Management Service (AWS KMS) customer managed key to encrypt both the S3
bucket and the RDS for MySQL database. Ensure that the KMS key policy includes encrypt and decrypt
permissions for the ECS task execution role.
B.Create an AWS Key Management Service (AWS KMS) AWS managed key to encrypt both the S3 bucket and
the RDS for MySQL database. Ensure that the S3 bucket policy specifies the ECS task execution role as a user.
C.Create an S3 bucket policy that restricts bucket access to the ECS task execution role. Create a VPC
endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow access from only the
subnets that the ECS cluster will generate tasks in.
D.Create a VPC endpoint for Amazon RDS for MySQL. Update the RDS for MySQL security group to allow
access from only the subnets that the ECS cluster will generate tasks in. Create a VPC endpoint for Amazon S3.
Update the S3 bucket policy to allow access from only the S3 VPC endpoint.
Answer: D
Explanation:
1. Option D is the most comprehensive solution as it leverages VPC endpoints for both Amazon RDS and
Amazon S3, along with proper network-level controls to restrict access to only the necessary resources from
the ECS cluster.
2. Create a VPC endpoint for Amazon RDS for MySQL: This ensures that the ECS cluster can access the RDS
database directly within the same Virtual Private Cloud (VPC), without having to go over the internet. By
updating the security group to allow access only from the specific subnets that the ECS cluster will generate
tasks in, you limit access to only the authorized entities. Create a VPC endpoint for Amazon S3: This allows
the ECS cluster to access the S3 bucket directly within the same VPC. By updating the S3 bucket policy to
allow access only from the S3 VPC endpoint, you restrict access to the designated VPC, ensuring that only
authorized resources can access the S3 bucket.
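A minimal boto3 (Python) sketch of the S3 part of this design: create a gateway endpoint for S3 and then
restrict the bucket to traffic arriving through that endpoint. The Region, VPC ID, route table ID, and bucket
name are placeholders.

import json
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

# Gateway endpoint so ECS tasks reach S3 without traversing the internet.
endpoint = ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                 # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",      # assumes us-east-1
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],       # placeholder
)
vpce_id = endpoint["VpcEndpoint"]["VpcEndpointId"]

# Bucket policy that denies any request not arriving through the endpoint.
# Note: this also blocks console access that does not come through the endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOnlyFromVpce",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-app-bucket", "arn:aws:s3:::example-app-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": vpce_id}},
    }],
}
s3.put_bucket_policy(Bucket="example-app-bucket", Policy=json.dumps(policy))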
The company wants to migrate the application to AWS to improve latency. The company also wants to scale the
application automatically when application demand increases. The company will use AWS Elastic Beanstalk for
application deployment.
A.Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode.
Configure the environment to scale based on requests.
B.Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment
to scale based on requests.
C.Configure an Elastic Beanstalk environment to use compute optimized instances. Configure the environment
to scale on a schedule.
D.Configure an Elastic Beanstalk environment to use burstable performance instances in unlimited mode.
Configure the environment to scale on predictive metrics.
Answer: D
Explanation:
In this scenario, the application experiences latency issues during peak hours with a sudden increase in CPU
utilization. Using burstable performance instances in unlimited mode allows the application to burst beyond
the baseline performance when needed. Configuring the environment to scale on predictive metrics enables
proactive scaling based on anticipated increases in demand.
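Elastic Beanstalk manages an Auto Scaling group behind the scenes, so one way to picture the scaling part of
option D is a predictive scaling policy attached to that group. A hedged boto3 (Python) sketch follows; the
Auto Scaling group name and CPU target are assumptions.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="awseb-e-example-AWSEBAutoScalingGroup",  # placeholder Beanstalk-created group
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 60.0,  # assumed CPU utilization target
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastAndScale",  # forecast demand and launch capacity ahead of it
    },
)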
A.Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.
B.Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
C.Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.
D.Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.
Answer: B
Explanation:
Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.
Question: 666 CertyIQ
A startup company is hosting a website for its customers on an Amazon EC2 instance. The website consists of a
stateless Python application and a MySQL database. The website serves only a small amount of traffic. The
company is concerned about the reliability of the instance and needs to migrate to a highly available architecture.
The company cannot modify the application code.
Which combination of actions should a solutions architect take to achieve high availability for the website?
(Choose two.)
Answer: BE
Explanation:
E. Create an Application Load Balancer to distribute traffic to an Auto Scaling group of EC2 instances that are
distributed across two Availability Zones.
A.Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the
Region and the on-premises location.
B.Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-
premises location.
C.Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the
Region and the on-premises location.
D.Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the
on-premises location.
Answer: C
Explanation:
A solutions architect needs to design a solution that gives the development team the ability to create resources
only if the application name tag has an approved value.
A.Create an IAM group that has a conditional Allow policy that requires the application name tag to be specified
for resources to be created.
B.Create a cross-account role that has a Deny policy for any resource that has the application name tag.
C.Create a resource group in AWS Resource Groups to validate that the tags are applied to all resources in all
accounts.
D.Create a tag policy in Organizations that has a list of allowed application names.
Answer: D
Explanation:
Create a tag policy in Organizations that has a list of allowed application names.
Reference:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html
Which solution will meet these requirements with the LEAST operational overhead?
A.Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.
B.Use the modify-db-instance command in the AWS CLI to change the password.
C.Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
D.Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password
rotation.
Answer: C
Explanation:
Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html
A.Choose on-demand mode. Update the read and write capacity units appropriately.
B.Choose provisioned mode. Update the read and write capacity units appropriately.
C.Purchase DynamoDB reserved capacity for a 1-year term.
D.Purchase DynamoDB reserved capacity for a 3-year term.
Answer: B
Explanation:
Provisioned Mode (Option B): Provisioned mode allows you to specify the desired read and write capacity
units. Since the workload occurs once a week for 4 hours, you can provision the read and write capacity units
accordingly to handle the expected load during that time. This can be a more cost-effective option than on-
demand pricing for predictable workloads.
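A minimal boto3 (Python) sketch of the capacity change, assuming the table name and capacity figures shown
here; in practice the scale-up and scale-down calls would be driven by a schedule (for example, an
EventBridge-invoked Lambda function) around the weekly 4-hour window.

import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "weekly-batch-table"  # placeholder table name

def set_capacity(rcu: int, wcu: int) -> None:
    # Update provisioned throughput; intended to be called from a scheduled job.
    dynamodb.update_table(
        TableName=TABLE,
        BillingMode="PROVISIONED",
        ProvisionedThroughput={"ReadCapacityUnits": rcu, "WriteCapacityUnits": wcu},
    )

# set_capacity(2000, 2000)  # just before the weekly run (assumed peak values)
# set_capacity(25, 25)      # after the run completes (assumed baseline values)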
The company needs a solution to prevent unusual spending. The solution must monitor costs and notify
responsible stakeholders in the event of unusual spending.
Answer: B
Explanation:
AWS Cost Anomaly Detection (Option B): AWS Cost Anomaly Detection is designed to automatically detect
unusual spending patterns based on machine learning algorithms. It can identify anomalies and send
notifications when it detects unexpected changes in spending. This aligns well with the requirement to
prevent unusual spending and notify stakeholders.
Reference:
https://aws.amazon.com/aws-cost-management/aws-cost-anomaly-detection/
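A hedged boto3 (Python) sketch of a per-service anomaly monitor with a daily email subscription. The
subscriber address and impact threshold are assumptions, and newer versions of the API favor
ThresholdExpression over the older Threshold field.

import boto3

ce = boto3.client("ce")  # Cost Anomaly Detection is part of the Cost Explorer API

monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "spend-alerts",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "billing-alerts@example.com"}],  # placeholder
        "Frequency": "DAILY",
        "Threshold": 100.0,  # assumed anomaly impact threshold in USD
    }
)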
Which solution will meet these requirements with the LEAST operational overhead?
A.Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
B.Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
C.Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
D.Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to
query the data.
Answer: B
Explanation:
AWS Glue with Athena (Option B): AWS Glue is a fully managed extract, transform, and load (ETL) service, and
Athena is a serverless query service that allows you to analyze data directly in Amazon S3 using SQL queries.
By configuring an AWS Glue crawler to crawl the data, you can create a schema for the data, and then use
Athena to query the data directly without the need to load it into a separate database. This minimizes
operational overhead.
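To illustrate, a minimal boto3 (Python) sketch of crawling the S3 data and querying it in place with Athena.
The crawler name, IAM role, database, S3 paths, and SQL are placeholders.

import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the data so the Glue Data Catalog holds the table schema.
glue.create_crawler(
    Name="sales-data-crawler",
    Role="arn:aws:iam::111122223333:role/glue-crawler-role",      # placeholder
    DatabaseName="sales_db",
    Targets={"S3Targets": [{"Path": "s3://example-data-bucket/sales/"}]},
)
glue.start_crawler(Name="sales-data-crawler")

# Once the crawler has populated the catalog, query the data directly in S3.
athena.start_query_execution(
    QueryString="SELECT region, SUM(amount) FROM sales GROUP BY region",  # placeholder query
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)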
A.Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B.Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to
transition the data to S3 Glacier Deep Archive after 7 days.
C.Create an Amazon FSx File Gateway to increase the company's storage space. Create an Amazon S3
Lifecycle policy to transition the data after 7 days.
D.Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3
Glacier Flexible Retrieval after 7 days.
Answer: B
Explanation:
Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to
transition the data to S3 Glacier Deep Archive after 7 days.
Reference:
https://docs.aws.amazon.com/filegateway/latest/files3/file-gateway-concepts.html
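A minimal boto3 (Python) sketch of the lifecycle part of this answer, assuming a placeholder bucket name
behind the S3 File Gateway.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-gateway-bucket",   # placeholder
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},       # apply to every object
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)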
Question: 674 CertyIQ
A company runs a web application on Amazon EC2 instances in an Auto Scaling group. The application uses a
database that runs on an Amazon RDS for PostgreSQL DB instance. The application performs slowly when traffic
increases. The database experiences a heavy read load during periods of high traffic.
Which actions should a solutions architect take to resolve these performance issues? (Choose two.)
Answer: BD
Explanation:
B. Create a read replica for the DB instance. Configure the application to send read traffic to the read replica.
By creating a read replica, you offload read traffic from the primary DB instance to the replica, distributing the
load and improving overall performance during periods of heavy read traffic.
D. Create an Amazon ElastiCache cluster. Configure the application to cache query results in the ElastiCache
cluster. Amazon ElastiCache can be used to cache frequently accessed data, reducing the load on the
database. This is particularly effective for read-heavy workloads, as it allows the application to retrieve data
from the cache rather than making repeated database queries.
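As a small illustration, a boto3 (Python) sketch of creating the read replica; the instance identifiers are
placeholders.

import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",          # placeholder
    SourceDBInstanceIdentifier="app-db-primary",      # placeholder
)

# Read-only queries are then pointed at the replica's endpoint, while writes
# continue to go to the primary DB instance.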
Which solution will meet these requirements with the LEAST administrative effort?
A.Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the
AWS CLI from the new EC2 instance to delete snapshots.
B.Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
C.Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
D.Lock the EBS snapshots to prevent deletion.
Answer: D
Explanation:
Locking EBS Snapshots (Option D): The EBS snapshot lock feature prevents a snapshot from being deleted
while the lock is in effect. The lock is set at the snapshot level, providing a straightforward and effective way
to meet the requirements without changing the administrative rights of the storage administrator user.
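A hedged boto3 (Python) sketch, assuming a recent SDK version that includes the EBS snapshot lock
(LockSnapshot) API; the snapshot ID and lock duration are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Governance-mode lock: the snapshot cannot be deleted for the lock duration,
# although suitably privileged users can still modify or release the lock.
ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",   # placeholder
    LockMode="governance",
    LockDuration=30,                        # days
)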
Question: 676 CertyIQ
A company's application uses Network Load Balancers, Auto Scaling groups, Amazon EC2 instances, and
databases that are deployed in an Amazon VPC. The company wants to capture information about traffic to and
from the network interfaces in near real time in its Amazon VPC. The company wants to send the information to
Amazon OpenSearch Service for analysis.
A.Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log
group. Use Amazon Kinesis Data Streams to stream the logs from the log group to OpenSearch Service.
B.Create a log group in Amazon CloudWatch Logs. Configure VPC Flow Logs to send the log data to the log
group. Use Amazon Kinesis Data Firehose to stream the logs from the log group to OpenSearch Service.
C.Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon
Kinesis Data Streams to stream the logs from the trail to OpenSearch Service.
D.Create a trail in AWS CloudTrail. Configure VPC Flow Logs to send the log data to the trail. Use Amazon
Kinesis Data Firehose to stream the logs from the trail to OpenSearch Service.
Answer: B
Explanation:
Amazon CloudWatch Logs and VPC Flow Logs (Option B): VPC Flow Logs capture information about the IP
traffic going to and from network interfaces in a VPC. By configuring VPC Flow Logs to send the log data to a
log group in Amazon CloudWatch Logs, you can then use Amazon Kinesis Data Firehose to stream the logs
from the log group to Amazon OpenSearch Service for analysis. This approach provides near real-time
streaming of logs to the analytics service.
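A minimal boto3 (Python) sketch of the pipeline: flow logs into a CloudWatch Logs log group, then a
subscription filter that streams the group to a Kinesis Data Firehose delivery stream whose destination is the
OpenSearch Service domain. All IDs and ARNs are placeholders.

import boto3

ec2 = boto3.client("ec2")
logs = boto3.client("logs")

# Publish VPC Flow Logs to CloudWatch Logs.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],            # placeholder
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",  # placeholder
)

# Stream the log group to the Firehose delivery stream that feeds OpenSearch Service.
logs.put_subscription_filter(
    logGroupName="/vpc/flow-logs",
    filterName="to-opensearch",
    filterPattern="",                                  # forward every event
    destinationArn="arn:aws:firehose:us-east-1:111122223333:deliverystream/flow-to-opensearch",  # placeholder
    roleArn="arn:aws:iam::111122223333:role/cwl-to-firehose-role",  # placeholder
)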
The company needs a dedicated EKS cluster for development work. The company will use the development cluster
infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.
Answer: A
Explanation:
A.Use default server-side encryption with Amazon S3 managed encryption keys (SSE-S3) to store the sensitive
data.
B.Create a customer managed key by using AWS Key Management Service (AWS KMS). Use the new key to
encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
C.Create an AWS managed key by using AWS Key Management Service (AWS KMS). Use the new key to
encrypt the S3 objects by using server-side encryption with AWS KMS keys (SSE-KMS).
D.Download S3 objects to an Amazon EC2 instance. Encrypt the objects by using customer managed keys.
Upload the encrypted objects back into Amazon S3.
Answer: B
Explanation:
SSE-KMS with Customer Managed Key (Option B): This option allows you to create a customer managed key
using AWS KMS. With a customer managed key, you have full control over key lifecycle management,
including the ability to create, rotate, and disable keys with minimal effort. SSE-KMS also integrates with
AWS Identity and Access Management (IAM) for fine-grained access control.
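To illustrate option B, a minimal boto3 (Python) sketch that creates a customer managed key, turns on
automatic rotation, and makes SSE-KMS with that key the bucket default. The bucket name is a placeholder.

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

key = kms.create_key(Description="Key for sensitive S3 data")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)   # automatic annual rotation

s3.put_bucket_encryption(
    Bucket="example-sensitive-bucket",   # placeholder
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
            "BucketKeyEnabled": True,    # reduces the number of KMS requests
        }]
    },
)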
Answer: ACE
Explanation:
A.Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task
for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has
changed.
B.Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to
invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to
the file system and the destination S3 bucket.
C.Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task
for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
D.Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to
routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the
mounted file system.
Answer: A
Explanation:
AWS DataSync (Option A): AWS DataSync is designed for efficient and reliable copying of data between
different storage solutions. By setting up an AWS DataSync task with the transfer mode set to transfer only
data that has changed, you ensure that only the new or modified files are copied. This minimizes data transfer
and operational overhead. Transfer only data that has changed: DataSync copies only the data and metadata
that differ between the source and destination locations. Transfer all data: DataSync copies everything in the
source to the destination without comparing differences between the locations.
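A minimal boto3 (Python) sketch of such a task; the location ARNs are placeholders that would come from
earlier create_location_* calls.

import boto3

datasync = boto3.client("datasync")

datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-source",     # placeholder
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dest",  # placeholder
    Name="incremental-sync",
    Options={"TransferMode": "CHANGED"},   # "ALL" would copy everything on every run
)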
Which solution will meet these requirements with the LEAST operational overhead?
A.Create a customer managed key. Use the key to encrypt the EBS volumes.
B.Use an AWS managed key to encrypt the EBS volumes. Use the key to configure automatic key rotation.
C.Create an external KMS key with imported key material. Use the key to encrypt the EBS volumes.
D.Use an AWS owned key to encrypt the EBS volumes.
Answer: A
Explanation:
Create a customer managed key. Use the key to encrypt the EBS volumes.
Which solution will meet these requirements with the LEAST administrative overhead?
A.Use an IAM policy that allows users to create only encrypted Amazon Elastic Block Store (Amazon EBS)
volumes. Use AWS Config and AWS Systems Manager to automate the detection and remediation of
unencrypted EBS volumes.
B.Use AWS Key Management Service (AWS KMS) to manage access to encrypted Amazon Elastic Block Store
(Amazon EBS) volumes. Use AWS Lambda and Amazon EventBridge to automate the detection and remediation
of unencrypted EBS volumes.
C.Use Amazon Macie to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS
Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.
D.Use Amazon inspector to detect unencrypted Amazon Elastic Block Store (Amazon EBS) volumes. Use AWS
Systems Manager Automation rules to automatically encrypt existing and new EBS volumes.
Answer: A
Explanation:
IAM Policy and AWS Config (Option A): By creating an IAM policy that allows users to create only encrypted
EBS volumes, you proactively prevent the creation of unencrypted volumes. Using AWS Config, you can set up
rules to detect noncompliant resources, and AWS Systems Manager Automation can be used for automated
remediation. This approach provides a proactive and automated solution.
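As a sketch of the preventive half of option A, a boto3 (Python) example that creates an IAM policy denying
creation of unencrypted EBS volumes; the policy name is a placeholder, and detection and remediation would
still be handled by AWS Config and Systems Manager as described above.

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedVolumes",
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        "Condition": {"Bool": {"ec2:Encrypted": "false"}},
    }],
}

iam.create_policy(
    PolicyName="RequireEncryptedEbsVolumes",        # placeholder
    PolicyDocument=json.dumps(policy_document),
)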
A.Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer.
B.Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load Balancer.
C.Migrate the database to an Amazon RDS Multi-AZ deployment.
D.Migrate the web tier to an AWS Lambda function.
E.Migrate the database to an Amazon DynamoDB table.
Answer: AC
Explanation:
Web Tier Migration (Option A): Migrating the web tier to Amazon EC2 instances in an Auto Scaling group
behind an Application Load Balancer (ALB) provides horizontal scalability, automatic scaling, and improved
resiliency. Auto Scaling helps in managing and maintaining the desired number of EC2 instances based on
demand, and the ALB distributes incoming traffic across multiple instances. Database Migration to Amazon
RDS Multi-AZ (Option C): Migrating the database to Amazon RDS in a Multi-AZ deployment provides high
availability and automatic failover. In a Multi-AZ deployment, Amazon RDS maintains a standby replica in a
different Availability Zone, and in the event of a failure, it automatically promotes the replica to the primary
instance. This enhances the resiliency of the database.
A.Deploy the applications in eu-central-1. Extend the company’s VPC from eu-central-1 to an edge location in
Amazon CloudFront.
B.Deploy the applications in AWS Local Zones by extending the company's VPC from eu-central-1 to the chosen
Local Zone.
C.Deploy the applications in eu-central-1. Extend the company’s VPC from eu-central-1 to the regional edge
caches in Amazon CloudFront.
D.Deploy the applications in AWS Wavelength Zones by extending the company’s VPC from eu-central-1 to the
chosen Wavelength Zone.
Answer: B
Explanation:
Option B: AWS Local Zones place AWS compute, storage, database, and other select services closer to end
users. This lets the company deploy the applications in geographic proximity to its users by extending the
company's VPC from eu-central-1, without running the workloads directly in the Region, which can satisfy the
regulatory requirements while achieving low latency. Option D: AWS Wavelength Zones are designed for
applications that need single-digit-millisecond latency to mobile and connected devices over 5G networks, so
they are not directly relevant to hosting these web applications.
A.Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
B.Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
C.Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
D.Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.
Answer: B
Explanation:
Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The
number of accounts and VPCs will increase during the next year. The network architecture must simplify the
administration of new connections and must provide the ability to scale.
Which solution will meet these requirements with the LEAST administrative overhead?
A.Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the on-
premises locations.
B.Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection to
connect all VPCs and on-premises locations.
C.Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the
on-premises connections.
D.Create an AWS Direct Connect connection between the on-premises locations and a central VPC. Connect
the central VPC to other VPCs by using peering connections.
Answer: C
Explanation:
Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the
on-premises connections.
Answer: BD
Explanation:
B.Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
D.Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a
prediction based on the inputs.
The permissions will be used by multiple IAM users and must be split between the developer and administrator
teams. Each team requires different permissions. The company wants a solution that includes new users that are
hired on both teams.
Which solution will meet these requirements with the LEAST operational overhead?
A.Create individual users in IAM Identity Center for each account. Create separate developer and administrator
groups in IAM Identity Center. Assign the users to the appropriate groups. Create a custom IAM policy for each
group to set fine-grained permissions.
B.Create individual users in IAM Identity Center for each account. Create separate developer and administrator
groups in IAM Identity Center. Assign the users to the appropriate groups. Attach AWS managed IAM policies
to each user as needed for fine-grained permissions.
C.Create individual users in IAM Identity Center. Create new developer and administrator groups in IAM Identity
Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new
groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are
hired, add them to the appropriate group.
D.Create individual users in IAM Identity Center. Create new permission sets that include the appropriate IAM
policies for each user. Assign the users to the appropriate accounts. Grant additional IAM permissions to the
users from within specific accounts. When new users are hired, add them to IAM Identity Center and assign
them to the accounts.
Answer: C
Explanation:
Reference:
https://docs.aws.amazon.com/controltower/latest/userguide/sso.html
A.Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Use Amazon
EventBridge to schedule an AWS Lambda function to run the API calls.
B.Write API calls to describe the EBS volumes and to confirm the EBS volumes are encrypted. Run the API calls
on an AWS Fargate task.
C.Create an AWS Identity and Access Management (IAM) policy that requires the use of tags on EBS volumes.
Use AWS Cost Explorer to display resources that are not properly tagged. Encrypt the untagged resources
manually.
D.Create an AWS Config rule for Amazon EBS to evaluate if a volume is encrypted and to flag the volume if it is
not encrypted.
Answer: D
Explanation:
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS
resources. It can check whether your resources comply with certain conditions (such as being encrypted), and
it can flag or take action on resources that do not comply.
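To illustrate, a minimal boto3 (Python) sketch that enables the AWS managed Config rule for EBS encryption;
the rule name is a placeholder.

import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-encrypted",   # placeholder
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
    }
)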
A.Use the S3 bucket access point instead of accessing the S3 bucket directly.
B.Upload the files into multiple S3 buckets.
C.Use S3 multipart uploads.
D.Fetch multiple byte-ranges of an object in parallel.
E.Add a random prefix to each object when uploading the files.
Answer: CD
Explanation:
A.Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block
storage that is mounted to the individual EC2 instances.
B.Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual
EC2 instances.
C.Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the individual
EC2 instances.
D.Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling
group.
E.Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-
cache. Use Amazon CloudFront to deliver the content.
Answer: BE
Explanation:
B.Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the
individual EC2 instances.
E.Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to
no-cache. Use Amazon CloudFront to deliver the content.
Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
Answer: B
Explanation:
A recent increase in traffic requires the application to be highly available and for the database to be eventually
consistent.
Which solution will meet these requirements with the LEAST operational overhead?
A.Replace the ALB with a Network Load Balancer. Maintain the embedded NoSQL database with its replication
service on the EC2 instances.
B.Replace the ALB with a Network Load Balancer. Migrate the embedded NoSQL database to Amazon
DynamoDB by using AWS Database Migration Service (AWS DMS).
C.Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Maintain the embedded
NoSQL database with its replication service on the EC2 instances.
D.Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the embedded
NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).
Answer: D
Explanation:
Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the embedded
NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).
What should a solutions architect do to ensure that the shopping cart data is preserved at all times?
A.Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for access to
the catalog in Amazon Aurora.
B.Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart
data from the user's session.
C.Configure Amazon OpenSearch Service to cache catalog data from Amazon DynamoDB and shopping cart
data from the user's session.
D.Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog
and shopping cart. Configure automated snapshots.
Answer: B
Explanation:
Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart
data from the user's session.
Question: 695 CertyIQ
A company is building a microservices-based application that will be deployed on Amazon Elastic Kubernetes
Service (Amazon EKS). The microservices will interact with each other. The company wants to ensure that the
application is observable to identify performance issues in the future.
A.Configure the application to use Amazon ElastiCache to reduce the number of requests that are sent to the
microservices.
B.Configure Amazon CloudWatch Container Insights to collect metrics from the EKS clusters. Configure AWS
X-Ray to trace the requests between the microservices.
C.Configure AWS CloudTrail to review the API calls. Build an Amazon QuickSight dashboard to observe the
microservice interactions.
D.Use AWS Trusted Advisor to understand the performance of the application.
Answer: B
Explanation:
Option B - Amazon CloudWatch Container Insights: This service provides monitoring and troubleshooting
capabilities for containerized applications. It collects and aggregates metrics, logs, and events from Amazon
EKS clusters and containers, which helps in monitoring the performance and health of the microservices. AWS
X-Ray then traces requests as they pass between the microservices, making it possible to pinpoint where
latency or errors are introduced.
All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each
customer must be able to access only their data from their AWS account. Company employees must not be able to
access the data.
A.Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In
the private certificate policy, deny access to the certificate for all principals except an IAM role that the
customer provides.
B.Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data
server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the
customer provides.
C.Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data
server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the
customer provides.
D.Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In
the public certificate policy, deny access to the certificate for all principals except an IAM role that the
customer provides.
Answer: C
Explanation:
Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data
server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the
customer provides.
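A hedged boto3 (Python) sketch of one per-customer key whose key policy allows decryption only by the IAM
role the customer provides. The account IDs and role names are placeholders, a key-administration statement
(without kms:Decrypt) is kept so the key stays manageable, and the code is assumed to run as that admin role
so the KMS policy-lockout safety check passes.

import json
import boto3

kms = boto3.client("kms")

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Key administration only -- no data decryption -- for the provider's admins.
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/kms-key-admin"},       # placeholder
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*",
                       "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*",
                       "kms:Delete*", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion"],
            "Resource": "*",
        },
        {   # Only the customer-provided role may use the key to decrypt their data.
            "Sid": "CustomerDecryptOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:role/customer-access-role"},  # placeholder
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

kms.create_key(
    Description="Per-customer data key",
    Policy=json.dumps(key_policy),
)

Because KMS key policies are deny-by-default, every principal that is not listed, including company
employees, is unable to decrypt the customer's data.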
A.Attach the EC2 instance to an Auto Scaling group in a private subnet. Ensure that the DNS record for the
website resolves to the Auto Scaling group identifier.
B.Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2 instance to the
target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.
C.Launch a NAT gateway in a private subnet. Update the route table for the private subnets to add a default
route to the NAT gateway. Attach a public Elastic IP address to the NAT gateway.
D.Ensure that the security group that is attached to the EC2 instance allows HTTP traffic on port 80 and HTTPS
traffic on port 443. Ensure that the DNS record for the website resolves to the public IP address of the EC2
instance.
Answer: B
Explanation:
Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2 instance to the
target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.
Which solution will meet these requirements with the LEAST operational overhead?
A.Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where EKS worker
nodes are placed. Register the volumes in a StorageClass object on an EKS cluster. Use EBS Multi-Attach to
share the data between containers.
B.Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass
object on an EKS cluster. Use the same file system for all containers.
C.Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume in a StorageClass object on
an EKS cluster. Use the same volume for all containers.
D.Create Amazon Elastic File System (Amazon EFS) file systems in the same Availability Zones where EKS
worker nodes are placed. Register the file systems in a StorageClass object on an EKS cluster. Create an AWS
Lambda function to synchronize the data between file systems.
Answer: B
Explanation:
Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass
object on an EKS cluster. Use the same file system for all containers.
Question: 699 CertyIQ
A company has an application that uses Docker containers in its local data center. The application runs on a
container host that stores persistent data in a volume on the host. The container instances use the stored
persistent data.
The company wants to move the application to a fully managed service because the company does not want to
manage any servers or storage infrastructure.
A.Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an Amazon Elastic
Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the EBS volume as a persistent
volume mounted in the containers.
B.Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon
Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the
containers.
C.Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon S3
bucket. Map the S3 bucket as a persistent storage volume mounted in the containers.
D.Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create an Amazon
Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the
containers.
Answer: B
Explanation:
Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon
Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the
containers.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
A.Create internal Network Load Balancers in front of the application in each Region.
B.Create external Application Load Balancers in front of the application in each Region.
C.Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.
D.Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic.
E.Configure Amazon CloudFront to handle the traffic and route requests to the application in each Region
Answer: AC
Explanation:
A.Create internal Network Load Balancers in front of the application in each Region.
C.Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.
Question: 701 CertyIQ
A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer
(ALB). The application's users have reported sporadic performance, which appears to be related to DDoS attacks
originating from random IP addresses. The city needs a solution that requires minimal configuration changes and
provides an audit trail for the DDoS sources.
A.Enable an AWS WAF web ACL on the ALB, and configure rules to block traffic from unknown sources.
B.Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate mitigating
controls into the service.
C.Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigating
controls into the service.
D.Create an Amazon CloudFront distribution for the application, and set the ALB as the origin. Enable an AWS
WAF web ACL on the distribution, and configure rules to block traffic from unknown sources
Answer: C
Explanation:
Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigating
controls into the service.
A.Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file
gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
B.Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre file
system, and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster
instances.
C.Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data
into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from
the HPC cluster instances.
D.Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system.
Access the FSx for Lustre file system from the HPC cluster instances.
Answer: D
Explanation:
Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system.
Access the FSx for Lustre file system from the HPC cluster instances.
A.Set up AWS Glue to copy the data from the on-premises servers to Amazon S3.
B.Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.
C.Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3.
D.Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the
data to Amazon S3.
Answer: B
Explanation:
Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.
A.Configure an Application Load Balancer with the required protocol and ports for the internet traffic. Specify
the EC2 instances as the targets.
B.Configure a Gateway Load Balancer for the internet traffic. Specify the EC2 instances as the targets.
C.Configure a Network Load Balancer with the required protocol and ports for the internet traffic. Specify the
EC2 instances as the targets.
D.Launch an identical set of game servers on EC2 instances in separate AWS Regions. Route internet traffic to
both sets of EC2 instances.
Answer: C
Explanation:
Configure a Network Load Balancer with the required protocol and ports for the internet traffic. Specify the
EC2 instances as the targets.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
The company plans to migrate the RDS for MySQL DB instance to an Amazon Aurora PostgreSQL DB cluster. The
company needs a solution that replicates the data changes that happen during the migration to the new database.
A.Use AWS Database Migration Service (AWS DMS) Schema Conversion to transform the database objects.
B.Use AWS Database Migration Service (AWS DMS) Schema Conversion to create an Aurora PostgreSQL read
replica on the RDS for MySQL DB instance.
C.Configure an Aurora MySQL read replica for the RDS for MySQL DB instance.
D.Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the
data.
E.Promote the Aurora PostgreSQL read replica to a standalone Aurora PostgreSQL DB cluster when the replica
lag is zero.
Answer: AD
Explanation:
A.Use AWS Database Migration Service (AWS DMS) Schema Conversion to transform the database objects.
D.Define an AWS Database Migration Service (AWS DMS) task with change data capture (CDC) to migrate the
data.
Which solution will meet these requirements with the LEAST operational overhead?
A.Add functionality to the script to identify the instance that has the fewest active connections. Configure the
script to read from that instance to report the total new entries.
B.Create a read replica of the database. Configure the script to query only the read replica to report the total
new entries.
C.Instruct the development team to manually export the new entries for the day in the database at the end of
each day.
D.Use Amazon ElastiCache to cache the common queries that the script runs against the database.
Answer: B
Explanation:
Create a read replica of the database. Configure the script to query only the read replica to report the total
new entries.
What is the MOST operationally efficient solution that meets these requirements?
A.Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information.
B.Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
C.Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the
relevant information.
D.Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic access log
information.
Answer: B
Explanation:
Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
A.Create public NAT gateways in the same private subnets as the EC2 instances.
B.Create private NAT gateways in the same private subnets as the EC2 instances.
C.Create public NAT gateways in public subnets in the same VPCs as the EC2 instances.
D.Create private NAT gateways in public subnets in the same VPCs as the EC2 instances.
Answer: C
Explanation:
C. A public NAT gateway must be deployed in a public subnet so that instances in private subnets can reach
the internet; a private NAT gateway is used only for connectivity to other VPCs or to on-premises networks, so
option D would require more than just a private NAT gateway. From the documentation: "Private – Instances in
private subnets can connect to other VPCs or your on-premises network through a private NAT gateway. You
can route traffic from the NAT gateway through a transit gateway or a virtual private gateway. You cannot
associate an elastic IP address with a private NAT gateway. You can attach an internet gateway to a VPC with
a private NAT gateway, but if you route traffic from the private NAT gateway to the internet gateway, the
internet gateway drops the traffic."
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
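A minimal boto3 (Python) sketch of option C: allocate an Elastic IP, create the public NAT gateway in a public
subnet, and point the private subnets' default route at it. The subnet and route table IDs are placeholders, and
in practice you would wait for the gateway to become available before routing traffic.

import boto3

ec2 = boto3.client("ec2")

eip = ec2.allocate_address(Domain="vpc")

nat = ec2.create_nat_gateway(
    SubnetId="subnet-public-0123",          # placeholder public subnet
    AllocationId=eip["AllocationId"],
    ConnectivityType="public",
)

ec2.create_route(
    RouteTableId="rtb-private-0123",        # placeholder route table used by the private subnets
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)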
Which solutions to deploy the SCP will meet these requirements? (Choose two.)
Answer: BE
Explanation:
B.Attach the SCP to the three nonproduction Organizations member accounts.
E.Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member
accounts into the new OU.
Answer: A
Explanation:
A solutions architect needs to make the application architecture more scalable and highly available.
Which solution will meet these requirements with the LEAST downtime?
A.Create an Amazon EventBridge rule that has the Aurora cluster as a source. Create an AWS Lambda function
to log the state change events of the Aurora cluster. Add the Lambda function as a target for the EventBridge
rule. Add additional reader nodes to fail over to.
B.Modify the Aurora cluster and activate the zero-downtime restart (ZDR) feature. Use Database Activity
Streams on the cluster to track the cluster status.
C.Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the
Aurora cluster.
D.Create an Amazon ElastiCache for Redis cache. Replicate data from the Aurora cluster to Redis by using AWS
Database Migration Service (AWS DMS) with a write-around approach.
Answer: C
Explanation:
Add additional reader instances to the Aurora cluster. Create an Amazon RDS Proxy target group for the
Aurora cluster.
Question: 712 CertyIQ
A company is designing a web application on AWS. The application will use a VPN connection between the
company’s existing data centers and the company's VPCs.
The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to
communicate with the on-premises services from a VPC.
Which solution will meet these requirements in the MOST secure manner?
A.Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with the
VPC.
B.Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with the
VPC.
C.Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC.
D.Create a Route 53 public hosted zone. Create a record for each service to allow service communication
Answer: A
Explanation:
Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with the
VPC.
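To illustrate, a minimal boto3 (Python) sketch of the outbound endpoint, a forwarding rule for the on-premises
domain, and the VPC association. The security group, subnets, domain name, DNS server IP, and VPC ID are
placeholders.

import boto3

r53r = boto3.client("route53resolver")

endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-endpoint-1",
    Name="to-onprem",
    SecurityGroupIds=["sg-0123456789abcdef0"],                    # placeholder
    Direction="OUTBOUND",
    IpAddresses=[{"SubnetId": "subnet-aaa111"}, {"SubnetId": "subnet-bbb222"}],  # placeholders
)

rule = r53r.create_resolver_rule(
    CreatorRequestId="forward-corp-domain-1",
    Name="corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",                                # assumed on-premises domain
    TargetIps=[{"Ip": "10.10.0.2", "Port": 53}],                  # placeholder on-premises DNS server
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",                                # placeholder
)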
A.Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed
items.
B.Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3
location in DynamoDB.
C.Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos
older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags
to keep track of metadata.
D.Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older
than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in
Amazon OpenSearch Service.
Answer: B
Explanation:
Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3
location in DynamoDB.
A.Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount
CloudWatch metrics.
B.Use the least outstanding requests algorithm based on the RequestCountPerTarget and
ActiveConnectionCount CloudWatch metrics.
C.Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch
metrics.
D.Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime
CloudWatch metrics.
Answer: B
Explanation:
Use the least outstanding requests algorithm based on the RequestCountPerTarget and
ActiveConnectionCount CloudWatch metrics.
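A minimal boto3 (Python) sketch of switching a target group to the least outstanding requests algorithm; the
target group ARN is a placeholder.

import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/abc123",  # placeholder
    Attributes=[{
        "Key": "load_balancing.algorithm.type",
        "Value": "least_outstanding_requests",
    }],
)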
Which solution will meet these requirements with the MOST operational efficiency?
A.Create a daily budget for the Savings Plans by using AWS Budgets. Configure the budget with a coverage
threshold to send notifications to the appropriate email message recipients.
B.Create a Lambda function that runs a coverage report against the Savings Plans. Use Amazon Simple Email
Service (Amazon SES) to email the report to the appropriate email message recipients.
C.Create an AWS Budgets report for the Savings Plans budget. Set the frequency to daily.
D.Create a Savings Plans alert subscription. Enable all notification options. Enter an email address to receive
notifications.
Answer: A
Explanation:
Create a daily budget for the Savings Plans by using AWS Budgets. Configure the budget with a coverage
threshold to send notifications to the appropriate email message recipients.
Reference:
https://docs.aws.amazon.com/savingsplans/latest/userguide/sp-usingBudgets.html
A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The
data in transit must also be encrypted.
Which solution will meet these requirements with the MOST operational efficiency?
A.Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK
cluster security settings to enable mutual TLS authentication.
B.Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the MSK
cluster security settings to enable mutual TLS authentication.
C.Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security group
inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
D.Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS
communication over the internet.
Answer: A
Explanation:
Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK
cluster security settings to enable mutual TLS authentication.
The company already has an AWS account that has connectivity to the on-premises network. The new application
on AWS must support integration with the existing ERP system. The new application must be secure and resilient
and must use the SFTP protocol to process orders from the ERP system immediately.
A.Create an AWS Transfer Family SFTP internet-facing server in two Availability Zones. Use Amazon S3
storage. Create an AWS Lambda function to process order files. Use S3 Event Notifications to send
s3:ObjectCreated:* events to the Lambda function.
B.Create an AWS Transfer Family SFTP internet-facing server in one Availability Zone. Use Amazon Elastic File
System (Amazon EFS) storage. Create an AWS Lambda function to process order files. Use a Transfer Family
managed workflow to invoke the Lambda function.
C.Create an AWS Transfer Family SFTP internal server in two Availability Zones. Use Amazon Elastic File
System (Amazon EFS) storage. Create an AWS Step Functions state machine to process order files. Use
Amazon EventBridge Scheduler to invoke the state machine to periodically check Amazon EFS for order files.
D.Create an AWS Transfer Family SFTP internal server in two Availability Zones. Use Amazon S3 storage.
Create an AWS Lambda function to process order files. Use a Transfer Family managed workflow to invoke the
Lambda function.
Answer: D
Explanation:
Option D is the most secure choice because the SFTP server is internal (VPC-hosted) and is reached over the
company's existing on-premises-to-AWS connection: an AWS Transfer Family SFTP internal server in two
Availability Zones for resiliency, Amazon S3 for storage, and a Transfer Family managed workflow that invokes
the Lambda function so order files are processed immediately.
Question: 718 CertyIQ
A company’s applications use Apache Hadoop and Apache Spark to process data on premises. The existing
infrastructure is not scalable and is complex to manage.
A solutions architect must design a scalable solution that reduces operational complexity. The solution must keep
the data processing on premises.
A.Use AWS Site-to-Site VPN to access the on-premises Hadoop Distributed File System (HDFS) data and
application. Use an Amazon EMR cluster to process the data.
B.Use AWS DataSync to connect to the on-premises Hadoop Distributed File System (HDFS) cluster. Create an
Amazon EMR cluster to process the data.
C.Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters on AWS
Outposts. Use the EMR clusters to process the data.
D.Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Create an Amazon EMR cluster to
process the data.
Answer: C
Explanation:
Migrate the Apache Hadoop application and the Apache Spark application to Amazon EMR clusters on AWS
Outposts. Use the EMR clusters to process the data.
Which solution will meet these requirements with the LEAST operational overhead?
A.Create an Amazon Elastic File System (Amazon EFS) volume that uses EFS Intelligent-Tiering. Use AWS
DataSync to migrate the data to the EFS volume.
B.Create an Amazon FSx for ONTAP instance. Create an FSx for ONTAP file system with a root volume that
uses the auto tiering policy. Migrate the data to the FSx for ONTAP volume.
C.Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by using an
AWS Storage Gateway Amazon S3 File Gateway.
D.Create an Amazon FSx for OpenZFS file system. Migrate the data to the new volume.
Answer: C
Explanation:
Create an Amazon S3 bucket that uses S3 Intelligent-Tiering. Migrate the data to the S3 bucket by using an
AWS Storage Gateway Amazon S3 File Gateway.
Each time the company patches a software module, the application experiences downtime. Report generation
must restart from the beginning after any interruptions. The company wants to redesign the application so that the
application can be flexible, scalable, and gradually improved. The company wants to minimize application
downtime.
A.Run the application on AWS Lambda as a single function with maximum provisioned concurrency.
B.Run the application on Amazon EC2 Spot Instances as microservices with a Spot Fleet default allocation
strategy.
C.Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto
scaling.
D.Run the application on AWS Elastic Beanstalk as a single application environment with an all-at-once
deployment strategy.
Answer: C
Explanation:
Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto
scaling.
The company selected one component of the web application to test as a microservice. The component supports
hundreds of requests each second. The company wants to create and test the microservice on an AWS solution
that supports Python. The solution must also scale automatically and require minimal infrastructure and minimal
operational support.
A.Use a Spot Fleet with auto scaling of EC2 instances that run the most recent Amazon Linux operating system.
B.Use an AWS Elastic Beanstalk web server environment that has high availability configured.
C.Use Amazon Elastic Kubernetes Service (Amazon EKS). Launch Auto Scaling groups of self-managed EC2
instances.
D.Use an AWS Lambda function that runs custom developed code.
Answer: C
Explanation:
Use Amazon Elastic Kubernetes Service (Amazon EKS). Launch Auto Scaling groups of self-managed EC2
instances.
Which solution will meet these requirements with the LEAST amount of operational overhead?
A.Create a transit gateway, and associate the Direct Connect connection with a new transit VIF. Turn on the
transit gateway's route propagation feature.
B.Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each VPC by
creating new virtual private gateways.
C.Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering connection
between all other VPCs in the Region. Update the route tables.
D.Create AWS Site-to-Site VPN connections from on premises to each VPC. Ensure that both VPN tunnels are
UP for each connection. Turn on the route propagation feature.
Answer: A
Explanation:
Create a transit gateway, and associate the Direct Connect connection with a new transit VIF. Turn on the
transit gateway's route propagation feature.
A.Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the
new IAM role to the EC2 instances and the existing IAM role.
B.Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure Systems
Manager to use the IAM user to manage the EC2 instances.
C.Enable Default Host Configuration Management in Systems Manager to manage the EC2 instances.
D.Remove the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore policy to
the existing IAM role.
Answer: A
Explanation:
Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the
new IAM role to the EC2 instances and the existing IAM role.
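For illustration, a minimal boto3 sketch of option A; the role name is a placeholder, and the role must also be added
to an instance profile before it can be associated with the EC2 instances:

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets EC2 assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="SSMManagedInstanceRole",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the AWS managed policy that Systems Manager requires.
iam.attach_role_policy(
    RoleName="SSMManagedInstanceRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)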
Which solution will resolve this issue with the LEAST administrative overhead?
Answer: B
Explanation:
Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
Answer: B
Explanation:
The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During
peak usage periods, the database is unable to maintain the performance that is needed for reading and writing
updates. The game's user base is increasing rapidly.
What should a solutions architect do to improve the performance of the data tier?
A.Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B.Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards.
C.Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use
DAX.
D.Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use
Redis.
Answer: D
Explanation:
Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use
Redis.
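For illustration, a minimal cache-aside sketch for the repeated location reads, assuming the redis-py and psycopg2
clients; the endpoints, table, and key names are placeholders:

import json
import redis
import psycopg2

cache = redis.Redis(host="game-cache.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)
db = psycopg2.connect(host="game-db.xxxxxx.us-east-1.rds.amazonaws.com",
                      dbname="game", user="app", password="example-only")

def get_player_location(player_id):
    # Cache-aside read: serve from Redis when possible, fall back to PostgreSQL.
    key = f"location:{player_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)
    with db.cursor() as cur:
        cur.execute("SELECT lat, lon FROM locations WHERE player_id = %s", (player_id,))
        lat, lon = cur.fetchone()
    location = {"lat": float(lat), "lon": float(lon)}
    cache.set(key, json.dumps(location), ex=30)  # short TTL keeps positions reasonably fresh
    return location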
Question: 727 CertyIQ
A company stores critical data in Amazon DynamoDB tables in the company's AWS account. An IT administrator
accidentally deleted a DynamoDB table. The deletion caused a significant loss of data and disrupted the
company's operations. The company wants to prevent this type of disruption in the future.
Which solution will meet this requirement with the LEAST operational overhead?
A.Configure a trail in AWS CloudTrail. Create an Amazon EventBridge rule for delete actions. Create an AWS
Lambda function to automatically restore deleted DynamoDB tables.
B.Create a backup and restore plan for the DynamoDB tables. Recover the DynamoDB tables manually.
C.Configure deletion protection on the DynamoDB tables.
D.Enable point-in-time recovery on the DynamoDB tables.
Answer: C
Explanation:
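Deletion protection (option C) blocks DeleteTable calls until an administrator explicitly turns the setting off, which
prevents this failure mode with a single attribute change and no restore workflow. A minimal boto3 sketch with a
placeholder table name:

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="CriticalData",            # placeholder table name
    DeletionProtectionEnabled=True,      # DeleteTable now fails until this is disabled
)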
A.Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the
workload.
B.Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while
retaining copies of frequently accessed data subsets locally.
C.Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to
asynchronously back up point-in-time snapshots of the data to Amazon S3.
D.Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway
to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to
Amazon S3.
Answer: C
Explanation:
Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to
asynchronously back up point-in-time snapshots of the data to Amazon S3.
The company needs to make an automated scaling plan that will analyze each resource's daily and weekly
historical workload trends. The configuration must scale resources appropriately according to both the forecast
and live changes in utilization.
Which scaling strategy should a solutions architect recommend to meet these requirements?
A.Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
B.Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking
C.Create an automated scheduled scaling action based on the traffic patterns of the web application.
D.Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
Answer: B
Explanation:
Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.
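For illustration, a boto3 sketch that puts both policies on one EC2 Auto Scaling group; the group name and the 50%
CPU targets are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Predictive scaling forecasts capacity from daily and weekly historical patterns.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="predictive-cpu",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [{
            "TargetValue": 50.0,
            "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
        }],
        "Mode": "ForecastAndScale",
    },
)

# Target tracking reacts to live changes in utilization.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="target-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)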
The company adds a read replica, which reduces the DB cluster usage for a short period of time. However, the load
continues to increase. The operations that cause the increase in DB cluster usage are all repeated read statements
that are related to delivery details. The company needs to alleviate the effect of repeated reads on the DB cluster.
A.Implement an Amazon ElastiCache for Redis cluster between the application and the DB cluster.
B.Add an additional read replica to the DB cluster.
C.Configure Aurora Auto Scaling for the Aurora read replicas.
D.Modify the DB cluster to have multiple writer instances.
Answer: A
Explanation:
Implement an Amazon ElastiCache for Redis cluster between the application and the DB cluster.
Answer: C
Explanation:
Which solution will meet these requirements with the LEAST operational overhead?
A.Use security groups and network ACLs to secure the database and application servers.
B.Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.
C.Use AWS Network Firewall to protect the application and the database.
D.Use different database accounts in the application code for different functions. Avoid granting excessive
privileges to the database users.
Answer: B
Explanation:
Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.
Which solution will meet these requirements in the MOST operationally efficient way?
A.Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
B.Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the
organization.
C.Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central
Amazon S3 bucket.
D.Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
Answer: B
Explanation:
Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the
organization.
A.Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu-west-2.
B.Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu-west-2.
C.Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN CloudHub to
send and receive data between the data centers and each VPC.
D.Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual
private gateways of the VPCs in each Region to the Direct Connect gateway.
Answer: D
Explanation:
Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual
private gateways of the VPCs in each Region to the Direct Connect gateway.
A.Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS
Lambda. Store the processed updates in Amazon DynamoDB.
B.Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2
instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C.Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS
Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database
running on Amazon EC2.
D.Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2
instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an
Amazon RDS Multi-AZ DB instance.
Answer: A
Explanation:
Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS
Lambda. Store the processed updates in Amazon DynamoDB.
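For illustration, a minimal Lambda handler for option A, assuming a Kinesis event source mapping and a DynamoDB
table named GameScores (names and payload fields are placeholders):

import base64
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("GameScores")

def handler(event, context):
    # Each invocation receives a batch of Kinesis records.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]),
                             parse_float=Decimal)  # DynamoDB requires Decimal, not float
        table.put_item(Item={
            "player_id": payload["player_id"],
            "update_id": record["kinesis"]["sequenceNumber"],
            "score": payload["score"],
        })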
A.Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the
centralized S3 bucket.
B.Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2.
Use this S3 bucket for log analysis.
C.Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to
another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D.Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3
buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log
analysis.
Answer: B
Explanation:
Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2.
Use this S3 bucket for log analysis.
The company has created an S3 bucket in the eu-west-2 Region and an S3 bucket in the ap-southeast-1 Region.
The company wants to replicate the data to the new S3 buckets. The company needs to minimize latency for
developers who upload videos and students who stream videos near eu-west-2 and ap-southeast-1.
Which combination of steps will meet these requirements with the FEWEST changes to the application? (Choose
two.)
A.Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way
replication from the us-east-2 S3 bucket to the ap-southeast-1 S3 bucket.
B.Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way
replication from the eu-west-2 S3 bucket to the ap-southeast-1 S3 bucket.
C.Configure two-way (bidirectional) replication among the S3 buckets that are in all three Regions.
D.Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of
the Multi-Region Access Point for video streaming. Do not modify the application for video uploads.
E.Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of
the Multi-Region Access Point for video streaming and uploads.
Answer: CE
Explanation:
C.Configure two-way (bidirectional) replication among the S3 buckets that are in all three Regions.
E.Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of
the Multi-Region Access Point for video streaming and uploads.
Users access content often in the first minutes after the content is posted. New content quickly replaces older
content, and then the older content disappears. The local nature of the news means that users consume 90% of
the content within the AWS Region where it is uploaded.
Which solution will optimize the user experience by providing the LOWEST latency for content uploads?
A.Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
B.Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
C.Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to Amazon
S3.
D.Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple distributions of
Amazon CloudFront.
Answer: B
Explanation:
Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
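For illustration, a boto3 sketch that turns on Transfer Acceleration and uploads through the accelerate endpoint; the
bucket and object names are placeholders:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

s3.put_bucket_accelerate_configuration(
    Bucket="news-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads then enter the network at the nearest edge location and travel to the bucket's Region.
accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accelerated.upload_file("clip.mp4", "news-uploads", "2024/05/clip.mp4")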
The company wants to add a service that can send messages received from the API Gateway REST API to multiple
target Lambda functions for processing. The service must offer message filtering that gives the target Lambda
functions the ability to receive only the messages the functions need.
Which solution will meet these requirements with the LEAST operational overhead?
A.Send the requests from the API Gateway REST API to an Amazon Simple Notification Service (Amazon SNS)
topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target
Lambda functions to poll the different SQS queues.
B.Send the requests from the API Gateway REST API to Amazon EventBridge. Configure EventBridge to invoke
the target Lambda functions.
C.Send the requests from the API Gateway REST API to Amazon Managed Streaming for Apache Kafka
(Amazon MSK). Configure Amazon MSK to publish the messages to the target Lambda functions.
D.Send the requests from the API Gateway REST API to multiple Amazon Simple Queue Service (Amazon SQS)
queues. Configure the target Lambda functions to poll the different SQS queues.
Answer: A
Explanation:
Send the requests from the API Gateway REST API to an Amazon Simple Notification Service (Amazon SNS)
topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic. Configure the target
Lambda functions to poll the different SQS queues.
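For illustration, a boto3 sketch of the fan-out with a subscription filter policy; the topic, queue, and the event_type
message attribute are placeholders, and the queue also needs a queue policy that allows the SNS topic to deliver to
it (omitted here):

import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="api-requests")["TopicArn"]
queue_url = sqs.create_queue(QueueName="orders-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# The Lambda function that polls this queue receives only "order" messages.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint=queue_arn,
    Attributes={
        "FilterPolicy": json.dumps({"event_type": ["order"]}),
        "RawMessageDelivery": "true",
    },
)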
A.Create a list of unencrypted objects by filtering an Amazon S3 Inventory report. Configure an S3 Batch
Operations job to encrypt the objects from the list with a server-side encryption with a customer-provided key
(SSE-C). Configure the S3 default encryption feature to use a server-side encryption with a customer-provided
key (SSE-C).
B.Use S3 Storage Lens metrics to identify unencrypted S3 buckets. Configure the S3 default encryption
feature to use a server-side encryption with AWS KMS keys (SSE-KMS).
C.Create a list of unencrypted objects by filtering the AWS usage report for Amazon S3. Configure an AWS
Batch job to encrypt the objects from the list with a server-side encryption with AWS KMS keys (SSE-KMS).
Configure the S3 default encryption feature to use a server-side encryption with AWS KMS keys (SSE-KMS).
D.Create a list of unencrypted objects by filtering the AWS usage report for Amazon S3. Configure the S3
default encryption feature to use a server-side encryption with a customer-provided key (SSE-C).
Answer: A
Explanation:
Create a list of unencrypted objects by filtering an Amazon S3 Inventory report. Configure an S3 Batch
Operations job to encrypt the objects from the list with a server-side encryption with a customer-provided key
(SSE-C). Configure the S3 default encryption feature to use a server-side encryption with a customer-
provided key (SSE-C).
Reference:
https://aws.amazon.com/blogs/storage/encrypting-objects-with-amazon-s3-batch-operations/
What should a solutions architect do to rapidly migrate the DNS hosting service?
A.Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing the
domain records hosted by the previous provider.
B.Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing the
domain records hosted by the previous provider.
C.Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS Directory
Service for Microsoft Active Directory for the domain records.
D.Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the
provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the
domain to the IP addresses that are specified in the inbound endpoint.
Answer: A
Explanation:
Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing the
domain records hosted by the previous provider.
A.Use AWS AppConfig to store and manage the application configuration. Use AWS Secrets Manager to store
and retrieve the credentials.
B.Use AWS Lambda to store and manage the application configuration. Use AWS Systems Manager Parameter
Store to store and retrieve the credentials.
C.Use an encrypted application configuration file. Store the file in Amazon S3 for the application configuration.
Create another S3 file to store and retrieve the credentials.
D.Use AWS AppConfig to store and manage the application configuration. Use Amazon RDS to store and
retrieve the credentials.
Answer: A
Explanation:
Use AWS AppConfig to store and manage the application configuration. Use AWS Secrets Manager to store
and retrieve the credentials.
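For illustration, a minimal boto3 sketch of reading the credentials from Secrets Manager at startup; the secret name
and JSON keys are placeholders (non-secret configuration would be fetched separately through AWS AppConfig):

import json
import boto3

secrets = boto3.client("secretsmanager")

secret = secrets.get_secret_value(SecretId="prod/app/db-credentials")
credentials = json.loads(secret["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]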
Answer: D
Explanation:
Download AWS-provided root certificates. Provide the certificates in all connections to the RDS instance.
Answer: C
Explanation:
An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address.
A.Create IAM users for daily administrative tasks. Disable the root user.
B.Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
C.Generate an access key for the root user. Use the access key for daily administration tasks instead of the
AWS Management Console.
D.Provide the root user credentials to the most senior solutions architect. Have the solutions architect use the
root user for daily administration tasks.
Answer: B
Explanation:
Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
Which combination of network solutions will meet these requirements? (Choose two.)
Answer: AC
Explanation:
A. Enable and configure enhanced networking on each EC2 instance. Enhanced networking provides higher
bandwidth, higher packet per second (PPS) performance, and consistently lower inter-instance latencies.
C. Run the EC2 instances in a cluster placement group. A cluster placement group is a logical grouping of
instances within a single Availability Zone. This configuration is recommended for applications that need low
network latency, high network throughput, or both.
What should a solutions architect do to meet these requirements with the LEAST operational overhead?
Answer: C
Explanation:
Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.
A.Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation
template provided by the monitoring account in each AWS account to share the data with the monitoring
account.
B.Set up service control policies (SCPs) to provide access to CloudWatch in the monitoring account under the
Organizations root organizational unit (OU).
C.Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to have
access to query and visualize the CloudWatch data in the account. Attach the new IAM policy to the new IAM
user.
D.Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account.
Attach the IAM policies to the new IAM user.
Answer: A
Explanation:
Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation
template provided by the monitoring account in each AWS account to share the data with the monitoring
account.
A.Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
B.Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
C.Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP
address.
D.Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP
address.
Answer: B
Explanation:
Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
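For illustration, a boto3 sketch that creates the IP set a blocking rule in the web ACL would reference; the name and
address are placeholders, and the CLOUDFRONT scope requires the API call to be made in us-east-1:

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.10/32"],  # the malicious address, example value
)
# A rule with an IPSetReferenceStatement and a Block action in the web ACL
# attached to the CloudFront distribution then drops requests from this address.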
A.Create IAM users for the employees in the required AWS accounts. Connect IAM users to the existing IdP.
Configure federated authentication for the IAM users.
B.Set up AWS account root users with user email addresses and passwords that are synchronized from the
existing IdP.
C.Configure AWS IAM Identity Center (AWS Single Sign-On). Connect IAM Identity Center to the existing IdP.
Provision users and groups from the existing IdP.
D.Use AWS Resource Access Manager (AWS RAM) to share access to the AWS accounts with the users in the
existing IdP.
Answer: C
Explanation:
Configure AWS IAM Identity Center (AWS Single Sign-On). Connect IAM Identity Center to the existing IdP.
Provision users and groups from the existing IdP.
The solutions architect has created an IAM user for each of the five designated employees and has created an IAM
user group.
A.Attach the AdministratorAccess resource-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
B.Attach the SystemAdministrator identity-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
C.Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
D.Attach the SystemAdministrator resource-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
Answer: C
Explanation:
Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five
designated employee IAM users in the IAM user group.
The company needs a solution that requires the least amount of infrastructure management. The solution must
guarantee exactly-once delivery for application messaging.
Answer: AD
Explanation:
D.Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the messaging component between the
compute layers.
A.Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use
an Amazon EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
B.Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic Block Store
(Amazon EBS) volume for storage. Use an Auto Scaling group with the minimum number of instances and
desired number of instances set to 1.
C.Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic File System
(Amazon EFS) file system for storage. Use an Auto Scaling group with the minimum number of instances and
desired number of instances set to 1.
D.Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull the batch
files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto Scaling group
with a scheduled scaling policy to run the batch operation.
Answer: D
Explanation:
Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull the batch
files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto Scaling
group with a scheduled scaling policy to run the batch operation.
A.Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF on the NLBs.
Create an accelerator using AWS Global Accelerator and register the NLBs as endpoints.
B.Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Deploy AWS WAF on the
ALBs. Create an accelerator using AWS Global Accelerator and register the ALBs as endpoints.
C.Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF on the NLBs.
Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to
route requests to the NLBs.
D.Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an Amazon
CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to
the ALBs. Deploy AWS WAF on the CloudFront distribution.
Answer: D
Explanation:
Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an Amazon
CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to
the ALBs. Deploy AWS WAF on the CloudFront distribution.
Answer: B
Explanation:
The company needs a solution to provide samples of the conversations to an external service provider for quality
control. The external service provider needs to randomly pick sample conversations up to the most recent
conversation. The company must not share the customer PII with the external service provider. The solution must
scale when the number of customer conversations increases.
Which solution will meet these requirements with the LEAST operational overhead?
A.Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when the
function reads the file. Instruct the external service provider to access the Object Lambda Access Point.
B.Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII from the
files, and writes the redacted files to a different S3 bucket. Instruct the external service provider to access the
bucket that does not contain the PII.
C.Create a web application on an Amazon EC2 instance that presents a list of the files, redacts the PII from the
files, and allows the external service provider to download new versions of the files that have the PII redacted.
D.Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only the data in the files that
does not contain PII. Configure the Lambda function to store the non-PII data in the DynamoDB table when a
new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.
Answer: A
Explanation:
Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when the
function reads the file. Instruct the external service provider to access the Object Lambda Access Point.
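For illustration, a rough sketch of the Object Lambda transformation function, assuming text transcripts and a
simple regex redaction; a production implementation might call Amazon Comprehend PII detection instead:

import re
import urllib.request

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    ctx = event["getObjectContext"]

    # Fetch the original object through the presigned URL that S3 supplies.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    # Illustrative redaction only (US SSN-like patterns).
    redacted = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", original)

    # Return the transformed object to the caller of the access point.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redacted.encode("utf-8"),
    )
    return {"statusCode": 200}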
Answer: C
Explanation:
Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
Question: 758 CertyIQ
A company wants to deploy its containerized application workloads to a VPC across three Availability Zones. The
company needs a solution that is highly available across Availability Zones. The solution must require minimal
changes to the application.
Which solution will meet these requirements with the LEAST operational overhead?
A.Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS Service Auto Scaling to use
target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an
Availability Zone attribute.
B.Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure Application Auto
Scaling to use target tracking scaling. Set the minimum capacity to 3.
C.Use Amazon EC2 Reserved Instances. Launch three EC2 instances in a spread placement group. Configure an
Auto Scaling group to use target tracking scaling. Set the minimum capacity to 3.
D.Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure Application
Auto Scaling to use Lambda as a scalable target. Set the minimum capacity to 3.
Answer: A
Explanation:
Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS Service Auto Scaling to use
target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with
an Availability Zone attribute.
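For illustration, a boto3 sketch of the service with the Availability Zone spread strategy (EC2 launch type; Fargate
tasks are instead spread across the zones of their subnets automatically); the cluster, service, and task definition
names are placeholders:

import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="app-cluster",
    serviceName="web-service",
    taskDefinition="web-task:1",
    desiredCount=3,  # at least one task per Availability Zone
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
    ],
)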
The company must be able to provide the streaming content of a movie within 5 minutes of a user purchase. There
is higher demand for movies that are less than 20 years old than for movies that are more than 20 years old. The
company wants to minimize hosting service costs based on demand.
A.Store all media content in Amazon S3. Use S3 Lifecycle policies to move media data into the Infrequent
Access tier when the demand for a movie decreases.
B.Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access
(S3 Standard-IA). When a user orders an older movie, retrieve the video file by using standard retrieval.
C.Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible
Retrieval. When a user orders an older movie, retrieve the video file by using expedited retrieval.
D.Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval.
When a user orders an older movie, retrieve the video file by using bulk retrieval.
Answer: C
Explanation:
Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible
Retrieval. When a user orders an older movie, retrieve the video file by using expedited retrieval.
Which solution meets these requirements with the LEAST operational overhead?
A.Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted volume
that has more than 50 GB of space.
B.Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store
(Amazon EBS) volume that has more than 50 GB of space.
C.Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type.
Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume.
Create a service with that task definition.
D.Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type
with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task
definition for the container image. Create a service with that task definition.
Answer: C
Explanation:
Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type.
Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume.
Create a service with that task definition.
A.Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP.
B.Create an IAM policy that uses AWS credentials, and integrate the policy into LDAP.
C.Set up a process that rotates the IAM credentials whenever LDAP credentials are updated.
D.Develop an on-premises custom identity broker application or process that uses AWS Security Token Service
(AWS STS) to get short-lived credentials.
Answer: D
Explanation:
Develop an on-premises custom identity broker application or process that uses AWS Security Token Service
(AWS STS) to get short-lived credentials.
Which solution will meet these requirements with the LEAST operational overhead?
A.Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a separate
AWS account.
B.Copy all AMIs to another AWS account periodically.
C.Create a retention rule in Recycle Bin.
D.Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.
Answer: C
Explanation:
Reference:
https://aws.amazon.com/about-aws/whats-new/2022/02/amazon-ec2-recycle-bin-machine-images/
What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
Answer: B
Explanation:
The company needs to migrate the application by making the fewest possible changes to the architecture. The
company also needs a database solution that can restore data to a specific point in time.
Which solution will meet these requirements with the LEAST operational overhead?
A.Migrate the web tier and the application tier to Amazon EC2 instances in private subnets. Migrate the
database tier to Amazon RDS for MySQL in private subnets.
B.Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances
in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets.
C.Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances
in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets.
D.Migrate the web tier and the application tier to Amazon EC2 instances in public subnets. Migrate the
database tier to Amazon Aurora MySQL in public subnets.
Answer: A
Explanation:
Migrate the web tier and the application tier to Amazon EC2 instances in private subnets. Migrate the
database tier to Amazon RDS for MySQL in private subnets.
A.Create an instance profile that provides the other company access to the SQS queue.
B.Create an IAM policy that provides the other company access to the SQS queue.
C.Create an SQS access policy that provides the other company access to the SQS queue.
D.Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company
access to the SQS queue.
Answer: C
Explanation:
Create an SQS access policy that provides the other company access to the SQS queue.
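For illustration, a sketch of the SQS access (resource-based) policy that grants the other company's account send
access; the account IDs, Region, and queue name are placeholders:

import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/shared-queue"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},  # the other company's account
        "Action": "sqs:SendMessage",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:shared-queue",
    }],
}

sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})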
The company wants to use AWS services as a part of the solution. The EC2 instances are hosted in a VPC private
subnet and access the internet through a NAT gateway that is deployed in a public subnet.
A.Create a bastion host in the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM
permission to the developers. Install EC2 Instance Connect so that the developers can connect to the EC2
instances.
B.Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct the
developers to use the Site-to-Site VPN connection to access the EC2 instances when the developers are on the
corporate network. Instruct the developers to set up another VPN connection for access when they work
remotely.
C.Create a bastion host in the public subnet of the VPC. Configure the security groups and SSH keys of the
bastion host to only allow connections and SSH authentication from the developers’ corporate and remote
networks. Instruct the developers to connect through the bastion host by using SSH to reach the EC2
instances.
D.Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2
instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2 instances.
Answer: D
Explanation:
Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2
instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2
instances.
Which storage solution should a solutions architect recommend to meet these requirements?
A.Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing basis.
B.Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate the
data to the Storage Gateway appliance.
C.Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the
target storage. Migrate the data to the Storage Gateway appliance.
D.Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate data to
an Amazon Elastic File System (Amazon EFS) file system.
Answer: C
Explanation:
Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the
target storage. Migrate the data to the Storage Gateway appliance.
Which solution meets these requirements with the LEAST operational overhead?
Answer: A
Explanation:
A.Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
B.Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to
process the files.
C.Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda
function to process the files.
D.Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to
Amazon S3. Invoke an AWS Lambda function to process the files.
Answer: B
Explanation:
Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to
process the files.
The production instances show constant usage because of customers in different time zones. The company uses
nonproduction instances only during business hours on weekdays. The company does not use the nonproduction
instances on the weekends. The company wants to optimize the costs to run its application on AWS.
A.Use On-Demand Instances for the production instances. Use Dedicated Hosts for the nonproduction
instances on weekends only.
B.Use Reserved Instances for the production instances and the nonproduction instances. Shut down the
nonproduction instances when not in use.
C.Use Compute Savings Plans for the production instances. Use On-Demand Instances for the nonproduction
instances. Shut down the nonproduction instances when not in use.
D.Use Dedicated Hosts for the production instances. Use EC2 Instance Savings Plans for the nonproduction
instances.
Answer: C
Explanation:
Use Compute Savings Plans for the production instances. Use On-Demand Instances for the nonproduction
instances. Shut down the nonproduction instances when not in use.
The company must capture the changes that occur to the source database during the migration to Aurora
PostgreSQL.
Which solution will meet these requirements?
A.Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL
schema. Use the AWS Database Migration Service (AWS DMS) full-load migration task to migrate the data.
B.Use AWS DataSync to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL
by using the Aurora PostgreSQL aws_s3 extension.
C.Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL
schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the
ongoing changes.
D.Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora
PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
Answer: C
Explanation:
Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL
schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the
ongoing changes.
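For illustration, a boto3 sketch of the DMS task; the endpoint and instance ARNs are placeholders, and the
full-load-and-cdc migration type is what loads the existing data and then replicates ongoing changes:

import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)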
The solution must scale in and out appropriately according to demand on the individual container services. The
solution also must not result in additional operational overhead or infrastructure to manage.
A.Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B.Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C.Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D.Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E.Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
Answer: AB
Explanation:
A.Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B.Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
A.Create an Auto Scaling group that is large enough to handle peak traffic load. Stop half of the Amazon EC2
instances. Configure the Auto Scaling group to use the stopped instances to scale out when traffic increases.
B.Create an Auto Scaling group for the website. Set the minimum size of the Auto Scaling group so that it can
handle high traffic volumes without the need to scale out.
C.Use Amazon CloudFront and Amazon ElastiCache to cache dynamic content with an Auto Scaling group set
as the origin. Configure the Auto Scaling group with the instances necessary to populate CloudFront and
ElastiCache. Scale in after the cache is fully populated.
D.Configure an Auto Scaling group to scale out as traffic increases. Create a launch template to start new
instances from a preconfigured Amazon Machine Image (AMI).
Answer: D
Explanation:
Configure an Auto Scaling group to scale out as traffic increases. Create a launch template to start new
instances from a preconfigured Amazon Machine Image (AMI).
What should the solutions architect do to meet these requirements with the LEAST operational overhead?
A.Write an AWS Lambda script that monitors security groups for SSH being open to 0.0.0.0/0 addresses and
creates a notification every time it finds one.
B.Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service
(Amazon SNS) notification when a noncompliant rule is created.
C.Create an IAM role with permissions to globally open security groups and network ACLs. Create an Amazon
Simple Notification Service (Amazon SNS) topic to generate a notification every time the role is assumed by a
user.
D.Configure a service control policy (SCP) that prevents non-administrative users from creating or editing
security groups. Create a notification in the ticketing system when a user requests a rule that needs
administrator permissions.
Answer: B
Explanation:
Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service
(Amazon SNS) notification when a noncompliant rule is created.
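For illustration, a boto3 sketch that enables the managed rule; restricted-ssh maps to the INCOMING_SSH_DISABLED
source identifier, and compliance-change events can then be routed through EventBridge to an SNS topic:

import boto3

config = boto3.client("config")

config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]},
    }
)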
A company has deployed an application in an AWS account. The application consists of microservices that run on
AWS Lambda and Amazon Elastic Kubernetes Service (Amazon EKS). A separate team supports each microservice.
The company has multiple AWS accounts and wants to give each team its own account for its microservices.
A solutions architect needs to design a solution that will provide service-to-service communication over HTTPS
(port 443). The solution also must provide a service registry for service discovery.
Which solution will meet these requirements with the LEAST administrative overhead?
A.Create an inspection VPC. Deploy an AWS Network Firewall firewall to the inspection VPC. Attach the
inspection VPC to a new transit gateway. Route VPC-to-VPC traffic to the inspection VPC. Apply firewall rules
to allow only HTTPS communication.
B.Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS
listeners for each service. Register microservice compute resources as targets. Identify VPCs that need to
communicate with the services. Associate those VPCs with the service network.
C.Create a Network Load Balancer (NLB) with an HTTPS listener and target groups for each microservice.
Create an AWS PrivateLink endpoint service for each microservice. Create an interface VPC endpoint in each
VPC that needs to consume that microservice.
D.Create peering connections between VPCs that contain microservices. Create a prefix list for each service
that requires a connection to a client. Create route tables to route traffic to the appropriate VPC. Create
security groups to allow only HTTPS communication.
Answer: B
Explanation:
Create a VPC Lattice service network. Associate the microservices with the service network. Define HTTPS
listeners for each service. Register microservice compute resources as targets. Identify VPCs that need to
communicate with the services. Associate those VPCs with the service network.
Answer: C
Explanation:
A.Add the development team's OU Amazon Resource Name (ARN) to the launch permission list for the AMIs.
B.Add the Organizations root Amazon Resource Name (ARN) to the launch permission list for the AMIs.
C.Update the key policy to allow the development team's OU to use the AWS KMS keys that are used to
decrypt the snapshots.
D.Add the development team’s account Amazon Resource Name (ARN) to the launch permission list for the
AMIs.
E.Recreate the AWS KMS key. Add a key policy to allow the Organizations root Amazon Resource Name (ARN)
to use the AWS KMS key.
Answer: AC
Explanation:
A.Add the development team's OU Amazon Resource Name (ARN) to the launch permission list for the AMIs.
C.Update the key policy to allow the development team's OU to use the AWS KMS keys that are used to
decrypt the snapshots.
The company needs to perform a one-time migration of a large amount of data from its offices to Amazon S3. The
company must complete the migration within 4 weeks.
A.Establish a new 10 Gbps AWS Direct Connect connection to each office. Transfer the data to Amazon S3.
B.Use multiple AWS Snowball Edge storage-optimized devices to store and transfer the data to Amazon S3.
C.Use an AWS Snowmobile to store and transfer the data to Amazon S3.
D.Set up an AWS Storage Gateway Volume Gateway to transfer the data to Amazon S3.
Answer: B
Explanation:
Use multiple AWS Snowball Edge storage-optimized devices to store and transfer the data to Amazon S3.
A.Mount the EFS file system in read-only mode from within the EC2 instances.
B.Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to the
IAM roles that are attached to the EC2 instances.
C.Create an identity policy for the EFS file system that denies the elasticfilesystem:ClientWrite action on the
EFS file system.
D.Create an EFS access point for each application. Use Portable Operating System Interface (POSIX) file
permissions to allow read-only access to files in the root directory.
Answer: B
Explanation:
Create a resource policy for the EFS file system that denies the elasticfilesystem:ClientWrite action to the
IAM roles that are attached to the EC2 instances.
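For illustration, a sketch of the EFS file system policy that denies writes for the instance role; the account ID, role
name, and file system ID are placeholders:

import json
import boto3

efs = boto3.client("efs")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-instance-role"},
        "Action": "elasticfilesystem:ClientWrite",
        "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0",
    }],
}

efs.put_file_system_policy(
    FileSystemId="fs-0123456789abcdef0",
    Policy=json.dumps(policy),
)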
A.Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the
appropriate IAM policies to the role for the permissions that the vendor requires.
B.Create an IAM user in the company’s account with a password that meets the password complexity
requirements. Attach the appropriate IAM policies to the user for the permissions that the vendor requires.
C.Create an IAM group in the company’s account. Add the automated tool’s IAM user from the vendor account
to the group. Attach the appropriate IAM policies to the group for the permissions that the vendor requires.
D.Create an IAM user in the company’s account that has a permission boundary that allows the vendor’s
account. Attach the appropriate IAM policies to the user for the permissions that the vendor requires.
Answer: A
Explanation:
Create an IAM role in the company’s account to delegate access to the vendor’s IAM role. Attach the
appropriate IAM policies to the role for the permissions that the vendor requires.
A.Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an
alert threshold to receive notification when spending exceeds 60% of the budget.
B.Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create
alert threshold notifications when spending exceeds 60% of the budget.
C.Use cost allocation tags on AWS resources to label owners. Use AWS Support API on AWS Trusted Advisor to
create alert threshold notifications when spending exceeds 60% of the budget.
D.Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add
an alert threshold to receive notification when spending exceeds 60% of the budget.
Answer: A
Explanation:
Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an
alert threshold to receive notification when spending exceeds 60% of the budget.
The company has created a VPC and has configured an AWS Site-to-Site VPN connection to the company's office.
A solutions architect must design a secure architecture for the web application.
A.Deploy the web application on Amazon EC2 instances in public subnets behind a public Application Load
Balancer (ALB). Attach an internet gateway to the VPC. Set the inbound source of the ALB's security group to
0.0.0.0/0.
B.Deploy the web application on Amazon EC2 instances in private subnets behind an internal Application Load
Balancer (ALB). Deploy NAT gateways in public subnets. Attach an internet gateway to the VPC. Set the
inbound source of the ALB's security group to the company's office network CIDR block.
C.Deploy the web application on Amazon EC2 instances in public subnets behind an internal Application Load
Balancer (ALB). Deploy NAT gateways in private subnets. Attach an internet gateway to the VPC. Set the
outbound destination of the ALB’s security group to the company's office network CIDR block.
D.Deploy the web application on Amazon EC2 instances in private subnets behind a public Application Load
Balancer (ALB). Attach an internet gateway to the VPC. Set the outbound destination of the ALB’s security
group to 0.0.0.0/0.
Answer: B
Explanation:
Deploy the web application on Amazon EC2 instances in private subnets behind an internal Application Load
Balancer (ALB). Deploy NAT gateways in public subnets. Attach an internet gateway to the VPC. Set the
inbound source of the ALB's security group to the company's office network CIDR block.
A.Copy the records from the application into an Amazon Redshift cluster.
B.Copy the records from the application into an Amazon Neptune cluster.
C.Copy the records from the application into an Amazon Timestream database.
D.Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.
Answer: D
Explanation:
Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.
A.Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invoke
other Lambda functions at regularly scheduled intervals.
B.Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a regular
interval.
C.Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the DataBrew
data preparation jobs.
D.Use AWS Data Pipeline to process the data. Schedule Data Pipeline to process the data once at midnight.
Answer: C
Explanation:
Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the DataBrew
data preparation jobs.
The architecture must ensure that the application does not process duplicate payments.
A.Use Lambda to retrieve all due payments. Publish the due payments to an Amazon S3 bucket. Configure the
S3 bucket with an event notification to invoke another Lambda function to process the due payments.
B.Use Lambda to retrieve all due payments. Publish the due payments to an Amazon Simple Queue Service
(Amazon SQS) queue. Configure another Lambda function to poll the SQS queue and to process the due
payments.
C.Use Lambda to retrieve all due payments. Publish the due payments to an Amazon Simple Queue Service
(Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and to process the due
payments.
D.Use Lambda to retrieve all due payments. Store the due payments in an Amazon DynamoDB table. Configure
streams on the DynamoDB table to invoke another Lambda function to process the due payments.
Answer: C
Explanation:
Use Lambda to retrieve all due payments. Publish the due payments to an Amazon Simple Queue Service
(Amazon SQS) FIFO queue. Configure another Lambda function to poll the FIFO queue and to process the due
payments.
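For illustration, a boto3 sketch of the FIFO queue and a producer call; the queue name, payload, group ID, and
deduplication ID are placeholders:

import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; content-based deduplication drops
# duplicate messages within the 5-minute deduplication window.
queue_url = sqs.create_queue(
    QueueName="due-payments.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"payment_id": "pmt-1001", "amount": 250}',
    MessageGroupId="payments",
    MessageDeduplicationId="pmt-1001",  # explicit ID also prevents duplicate processing
)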
Answer: B
Explanation:
Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data
about the on-premises servers.
Which solution will meet these requirements with the LEAST operational overhead?
A.Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security
Hub and AWS Control Tower Account Factory in the environment.
B.Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable AWS
Security Hub and AWS Control Tower Account Factory in the environment.
C.Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC
to self-service provision Amazon GuardDuty in the MALZ.
D.Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC
to self-service provision AWS Security Hub in the MALZ.
Answer: A
Explanation:
Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security
Hub and AWS Control Tower Account Factory in the environment.
A.Create an Amazon Aurora MySQL database. Migrate the data from the S3 bucket into Aurora by using AWS
Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B.Create an Amazon Redshift cluster. Use Redshift Spectrum to run SQL statements directly on the data in the
S3 bucket.
C.Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to
run SQL statements directly on the data in the S3 bucket.
D.Create an Amazon EMR cluster. Use Apache Spark SQL to run SQL statements directly on the data in the S3
bucket.
Answer: C
Explanation:
Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to
run SQL statements directly on the data in the S3 bucket.
A.Use AWS Control Tower proactive controls to block deployment of EC2 instances with public IP addresses
and inline policies with elevated access or “*”.
B.Use AWS Control Tower detective controls to block deployment of EC2 instances with public IP addresses
and inline policies with elevated access or “*”.
C.Use AWS Config to create rules for EC2 and IAM compliance. Configure the rules to run an AWS Systems
Manager Session Manager automation to delete a resource when it is not compliant.
D.Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead
to noncompliance.
Answer: D
Explanation:
Use a service control policy (SCP) to block actions for the EC2 instances and IAM resources if the actions lead
to noncompliance.
The company needs a solution that will provide high availability and scalability to meet the increased user demand
without rewriting the web application.
Answer: BE
Explanation:
B.Configure Amazon EC2 Auto Scaling with multiple Availability Zones in private subnets.
Answer: D
Explanation:
Create an AWS Key Management Service (AWS KMS) key. Enable encryption helpers on the Lambda
functions to use the KMS key to store and encrypt the environment variables.
Which solution will meet these requirements with the MOST operational efficiency?
A.Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs
with a Cognito authorizer.
B.Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway HTTP
APIs with a Cognito authorizer.
C.Configure an AWS Lambda function to handle user authentication. Implement Amazon API Gateway REST
APIs with a Lambda authorizer.
D.Configure an IAM user to handle user authentication. Implement Amazon API Gateway HTTP APIs with an IAM
authorizer.
Answer: A
Explanation:
Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs
with a Cognito authorizer.
Question: 793 CertyIQ
A company has a mobile app for customers. The app’s data is sensitive and must be encrypted at rest. The
company uses AWS Key Management Service (AWS KMS).
The company needs a solution that prevents the accidental deletion of KMS keys. The solution must use Amazon
Simple Notification Service (Amazon SNS) to send an email notification to administrators when a user attempts to
delete a KMS key.
Which solution will meet these requirements with the LEAST operational overhead?
A.Create an Amazon EventBridge rule that reacts when a user tries to delete a KMS key. Configure an AWS
Config rule that cancels any deletion of a KMS key. Add the AWS Config rule as a target of the EventBridge
rule. Create an SNS topic that notifies the administrators.
B.Create an AWS Lambda function that has custom logic to prevent KMS key deletion. Create an Amazon
CloudWatch alarm that is activated when a user tries to delete a KMS key. Create an Amazon EventBridge rule
that invokes the Lambda function when the DeleteKey operation is performed. Create an SNS topic. Configure
the EventBridge rule to publish an SNS message that notifies the administrators.
C.Create an Amazon EventBridge rule that reacts when the KMS DeleteKey operation is performed. Configure
the rule to initiate an AWS Systems Manager Automation runbook. Configure the runbook to cancel the
deletion of the KMS key. Create an SNS topic. Configure the EventBridge rule to publish an SNS message that
notifies the administrators.
D.Create an AWS CloudTrail trail. Configure the trail to deliver logs to a new Amazon CloudWatch log group.
Create a CloudWatch alarm based on the metric filter for the CloudWatch log group. Configure the alarm to use
Amazon SNS to notify the administrators when the KMS DeleteKey operation is performed.
Answer: C
Explanation:
Reference:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/monitor-and-remediate-scheduled-
deletion-of-aws-kms-keys.html
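For illustration, a boto3 sketch of the EventBridge rule; key deletion appears in CloudTrail as the ScheduleKeyDeletion
API call, and the Systems Manager Automation runbook and SNS targets are omitted here:

import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="kms-key-deletion-attempt",
    EventPattern=json.dumps({
        "source": ["aws.kms"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["kms.amazonaws.com"],
            "eventName": ["ScheduleKeyDeletion"],
        },
    }),
)
# put_targets would then add the Automation runbook that cancels the deletion
# and the SNS topic that notifies the administrators.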
The program generates multiple reports during the last week of each month. The program takes less than 10
minutes to produce each report. The company rarely uses the program to generate reports outside of the last
week of each month. The company wants to generate reports in the least amount of time when the reports are
requested.
A.Run the program by using Amazon EC2 On-Demand Instances. Create an Amazon EventBridge rule to start
the EC2 instances when reports are requested. Run the EC2 instances continuously during the last week of
each month.
B.Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports
are requested.
C.Run the program in Amazon Elastic Container Service (Amazon ECS). Schedule Amazon ECS to run the
program when reports are requested.
D.Run the program by using Amazon EC2 Spot Instances. Create an Amazon EventBridge rule to start the EC2
instances when reports are requested. Run the EC2 instances continuously during the last week of each month.
Answer: B
Explanation:
Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports
are requested.
A.Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator.
B.Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
C.Create an Amazon CloudFront distribution. Configure the viewer protocol policy to be HTTP and HTTPS.
D.Launch Amazon EC2 instances. Attach an Elastic Fabric Adapter (EFA) to the instances.
E.Create an AWS Elastic Beanstalk deployment to manage the environment.
Answer: BD
Explanation:
An FSx for Lustre scratch file system provides the high-throughput, low-latency temporary storage that is
designed for short-term data processing, and an Elastic Fabric Adapter (EFA) gives the EC2 instances the
low-latency, high-bandwidth inter-node communication that tightly coupled workloads require.
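For illustration, an EFA-enabled instance can be launched by declaring the network interface type as efa. The
AMI, subnet, and security group IDs below are placeholders, and the instance type must be one that supports
EFA.

import boto3

ec2 = boto3.client("ec2")

# Placeholder AMI, subnet, and security group IDs; c5n.18xlarge is one EFA-capable type
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-0123456789abcdef0",
            "Groups": ["sg-0123456789abcdef0"],
            "InterfaceType": "efa",  # attach an Elastic Fabric Adapter
        }
    ],
)
print(response["Instances"][0]["InstanceId"])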
A.Create and deploy a model by using Amazon SageMaker Autopilot. Create a real-time endpoint that the web
application invokes when new photos are uploaded.
B.Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a
Lambda function URL that the web application invokes when new photos are uploaded.
C.Create an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content.
Associate the function with the web application.
D.Create an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content. Create a
Lambda function URL that the web application invokes when new photos are uploaded.
Answer: B
Explanation:
Amazon Rekognition's image moderation feature detects unwanted content in photos without building or training a
model, and a Lambda function URL gives the web application a simple HTTPS endpoint to call when new photos are
uploaded. Rekognition Video (option D) is for video analysis, and SageMaker Autopilot (option A) would require
training and hosting a custom model.
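A rough sketch of the Lambda handler behind the function URL is shown below, assuming the web application posts
the S3 bucket and key of the uploaded photo; the request shape and confidence threshold are illustrative
assumptions.

import json
import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    # Assumed request shape: the web app posts the S3 bucket and key of the new photo
    body = json.loads(event.get("body") or "{}")
    bucket = body["bucket"]
    key = body["key"]

    # Ask Rekognition for moderation labels above an illustrative confidence threshold
    result = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=80,
    )

    labels = [label["Name"] for label in result["ModerationLabels"]]
    return {
        "statusCode": 200,
        "body": json.dumps({"unwanted": bool(labels), "labels": labels}),
    }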
A.Set up a backup administrator account that the company can use to log in if the company loses the MFA
device.
B.Add multiple MFA devices for the root user account to handle the disaster scenario.
C.Create a new administrator account when the company cannot access the root account.
D.Attach the administrator policy to another IAM user when the company cannot access the root account.
Answer: B
Explanation:
By registering multiple MFA devices for the root user (AWS supports up to eight MFA devices per user), the
company can still sign in even if one device is lost. This provides a backup authentication path and directly
addresses the risk of losing access to the root user account, without creating additional privileged accounts
or users.
The partners want to receive notification of user IDs through an HTTP endpoint when the company gives users
points. Hundreds of vendors are interested in becoming affiliated partners every day. The company wants to
design an architecture that gives the website the ability to add partners rapidly in a scalable way.
Which solution will meet these requirements with the LEAST implementation effort?
A.Create an Amazon Timestream database to keep a list of affiliated partners. Implement an AWS Lambda
function to read the list. Configure the Lambda function to send user IDs to each partner when the company
gives users points.
B.Create an Amazon Simple Notification Service (Amazon SNS) topic. Choose an endpoint protocol. Subscribe
the partners to the topic. Publish user IDs to the topic when the company gives users points.
C.Create an AWS Step Functions state machine. Create a task for every affiliated partner. Invoke the state
machine with user IDs as input when the company gives users points.
D.Create a data stream in Amazon Kinesis Data Streams. Implement producer and consumer applications. Store
a list of affiliated partners in the data stream. Send user IDs when the company gives users points.
Answer: B
Explanation:
SNS is designed for precisely this kind of use case. It allows you to publish messages to a topic, which can
then be delivered to multiple subscribers. Partners can subscribe to the SNS topic using an HTTP endpoint as
the protocol, which meets the requirement to notify partners via an HTTP endpoint. This approach is highly
scalable and requires the least implementation effort because it leverages managed services without the
need for custom logic to manage subscriptions or deliver notifications.
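As a minimal sketch, the SNS flow needs only three calls: create the topic once, subscribe each partner's HTTPS
endpoint as they sign up, and publish the user ID when points are awarded. The endpoint URL and message format
below are placeholders.

import boto3

sns = boto3.client("sns")

# Create the topic once; each new partner is added as an HTTPS subscription
topic_arn = sns.create_topic(Name="affiliate-partner-points")["TopicArn"]

# Placeholder partner endpoint; SNS sends a confirmation request the partner must confirm
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="https",
    Endpoint="https://partner.example.com/points-webhook",
)

# When the company awards points, publish the user ID once; SNS fans it out to every subscriber
sns.publish(TopicArn=topic_arn, Message='{"userId": "user-1234"}')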