AWS SAA-C03 Dumps Part 2
A company needs to migrate a legacy application from an on-premises data center to the AWS Cloud because of hardware capacity constraints.
The application runs 24 hours a day, 7 days a week. The application’s database storage continues to grow over time.
Which solution will meet these requirements MOST cost-effectively?
A. Migrate the application layer to Amazon EC2 Spot Instances. Migrate the data storage layer to Amazon S3.
B. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon RDS On-Demand Instances.
C. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon Aurora Reserved Instances.
D. Migrate the application layer to Amazon EC2 On-Demand Instances. Migrate the data storage layer to Amazon RDS Reserved Instances.
Correct Answer: C
Selected Answer: C
Amazon EC2 Reserved Instances allow for significant cost savings compared to On-Demand instances for long-running, steady-state workloads like
this one. Reserved Instances provide a capacity reservation, so the instances are guaranteed to be available for the duration of the reservation
period.
Amazon Aurora is a highly scalable, cloud-native relational database service that is designed to be compatible with MySQL and PostgreSQL. It can
automatically scale up to meet growing storage requirements, so it can accommodate the application's database storage needs over time. By using
Reserved Instances for Aurora, the cost savings will be significant over the long term.
upvoted 20 times
In this case, it may be more cost-effective to use Amazon RDS On-Demand Instances for the data storage layer. With RDS On-Demand
Instances, you pay only for the capacity you use and you can easily scale up or down the storage as needed.
upvoted 5 times
Selected Answer: B
Aurora has no storage limitation and can scale storage according to need which is what is required here
upvoted 3 times
Answer is C
upvoted 1 times
Selected Answer: C
This option involves migrating the application layer to Amazon EC2 Reserved Instances and migrating the data storage layer to Amazon Aurora
Reserved Instances. Amazon EC2 Reserved Instances provide a significant discount (up to 75%) compared to On-Demand Instance pricing, making
them a cost-effective choice for applications that have steady state or predictable usage. Similarly, Amazon Aurora Reserved Instances provide a
significant discount (up to 69%) compared to On-Demand Instance pricing.
upvoted 1 times
To meet the requirements of migrating a legacy application from an on-premises data center to the AWS Cloud in a cost-effective manner, the
most suitable option would be:
C. Migrate the application layer to Amazon EC2 Reserved Instances. Migrate the data storage layer to Amazon Aurora Reserved Instances.
Explanation:
Migrating the application layer to Amazon EC2 Reserved Instances allows you to reserve EC2 capacity in advance, providing cost savings compared
to On-Demand Instances. This is especially beneficial if the application runs 24/7.
Migrating the data storage layer to Amazon Aurora Reserved Instances provides cost optimization for the growing database storage needs.
Amazon Aurora is a fully managed relational database service that offers high performance, scalability, and cost efficiency.
upvoted 1 times
Selected Answer: C
C: With Aurora Serverless v2, each writer and reader has its own current capacity value, measured in ACUs. Aurora Serverless v2 scales a writer or
reader up to a higher capacity when its current capacity is too low to handle the load. It scales the writer or reader down to a lower capacity when
its current capacity is higher than needed.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-works.scaling
upvoted 1 times
Selected Answer: C
Typically Amazon RDS costs less than Aurora. But here, it's Aurora Reserved.
upvoted 1 times
A university research laboratory needs to migrate 30 TB of data from an on-premises Windows file server to Amazon FSx for Windows File Server.
The laboratory has a 1 Gbps network link that many other departments in the university share.
The laboratory wants to implement a data migration service that will maximize the performance of the data transfer. However, the laboratory
needs to be able to control the amount of bandwidth that the service uses to minimize the impact on other departments. The data migration must take place within the next 5 days.
A. AWS Snowcone
C. AWS DataSync
Correct Answer: C
Selected Answer: C
AWS DataSync is a data transfer service that can copy large amounts of data between on-premises storage and Amazon FSx for Windows File
Server at high speeds. It allows you to control the amount of bandwidth used during data transfer.
• DataSync uses agents at the source and destination to automatically copy files and file metadata over the network. This optimizes the data
transfer and minimizes the impact on your network bandwidth.
• DataSync allows you to schedule data transfers and configure transfer rates to suit your needs. You can transfer 30 TB within 5 days while
controlling bandwidth usage.
• DataSync can resume interrupted transfers and validate data to ensure integrity. It provides detailed monitoring and reporting on the progress
and performance of data transfers.
upvoted 21 times
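As a rough boto3 sketch of the bandwidth throttling described above (the location ARNs, account ID, and the 500 Mbps cap are hypothetical; DataSync task options accept a BytesPerSecond limit):
```python
import boto3

datasync = boto3.client("datasync")

# Create the migration task between the on-premises SMB location and the
# FSx for Windows File Server location, capping DataSync at ~500 Mbps
# (62,500,000 bytes/second) so the shared 1 Gbps university link is not saturated.
task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-smb-example",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-fsxw-example",
    Name="lab-30tb-migration",
    Options={
        "BytesPerSecond": 62_500_000,   # -1 means "use all available bandwidth"
        "VerifyMode": "ONLY_FILES_TRANSFERRED",
    },
)

# The limit can be raised for off-peak hours (or removed) without recreating the task.
datasync.update_task(TaskArn=task["TaskArn"], Options={"BytesPerSecond": -1})
```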
Selected Answer: C
From reading a little bit, I assume that B (FSx File Gateway) requires a bit more configuration than C (DataSync). From Stephane Maarek's
course explanation about DataSync:
An online data transfer service that simplifies, automates, and accelerates copying large amounts of data between on-premises storage systems
and AWS Storage services, as well as between AWS Storage services.
You can use AWS DataSync to migrate data located on-premises, at the edge, or in other clouds to Amazon S3, Amazon EFS, Amazon FSx for
Windows File Server, Amazon FSx for Lustre, Amazon FSx for OpenZFS, and Amazon FSx for NetApp ONTAP.
upvoted 11 times
Selected Answer: C
Even if we allocate 60% of the total bandwidth for the transfer, that would take 5d2h. Considering that "many other departments in the university
share" the link, that wouldn't be feasible.
Ref. https://expedient.com/knowledgebase/tools-and-calculators/file-transfer-time-calculator/
On the other hand, snowcone isn't also a great option, because "you will receive the Snowcone device in approximately 4-6 days".
Ref.
https://aws.amazon.com/snowcone/faqs/#:~:text=You%20will%20receive%20the%20Snowcone,console%20for%20each%20Snowcone%20device.
upvoted 1 times
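The 5d2h figure above checks out if you read 30 TB as binary tebibytes and assume 60% of the 1 Gbps link; a quick back-of-the-envelope check in Python:
```python
# Rough transfer-time estimate for 30 TB over a shared 1 Gbps link at 60% utilization.
data_bits = 30 * 2**40 * 8          # 30 TiB expressed in bits
usable_bps = 1_000_000_000 * 0.60   # 60% of a 1 Gbps link

seconds = data_bits / usable_bps
days, remainder = divmod(seconds, 86_400)
print(f"{days:.0f} days {remainder / 3_600:.0f} hours")  # -> 5 days 2 hours
```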
Selected Answer: A
Snowcone can support up to 8 TB for HDD and 15 TB for SSD devices. It ships within 4-6 days, so data migration could begin in the next 5 days.
It does not use any bandwidth or impact the production network. The device comes with 1G and 10G Base-T Ethernet ports, which is the maximum
data transfer performance defined in the question.
upvoted 2 times
Selected Answer: C
C. AWS DataSync
upvoted 1 times
Selected Answer: C
https://aws.amazon.com/datasync/features/
upvoted 1 times
Selected Answer: C
"Amazon FSx File Gateway" is for storing data, not for migrating. So the answer should be C.
upvoted 2 times
Snowcone is too small and the delivery time is too long. With DataSync you can set bandwidth limits, so this is a fine solution.
upvoted 3 times
Voting C
upvoted 1 times
Selected Answer: C
C. - DataSync is Correct.
A. Snowcone is incorrect. The question says the data migration must take place within the next 5 days. AWS says: If you order, you will receive the
Snowcone device in approximately 4-6 days.
upvoted 2 times
Selected Answer: C
DataSync can be used to migrate data between on-premises Windows file servers and Amazon FSx for Windows File Server with its compatibility
for Windows file systems.
The laboratory needs to migrate a large amount of data (30 TB) within a relatively short timeframe (5 days) and limit the impact on other
departments' network traffic. Therefore, AWS DataSync can meet these requirements by providing fast and efficient data transfer with network
throttling capability to control bandwidth usage.
upvoted 4 times
https://aws.amazon.com/datasync/
upvoted 2 times
Question #302 Topic 1
A company wants to create a mobile app that allows users to stream slow-motion video clips on their mobile devices. Currently, the app captures
video clips and uploads the video clips in raw format into an Amazon S3 bucket. The app retrieves these video clips directly from the S3 bucket.
Users are experiencing issues with buffering and playback on mobile devices. The company wants to implement solutions to maximize the
B. Use AWS DataSync to replicate the video files across AWS Regions in other S3 buckets.
C. Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
D. Deploy an Auto Scaling group of Amazon EC2 instances in Local Zones for content delivery and caching.
E. Deploy an Auto Scaling group of Amazon EC2 instances to convert the video files to more appropriate formats.
Correct Answer: A
Selected Answer: C
A&C is correct
upvoted 3 times
A and C
upvoted 2 times
Selected Answer: A
Selected Answer: A
Selected Answer: A
Selected Answer: C
Correct answer: AC
upvoted 2 times
Selected Answer: A
Selected Answer: C
a and c
upvoted 2 times
Selected Answer: C
A company is launching a new application deployed on an Amazon Elastic Container Service (Amazon ECS) cluster and is using the Fargate
launch type for ECS tasks. The company is monitoring CPU and memory usage because it is expecting high traffic to the application upon its
launch. However, the company wants to reduce costs when utilization decreases.
A. Use Amazon EC2 Auto Scaling to scale at certain periods based on previous traffic patterns.
B. Use an AWS Lambda function to scale Amazon ECS based on metric breaches that trigger an Amazon CloudWatch alarm.
C. Use Amazon EC2 Auto Scaling with simple scaling policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
D. Use AWS Application Auto Scaling with target tracking policies to scale when ECS metric breaches trigger an Amazon CloudWatch alarm.
Correct Answer: D
Selected Answer: D
https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
upvoted 3 times
This is running on Fargate, so EC2 scaling (A and C) is out. Lambda (B) is too complex.
upvoted 3 times
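For anyone who wants to see what option D amounts to, here is a minimal boto3 sketch (cluster name, service name, capacities, and the 70% CPU target are hypothetical). Application Auto Scaling creates and manages the CloudWatch alarms for a target tracking policy itself:
```python
import boto3

aas = boto3.client("application-autoscaling")

# Register the Fargate service's desired count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Target tracking: keep average CPU around 70%; scale-out/scale-in alarms are managed for you.
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/prod-cluster/web-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 120,
        "ScaleOutCooldown": 60,
    },
)
```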
Selected Answer: D
should be D
upvoted 1 times
Selected Answer: D
https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html
upvoted 4 times
Selected Answer: D
Answer is D
upvoted 2 times
A company recently created a disaster recovery site in a different AWS Region. The company needs to transfer large amounts of data back and
forth between NFS file systems in the two Regions on a periodic basis.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer: A
Selected Answer: A
AWS DataSync is a fully managed data transfer service that simplifies moving large amounts of data between on-premises storage systems and
AWS services. It can also transfer data between different AWS services, including different AWS Regions. DataSync provides a simple, scalable, and
automated solution to transfer data, and it minimizes the operational overhead because it is fully managed by AWS.
upvoted 16 times
Selected Answer: A
Selected Answer: A
Selected Answer: A
• AWS DataSync is a data transfer service optimized for moving large amounts of data between NFS file systems. It can automatically copy files and
metadata between your NFS file systems in different AWS Regions.
• DataSync requires minimal setup and management. You deploy a source and destination agent, provide the source and destination locations, and
DataSync handles the actual data transfer efficiently in the background.
• DataSync can schedule and monitor data transfers to keep source and destination in sync with minimal overhead. It resumes interrupted transfers
and validates data integrity.
• DataSync optimizes data transfer performance across AWS's network infrastructure. It can achieve high throughput with minimal impact to your
operations.
upvoted 2 times
A only
upvoted 1 times
Aaaaaa
upvoted 1 times
A company is designing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to use SMB clients to access the data. The solution must be fully managed.
Which solution will meet these requirements?
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the application server to the file share.
C. Create an Amazon FSx for Windows File Server file system. Attach the file system to the origin server. Connect the application server to the
file system.
D. Create an Amazon S3 bucket. Assign an IAM role to the application to grant access to the S3 bucket. Mount the S3 bucket to the
application server.
Correct Answer: C
Selected Answer: C
Selected Answer: C
• Amazon FSx for Windows File Server provides a fully managed native Windows file system that can be accessed using the industry-standard SMB
protocol. This allows Windows clients like the gaming application to directly access file data.
• FSx for Windows File Server handles time-consuming file system administration tasks like provisioning, setup, maintenance, file share
management, backups, security, and software patching - reducing operational overhead.
• FSx for Windows File Server supports high file system throughput, IOPS, and consistent low latencies required for performance-sensitive
workloads. This makes it suitable for a gaming application.
• The file system can be directly attached to EC2 instances, providing a performant shared storage solution for the gaming servers.
upvoted 4 times
Selected Answer: C
Selected Answer: C
Selected Answer: C
AWS FSx for Windows File Server is a fully managed native Microsoft Windows file system that is accessible through the SMB protocol. It provides
features such as file system backups, integrated with Amazon S3, and Active Directory integration for user authentication and access control. This
solution allows for the use of SMB clients to access the data and is fully managed, eliminating the need for the company to manage the underlying
infrastructure.
upvoted 2 times
C for me
upvoted 1 times
Question #306 Topic 1
A company wants to run an in-memory database for a latency-sensitive application that runs on Amazon EC2 instances. The application
processes more than 100,000 transactions each minute and requires high network throughput. A solutions architect needs to provide a cost-effective solution that meets these requirements.
A. Launch all EC2 instances in the same Availability Zone within the same AWS Region. Specify a placement group with cluster strategy when launching EC2 instances.
B. Launch all EC2 instances in different Availability Zones within the same AWS Region. Specify a placement group with partition strategy when launching EC2 instances.
C. Deploy an Auto Scaling group to launch EC2 instances in different Availability Zones based on a network utilization target.
D. Deploy an Auto Scaling group with a step scaling policy to launch EC2 instances in different Availability Zones.
Correct Answer: A
Selected Answer: A
Reasons:
• Launching instances within a single AZ and using a cluster placement group provides the lowest network latency and highest bandwidth between
instances. This maximizes performance for an in-memory database and high-throughput application.
• Communications between instances in the same AZ and placement group are free, minimizing data transfer charges. Inter-AZ and public IP traffic
can incur charges.
• A cluster placement group enables the instances to be placed close together within the AZ, allowing the high network throughput required.
Partition groups span AZs, reducing bandwidth.
• Auto Scaling across zones could launch instances in AZs that increase data transfer charges. It may reduce network throughput, impacting
performance.
upvoted 18 times
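A minimal boto3 sketch of option A, assuming a hypothetical AMI, instance type, and group name:
```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement group: packs instances close together in one AZ for
# low latency and high throughput between nodes.
ec2.create_placement_group(GroupName="in-memory-db-pg", Strategy="cluster")

# Launch all instances into the same AZ and the cluster placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="r6i.4xlarge",        # memory-optimized example
    MinCount=4,
    MaxCount=4,
    Placement={
        "GroupName": "in-memory-db-pg",
        "AvailabilityZone": "us-east-1a",
    },
)
```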
Selected Answer: A
Apart from the fact that BCD distribute the instances across AZ which is bad for inter-node network latency, I think the following article is really
useful in understanding A:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 1 times
Selected Answer: A
Cluster placement group packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency
network performance.
upvoted 4 times
Selected Answer: A
Launch all EC2 instances in the same Availability Zone within the same AWS Region. Specify a placement group with cluster strategy when
launching EC2 instances
upvoted 1 times
Selected Answer: A
Cluster placement groups have low latency if the instances are in the same AZ and same Region, so the answer is "A".
upvoted 2 times
As all the Auto Scaling nodes would also be in the same Availability Zone (as per placement groups with cluster mode), this would provide the low-
latency network performance.
Reference is below:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 3 times
Selected Answer: A
Selected Answer: A
A placement group is a logical grouping of instances within a single Availability Zone, and it provides low-latency network connectivity between
instances. By launching all EC2 instances in the same Availability Zone and specifying a placement group with cluster strategy, the application can
take advantage of the high network throughput and low latency network connectivity that placement groups provide.
upvoted 1 times
Selected Answer: A
Cluster placement groups improve throughput between the instances, which means fewer EC2 instances would be needed, thus reducing costs.
upvoted 1 times
A company that primarily runs its application servers on premises has decided to migrate to AWS. The company wants to minimize its need to
scale its Internet Small Computer Systems Interface (iSCSI) storage on premises. The company wants only its recently accessed data to remain
stored locally.
Which AWS solution should the company use to meet these requirements?
Correct Answer: D
Selected Answer: D
AWS Storage Gateway Volume Gateway provides two configurations for connecting to iSCSI storage, namely, stored volumes and cached volumes.
The stored volume configuration stores the entire data set on-premises and asynchronously backs up the data to AWS. The cached volume
configuration stores recently accessed data on-premises, and the remaining data is stored in Amazon S3.
Since the company wants only its recently accessed data to remain stored locally, the cached volume configuration would be the most appropriate.
It allows the company to keep frequently accessed data on-premises and reduce the need for scaling its iSCSI storage while still providing access to
all data through the AWS cloud. This configuration also provides low-latency access to frequently accessed data and cost-effective off-site backups
for less frequently accessed data.
upvoted 40 times
Selected Answer: D
https://docs.amazonaws.cn/en_us/storagegateway/latest/vgw/StorageGatewayConcepts.html#storage-gateway-cached-concepts
upvoted 8 times
Selected Answer: D
Frequently accessed data = AWS Storage Gateway Volume Gateway cached volumes
upvoted 3 times
The best AWS solution to meet the requirements is to use AWS Storage Gateway cached volumes (option D).
Selected Answer: D
• Volume Gateway cached volumes store entire datasets on S3, while keeping a portion of recently accessed data on your local storage as a cache.
This meets the goal of minimizing on-premises storage needs while keeping hot data local.
• The cache provides low-latency access to your frequently accessed data, while long-term retention of the entire dataset is provided durable and
cost-effective in S3.
• You get virtually unlimited storage on S3 for your infrequently accessed data, while controlling the amount of local storage used for cache. This
simplifies on-premises storage scaling.
• Volume Gateway cached volumes support iSCSI connections from on-premises application servers, allowing a seamless migration experience.
Servers access local cache and S3 storage volumes as iSCSI LUNs.
upvoted 6 times
Selected Answer: D
I vote D
upvoted 1 times
Selected Answer: D
A company has multiple AWS accounts that use consolidated billing. The company runs several active high performance Amazon RDS for Oracle
On-Demand DB instances for 90 days. The company’s finance team has access to AWS Trusted Advisor in the consolidated billing account and all other AWS accounts.
The finance team needs to use the appropriate AWS account to access the Trusted Advisor check recommendations for RDS. The finance team
must review the appropriate Trusted Advisor check to reduce RDS costs.
Which combination of steps should the finance team take to meet these requirements? (Choose two.)
A. Use the Trusted Advisor recommendations from the account where the RDS instances are running.
B. Use the Trusted Advisor recommendations from the consolidated billing account to see all RDS instance checks at the same time.
C. Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization.
D. Review the Trusted Advisor check for Amazon RDS Idle DB Instances.
E. Review the Trusted Advisor check for Amazon Redshift Reserved Node Optimization.
Correct Answer: BD
Selected Answer: BD
B&D
https://aws.amazon.com/premiumsupport/knowledge-center/trusted-advisor-cost-optimization/
upvoted 18 times
Selected Answer: BC
The answer is either BC or BD, depending on how you interpret "The company runs several active... instances for 90 days."
D: it assumes the instances will only run for 90 days, so reserved instances can't be the answer, since it requires 1-3 years utilization.
C: it assumes there is no idle instances since they've been active for the last 90 days.
upvoted 1 times
Selected Answer: BD
"you can reserve a DB instance for a one- or three-year term". We only have data for 90 days. I feel it too risky to commit for 1/3 year(s) without
information on future usage. If we knew that we expected the same usage pattern for the next 1,2,3 years, Id agree with C.
upvoted 3 times
Selected Answer: BC
B) Use the Trusted Advisor recommendations from the consolidated billing account to see all RDS instance checks at the same time. This option
allows the finance team to see all RDS instance checks across all AWS accounts in one place. Since the company uses consolidated billing, this
account will have access to all of the AWS accounts' Trusted Advisor recommendations.
C) Review the Trusted Advisor check for Amazon RDS Reserved Instance Optimization. This check can help identify cost savings opportunities for
RDS by identifying instances that can be covered by Reserved Instances. This can result in significant savings on RDS costs.
upvoted 1 times
Selected Answer: BC
https://aws.amazon.com/premiumsupport/knowledge-center/trusted-advisor-cost-optimization/
upvoted 1 times
Insights: The company runs several active high performance Amazon RDS for Oracle On-Demand DB instances for 90 days
So it's clear that this company needs to check the configuration of any Amazon Relational Database Service (Amazon RDS) database (DB)
instances that appear to be idle.
upvoted 1 times
AWS Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS
environment. (...) Recommendations are based on the previous calendar month's hour-by-hour usage aggregated across all consolidated billing
accounts.
https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-reservation-models/aws-trusted-advisor.html
Amazon EC2 Reserved Instance Optimization: An important part of using AWS involves balancing your Reserved Instance (RI) purchase against your
On-Demand Instance usage. This check provides recommendations on which RIs will help reduce the costs incurred from using On-Demand
Instances. We create these recommendations by analyzing your On-Demand usage for the past 30 days and then categorizing the usage into
eligible categories for reservations.
https://docs.aws.amazon.com/awssupport/latest/user/cost-optimization-checks.html#amazon-ec2-reserved-instances-optimization
upvoted 1 times
Selected Answer: BC
If you're choosing D for the idle instances, note that the Amazon RDS Reserved Instance Optimization Trusted Advisor check includes recommendations related
to underutilized and idle RDS instances. It helps identify instances that are not fully utilized and provides recommendations on how to optimize
costs, such as resizing or terminating unused instances, or purchasing Reserved Instances to match usage patterns more efficiently.
upvoted 1 times
Selected Answer: BC
Reserved Instances can be shared across accounts, and that is the reason why we need to check the consolidated bill.
upvoted 2 times
Selected Answer: BC
BC
we don't want to check Idle instances because the instances were active for last 90 days.
Idle means it was inactive for at least 7 days.
upvoted 3 times
Selected Answer: BD
BD
we don't want to check Idle instances because the instances were active for last 90 days.
Idle means it was inactive for at least 7 days.
upvoted 1 times
Selected Answer: BC
Reserved Instance Optimization "checks your usage of RDS and provides recommendations on purchase of Reserved Instances to help reduce costs
incurred from using RDS On-Demand." In other words, it is not about optimizing reserved instances (as many here think), it is about optimizing
on-demand instances by converting them to reserved ones.
"Idle DB Instances" check is about databases that have "not had a connection for a prolonged period of time", which we know is not the case here.
upvoted 4 times
Selected Answer: AD
Why does no one consider AD? C is not the option, since Reserved Instances are meant for long-term usage while it is only 90 days here. But B uses
the consolidated billing account, which gives a high-level billing overview rather than anything specific to the running RDS instances. Shouldn't we
use Trusted Advisor only in the accounts where RDS is running?
upvoted 1 times
Can someone explain why so many people say it’s D and not C? It’s very clear that 90 days means reserved instances.
upvoted 1 times
A solutions architect needs to optimize storage costs. The solutions architect must identify any Amazon S3 buckets that are no longer being accessed or are rarely used.
Which solution will accomplish this goal with the LEAST operational overhead?
A. Analyze bucket access patterns by using the S3 Storage Lens dashboard for advanced activity metrics.
B. Analyze bucket access patterns by using the S3 dashboard in the AWS Management Console.
C. Turn on the Amazon CloudWatch BucketSizeBytes metric for buckets. Analyze bucket access patterns by using the metrics data with
Amazon Athena.
D. Turn on AWS CloudTrail for S3 object monitoring. Analyze bucket access patterns by using CloudTrail logs that are integrated with Amazon
CloudWatch Logs.
Correct Answer: A
Selected Answer: A
S3 Storage Lens is a fully managed S3 storage analytics solution that provides a comprehensive view of object storage usage, activity trends, and
recommendations to optimize costs. Storage Lens allows you to analyze object access patterns across all of your S3 buckets and generate detailed
metrics and reports.
upvoted 22 times
Selected Answer: A
S3 Storage Lens includes an interactive dashboard which you can find in the S3 console. The dashboard gives you the ability to perform filtering
and drill-down into your metrics to really understand how your storage is being used. The metrics are organized into categories like data
protection and cost efficiency, to allow you to easily find relevant metrics.
upvoted 1 times
Selected Answer: A
A
S3 Storage Lens is the first cloud storage analytics solution to provide a single view of object storage usage and activity across hundreds, or even
thousands, of accounts in an organization, with drill-downs to generate insights at multiple aggregation levels.
upvoted 2 times
Selected Answer: A
Selected Answer: D
Option A misses turning on monitoring. Server access logging can also help you learn about your customer base and understand your Amazon S3 bill.
By default, Amazon S3 doesn't collect server access logs; when you enable logging, Amazon S3 delivers access logs for a source bucket to a target
bucket that you choose.
I could not find S3 Storage Lens examples online that show using Storage Lens to identify idle S3 buckets. Instead I found examples using S3 access logging. Hmm.
upvoted 3 times
Selected Answer: A
S3 Storage Lens is a cloud-storage analytics feature that provides you with 29+ usage and activity metrics, including object count, size, age, and
access patterns. This data can help you understand how your data is being used and identify areas where you can optimize your storage costs.
The S3 Storage Lens dashboard provides an interactive view of your storage usage and activity trends. This makes it easy to identify buckets that
are no longer being accessed or are rarely accessed.
The S3 Storage Lens dashboard is a fully managed service, so there is no need to set up or manage any additional infrastructure.
upvoted 1 times
Selected Answer: A
The S3 Storage Lens dashboard provides visibility into storage metrics and activity patterns to help optimize storage costs. It shows metrics like
objects added, objects deleted, storage consumed, and requests. It can filter by bucket, prefix, and tag to analyze specific subsets of data
upvoted 3 times
Selected Answer: A
https://aws.amazon.com/blogs/aws/s3-storage-lens/
upvoted 4 times
Selected Answer: A
S3 Storage Lens provides a dashboard with advanced activity metrics that enable the identification of infrequently accessed and unused buckets.
This can help a solutions architect optimize storage costs without incurring additional operational overhead.
upvoted 3 times
Selected Answer: A
A company sells datasets to customers who do research in artificial intelligence and machine learning (AI/ML). The datasets are large, formatted
files that are stored in an Amazon S3 bucket in the us-east-1 Region. The company hosts a web application that the customers use to purchase
access to a given dataset. The web application is deployed on multiple Amazon EC2 instances behind an Application Load Balancer. After a
purchase is made, customers receive an S3 signed URL that allows access to the files.
The customers are distributed across North America and Europe. The company wants to reduce the cost that is associated with data transfers and to maintain or improve performance.
A. Configure S3 Transfer Acceleration on the existing S3 bucket. Direct customer requests to the S3 Transfer Acceleration endpoint. Continue to use S3 signed URLs for access control.
B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch to CloudFront signed URLs for access control.
C. Set up a second S3 bucket in the eu-central-1 Region with S3 Cross-Region Replication between the buckets. Direct customer requests to
the closest Region. Continue to use S3 signed URLs for access control.
D. Modify the web application to enable streaming of the datasets to end users. Configure the web application to read the data from the
Correct Answer: B
Selected Answer: B
To reduce the cost associated with data transfers and maintain or improve performance, a solutions architect should use Amazon CloudFront, a
content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high
transfer speeds.
Deploying a CloudFront distribution with the existing S3 bucket as the origin will allow the company to serve the data to customers from edge
locations that are closer to them, reducing data transfer costs and improving performance.
Directing customer requests to the CloudFront URL and switching to CloudFront signed URLs for access control will enable customers to access the
data securely and efficiently.
upvoted 10 times
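For completeness, a hedged sketch of generating a CloudFront signed URL in Python with botocore's CloudFrontSigner (the key-pair ID, private key file, and distribution domain are hypothetical placeholders):
```python
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message: bytes) -> bytes:
    # Sign with the private key that matches the public key registered with CloudFront.
    # The key file path is hypothetical.
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2ABC123EXAMPLE", rsa_signer)  # hypothetical key-pair ID

signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/datasets/genomics-2024.parquet",
    date_less_than=datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=24),
)
print(signed_url)
```
The matching public key has to be registered with the distribution (in a key group) so CloudFront can verify the signature before serving the object from the S3 origin.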
Selected Answer: B
A: Speeds uploads
C: Increases the cost rather than reducing it
D: Stopped reading after "Modify the web application..."
upvoted 7 times
Technically both option B and C will work. But because cost is a factor then Amazon CloudFront should be the preferred option.
upvoted 1 times
B.
1. Amazon CloudFront caches content at edge locations -- reducing the need for frequent data transfer from S3 bucket -- thus significantly
lowering data transfer costs (as compared to directly serving data from S3 bucket to customers in different regions)
2. CloudFront delivers content to users from the nearest edge location -- minimizing latency -- improves performance for customers
A - focus on accelerating uploads to S3 which may not necessarily improve the performance needed for serving datasets to customers
C - helps with redundancy and data availability but does not necessarily offer cost savings for data transfer.
D - complex to implement, does not address data transfer cost
upvoted 5 times
Selected Answer: B
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
upvoted 3 times
Selected Answer: B
B. Deploy an Amazon CloudFront distribution with the existing S3 bucket as the origin. Direct customer requests to the CloudFront URL. Switch to
CloudFront signed URLs for access control.
https://www.examtopics.com/discussions/amazon/view/68990-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #311 Topic 1
A company is using AWS to design a web application that will process insurance quotes. Users will request quotes from the application. Quotes
must be separated by quote type, must be responded to within 24 hours, and must not get lost. The solution must maximize operational efficiency and minimize maintenance.
A. Create multiple Amazon Kinesis data streams based on the quote type. Configure the web application to send messages to the proper data
stream. Configure each backend group of application servers to use the Kinesis Client Library (KCL) to pool messages from its own data
stream.
B. Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type. Subscribe the
Lambda function to its associated SNS topic. Configure the application to publish requests for quotes to the appropriate SNS topic.
C. Create a single Amazon Simple Notification Service (Amazon SNS) topic. Subscribe Amazon Simple Queue Service (Amazon SQS) queues
to the SNS topic. Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type. Configure each backend application server to use its own SQS queue.
D. Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon OpenSearch
Service cluster. Configure the application to send messages to the proper delivery stream. Configure each backend group of application
servers to search for the messages from OpenSearch Service and process them accordingly.
Correct Answer: C
Selected Answer: C
Quote types need to be separated: SNS message filtering can be used to publish messages to the appropriate SQS queue based on the quote type,
ensuring that quotes are separated by type.
Quotes must be responded to within 24 hours and must not get lost: SQS provides reliable and scalable queuing for messages, ensuring that
quotes will not get lost and can be processed in a timely manner. Additionally, each backend application server can use its own SQS queue,
ensuring that quotes are processed efficiently without any delay.
Operational efficiency and minimizing maintenance: Using a single SNS topic and multiple SQS queues is a scalable and cost-effective approach,
which can help to maximize operational efficiency and minimize maintenance. Additionally, SNS and SQS are fully managed services, which means
that the company will not need to worry about maintenance tasks such as software updates, hardware upgrades, or scaling the infrastructure.
upvoted 17 times
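A minimal boto3 sketch of the fan-out plus filter-policy setup described above (topic and queue ARNs and the quote_type attribute name are hypothetical):
```python
import json
import boto3

sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:quote-requests"    # hypothetical
AUTO_QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:auto-quotes"  # hypothetical

# Subscribe the "auto" quote queue to the single topic with a filter policy,
# so only messages whose quote_type attribute is "auto" are delivered to it.
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint=AUTO_QUEUE_ARN,
    Attributes={"FilterPolicy": json.dumps({"quote_type": ["auto"]})},
)

# The web application publishes every quote request to the same topic and
# tags it with its type; SNS routes it to the matching SQS queue.
sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"customer_id": "c-123", "details": "..."}),
    MessageAttributes={
        "quote_type": {"DataType": "String", "StringValue": "auto"}
    },
)
```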
Selected Answer: C
Explanation:
Amazon SNS (Simple Notification Service) allows for the creation of a single topic to which multiple subscribers can be attached. In this scenario,
each quote type can be considered a subscriber. Amazon SQS (Simple Queue Service) queues can be subscribed to the SNS topic, and SNS
message filtering can be used to direct messages to the appropriate SQS queue based on the quote type. This setup ensures that quotes are
separated by quote type and that they are not lost. Each backend application server can then poll its own SQS queue to retrieve and process
messages. This architecture is efficient, scalable, and requires minimal maintenance, as it leverages managed AWS services without the need for
complex custom code or infrastructure setup.
upvoted 3 times
Selected Answer: C
I originally went for D due to the searching requirements, but OpenSearch is for analytics and logs and has nothing to do with data coming from streams
as in this question.
upvoted 1 times
Selected Answer: C
These wrong answers from ExamTopics are getting me so frustrated. Which one is the correct answer then?
upvoted 5 times
Selected Answer: C
This is the SNS fan-out technique, where you will have one SNS topic fanning out to many SQS queues
https://docs.aws.amazon.com/sns/latest/dg/sns-sqs-as-subscriber.html
upvoted 6 times
https://aws.amazon.com/getting-started/hands-on/filter-messages-published-to-topics/
upvoted 7 times
Question #312 Topic 1
A company has an application that runs on several Amazon EC2 instances. Each EC2 instance has multiple Amazon Elastic Block Store (Amazon
EBS) data volumes attached to it. The application’s EC2 instance configuration and data need to be backed up nightly. The application also needs to be recoverable in a different AWS Region in the event of a disaster.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Write an AWS Lambda function that schedules nightly snapshots of the application’s EBS volumes and copies the snapshots to a different
Region.
B. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EC2
instances as resources.
C. Create a backup plan by using AWS Backup to perform nightly backups. Copy the backups to another Region. Add the application’s EBS
volumes as resources.
D. Write an AWS Lambda function that schedules nightly snapshots of the application's EBS volumes and copies the snapshots to a different
Availability Zone.
Correct Answer: B
B is the answer. The requirement is "The application’s EC2 instance configuration and data need to be backed up nightly", so we need to "add the
application’s EC2 instances as resources". This option will back up both the EC2 configuration and the data.
upvoted 19 times
Selected Answer: B
https://aws.amazon.com/vi/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/
When you back up an EC2 instance, AWS Backup will protect all EBS volumes attached to the instance, and it will attach them to an AMI that stores
all parameters from the original EC2 instance except for two
upvoted 15 times
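To make B concrete, a hedged boto3 sketch of a nightly plan with a cross-Region copy and an EC2 instance assignment (vault names, Regions, schedule, and ARNs are hypothetical):
```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Nightly backup rule with a copy action to a vault in another Region.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "nightly-ec2-plan",
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "primary-vault",
                "ScheduleExpression": "cron(0 3 * * ? *)",  # 03:00 UTC every night
                "Lifecycle": {"DeleteAfterDays": 35},
                "CopyActions": [
                    {
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault"
                        )
                    }
                ],
            }
        ],
    }
)

# Assign the application's EC2 instances (not just the EBS volumes) so the
# instance configuration is captured along with the data.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "app-ec2-instances",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": [
            "arn:aws:ec2:us-east-1:111122223333:instance/i-0abc1234example"
        ],
    },
)
```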
Selected Answer: B
Question says: "The application’s EC2 instance configuration and data need to be backed up", thus C is not correct and B is.
upvoted 2 times
As part of configuring a backup plan you need to enable (opt-in) resource types that will be protected by the backup plan. For this case EC2.
https://aws.amazon.com/getting-started/hands-on/amazon-ec2-backup-and-restore-using-aws-
backup/#:~:text=the%20services%20used%20with-,AWS%20Backup,-a.%20In%20the%20navigation
upvoted 1 times
Selected Answer: B
AWS KB states if you select the EC2 instance , associated EBS's will be auto covered .
https://aws.amazon.com/blogs/aws/aws-backup-ec2-instances-efs-single-file-restore-and-cross-region-backup/
upvoted 2 times
Selected Answer: B
B is the most appropriate solution because it allows you to create a backup plan to automate the backup process of EC2 instances and EBS
volumes, and copy backups to another region. Additionally, you can add the application's EC2 instances as resources to ensure their configuration
and data are backed up nightly.
A and D involve writing custom Lambda functions to automate the snapshot process, which can be complex and require more maintenance effort.
Moreover, these options do not provide an integrated solution for managing backups and recovery, and copying snapshots to another region.
Option C involves creating a backup plan with AWS Backup to perform backups for EBS volumes only. This approach would not back up the EC2
instances and their configuration
upvoted 2 times
Selected Answer: C
The application’s EC2 instance configuration and data are stored on EBS volumes, right?
upvoted 2 times
Selected Answer: B
Use AWS Backup to create a backup plan that includes the EC2 instances, Amazon EBS snapshots, and any other resources needed for recovery.
The backup plan can be configured to run on a nightly schedule.
upvoted 1 times
The application’s EC2 instance configuration and data need to be backed up nightly >> B
upvoted 1 times
A company is building a mobile app on AWS. The company wants to expand its reach to millions of users. The company needs to build a platform
so that authorized users can watch the company’s content on their mobile devices.
A. Publish content to a public Amazon S3 bucket. Use AWS Key Management Service (AWS KMS) keys to stream content.
B. Set up IPsec VPN between the mobile app and the AWS environment to stream content.
D. Set up AWS Client VPN between the mobile app and the AWS environment to stream content.
Correct Answer: C
Selected Answer: C
Selected Answer: C
Amazon CloudFront is a content delivery network (CDN) that securely delivers data, videos, applications, and APIs to customers globally with low
latency and high transfer speeds. CloudFront supports signed URLs that provide authorized access to your content. This feature allows the
company to control who can access their content and for how long, providing a secure and scalable solution for millions of users.
upvoted 6 times
Selected Answer: C
Selected Answer: C
Selected Answer: C
C is correct.
upvoted 2 times
A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the
database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type
Correct Answer: B
Selected Answer: B
With Aurora Serverless for MySQL, you don't need to select a particular instance type, as the service automatically scales up or down based on the
application's needs.
upvoted 8 times
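As a sketch of what "no instance type to pick" looks like in practice with Aurora Serverless v2 (identifiers, engine version, and the ACU range are hypothetical assumptions):
```python
import boto3

rds = boto3.client("rds")

# Cluster with an Aurora Serverless v2 capacity range (in ACUs).
rds.create_db_cluster(
    DBClusterIdentifier="sales-mysql",
    Engine="aurora-mysql",
    EngineVersion="8.0.mysql_aurora.3.04.0",   # assumed version
    MasterUsername="admin",
    ManageMasterUserPassword=True,             # let RDS manage the password in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# Instances in a Serverless v2 cluster use the special db.serverless class,
# so no fixed instance type has to be chosen.
rds.create_db_instance(
    DBInstanceIdentifier="sales-mysql-writer",
    DBClusterIdentifier="sales-mysql",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)
```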
Selected Answer: B
Selected Answer: B
without selecting a particular instance type = Amazon Aurora Serverless for MySQL
upvoted 1 times
Selected Answer: B
Selected Answer: B
Bbbbbbb
upvoted 1 times
Selected Answer: B
https://aws.amazon.com/rds/aurora/serverless/
upvoted 1 times
LuckyAro 1 year, 7 months ago
Selected Answer: B
Amazon Aurora Serverless for MySQL is a fully managed, auto-scaling relational database service that scales up or down automatically based on
the application demand. This service provides all the capabilities of Amazon Aurora, such as high availability, durability, and security, without
requiring the customer to provision any database instances.
With Amazon Aurora Serverless for MySQL, the sales team can enjoy minimal downtime since the database is designed to automatically scale to
accommodate the increased traffic. Additionally, the service allows the customer to pay only for the capacity used, making it cost-effective for
infrequent access patterns.
Amazon RDS for MySQL could also be an option, but it requires the customer to select an instance type, and the database administrator would
need to monitor and adjust the instance size manually to accommodate the increasing traffic.
upvoted 2 times
A company experienced a breach that affected several applications in its on-premises data center. The attacker took advantage of vulnerabilities
in the custom applications that were running on the servers. The company is now migrating its applications to run on Amazon EC2 instances. The
company wants to implement a solution that actively scans for vulnerabilities on the EC2 instances and sends a report that details the findings.
A. Deploy AWS Shield to scan the EC2 instances for vulnerabilities. Create an AWS Lambda function to log any findings to AWS CloudTrail.
B. Deploy Amazon Macie and AWS Lambda functions to scan the EC2 instances for vulnerabilities. Log any findings to AWS CloudTrail.
C. Turn on Amazon GuardDuty. Deploy the GuardDuty agents to the EC2 instances. Configure an AWS Lambda function to automate the generation and distribution of reports.
D. Turn on Amazon Inspector. Deploy the Amazon Inspector agent to the EC2 instances. Configure an AWS Lambda function to automate the generation and distribution of reports.
Correct Answer: D
Selected Answer: D
Selected Answer: D
Selected Answer: D
Amazon Inspector:
• Performs active vulnerability scans of EC2 instances. It looks for software vulnerabilities, unintended network accessibility, and other security
issues.
• Requires installing an agent on EC2 instances to perform scans. The agent must be deployed to each instance.
• Provides scheduled scan reports detailing any findings of security risks or vulnerabilities. These reports can be used to patch or remediate issues.
• Is best suited for proactively detecting security weaknesses and misconfigurations in your AWS environment.
upvoted 3 times
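A rough sketch of the Lambda side of option D, assuming Inspector (the Inspector2 API) is already enabled and an SNS topic exists to distribute the report; the topic ARN and filter are hypothetical:
```python
import boto3

inspector = boto3.client("inspector2")
sns = boto3.client("sns")

REPORT_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:vuln-reports"  # hypothetical


def handler(event, context):
    # Pull recent findings for EC2 instances.
    resp = inspector.list_findings(
        filterCriteria={
            "resourceType": [{"comparison": "EQUALS", "value": "AWS_EC2_INSTANCE"}]
        },
        maxResults=50,
    )

    lines = [
        f'{finding["severity"]}: {finding["title"]} ({finding["resources"][0]["id"]})'
        for finding in resp.get("findings", [])
    ]
    report = "\n".join(lines) or "No open findings."

    # Distribute the report to subscribers of the SNS topic.
    sns.publish(
        TopicArn=REPORT_TOPIC_ARN,
        Subject="EC2 vulnerability findings",
        Message=report,
    )
```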
Amazon Inspector is a vulnerability scanning tool that you can use to identify potential security issues within your EC2 instances.
It is an automated security assessment service that checks the network exposure of your EC2 instances and the latest security state of applications
running on them. It can automatically discover your AWS workloads and continuously scan them for open loopholes and vulnerabilities.
upvoted 1 times
Amazon Inspector is a vulnerability scanning tool that you can use to identify potential security issues within your EC2 instances. GuardDuty
continuously monitors your entire AWS account using CloudTrail, VPC Flow Logs, and DNS logs as input.
upvoted 1 times
Selected Answer: C
:) C is the correct
https://cloudkatha.com/amazon-guardduty-vs-inspector-which-one-should-you-use/
upvoted 1 times
https://cloudkatha.com/amazon-guardduty-vs-inspector-which-one-should-you-use/
upvoted 1 times
Selected Answer: D
Amazon Inspector is a security assessment service that helps to identify security vulnerabilities and compliance issues in applications deployed on
Amazon EC2 instances. It can be used to assess the security of applications that are deployed on Amazon EC2 instances, including those that are
custom-built.
To use Amazon Inspector, the Amazon Inspector agent must be installed on the EC2 instances that need to be assessed. The agent collects data
about the instances and sends it to Amazon Inspector for analysis. Amazon Inspector then generates a report that details any security
vulnerabilities that were found and provides guidance on how to remediate them.
By configuring an AWS Lambda function, the company can automate the generation and distribution of reports that detail the findings. This means
that reports can be generated and distributed as soon as vulnerabilities are detected, allowing the company to take action quickly.
upvoted 1 times
Amazon Inspector
upvoted 2 times
Selected Answer: D
I think D
upvoted 1 times
Selected Answer: D
Ddddddd
upvoted 1 times
Question #316 Topic 1
A company uses an Amazon EC2 instance to run a script to poll for and process messages in an Amazon Simple Queue Service (Amazon SQS)
queue. The company wants to reduce operational costs while maintaining its ability to process a growing number of messages that are added to
the queue.
B. Use Amazon EventBridge to turn off the EC2 instance when the instance is underutilized.
C. Migrate the script on the EC2 instance to an AWS Lambda function with the appropriate runtime.
D. Use AWS Systems Manager Run Command to run the script on demand.
Correct Answer: C
Selected Answer: C
By migrating the script to AWS Lambda, the company can take advantage of the auto-scaling feature of the service. AWS Lambda will automatically
scale resources to match the size of the workload. This means that the company will not have to worry about provisioning or managing instances
as the number of messages increases, resulting in lower operational costs
upvoted 12 times
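Once the SQS queue is wired up as a Lambda event source, the polling script essentially collapses to a handler like this (the processing logic is a placeholder):
```python
import json


def handler(event, context):
    # Lambda's SQS event source delivers a batch of messages; with an SQS
    # trigger there is no EC2 instance to run or to poll from.
    for record in event["Records"]:
        message = json.loads(record["body"])
        process(message)   # placeholder for the existing script's logic


def process(message: dict) -> None:
    print(f"processing message: {message}")
```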
Selected Answer: C
Selected Answer: C
Lambda costs money only when it's processing, not when idle
upvoted 3 times
Selected Answer: C
AWS Lambda is a serverless compute service that allows you to run your code without provisioning or managing servers. By migrating the script to
an AWS Lambda function, you can eliminate the need to maintain an EC2 instance, reducing operational costs. Additionally, Lambda automatically
scales to handle the increasing number of messages in the SQS queue.
upvoted 1 times
Selected Answer: C
It Should be C.
Lambda allows you to execute code without provisioning or managing servers, so it is ideal for running scripts that poll for and process messages
in an Amazon SQS queue. The scaling of the Lambda function is automatic, and you only pay for the actual time it takes to process the messages.
upvoted 3 times
Selected Answer: D
A company uses a legacy application to produce data in CSV format. The legacy application stores the output data in Amazon S3. The company is
deploying a new commercial off-the-shelf (COTS) application that can perform complex SQL queries to analyze data that is stored in Amazon
Redshift and Amazon S3 only. However, the COTS application cannot process the .csv files that the legacy application produces.
The company cannot update the legacy application to produce data in another format. The company needs to implement a solution so that the
COTS application can use the data that the legacy application produces.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS Glue extract, transform, and load (ETL) job that runs on a schedule. Configure the ETL job to process the .csv files and store the processed data in an Amazon Redshift table.
B. Develop a Python script that runs on Amazon EC2 instances to convert the .csv files to .sql files. Invoke the Python script on a cron schedule.
C. Create an AWS Lambda function and an Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda
function to perform an extract, transform, and load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.
D. Use Amazon EventBridge to launch an Amazon EMR cluster on a weekly schedule. Configure the EMR cluster to perform an extract,
transform, and load (ETL) job to process the .csv files and store the processed data in an Amazon Redshift table.
Correct Answer: A
Selected Answer: A
I believe these kinds of questions are there to indoctrinate us into acknowledging how blessed we are to have managed services like AWS Glue
when you look at the other horrible and painful options
upvoted 15 times
Selected Answer: A
A, AWS Glue is a fully managed ETL service that can extract data from various sources, transform it into the required format, and load it into a
target data store. In this case, the ETL job can be configured to read the CSV files from Amazon S3, transform the data into a format that can be
loaded into Amazon Redshift, and load it into an Amazon Redshift table.
B requires the development of a custom script to convert the CSV files to SQL files, which could be time-consuming and introduce additional
operational overhead. C, while using serverless technology, requires the additional use of DynamoDB to store the processed data, which may not
be necessary if the data is only needed in Amazon Redshift. D, while an option, is not the most efficient solution as it requires the creation of an
EMR cluster, which can be costly and complex to manage.
upvoted 6 times
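A minimal Glue (PySpark) job sketch for option A, reading the legacy .csv files from S3 and loading them into Redshift; the bucket paths, Glue connection name, and target table are hypothetical:
```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the legacy application's .csv files from S3 (path is hypothetical).
csv_dyf = glueContext.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://legacy-app-output/csv/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Load the processed data into an Amazon Redshift table via a Glue connection
# (connection name, database, table, and temp dir are hypothetical).
glueContext.write_dynamic_frame.from_jdbc_conf(
    frame=csv_dyf,
    catalog_connection="redshift-connection",
    connection_options={"dbtable": "public.legacy_data", "database": "analytics"},
    redshift_tmp_dir="s3://glue-temp-bucket/redshift/",
)

job.commit()
```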
Selected Answer: A
A - Glue ETL is serverless and best suits the requirement, whose primary job is ETL
B - Usage of EC2 adds operational overhead and incurs costs
C - DynamoDB (NoSQL) does not suit the requirement, as the company is performing SQL queries
D - EMR adds operational overhead and incurs costs
upvoted 1 times
Selected Answer: A
Selected Answer: A
Selected Answer: A
Glue is server less and has less operational head than EMR so A.
upvoted 1 times
To meet the requirement with the least operational overhead, a serverless approach should be used. Among the options provided, option C
provides a serverless solution using AWS Lambda, S3, and DynamoDB. Therefore, the solution should be to create an AWS Lambda function and an
Amazon DynamoDB table. Use an S3 event to invoke the Lambda function. Configure the Lambda function to perform an extract, transform, and
load (ETL) job to process the .csv files and store the processed data in the DynamoDB table.
Option A is also a valid solution, but it may involve more operational overhead than Option C. With Option A, you would need to set up and
manage an AWS Glue job, which would require more setup time than creating an AWS Lambda function. Additionally, AWS Glue jobs have a
minimum execution time of 10 minutes, which may not be necessary or desirable for this use case. However, if the data processing is particularly
complex or requires a lot of data transformation, AWS Glue may be a more appropriate solution.
upvoted 1 times
Selected Answer: A
A would be the best solution as it involves the least operational overhead. With this solution, an AWS Glue ETL job is created to process the .csv
files and store the processed data directly in Amazon Redshift. This is a serverless approach that does not require any infrastructure to be
provisioned, configured, or maintained. AWS Glue provides a fully managed, pay-as-you-go ETL service that can be easily configured to process
data from S3 and load it into Amazon Redshift. This approach allows the legacy application to continue to produce data in the CSV format that it
currently uses, while providing the new COTS application with the ability to analyze the data using complex SQL queries.
upvoted 3 times
A company recently migrated its entire IT environment to the AWS Cloud. The company discovers that users are provisioning oversized Amazon
EC2 instances and modifying security group rules without using the appropriate change control process. A solutions architect must devise a
Which actions should the solutions architect take to meet these requirements? (Choose two.)
D. Enable AWS Config and create rules for auditing and compliance purposes.
Correct Answer: AD
Selected Answer: AD
A. Enable AWS CloudTrail and use it for auditing. CloudTrail provides event history of your AWS account activity, including actions taken through
the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs and APIs. By enabling CloudTrail, the company can track user
activity and changes to AWS resources, and monitor compliance with internal policies and external regulations.
D. Enable AWS Config and create rules for auditing and compliance purposes. AWS Config provides a detailed inventory of the AWS resources in
your account, and continuously records changes to the configurations of those resources. By creating rules in AWS Config, the company can
automate the evaluation of resource configurations against desired state, and receive alerts when configurations drift from compliance.
Options B, C, and E are not directly relevant to the requirement of tracking and auditing inventory and configuration changes.
upvoted 12 times
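As a rough sketch of the Config half of this answer, the boto3 call below creates an AWS managed rule that flags EC2 instances whose type is not on an approved list; the rule name and the approved instance types are assumptions for illustration. CloudTrail itself only needs a trail to be created, which is a one-time console or CLI step.

import json
import boto3

config = boto3.client("config")

# Mark any EC2 instance that is not one of the approved (right-sized) types as noncompliant.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-instance-types",      # hypothetical rule name
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "DESIRED_INSTANCE_TYPE",  # AWS managed rule
        },
        "InputParameters": json.dumps({"instanceType": "t3.medium,m5.large"}),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)

A similar managed rule can be added alongside it to evaluate security group configurations and catch the unauthorized rule changes.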
I am gonna go with C and D.
AWS CloudTrail is already enabled, so there is no need to enable it, and for the auditing we are going to use AWS Config (answer D).
Selected Answer: AD
Selected Answer: AD
Yes A and D
upvoted 1 times
A company has hundreds of Amazon EC2 Linux-based instances in the AWS Cloud. Systems administrators have used shared SSH keys to manage
the instances. After a recent audit, the company’s security team is mandating the removal of all shared keys. A solutions architect must design a
Which solution will meet this requirement with the LEAST amount of administrative overhead?
A. Use AWS Systems Manager Session Manager to connect to the EC2 instances.
B. Use AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand.
C. Allow shared SSH access to a set of bastion instances. Configure all other instances to allow only SSH access from the bastion instances.
D. Use an Amazon Cognito custom authorizer to authenticate users. Invoke an AWS Lambda function to generate a temporary SSH key.
Correct Answer: A
Answer is A
Using AWS Systems Manager Session Manager to connect to the EC2 instances is a secure option as it eliminates the need for inbound SSH ports
and removes the requirement to manage SSH keys manually. It also provides a complete audit trail of user activity. This solution requires no
additional software beyond the SSM Agent, which comes preinstalled on most AWS-provided AMIs.
upvoted 9 times
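For what it is worth, Session Manager access is usually a one-liner from the CLI (aws ssm start-session --target <instance-id>), and the same API is available from boto3, as sketched below with a placeholder instance ID.

import boto3

ssm = boto3.client("ssm")

# Opens a session without any inbound port, bastion host, or SSH key.
# Interactive use normally goes through the AWS CLI plus the Session Manager plugin,
# which handles the streaming connection returned here.
response = ssm.start_session(Target="i-0123456789abcdef0")   # placeholder instance ID
print(response["SessionId"])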
Selected Answer: A
A - Systems Manager Session Manager has EXACTLY that purpose, 'providing secure access to EC2 instances'
B - STS can generate temporary IAM credentials or access keys but NOT SSH keys
C - Does not 'remove all shared keys' as requested
D - Cognito is not meant for internal users, and the whole setup is complex
upvoted 5 times
Selected Answer: A
B - Querying is just a feature of Redshift but primarily it's a Data Warehouse - the question says nothing that historical data would have to be
stored or accessed or analyzed
upvoted 1 times
Session Manager provides secure and auditable node management without the need to open inbound ports, maintain bastion hosts, or manage
SSH keys.
upvoted 1 times
Selected Answer: B
STS can generate short-lived credentials that provide temporary access to the EC2 instances for administering them.
The credentials can be generated on-demand each time access is needed, eliminating the risks of using permanent shared SSH keys.
No infrastructure like bastion hosts needs to be maintained.
The on-premises administrators can use the familiar SSH tools with the temporary keys.
upvoted 1 times
Selected Answer: B
Using AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand is a secure and efficient way to provide access to the EC2
instances without the need for shared SSH keys. STS is a fully managed service that can be used to generate temporary security credentials,
allowing systems administrators to connect to the EC2 instances without having to share SSH keys. The temporary credentials can be generated on
demand, reducing the administrative overhead associated with managing SSH access
upvoted 1 times
AWS Systems Manager Session Manager provides secure shell access to EC2 instances without the need for SSH keys. It meets the security
requirement to remove shared SSH keys while minimizing administrative overhead.
upvoted 1 times
Session Manager is aimed at:
• Information Security experts who want to monitor and track managed node access and activity, close down inbound ports on managed nodes, or allow connections to managed nodes that don't have a public IP address.
• Administrators who want to grant and revoke access from a single location, and who want to provide one solution to users for Linux, macOS and Windows Server managed nodes.
• Users who want to connect to a managed node with just one click from the browser or AWS CLI without having to provide SSH keys.
upvoted 2 times
You guys seriously don't want to go to SSM Session Manager for every single EC2 instance. You need to create a solution, not use services for one-time access. A bastion gives you the option to manage thousands of EC2 machines from one host, plus you can use Ansible from it.
upvoted 2 times
The most secure way is definitely session manager therefore answer A is correct imho.
upvoted 3 times
Selected Answer: A
I vote a
upvoted 1 times
Selected Answer: A
AWS Systems Manager Session Manager provides secure and auditable instance management without the need for any inbound connections or
open ports. It allows you to manage your instances through an interactive one-click browser-based shell or through the AWS CLI. This means that
you don't have to manage any SSH keys, and you don't have to worry about securing access to your instances as access is controlled through IAM
policies.
upvoted 4 times
Selected Answer: A
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html
upvoted 2 times
Selected Answer: A
Answer must be A
upvoted 2 times
Answer is A
upvoted 2 times
Question #320 Topic 1
A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion
rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company’s data science team wants to query
Which solution provides near-real-time data querying that is scalable with minimal data loss?
A. Publish data to Amazon Kinesis Data Streams, Use Kinesis Data Analytics to query the data.
B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use
D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to
Correct Answer: A
Selected Answer: A
A: is the solution for the company's requirements. Publishing data to Amazon Kinesis Data Streams can support ingestion rates as high as 1 MB/s
and provide real-time data processing. Kinesis Data Analytics can query the ingested data in real-time with low latency, and the solution can scale
as needed to accommodate increases in ingestion rates or querying needs. This solution also ensures minimal data loss in the event of an EC2
instance reboot, since Kinesis Data Streams retains records for 24 hours by default and can be configured to retain them for up to 365 days.
upvoted 14 times
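To illustrate the ingest side of option A, a producer on the EC2 instances could publish each JSON record with a few lines of boto3; the stream name and record shape below are assumptions. A single shard accepts up to 1 MB/s (or 1,000 records per second) of writes, so the stated ingestion rate fits in one shard, with room to add shards as the rate grows.

import json
import boto3

kinesis = boto3.client("kinesis")

record = {"source_id": "sensor-42", "payload": {"value": 17}}   # placeholder JSON event

kinesis.put_record(
    StreamName="ingest-stream",                 # hypothetical stream name
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["source_id"],           # spreads records across shards
)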
Selected Answer: B
The fact that they specifically mention "near real-time" twice tells me the correct answer is KDF. On top of that, it's easier to set up and maintain; KDS is
really only needed if you need real time. Also, using Redshift will mean permanent data retention, whereas the data in A could be lost after a year. Redshift
queries are slow, but you're still querying near-real-time data.
upvoted 6 times
Selected Answer: B
https://aws.amazon.com/pm/kinesis/
upvoted 1 times
Recent changes to Redshift actually make B correct as well, but A is also correct.
upvoted 2 times
Option B mentions Kinesis Data Firehose (now just Firehose), and Redshift streaming ingestion pulls from Kinesis Data Streams, so this won't apply to B.
[1]https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-streaming-ingestion.html
upvoted 1 times
B. Kinesis Data Firehose with Redshift: While Redshift is scalable, it doesn't offer real-time querying capabilities. Data needs to be loaded into
Redshift from Firehose, introducing latency.
C. EC2 instance store with Kinesis Data Firehose and S3: Storing data in an EC2 instance store is not persistent and data will be lost during reboots.
EBS volumes are more appropriate for persistent storage, but the architecture becomes more complex.
D. EBS volume with ElastiCache and Redis: While ElastiCache offers fast in-memory storage, it's not designed for high-volume data ingestion like 1
MB/s. It might struggle with scalability and persistence.
upvoted 2 times
Read the question: near-real-time querying of data. It is about querying the data in near real time once it is ingested; it does not say how long the data needs to be stored. A is the better option. B introduces a buffering delay before the data can be queried in Redshift.
upvoted 1 times
Selected Answer: B
A is not correct because Kinesis can only store data for up to 1 year. The solution needs to support querying ALL data, not just "recent" data.
upvoted 3 times
Selected Answer: A
Publish data to Amazon Kinesis Data Streams, Use Kinesis Data Analytics to query the data
upvoted 2 times
Selected Answer: A
• Provide near-real-time data ingestion into Kinesis Data Streams with the ability to handle the 1 MB/s ingestion rate. Data would be stored
redundantly across shards.
• Enable near-real-time querying of the data using Kinesis Data Analytics. SQL queries can be run directly against the Kinesis data stream.
• Minimize data loss since records are durably replicated across multiple Availability Zones. If an EC2 instance is rebooted, the data stream is still accessible.
• Scale seamlessly to handle varying ingestion and query rates.
upvoted 3 times
Selected Answer: A
Selected Answer: B
Amazon Kinesis Data Firehose can deliver data in real-time to Amazon Redshift, making it immediately available for queries. Amazon Redshift, on
the other hand, is a powerful data analytics service that allows fast and scalable querying of large volumes of data.
upvoted 2 times
The reason Kruasan gave, "Redshift would lack real-time capabilities," is not true. Redshift can do near real time; evidence:
https://aws.amazon.com/blogs/big-data/real-time-analytics-with-amazon-redshift-streaming-ingestion/
upvoted 1 times
Selected Answer: A
Answer is A
upvoted 1 times
Question #321 Topic 1
What should a solutions architect do to ensure that all objects uploaded to an Amazon S3 bucket are encrypted?
A. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set.
B. Update the bucket policy to deny if the PutObject does not have an s3:x-amz-acl header set to private.
C. Update the bucket policy to deny if the PutObject does not have an aws:SecureTransport header set to true.
D. Update the bucket policy to deny if the PutObject does not have an x-amz-server-side-encryption header set.
Correct Answer: D
Selected Answer: D
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/#:~:text=Solution%20overview
upvoted 12 times
Selected Answer: D
The x-amz-server-side-encryption header is used to specify the encryption method that should be used to encrypt objects uploaded to an Amazon
S3 bucket. By updating the bucket policy to deny if the PutObject does not have this header set, the solutions architect can ensure that all objects
uploaded to the bucket are encrypted.
upvoted 5 times
Selected Answer: D
Related reading because (as of Jan 2023) S3 buckets have encryption enabled by default.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html
"If you require your data uploads to be encrypted using only Amazon S3 managed keys, you can use the following bucket policy. For example, the
following bucket policy denies permissions to upload an object unless the request includes the x-amz-server-side-encryption header to request
server-side encryption:"
upvoted 2 times
Selected Answer: D
To encrypt an object at the time of upload, you need to add a header called x-amz-server-side-encryption to the request to tell S3 to encrypt the
object using SSE-C, SSE-S3, or SSE-KMS. The following code example shows a Put request using SSE-S3.
https://aws.amazon.com/blogs/security/how-to-prevent-uploads-of-unencrypted-objects-to-amazon-s3/
upvoted 1 times
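A minimal sketch of such a bucket policy, applied with boto3, is shown below; the bucket name is a placeholder, and the condition can be tightened to a StringNotEquals check on aws:kms if only SSE-KMS should be accepted.

import json
import boto3

s3 = boto3.client("s3")

# Deny any PutObject request that arrives without the x-amz-server-side-encryption header.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        }
    ],
}

s3.put_bucket_policy(Bucket="example-bucket", Policy=json.dumps(policy))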
Selected Answer: D
I vote d
upvoted 1 times
Selected Answer: D
To ensure that all objects uploaded to an Amazon S3 bucket are encrypted, the solutions architect should update the bucket policy to deny any
PutObject requests that do not have an x-amz-server-side-encryption header set. This will prevent any objects from being uploaded to the bucket
unless they are encrypted using server-side encryption.
upvoted 3 times
Answer is D
upvoted 1 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/amazon-s3-policy-keys.html
upvoted 1 times
Question #322 Topic 1
A solutions architect is designing a multi-tier application for a company. The application's users upload images from a mobile device. The
application generates a thumbnail of each image and returns a message to the user to confirm that the image was uploaded successfully.
The thumbnail generation can take up to 60 seconds, but the company wants to provide a faster response time to its users to notify them that the
original image was received. The solutions architect must design the application to asynchronously dispatch requests to the different application
tiers.
A. Write a custom AWS Lambda function to generate the thumbnail and alert the user. Use the image upload process as an event source to
B. Create an AWS Step Functions workflow. Configure Step Functions to handle the orchestration between the application tiers and alert the
C. Create an Amazon Simple Queue Service (Amazon SQS) message queue. As images are uploaded, place a message on the SQS queue for
thumbnail generation. Alert the user through an application message that the image was received.
D. Create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions. Use one subscription with the application
to generate the thumbnail after the image upload is complete. Use a second subscription to message the user's mobile app by way of a push
Correct Answer: C
Selected Answer: C
I've noticed there are a lot of questions about decoupling services and SQS is almost always the answer.
upvoted 27 times
D
SNS fan out
upvoted 12 times
They don't look like real answers from the official exam...
upvoted 1 times
Selected Answer: C
The safe answer is C, but B is so badly worded that it can mean anything, just to confuse people. Step Functions to orchestrate the tiers: what if one of the steps is to
inform the user and then move on to the next step? Anyway, I'll choose C for the exam as it is cleaner.
upvoted 1 times
Selected Answer: C
SQS is a fully managed message queuing service that can be used to decouple different parts of an application.
upvoted 1 times
Selected Answer: C
Answers B and D alert the user when thumbnail generation is complete. Answer C alerts the user through an application message that the image
was received.
upvoted 4 times
Selected Answer: C
Creating an Amazon Simple Queue Service (SQS) message queue and placing messages on the queue for thumbnail generation can help separate
the image upload and thumbnail generation processes.
upvoted 1 times
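The decoupling is easy to picture in code: after the upload succeeds, the web tier drops a small message on the queue and immediately confirms receipt to the user, while the thumbnail workers consume the queue at their own pace. The queue, bucket, and key names below are placeholders.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="thumbnail-jobs")["QueueUrl"]   # hypothetical queue

# Enqueue the thumbnail work and return right away; the user does not wait the ~60 seconds.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"bucket": "uploads-bucket", "key": "photos/cat.jpg"}),
)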
A looks like the best way, but it's essentially replacing the mentioned app; that's not the ask.
upvoted 1 times
Selected Answer: C
Use a custom AWS Lambda function to generate the thumbnail and alert the user. Lambda functions are well-suited for short-lived, stateless
operations like generating thumbnails, and they can be triggered by various events, including image uploads. By using Lambda, the application can
quickly confirm that the image was uploaded successfully and then asynchronously generate the thumbnail. When the thumbnail is generated, the
Lambda function can send a message to the user to confirm that the thumbnail is ready.
C proposes to use an Amazon Simple Queue Service (Amazon SQS) message queue to process image uploads and generate thumbnails. SQS can
help decouple the image upload process from the thumbnail generation process, which is helpful for asynchronous processing. However, it may
not be the most suitable option for quickly alerting the user that the image was received, as the user may have to wait until the thumbnail is
generated before receiving a notification.
upvoted 2 times
A company’s facility has badge readers at every entrance throughout the building. When badges are scanned, the readers send a message over
A solutions architect must design a system to process these messages from the sensors. The solution must be highly available, and the results
A. Launch an Amazon EC2 instance to serve as the HTTPS endpoint and to process the messages. Configure the EC2 instance to save the
B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda function to process the
C. Use Amazon Route 53 to direct incoming sensor messages to an AWS Lambda function. Configure the Lambda function to process the
D. Create a gateway VPC endpoint for Amazon S3. Configure a Site-to-Site VPN connection from the facility network to the VPC so that sensor
Correct Answer: B
Selected Answer: B
- Option A would not provide high availability. A single EC2 instance is a single point of failure.
- Option B provides a scalable, highly available solution using serverless services. API Gateway and Lambda can scale automatically, and DynamoDB
provides a durable data store.
- Option C would expose the Lambda function directly to the public Internet, which is not a recommended architecture. API Gateway provides an
abstraction layer and additional features like access control.
- Option D requires configuring a VPN to AWS which adds complexity. It also saves the raw sensor data to S3, rather than processing it and storing
the results.
upvoted 18 times
Selected Answer: B
The correct answer is B. Create an HTTPS endpoint in Amazon API Gateway. Configure the API Gateway endpoint to invoke an AWS Lambda
function to process the messages and save the results to an Amazon DynamoDB table.
API Gateway is a highly scalable and available service that can be used to create and expose RESTful APIs.
Lambda is a serverless compute service that can be used to process events and data.
DynamoDB is a NoSQL database that can be used to store data in a scalable and highly available way.
upvoted 3 times
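A sketch of the Lambda function behind the API Gateway endpoint could be as small as the handler below, assuming a proxy integration and a hypothetical table schema keyed on badge ID and scan time.

import json
import os
import boto3

# Table name supplied through an environment variable; "badge-events" is a placeholder.
table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "badge-events"))

def lambda_handler(event, context):
    """Invoked by API Gateway for each badge-scan message; stores it in DynamoDB."""
    body = json.loads(event["body"])
    table.put_item(
        Item={
            "badge_id": body["badge_id"],        # partition key (assumed schema)
            "scanned_at": body["scanned_at"],    # sort key (assumed schema)
            "door": body.get("door", "unknown"),
        }
    )
    return {"statusCode": 200, "body": json.dumps({"status": "stored"})}

Every piece scales automatically and is redundant across Availability Zones, which is what gives the design its high availability.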
I vote B
upvoted 1 times
Deploy Amazon API Gateway as an HTTPS endpoint and AWS Lambda to process and save the messages to an Amazon DynamoDB table. This
option provides a highly available and scalable solution that can easily handle large amounts of data. It also integrates with other AWS services,
making it easier to analyze and visualize the data for the security team.
upvoted 3 times
Selected Answer: B
B is Correct
upvoted 3 times
Question #324 Topic 1
A company wants to implement a disaster recovery plan for its primary on-premises file storage volume. The file storage volume is mounted from
an Internet Small Computer Systems Interface (iSCSI) device on a local storage server. The file storage volume holds hundreds of terabytes (TB)
of data.
The company wants to ensure that end users retain immediate access to all file types from the on-premises systems without experiencing latency.
Which solution will meet these requirements with the LEAST amount of change to the company's existing infrastructure?
A. Provision an Amazon S3 File Gateway as a virtual machine (VM) that is hosted on premises. Set the local cache to 10 TB. Modify existing
applications to access the files through the NFS protocol. To recover from a disaster, provision an Amazon EC2 instance and mount the S3
B. Provision an AWS Storage Gateway tape gateway. Use a data backup solution to back up all existing data to a virtual tape library. Configure
the data backup solution to run nightly after the initial backup is complete. To recover from a disaster, provision an Amazon EC2 instance and
restore the data to an Amazon Elastic Block Store (Amazon EBS) volume from the volumes in the virtual tape library.
C. Provision an AWS Storage Gateway Volume Gateway cached volume. Set the local cache to 10 TB. Mount the Volume Gateway cached
volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure scheduled snapshots of the storage
volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS) volume and attach the EBS volume to
D. Provision an AWS Storage Gateway Volume Gateway stored volume with the same amount of disk space as the existing file storage volume.
Mount the Volume Gateway stored volume to the existing file server by using iSCSI, and copy all files to the storage volume. Configure
scheduled snapshots of the storage volume. To recover from a disaster, restore a snapshot to an Amazon Elastic Block Store (Amazon EBS)
Correct Answer: D
Bad question. No RTO/RPO, so impossible to properly answer. They probably want to hear option D.
Depending on RPO, option B is also an adequate solution (data remains immediately accessible without experiencing latency via existing
infrastructure, backup to the cloud for DR). Also, this option requires fewer changes to existing infra than A. The only argument against B is that VTLs are
usually used for legacy DR solutions, not for new ones, where object storage such as S3 is usually supported natively.
upvoted 1 times
Selected Answer: D
A and B are the wrong types of gateways for hundreds of TB of data that needs immediate access on premises. C limits the local cache to 10 TB. D provides access to all the
files.
upvoted 1 times
"Immediate access to all file types from the on-premises systems without experiencing latency" requirement is not met by C. Also the solution is
meant for DR purposes, the primary storage for the data should remain on premises.
upvoted 3 times
From chatGPT4
Considering the requirements of minimal infrastructure change, immediate file access, and low-latency, Option C: Provisioning an AWS Storage
Gateway Volume Gateway (cached volume) with a 10 TB local cache, seems to be the most fitting solution. This setup aligns with the existing iSCSI
setup and provides a local cache for low-latency access, while also configuring scheduled snapshots for disaster recovery. In the event of a disaster
restoring a snapshot to an Amazon EBS volume and attaching it to an Amazon EC2 instance as described in this option would align with the
recovery objective.
upvoted 1 times
Selected Answer: D
End users retain immediate access to all file types = Volume Gateway stored volume
upvoted 2 times
Selected Answer: D
dddddddd
upvoted 2 times
Selected Answer: D
Correct answer is Volume Gateway Stored which keeps all data on premises.
To have immediate access to the data. Cached is for frequently accessed data only.
upvoted 2 times
Selected Answer: D
In the cached mode, your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency access.
In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously backed up
to AWS.
Reference: https://aws.amazon.com/storagegateway/faqs/
Good luck.
upvoted 2 times
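The recovery step in option D is also straightforward to script; a minimal sketch with placeholder snapshot, Availability Zone, and instance IDs:

import boto3

ec2 = boto3.client("ec2")

# During DR, turn the latest Volume Gateway snapshot into an EBS volume and attach it
# to a recovery EC2 instance.
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",
    AvailabilityZone="us-east-1a",
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)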
Selected Answer: D
It is stated the company wants to keep the data locally and have DR plan in cloud. It points directly to the volume gateway
upvoted 1 times
Selected Answer: D
"The company wants to ensure that end users retain immediate access to all file types from the on-premises systems "
Selected Answer: C
all file types, NOT all files. Volume mode cannot cache 100s of TBs.
upvoted 3 times
Stored volumes can range from 1 GiB to 16 TiB in size and must be rounded to the nearest GiB. Each gateway configured for stored volumes can support up to 32 volumes and a total volume storage of 512 TiB (0.5 PiB).
upvoted 1 times
"The company wants to ensure that end users retain immediate access to all file types from the on-premises systems "
A company is hosting a web application from an Amazon S3 bucket. The application uses Amazon Cognito as an identity provider to authenticate
users and return a JSON Web Token (JWT) that provides access to protected resources that are stored in another S3 bucket.
Upon deployment of the application, users report errors and are unable to access the protected content. A solutions architect must resolve this
issue by providing proper permissions so that users can access the protected content.
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.
B. Update the S3 ACL to allow the application to access the protected content.
C. Redeploy the application to Amazon S3 to prevent eventually consistent reads in the S3 bucket from affecting the ability of users to access
D. Update the Amazon Cognito pool to use custom attribute mappings within the identity pool and grant users the proper permissions to
Correct Answer: A
Selected Answer: A
To resolve the issue and provide proper permissions for users to access the protected content, the recommended solution is:
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.
Explanation:
Amazon Cognito provides authentication and user management services for web and mobile applications.
In this scenario, the application is using Amazon Cognito as an identity provider to authenticate users and obtain JSON Web Tokens (JWTs).
The JWTs are used to access protected resources stored in another S3 bucket.
To grant users access to the protected content, the proper IAM role needs to be assumed by the identity pool in Amazon Cognito.
By updating the Amazon Cognito identity pool with the appropriate IAM role, users will be authorized to access the protected content in the S3
bucket.
upvoted 11 times
Option C is incorrect because redeploying the application to Amazon S3 will not resolve the issue related to user access permissions.
Option D is incorrect because updating custom attribute mappings in Amazon Cognito will not directly grant users the proper permissions to
access the protected content.
upvoted 10 times
Selected Answer: A
A is the best solution as it directly addresses the issue of permissions and grants authenticated users the necessary IAM role to access the
protected content.
A suggests updating the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content. This is a valid solution,
as it would grant authenticated users the necessary permissions to access the protected content.
upvoted 5 times
Selected Answer: A
An IAM role is assigned to IAM users or groups, or assumed by an AWS service. So the IAM role is given to the Amazon Cognito service, which provides temporary
AWS credentials to authenticated users. Technically, when a user is authenticated by Cognito, they receive temporary credentials based on the
IAM role tied to the Cognito identity pool. If this IAM role has permissions to access certain S3 buckets or objects, the authenticated user will be
able to access those resources as allowed by the role. This service is used under the hood by Cognito to provide these temporary credentials. The
credentials are limited in time and scope based on the permissions defined in the IAM role.
upvoted 1 times
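A sketch of that credential exchange from an application's point of view is below; the pool IDs, provider name, bucket, and key are placeholders, and the JWT (id_token) is whatever the user pool returned at sign-in. Note that the Cognito Identity credentials use the key name SecretKey rather than SecretAccessKey.

import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

id_token = "<JWT returned by the Cognito user pool>"   # placeholder
logins = {"cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE": id_token}

identity_id = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
    Logins=logins,
)["IdentityId"]

creds = cognito.get_credentials_for_identity(IdentityId=identity_id, Logins=logins)["Credentials"]

# The temporary credentials carry only what the identity pool's authenticated IAM role allows.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
s3.get_object(Bucket="protected-content-bucket", Key="private/report.pdf")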
A. Update the Amazon Cognito identity pool to assume the proper IAM role for access to the protected content.
upvoted 2 times
Services access other services via IAM Roles. Hence why updating AWS Cognito identity pool to assume proper IAM Role is the right solution.
upvoted 1 times
Amazon Cognito identity pools assign your authenticated users a set of temporary, limited-privilege credentials to access your AWS resources. The
permissions for each user are controlled through IAM roles that you create. https://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html
upvoted 2 times
A makes no sense - Cognito is not accessing the S3 resource. It just returns the JWT token that will be attached to the S3 request.
D is the right answer, using custom attributes that are added to the JWT and used to grant permissions in S3. See
https://docs.aws.amazon.com/cognito/latest/developerguide/using-attributes-for-access-control-policy-example.html for an example.
upvoted 2 times
Selected Answer: A
Answer is A
upvoted 2 times
Question #326 Topic 1
An image hosting company uploads its large assets to Amazon S3 Standard buckets. The company uses multipart upload in parallel by using S3
APIs and overwrites if the same object is uploaded again. For the first 30 days after upload, the objects will be accessed frequently. The objects
will be used less frequently after 30 days, but the access patterns for each object will be inconsistent. The company must optimize its S3 storage
Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)
E. Move assets to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
Correct Answer: AB
AB
A : Access Pattern for each object inconsistent, Infrequent Access
B : Deleting Incomplete Multipart Uploads to Lower Amazon S3 Costs
upvoted 23 times
Selected Answer: AB
AB for sure
upvoted 1 times
Because A & D address the main ask, there's no mention of cost optimization.
upvoted 1 times
Selected Answer: AC
Because A & C address the main ask, there's no mention of cost optimization.
upvoted 1 times
Not C :D, I meant to say A & D. Added another vote for that one.
upvoted 1 times
Selected Answer: AB
A as the access pattern for each object is inconsistent, so let AWS do the handling.
B deals with multi-part duplication issues and saves money by deleting incomplete uploads.
C No mention of deleted objects, so this is a distractor.
D The objects will be accessed in an unpredictable pattern, so we can't use this.
E Not HA compliant.
upvoted 1 times
C is nonsense
E does not meet the "high availability and resiliency" requirement
B is obvious (incomplete multipart uploads consume space -> cost money)
The tricky part is A vs. D. However, 'inconsistent access patterns' are the primary use case for Intelligent-Tiering. There are probably objects that will
never be accessed and that would be moved to Glacier Instant Retrieval by Intelligent-Tiering, thus the overall cost would be lower than with D.
upvoted 3 times
I wouldn't go with D since "the access patterns for each object will be inconsistent", so we cannot move all assets to IA.
upvoted 1 times
An inconsistent access pattern makes Intelligent-Tiering after 30 days the more sensible choice, and it also covers infrequent access.
upvoted 1 times
Selected Answer: AB
Option A has not been mentioned for resiliency in S3, check the page: https://docs.aws.amazon.com/AmazonS3/latest/userguide/disaster-recovery-resiliency.html
Therefore, I am with B & D choices.
upvoted 1 times
Selected Answer: AB
Explanation:
A. Moving assets to S3 Intelligent-Tiering after 30 days: This storage class automatically analyzes the access patterns of objects and moves them
between frequent access and infrequent access tiers. Since the objects will be accessed frequently for the first 30 days, storing them in the frequen
access tier during that period optimizes performance. After 30 days, when the access patterns become inconsistent, S3 Intelligent-Tiering will
automatically move the objects to the infrequent access tier, reducing storage costs.
B. Configuring an S3 Lifecycle policy to clean up incomplete multipart uploads: Multipart uploads are used for large objects, and incomplete
multipart uploads can consume storage space if not cleaned up. By configuring an S3 Lifecycle policy to clean up incomplete multipart uploads,
unnecessary storage costs can be avoided.
upvoted 1 times
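Both actions fit in a single lifecycle rule; a minimal boto3 sketch with a placeholder bucket name and an assumed 7-day cleanup window for abandoned multipart uploads:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="image-assets-bucket",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-and-clean-up",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                # Let Intelligent-Tiering handle the unpredictable access after 30 days.
                "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}],
                # Stop paying for parts of uploads that never completed.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)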
Selected Answer: AD
AD.
B makes no sense because multipart uploads overwrite objects that are already uploaded. The question never says this is a problem.
upvoted 1 times
Selected Answer: AB
Take the following two actions to optimize S3 storage costs while maintaining high availability and resiliency of stored assets:
A. Move assets to S3 Intelligent-Tiering after 30 days. This will automatically move objects between two access tiers based on changing access
patterns and save costs by reducing the number of objects stored in the expensive tier.
B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads. This will help to reduce storage costs by removing incomplete
multipart uploads that are no longer needed.
upvoted 2 times
Question #327 Topic 1
A solutions architect must secure a VPC network that hosts Amazon EC2 instances. The EC2 instances contain highly sensitive data and run in a
private subnet. According to company policy, the EC2 instances that run in the VPC can access only approved third-party software repositories on
the internet for software product updates that use the third party’s URL. Other internet traffic must be blocked.
A. Update the route table for the private subnet to route the outbound traffic to an AWS Network Firewall firewall. Configure domain list rule
groups.
B. Set up an AWS WAF web ACL. Create a custom set of rules that filter traffic requests based on source and destination IP address range
sets.
C. Implement strict inbound security group rules. Configure an outbound rule that allows traffic only to the authorized software repositories on
D. Configure an Application Load Balancer (ALB) in front of the EC2 instances. Direct all outbound traffic to the ALB. Use a URL-based rule
listener in the ALB’s target group for outbound access to the internet.
Correct Answer: A
Selected Answer: A
Correct Answer A. Send the outbound connection from EC2 to Network Firewall. In Network Firewall, create stateful outbound rules to allow certain
domains for software patch download and deny all other domains.
https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-examples.html#suricata-example-domain-filtering
upvoted 14 times
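A sketch of such a domain list rule group via boto3 is below; the rule group name, capacity, and repository domains are placeholders. Because the generated rules form an ALLOWLIST, every domain not on the list is denied.

import boto3

nfw = boto3.client("network-firewall")

nfw.create_rule_group(
    RuleGroupName="approved-software-repos",   # hypothetical name
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={
        "RulesSource": {
            "RulesSourceList": {
                "Targets": [".repo.example.com", "mirror.example.org"],   # placeholder domains
                "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
                "GeneratedRulesType": "ALLOWLIST",
            }
        }
    },
)

The rule group is then attached to a firewall policy, and the private subnet's route table sends outbound traffic through the firewall endpoint, as the answer describes.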
Selected Answer: A
Can't use URLs in outbound rule of security groups. URL Filtering screams Firewall.
upvoted 10 times
Selected Answer: A
Security groups operate at the transport layer (Layer 4) of the OSI model and are primarily concerned with controlling traffic based on IP addresses, ports, and protocols. They do not have the capability to inspect or filter traffic based on URLs.
The solution to restrict outbound internet traffic based on specific URLs typically involves using a proxy or firewall that can inspect the application
layer (Layer 7) of the OSI model, where URL information is available.
AWS Network Firewall operates at the network and application layers, allowing for more granular control, including the ability to inspect and filter
traffic based on domain names or URLs.
By configuring domain list rule groups in AWS Network Firewall, you can specify which URLs are allowed for outbound traffic.
This option is more aligned with the requirement of allowing access to approved third-party software repositories based on their URLs.
upvoted 3 times
Selected Answer: A
https://aws.amazon.com/network-firewall/features/
"Web filtering:
AWS Network Firewall supports inbound and outbound web filtering for unencrypted web traffic. For encrypted web traffic, Server Name Indication
(SNI) is used for blocking access to specific sites. SNI is an extension to Transport Layer Security (TLS) that remains unencrypted in the traffic flow
and indicates the destination hostname a client is attempting to access over HTTPS. In addition, **AWS Network Firewall can filter fully qualified
domain names (FQDN).**"
Always use an AWS product if the advertisement meets the use case.
upvoted 2 times
AWS Network Firewall is stateful, providing control and visibility for Layer 3-7 network traffic, and thus covers the application layer too.
upvoted 1 times
Selected Answer: A
Just tried on the console to set up an outbound rule, and URLs cannot be used as a destination. I will opt for A.
upvoted 1 times
Selected Answer: C
Highly sensitive EC2 instances in private subnet that can access only approved URLs
Other internet access must be blocked
Security groups act as a firewall at the instance level and can control both inbound and outbound traffic.
upvoted 2 times
Selected Answer: A
We can't specify a URL in an outbound rule of a security group. Create a free tier AWS account and test it.
upvoted 2 times
Selected Answer: C
CCCCCCCCCCC
upvoted 1 times
Option A is not the best solution as it involves the use of AWS Network Firewall, which may introduce additional operational overhead. While
domain list rule groups can be used to block all internet traffic except for the approved third-party software repositories, this solution is more
complex than necessary for this scenario.
upvoted 2 times
Selected Answer: C
In the security group, only allow inbound traffic originating from the VPC. Then only allow outbound traffic with a whitelisted IP address. The
question asks about blocking EC2 instances, which is best for security groups since those are at the EC2 instance level. A network firewall is at the
VPC level, which is not what the question is asking to protect.
upvoted 1 times
A company is hosting a three-tier ecommerce application in the AWS Cloud. The company hosts the website on Amazon S3 and integrates the
website with an API that handles sales requests. The company hosts the API on three Amazon EC2 instances behind an Application Load
Balancer (ALB). The API consists of static and dynamic front-end content along with backend workers that process sales requests
asynchronously.
The company is expecting a significant and sudden increase in the number of sales requests during events for the launch of new products.
What should a solutions architect recommend to ensure that all the requests are processed successfully?
A. Add an Amazon CloudFront distribution for the dynamic content. Increase the number of EC2 instances to handle the increase in traffic.
B. Add an Amazon CloudFront distribution for the static content. Place the EC2 instances in an Auto Scaling group to launch new instances
C. Add an Amazon CloudFront distribution for the dynamic content. Add an Amazon ElastiCache instance in front of the ALB to reduce traffic
D. Add an Amazon CloudFront distribution for the static content. Add an Amazon Simple Queue Service (Amazon SQS) queue to receive
requests from the website for later processing by the EC2 instances.
Correct Answer: D
Selected Answer: B
The auto-scaling would increase the rate at which sales requests are "processed", whereas a SQS will ensure messages don't get lost. If you were at
a fast food restaurant with a long line with 3 cash registers, would you want more cash registers or longer ropes to handle longer lines? Same
concept here.
upvoted 21 times
Selected Answer: D
B doesn't fit because Auto Scaling alone does not guarantee that all requests will be processed successfully, which the question clearly asks for.
Selected Answer: B
An important question for answer D: can you connect the website to SQS directly? How do you control who can put messages on the queue? I have never seen such a setup; it has to be at least behind API Gateway. That conclusion brings me to answer B; the application can also process everything asynchronously without SQS.
upvoted 1 times
I chose D because I love SQS! These questions are hammering SQS into every solution as a "protagonist" that saves the day.
A and C are clearly useless.
B can work, but D is better because of SQS being better than EC2 scaling. The other part is that the backend workers process the requests asynchronously, therefore a queue is better.
upvoted 3 times
Selected Answer: D
I picked B before I read option D. Reading the question again, it concerns asynchronous processing of sales requests, so option D aligns more closely with the requirements. The requirement is ensuring all requests are processed successfully, which means no request may be missed. So D is the better option.
upvoted 3 times
D is correct.
upvoted 2 times
D is correct.
upvoted 1 times
Selected Answer: D
An SQS queue acts as a buffer between the frontend (website) and backend (API). Web requests can dump messages into the queue at a high
throughput, then the queue handles delivering those messages to the API at a controlled rate that it can sustain. This prevents the API from being
overwhelmed.
upvoted 2 times
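On the consuming side, the backend workers simply poll the queue and delete each message after it is handled, so a message is only gone once it has actually been processed. A minimal worker-loop sketch, with a hypothetical queue name and a stub for the existing business logic:

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="sales-requests")["QueueUrl"]   # hypothetical queue

def process_sale(body):
    """Stand-in for the existing sales-processing logic."""
    print("processing", body)

while True:
    # Long polling keeps the loop cheap while it waits for launch-day bursts to arrive.
    messages = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    ).get("Messages", [])
    for message in messages:
        process_sale(message["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])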
D makes sense.
upvoted 1 times
Selected Answer: D
Although I agree B gives better performance, I choose D as the question asks to ensure that all the requests are processed successfully.
upvoted 2 times
A security audit reveals that Amazon EC2 instances are not being patched regularly. A solutions architect needs to provide a solution that will run
regular security scans across a large fleet of EC2 instances. The solution should also patch the EC2 instances on a regular schedule and provide a
A. Set up Amazon Macie to scan the EC2 instances for software vulnerabilities. Set up a cron job on each EC2 instance to patch the instance
on a regular schedule.
B. Turn on Amazon GuardDuty in the account. Configure GuardDuty to scan the EC2 instances for software vulnerabilities. Set up AWS
Systems Manager Session Manager to patch the EC2 instances on a regular schedule.
C. Set up Amazon Detective to scan the EC2 instances for software vulnerabilities. Set up an Amazon EventBridge scheduled rule to patch the
D. Turn on Amazon Inspector in the account. Configure Amazon Inspector to scan the EC2 instances for software vulnerabilities. Set up AWS
Systems Manager Patch Manager to patch the EC2 instances on a regular schedule.
Correct Answer: D
Selected Answer: D
Amazon Inspector is a security assessment service that automatically assesses applications for vulnerabilities or deviations from best practices. It
can be used to scan the EC2 instances for software vulnerabilities. AWS Systems Manager Patch Manager can be used to patch the EC2 instances
on a regular schedule. Together, these services can provide a solution that meets the requirements of running regular security scans and patching
EC2 instances on a regular schedule. Additionally, Patch Manager can provide a report of each instance’s patch status.
upvoted 8 times
Selected Answer: D
dddddddddd
upvoted 1 times
Selected Answer: D
Amazon Inspector is a security assessment service that helps improve the security and compliance of applications deployed on Amazon Web
Services (AWS). It automatically assesses applications for vulnerabilities or deviations from best practices. Amazon Inspector can be used to identify
security issues and recommend fixes for them. It is an ideal solution for running regular security scans across a large fleet of EC2 instances.
AWS Systems Manager Patch Manager is a service that helps you automate the process of patching Windows and Linux instances. It provides a
simple, automated way to patch your instances with the latest security patches and updates. Patch Manager helps you maintain compliance with
security policies and regulations by providing detailed reports on the patch status of your instances.
upvoted 4 times
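For the patching half, a sketch of kicking off the standard AWS-RunPatchBaseline document across a tagged fleet is below; the tag key and value, concurrency, and error thresholds are assumptions, and in practice the same document is normally attached to a Maintenance Window to get the regular schedule.

import boto3

ssm = boto3.client("ssm")

ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],   # assumed tag
    DocumentName="AWS-RunPatchBaseline",
    Parameters={"Operation": ["Install"]},   # "Scan" would only report missing patches
    MaxConcurrency="10%",
    MaxErrors="5%",
)

Patch Manager then records per-instance patch compliance, which covers the reporting requirement, while Inspector continuously scans the same instances for software vulnerabilities.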
Selected Answer: D
http://webcache.googleusercontent.com/search?q=cache:FbFTc6XKycwJ:https://medium.com/aws-architech/use-case-aws-inspector-vs-guardduty
3662bf80767a&hl=vi&gl=kr&strip=1&vwsrc=0
upvoted 2 times
A company is planning to store data on Amazon RDS DB instances. The company must encrypt the data at rest.
A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.
B. Create an encryption key. Store the key in AWS Secrets Manager. Use the key to encrypt the DB instances.
C. Generate a certificate in AWS Certificate Manager (ACM). Enable SSL/TLS on the DB instances by using the certificate.
D. Generate a certificate in AWS Identity and Access Management (IAM). Enable SSL/TLS on the DB instances by using the certificate.
Correct Answer: A
Selected Answer: A
A: Enable encryption with a KMS key
B: Secrets Manager stores secrets; it doesn't directly integrate with RDS storage encryption without further work
C and D are for data encryption in transit, not at rest
upvoted 2 times
Selected Answer: A
KMS only generates and manages encryption keys. That's it. That's all it does. It's a fundamental service that you, as well as other AWS services (like Secrets Manager), use to encrypt or decrypt. KMS = Key Management Service. Secrets Manager is for things like database connection strings.
upvoted 3 times
Secrets Manager stores actual secrets like passwords, passphrases, and anything else you want encrypted. Secrets Manager uses KMS to encrypt its secrets, so it would be circular to get an encryption key from KMS and then use Secrets Manager to encrypt the encryption key.
upvoted 4 times
ANSWER - A
upvoted 1 times
A for sure
upvoted 1 times
Selected Answer: A
A is the correct solution to meet the requirement of encrypting the data at rest.
To encrypt data at rest in Amazon RDS, you can use the encryption feature of Amazon RDS, which uses AWS Key Management Service (AWS KMS).
With this feature, Amazon RDS encrypts each database instance with a unique key. This key is stored securely by AWS KMS. You can manage your
own keys or use the default AWS-managed keys. When you enable encryption for a DB instance, Amazon RDS encrypts the underlying storage,
including the automated backups, read replicas, and snapshots.
upvoted 3 times
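A sketch of what that looks like with boto3 follows; the engine, sizing, and credentials are placeholders, and in a real deployment the master password would come from Secrets Manager rather than being hard-coded.

import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")

# Create (or reuse) a customer managed key, then reference it when the instance is created.
key_arn = kms.create_key(Description="RDS at-rest encryption")["KeyMetadata"]["Arn"]

rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder; use Secrets Manager in practice
    StorageEncrypted=True,                   # encrypts storage, snapshots, backups, and replicas
    KmsKeyId=key_arn,
)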
AWS Key Management Service (KMS) is used to manage the keys used to encrypt and decrypt the data.
upvoted 1 times
Option A
upvoted 1 times
Amazon RDS provides multiple options for encrypting data at rest. AWS Key Management Service (KMS) is used to manage the keys used to
encrypt and decrypt the data. Therefore, a solution architect should create a key in AWS KMS and enable encryption for the DB instances to encryp
the data at rest.
upvoted 1 times
A. Create a key in AWS Key Management Service (AWS KMS). Enable encryption for the DB instances.
https://www.examtopics.com/discussions/amazon/view/80753-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #331 Topic 1
A company must migrate 20 TB of data from a data center to the AWS Cloud within 30 days. The company’s network bandwidth is limited to 15 Mbps, and the company cannot use more than 70 percent of that bandwidth.
Correct Answer: A
Selected Answer: A
Honestly, the company has a bigger problem with that slow connection :)
30 days is the first clue, so you can get a Snowball shipped and sent back (5 days each way).
upvoted 5 times
Selected Answer: A
I wont try to think to much about it, AWS Snowball was designed for this
upvoted 3 times
Selected Answer: A
• 15 Mbps bandwidth with 70% max utilization limits the effective bandwidth to 10.5 Mbps, or about 1.31 MB/s.
• 20 TB of data at 1.31 MB/s would take approximately 176 days to transfer over the network (see the quick calculation after this comment). This far exceeds the 30-day requirement.
• AWS Snowball provides a physical storage device that can be shipped to the data center. Up to 80 TB can be loaded onto a Snowball device and shipped back to AWS. This allows the 20 TB of data to be transferred much faster by shipping rather than over the limited network bandwidth.
• Snowball uses tamper-resistant enclosures and 256-bit encryption to keep the data secure during transit.
• The data can be imported into Amazon S3 or Amazon Glacier once the Snowball is received by AWS.
upvoted 4 times
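The arithmetic behind those bullets is easy to reproduce (a quick sanity check, assuming decimal units):

# Why the 15 Mbps link cannot move 20 TB within 30 days.
DATA_TB = 20
LINK_MBPS = 15
MAX_UTILIZATION = 0.70

effective_mbps = LINK_MBPS * MAX_UTILIZATION        # 10.5 megabits per second
effective_mb_per_s = effective_mbps / 8             # ~1.31 megabytes per second
seconds_needed = DATA_TB * 1_000_000 / effective_mb_per_s
days_needed = seconds_needed / 86_400

print(f"~{days_needed:.0f} days over the wire")     # roughly 176 days, far beyond 30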
Roughly 3.4 TB is all you could transfer in 30 days with a 10.5 Mbps link (70% of the 15 Mbps connection). To upload 20 TB in 30 days you would need a consistent connection of about 8 MB/s (roughly 62 Mbps), far more than what is available.
AWS Snowball.
upvoted 2 times
Selected Answer: A
AWS Snowball
upvoted 1 times
Selected Answer: A
Option a
upvoted 1 times
Selected Answer: A
option A
upvoted 3 times
Question #332 Topic 1
A company needs to provide its employees with secure access to confidential and sensitive files. The company wants to ensure that the files can
be accessed only by authorized users. The files must be downloaded securely to the employees’ devices.
The files are stored in an on-premises Windows file server. However, due to an increase in remote usage, the file server is running out of capacity.
A. Migrate the file server to an Amazon EC2 instance in a public subnet. Configure the security group to limit inbound traffic to the employees’
IP addresses.
B. Migrate the files to an Amazon FSx for Windows File Server file system. Integrate the Amazon FSx file system with the on-premises Active Directory. Configure AWS Client VPN.
C. Migrate the files to Amazon S3, and create a private VPC endpoint. Create a signed URL to allow download.
D. Migrate the files to Amazon S3, and create a public VPC endpoint. Allow employees to sign on with AWS IAM Identity Center (AWS Single
Sign-On).
Correct Answer: B
Selected Answer: B
This solution addresses the need for secure access to confidential and sensitive files, as well as the increase in remote usage. Migrating the files to
Amazon FSx for Windows File Server provides a scalable, fully managed file storage solution in the AWS Cloud that is accessible from on-premises
and cloud environments. Integration with the on-premises Active Directory allows for a consistent user experience and centralized access control.
AWS Client VPN provides a secure and managed VPN solution that can be used by employees to access the files securely.
upvoted 7 times
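A sketch of creating such a file system joined to the existing (self-managed) Active Directory is below; every ID, name, and credential is a placeholder, and the sizing and deployment type are assumptions rather than requirements from the question.

import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=2048,                       # GiB (assumed size)
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,               # MB/s (assumed)
        # Join the file system directly to the company's on-premises AD so existing
        # NTFS permissions and group memberships keep working.
        "SelfManagedActiveDirectoryConfiguration": {
            "DomainName": "corp.example.com",
            "UserName": "fsx-service-account",
            "Password": "use-a-secret-here",
            "DnsIps": ["10.0.0.10", "10.0.0.11"],
        },
    },
)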
Selected Answer: B
My money is on B, but it's still not mentioned that the customer used an on-prem Active Directory.
upvoted 2 times
C has "signed URL", everyone who has the URL could download. Plus, only B ensure the "must be downloaded securely" part by using VPN.
upvoted 4 times
Selected Answer: B
Windows file server = Amazon FSx for Windows File Server file system
Files can be accessed only by authorized users = On-premises Active Directory
upvoted 2 times
Selected Answer: C
"Signed URL to allow download" would allow everyone who has the URL to download the files, but we must "ensure that the files can be
accessed only by authorized users". Plus, the "private VPC endpoint" is not really of use here, it's still S3 and the users are not in AWS.
upvoted 3 times
Selected Answer: B
B is the correct answer
upvoted 1 times
B is the best solution for the given requirements. It provides a secure way for employees to access confidential and sensitive files from anywhere
using AWS Client VPN. The Amazon FSx for Windows File Server file system is designed to provide native support for Windows file system features
such as NTFS permissions, Active Directory integration, and Distributed File System (DFS). This means that the company can continue to use their
on-premises Active Directory to manage user access to files.
upvoted 3 times
A company’s application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto
Scaling group across multiple Availability Zones. On the first day of every month at midnight, the application becomes much slower when the
month-end financial calculation batch runs. This causes the CPU utilization of the EC2 instances to immediately peak to 100%, which disrupts the
application.
What should a solutions architect recommend to ensure the application is able to handle the workload and avoid downtime?
B. Configure an EC2 Auto Scaling simple scaling policy based on CPU utilization.
C. Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule.
D. Configure Amazon ElastiCache to remove some of the workload from the EC2 instances.
Correct Answer: C
Selected Answer: C
'On the first day of every month at midnight' = Scheduled scaling policy
upvoted 3 times
Selected Answer: C
By configuring a scheduled scaling policy, the EC2 Auto Scaling group can proactively launch additional EC2 instances before the CPU utilization
peaks to 100%. This will ensure that the application can handle the workload during the month-end financial calculation batch, and avoid any
disruption or downtime.
Configuring a simple scaling policy based on CPU utilization or adding Amazon CloudFront distribution or Amazon ElastiCache will not directly
address the issue of handling the monthly peak workload.
upvoted 3 times
If the scaling were based on CPU or memory, it would require a certain amount of time above that threshold, 5 minutes for example. That would mean the CPU would be at 100% for five minutes.
upvoted 2 times
Selected Answer: C
C: Configure an EC2 Auto Scaling scheduled scaling policy based on the monthly schedule is the best option because it allows for the proactive
scaling of the EC2 instances before the monthly batch run begins. This will ensure that the application is able to handle the increased workload
without experiencing downtime. The scheduled scaling policy can be configured to increase the number of instances in the Auto Scaling group a
few hours before the batch run and then decrease the number of instances after the batch run is complete. This will ensure that the resources are
available when needed and not wasted when not needed.
The most appropriate solution to handle the increased workload during the monthly batch run and avoid downtime would be to configure an EC2
Auto Scaling scheduled scaling policy based on the monthly schedule.
upvoted 2 times
To set up a scheduled scaling policy in EC2 Auto Scaling, you need to specify the following:
Start time and date: The date and time when the scaling event should begin.
Desired capacity: The number of instances that you want to have running after the scaling event.
Recurrence: The frequency with which the scaling event should occur. This can be a one-time event or a recurring event, such as daily or weekly
upvoted 1 times
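As an illustration of the scheduled scaling approach discussed above, here is a minimal boto3 sketch. The Auto Scaling group name, capacity values, and cron timings are hypothetical; in practice you would align them with the actual batch window.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out at 00:00 UTC on the first day of every month (illustrative timing).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="finance-app-asg",      # hypothetical ASG name
    ScheduledActionName="month-end-scale-out",
    Recurrence="0 0 1 * *",                      # cron: midnight, day 1 of each month
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in once the batch window is over (illustrative timing).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="finance-app-asg",
    ScheduledActionName="month-end-scale-in",
    Recurrence="0 6 1 * *",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)
```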
A company wants to give a customer the ability to use on-premises Microsoft Active Directory to download files that are stored in Amazon S3. The
Which solution will meet these requirements with the LEAST operational overhead and no changes to the customer’s application?
A. Set up AWS Transfer Family with SFTP for Amazon S3. Configure integrated Active Directory authentication.
B. Set up AWS Database Migration Service (AWS DMS) to synchronize the on-premises client with Amazon S3. Configure integrated Active
Directory authentication.
C. Set up AWS DataSync to synchronize between the on-premises location and the S3 location by using AWS IAM Identity Center (AWS Single
Sign-On).
D. Set up a Windows Amazon EC2 instance with SFTP to connect the on-premises client with Amazon S3. Integrate AWS Identity and Access
Management (IAM).
Correct Answer: A
Selected Answer: A
just A
upvoted 1 times
Selected Answer: A
https://aws.amazon.com/vi/blogs/architecture/managed-file-transfer-using-aws-transfer-family-and-amazon-s3/
upvoted 2 times
Selected Answer: A
A. Set up AWS Transfer Family with SFTP for Amazon S3. Configure integrated Active Directory authentication.
https://docs.aws.amazon.com/transfer/latest/userguide/directory-services-users.html
upvoted 3 times
Question #335 Topic 1
A company is experiencing sudden increases in demand. The company needs to provision large Amazon EC2 instances from an Amazon Machine
Image (AMI). The instances will run in an Auto Scaling group. The company needs a solution that provides minimum initialization latency to meet
the demand.
A. Use the aws ec2 register-image command to create an AMI from a snapshot. Use AWS Step Functions to replace the AMI in the Auto
Scaling group.
B. Enable Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot. Provision an AMI by using the snapshot. Replace
the AMI in the Auto Scaling group with the new AMI.
C. Enable AMI creation and define lifecycle rules in Amazon Data Lifecycle Manager (Amazon DLM). Create an AWS Lambda function that
D. Use Amazon EventBridge to invoke AWS Backup lifecycle policies that provision AMIs. Configure Auto Scaling group capacity limits as an
Correct Answer: B
Selected Answer: B
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows you to
quickly create a new Amazon Machine Image (AMI) from a snapshot, which can help reduce the
initialization latency when provisioning new instances. Once the AMI is provisioned, you can replace
the AMI in the Auto Scaling group with the new AMI. This will ensure that new instances are launched
from the updated AMI and are able to meet the increased demand quickly.
upvoted 11 times
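To make the mechanism behind option B concrete, the following boto3 sketch enables fast snapshot restore on the snapshot that backs the AMI and then checks its state. The snapshot ID and Availability Zones are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable fast snapshot restore on the snapshot that backs the AMI, in every AZ
# the Auto Scaling group launches into (snapshot ID and AZs are hypothetical).
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)

# Wait for the snapshot to reach the "enabled" FSR state before rolling out the AMI.
resp = ec2.describe_fast_snapshot_restores(
    Filters=[{"Name": "snapshot-id", "Values": ["snap-0123456789abcdef0"]}]
)
for entry in resp["FastSnapshotRestores"]:
    print(entry["AvailabilityZone"], entry["State"])
```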
Selected Answer: B
The question wording is pretty weird but the only thing of value is latency during initialisation which makes B the correct option.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
Selected Answer: B
"Fast snapshot restore" = pre-warmed snapshot
AMI from such a snapshot is pre-warmed AMI
upvoted 3 times
Selected Answer: D
Amazon Data Lifecycle Manager (DLM) is a feature of Amazon EBS that automates the creation, retention, and deletion of snapshots, which are
used to back up your Amazon EBS volumes. With DLM, you can protect your data by implementing a backup strategy that aligns with your
business requirements.
You can create lifecycle policies to automate snapshot management. Each policy includes a schedule of when to create snapshots, a retention rule
with a defined period to retain each snapshot, and a set of Amazon EBS volumes to assign to the policy.
This service helps simplify the management of your backups, ensure compliance, and reduce costs.
upvoted 1 times
B is correct
upvoted 1 times
Amazon EBS Fast Snapshot Restore: This feature allows you to quickly create new EBS volumes (and subsequently AMIs) from snapshots. Fast
Snapshot Restore optimizes the initialization process by pre-warming the snapshots, reducing the time it takes to create volumes from those
snapshots.
Provision an AMI using the snapshot: By using fast snapshot restore, you can efficiently provision an AMI from the pre-warmed snapshot,
minimizing the initialization latency.
Replace the AMI in the Auto Scaling group: This allows you to update the instances in the Auto Scaling group with the new AMI efficiently,
ensuring that the new instances are launched with minimal delay.
upvoted 1 times
Option C (Enable AMI creation and define lifecycle rules in Amazon Data Lifecycle Manager, create a Lambda function): While Amazon DLM can
help manage the lifecycle of your AMIs, it might not provide the same level of speed and responsiveness needed for sudden increases in
demand.
Option D (Use Amazon EventBridge and AWS Backup): AWS Backup is primarily designed for backup and recovery, and it might not be as
optimized for quickly provisioning instances in response to sudden demand spikes. EventBridge can be used for event-driven architectures, but
in this context, it might introduce unnecessary complexity.
upvoted 1 times
Enable Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot. Provision an AMI by using the snapshot. Replace the AMI in
the Auto Scaling group with the new AMI
upvoted 1 times
Selected Answer: B
° Need to launch large EC2 instances quickly from an AMI in an Auto Scaling group
° Looking to minimize instance initialization latency
upvoted 2 times
B most def
upvoted 1 times
Selected Answer: B
B: "EBS fast snapshot restore": minimizes initialization latency. This is a good choice.
upvoted 2 times
Selected Answer: B
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-fast-snapshot-restore.html
upvoted 2 times
Enabling Amazon Elastic Block Store (Amazon EBS) fast snapshot restore on a snapshot allows for rapid restoration of EBS volumes from snapshots
This reduces the time required to create an AMI from a snapshot, which is useful for quickly provisioning large Amazon EC2 instances.
Provisioning an AMI by using the fast snapshot restore feature is a fast and efficient way to create an AMI. Once the AMI is created, it can be
replaced in the Auto Scaling group without any downtime or disruption to running instances.
upvoted 1 times
Question #336 Topic 1
A company hosts a multi-tier web application that uses an Amazon Aurora MySQL DB cluster for storage. The application tier is hosted on
Amazon EC2 instances. The company’s IT security guidelines mandate that the database credentials be encrypted and rotated every 14 days.
What should a solutions architect do to meet this requirement with the LEAST operational effort?
A. Create a new AWS Key Management Service (AWS KMS) encryption key. Use AWS Secrets Manager to create a new secret that uses the
KMS key with the appropriate credentials. Associate the secret with the Aurora DB cluster. Configure a custom rotation period of 14 days.
B. Create two parameters in AWS Systems Manager Parameter Store: one for the user name as a string parameter and one that uses the
SecureString type for the password. Select AWS Key Management Service (AWS KMS) encryption for the password parameter, and load these
parameters in the application tier. Implement an AWS Lambda function that rotates the password every 14 days.
C. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon Elastic File System (Amazon
EFS) file system. Mount the EFS file system in all EC2 instances of the application tier. Restrict the access to the file on the file system so that
the application can read the file and that only super users can modify the file. Implement an AWS Lambda function that rotates the key in
Aurora every 14 days and writes new credentials into the file.
D. Store a file that contains the credentials in an AWS Key Management Service (AWS KMS) encrypted Amazon S3 bucket that the application
uses to load the credentials. Download the file to the application regularly to ensure that the correct credentials are used. Implement an AWS
Lambda function that rotates the Aurora credentials every 14 days and uploads these credentials to the file in the S3 bucket.
Correct Answer: A
Selected Answer: A
Create a new AWS Key Management Service (AWS KMS) encryption key. Use AWS Secrets Manager to create a new secret that uses the KMS key
with the appropriate credentials. Associate the secret with the Aurora DB cluster. Configure a custom rotation period of 14 days
upvoted 2 times
AWS Secrets Manager allows you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
With this service, you can automate the rotation of secrets, such as database credentials, on a schedule that you choose. The solution allows you to
create a new secret with the appropriate credentials and associate it with the Aurora DB cluster. You can then configure a custom rotation period of
14 days to ensure that the credentials are automatically rotated every two weeks, as required by the IT security guidelines. This approach requires
the least amount of operational effort as it allows you to manage secrets centrally without modifying your application code or infrastructure.
upvoted 4 times
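A minimal boto3 sketch of the 14-day rotation described above. The secret name and rotation Lambda ARN are hypothetical; for RDS/Aurora secrets the rotation function is typically created from the AWS-provided rotation template.

```python
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

# Turn on automatic rotation of the Aurora credentials secret every 14 days.
secrets.rotate_secret(
    SecretId="prod/aurora-mysql/app-user",   # hypothetical secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsManagerRotation",
    RotationRules={"AutomaticallyAfterDays": 14},
)
```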
A: AWS Secrets Manager. Simply this supported rotate feature, and secure to store credentials instead of EFS or S3.
upvoted 1 times
Voting A
upvoted 1 times
Selected Answer: A
A proposes to create a new AWS KMS encryption key and use AWS Secrets Manager to create a new secret that uses the KMS key with the
appropriate credentials. Then, the secret will be associated with the Aurora DB cluster, and a custom rotation period of 14 days will be configured.
AWS Secrets Manager will automate the process of rotating the database credentials, which will reduce the operational effort required to meet the
IT security guidelines.
upvoted 1 times
A company has deployed a web application on AWS. The company hosts the backend database on Amazon RDS for MySQL with a primary DB
instance and five read replicas to support scaling needs. The read replicas must lag no more than 1 second behind the primary DB instance. The
As traffic on the website increases, the replicas experience additional lag during periods of peak load. A solutions architect must reduce the
replication lag as much as possible. The solutions architect must minimize changes to the application code and must minimize ongoing
operational overhead.
A. Migrate the database to Amazon Aurora MySQL. Replace the read replicas with Aurora Replicas, and configure Aurora Auto Scaling. Replace the stored procedures with Aurora MySQL native functions.
B. Deploy an Amazon ElastiCache for Redis cluster in front of the database. Modify the application to check the cache before the application
queries the database. Replace the stored procedures with AWS Lambda functions.
C. Migrate the database to a MySQL database that runs on Amazon EC2 instances. Choose large, compute optimized EC2 instances for all
D. Migrate the database to Amazon DynamoDB. Provision a large number of read capacity units (RCUs) to support the required throughput,
and configure on-demand capacity scaling. Replace the stored procedures with DynamoDB streams.
Correct Answer: A
Selected Answer: A
Using a cache requires huge changes in the application. Several things need to change to put a cache in front of the DB in the application. So, option
B is not correct.
Aurora will help to reduce replication lag for the read replicas.
upvoted 11 times
Selected Answer: A
AWS Aurora with native functions requires the fewest application changes while providing better performance and lower latency.
https://aws.amazon.com/rds/aurora/faqs/
B, C, D require lots of changes to the application so relatively speaking A is least code change and least maintenance/operational overhead.
upvoted 7 times
Selected Answer: A
imho, B is not valid because it involves extra coding, and the question specifically says to minimize code changes. Replacing the current DB
with another engine is not considered additional coding.
upvoted 2 times
Migrate the database to Amazon Aurora MySQL. Replace the read replicas with Aurora Replicas, and configure Aurora Auto Scaling. Replace the
stored procedures with Aurora MySQL native functions
upvoted 1 times
Selected Answer: A
First, ElastiCache involves heavy changes to the application code. The question mentions that "the solutions architect must minimize changes to the
application code". Therefore B is not suitable and A is more appropriate for the question's requirement.
upvoted 2 times
Selected Answer: B
Option B is not the best solution since adding an ElastiCache for Redis cluster does not address the replication lag issue, and the cache may not
have the most up-to-date information. Additionally, replacing the stored procedures with AWS Lambda functions adds additional complexity and
may not improve performance.
upvoted 4 times
Selected Answer: B
By using ElastiCache you avoid a lot of common issues you might encounter. ElastiCache is a database caching solution. ElastiCache for Redis
supports failover and Multi-AZ. Most of all, ElastiCache is well suited to placing in front of RDS.
Selected Answer: A
Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 2 times
https://docs.amazonaws.cn/en_us/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.Lambda.html
upvoted 1 times
A solutions architect must create a disaster recovery (DR) plan for a high-volume software as a service (SaaS) platform. All data for the platform
A. Use MySQL binary log replication to an Aurora cluster in the secondary Region. Provision one DB instance for the Aurora cluster in the
secondary Region.
B. Set up an Aurora global database for the DB cluster. When setup is complete, remove the DB instance from the secondary Region.
C. Use AWS Database Migration Service (AWS DMS) to continuously replicate data to an Aurora cluster in the secondary Region. Remove the
D. Set up an Aurora global database for the DB cluster. Specify a minimum of one DB instance in the secondary Region.
Correct Answer: B
Selected Answer: B
I originally went for D but now I think B is correct. D is an active-active cluster, whereas B is active-passive (a headless cluster), so it is cheaper than D.
https://aws.amazon.com/blogs/database/achieve-cost-effective-multi-region-resiliency-with-amazon-aurora-global-database-headless-clusters/
upvoted 16 times
Answer - A
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html
-----------------------------------------------------------------------------
Before you begin
Before you can create an Aurora MySQL DB cluster that is a cross-Region read replica, you must turn on binary logging on your source Aurora
MySQL DB cluster. Cross-Region replication for Aurora MySQL uses MySQL binary replication to replay changes on the cross-Region read replica DB
cluster.
upvoted 9 times
In addition to Aurora Replicas, you have the following options for replication with Aurora MySQL:
You can replicate data across multiple Regions by using an Aurora global database. For details, see High availability across AWS Regions with
Aurora global databases
You can create an Aurora read replica of an Aurora MySQL DB cluster in a different AWS Region, by using MySQL binary log (binlog) replication
Each cluster can have up to five read replicas created this way, each in a different Region.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 1 times
https://aws.amazon.com/rds/aurora/pricing/
upvoted 1 times
Selected Answer: B
Aurora Global Databases offer a cost-effective way to replicate data to a secondary region for disaster recovery. By removing the secondary DB
instance after setup, you only pay for storage and minimal compute resources.
upvoted 2 times
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database.advantages
upvoted 1 times
Selected Answer: D
B is more cost-effective; however, because this is DR, when the Region fails you still need a DB instance to fail over to, and setting up a DB from a snapshot at
the time of failure would be risky => D is the answer
upvoted 3 times
"Achieve cost-effective multi-Region resiliency with Amazon Aurora Global Database headless clusters" is exactly the topic here. "A headless
secondary Amazon Aurora database cluster is one without a database instance. This type of configuration can lower expenses for an Aurora global
database."
https://aws.amazon.com/blogs/database/achieve-cost-effective-multi-region-resiliency-with-amazon-aurora-global-database-headless-clusters/
upvoted 6 times
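A rough boto3 sketch of the headless pattern described in that blog post, assuming hypothetical cluster identifiers and Regions: the existing cluster is promoted into a global database, and a secondary cluster is attached in the DR Region without creating any DB instance in it.

```python
import boto3

# Promote the existing cluster into a global database (primary Region).
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="saas-global",   # hypothetical identifier
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:saas-primary",
)

# Attach a secondary cluster in the DR Region. Not creating any DB instance in it
# leaves a "headless" secondary: storage is replicated, but no compute is billed.
rds_secondary = boto3.client("rds", region_name="us-west-2")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="saas-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="saas-global",
)
```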
Set up an Aurora global database for the DB cluster. Specify a minimum of one DB instance in the secondary Region
upvoted 1 times
D: With Amazon Aurora Global Database, you pay for replicated write I/Os between the primary Region and each secondary Region (in this case 1)
Not A because it achieves the same, would be equally costly and adds overhead.
upvoted 3 times
CCCCCC
upvoted 3 times
Selected Answer: D
I think Amazon is looking for D here. I don't think A is intended, because that would require knowledge of MySQL, which isn't what they are testing
us on. Not option C, because the question states large volume; if the volume were low, then DMS would be better. This is not a good
question.
upvoted 3 times
Selected Answer: D
A company has a custom application with embedded credentials that retrieves information from an Amazon RDS MySQL DB instance.
Management says the application must be made more secure with the least amount of programming effort.
A. Use AWS Key Management Service (AWS KMS) to create keys. Configure the application to load the database credentials from AWS KMS.
B. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the
application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secret
Manager.
C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure
the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.
D. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter
Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the
application user in the RDS for MySQL database using Parameter Store.
Correct Answer: C
Selected Answer: C
C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the
application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for
MySQL database using Secrets Manager.
https://www.examtopics.com/discussions/amazon/view/46483-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 10 times
Selected Answer: C
A: KMS is for encryption keys specifically, so this is a roundabout way of handling credentials storage.
B: too much work for rotation.
C: exactly what Secrets Manager is designed for.
D: you could do that if C weren't an option.
upvoted 1 times
Selected Answer: C
Secrets Manager can handle the rotation, so no need for Lambda to rotate the keys.
upvoted 1 times
If you need to store DB credentials, use AWS Secrets Manager. Systems Manager Parameter Store does not provide native rotation.
upvoted 1 times
Selected Answer: C
https://aws.amazon.com/blogs/security/rotate-amazon-rds-database-credentials-automatically-with-aws-secrets-manager/
upvoted 1 times
Selected Answer: C
C is a valid solution for securing the custom application with the least amount of programming effort. It involves creating credentials on the RDS
for MySQL database for the application user and storing them in AWS Secrets Manager. The application can then be configured to load the
database credentials from Secrets Manager. Additionally, the solution includes setting up a credentials rotation schedule for the application user in
the RDS for MySQL database using Secrets Manager, which will automatically rotate the credentials at a specified interval without requiring any
programming effort.
upvoted 3 times
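On the application side, retrieving the rotated credentials at connection time is a small change. A minimal boto3 sketch, with a hypothetical secret name and key layout:

```python
import json
import boto3

secrets = boto3.client("secretsmanager", region_name="us-east-1")

def get_db_credentials(secret_id: str) -> dict:
    """Fetch the current RDS credentials from Secrets Manager at connect time."""
    response = secrets.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

creds = get_db_credentials("prod/rds-mysql/app-user")   # hypothetical secret name
# creds typically contains keys such as "username", "password", "host", "port"
```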
Selected Answer: C
https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html
upvoted 2 times
A media company hosts its website on AWS. The website application’s architecture includes a fleet of Amazon EC2 instances behind an
Application Load Balancer (ALB) and a database that is hosted on Amazon Aurora. The company’s cybersecurity team reports that the application
A. Use AWS WAF in front of the ALB. Associate the appropriate web ACLs with AWS WAF.
B. Create an ALB listener rule to reply to SQL injections with a fixed response.
C. Subscribe to AWS Shield Advanced to block all SQL injection attempts automatically.
Correct Answer: A
Selected Answer: A
A. Use AWS WAF in front of the ALB. Associate the appropriate web ACLs with AWS WAF.
Answer - A
https://aws.amazon.com/premiumsupport/knowledge-center/waf-block-common-attacks/#:~:text=To%20protect%20your%20applications%20against,%2C%20query%20string%2C%20or%20URI.
-----------------------------------------------------------------------------------------------------------------------
Protect against SQL injection and cross-site scripting
To protect your applications against SQL injection and cross-site scripting (XSS) attacks, use the built-in SQL injection and cross-site scripting
engines. Remember that attacks can be performed on different parts of the HTTP request, such as the HTTP header, query string, or URI. Configure
the AWS WAF rules to inspect different parts of the HTTP request against the built-in mitigation engines.
upvoted 7 times
Selected Answer: A
SQL Injection - AWS WAF
DDoS - AWS Shield
upvoted 1 times
Selected Answer: A
AWS WAF is a managed service that protects web applications from common web exploits that could affect application availability, compromise
security, or consume excessive resources. AWS WAF enables customers to create custom rules that block common attack patterns, such as SQL
injection attacks.
By using AWS WAF in front of the ALB and associating the appropriate web ACLs with AWS WAF, the company can protect its website application
from SQL injection attacks. AWS WAF will inspect incoming traffic to the website application and block requests that match the defined SQL
injection patterns in the web ACLs. This will help to prevent SQL injection attacks from reaching the application, thereby improving the overall
security posture of the application.
upvoted 2 times
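To make option A concrete, here is a hedged boto3 (wafv2) sketch that creates a regional web ACL with the AWS managed SQL injection rule group and associates it with the ALB. The web ACL name and ALB ARN are hypothetical.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Create a regional web ACL that applies the AWS managed SQL injection rule group.
acl = wafv2.create_web_acl(
    Name="webapp-sqli-protection",        # hypothetical name
    Scope="REGIONAL",                     # REGIONAL scope is required for ALB association
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "AWS-SQLiRuleSet",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "SQLiRuleSet",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "webapp-sqli-protection",
    },
)

# Attach the web ACL to the Application Load Balancer (ALB ARN is hypothetical).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/webapp/abc123",
)
```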
A company has an Amazon S3 data lake that is governed by AWS Lake Formation. The company wants to create a visualization in Amazon
QuickSight by joining the data in the data lake with operational data that is stored in an Amazon Aurora MySQL database. The company wants to
enforce column-level authorization so that the company’s marketing team can access only a subset of columns in the database.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon EMR to ingest the data directly from the database to the QuickSight SPICE engine. Include only the required columns.
B. Use AWS Glue Studio to ingest the data from the database to the S3 data lake. Attach an IAM policy to the QuickSight users to enforce
C. Use AWS Glue Elastic Views to create a materialized view for the database in Amazon S3. Create an S3 bucket policy to enforce column-
level access control for the QuickSight users. Use Amazon S3 as the data source in QuickSight.
D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level
access control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
Correct Answer: D
Selected Answer: D
This solution leverages AWS Lake Formation to ingest data from the Aurora MySQL database into the S3 data lake, while enforcing column-level
access control for QuickSight users. Lake Formation can be used to create and manage the data lake's metadata and enforce security and
governance policies, including column-level access control. This solution then uses Amazon Athena as the data source in QuickSight to query the
data in the S3 data lake. This solution minimizes operational overhead by leveraging AWS services to manage and secure the data, and by using a
standard query service (Amazon Athena) to provide a SQL interface to the data.
upvoted 12 times
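The column-level control in option D boils down to a Lake Formation grant scoped to specific columns. A minimal boto3 sketch, with hypothetical role, database, table, and column names:

```python
import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")

# Grant the marketing team SELECT on only the columns they are allowed to see.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/marketing-analysts"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "operational_db",
            "Name": "customers",
            "ColumnNames": ["customer_id", "segment", "signup_date"],
        }
    },
    Permissions=["SELECT"],
)
```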
Answer - D
https://aws.amazon.com/blogs/big-data/enforce-column-level-authorization-with-amazon-quicksight-and-aws-lake-formation/
upvoted 9 times
Selected Answer: D
https://docs.aws.amazon.com/lake-formation/latest/dg/workflows-about.html
upvoted 1 times
Use a Lake Formation blueprint to ingest data from the Aurora database into the S3 data lake
Leverage Lake Formation to enforce column-level access control for the marketing team
Use Amazon Athena as the data source in QuickSight
The key points:
Using a Lake Formation blueprint to ingest the data from the database to the S3 data lake, using Lake Formation to enforce column-level access
control for the QuickSight users, and using Amazon Athena as the data source in QuickSight. This solution requires the least operational overhead
as it utilizes the features provided by AWS Lake Formation to enforce column-level authorization, which simplifies the process and reduces the
need for additional configuration and maintenance.
upvoted 4 times
Selected Answer: D
D. Use a Lake Formation blueprint to ingest the data from the database to the S3 data lake. Use Lake Formation to enforce column-level access
control for the QuickSight users. Use Amazon Athena as the data source in QuickSight.
https://www.examtopics.com/discussions/amazon/view/80865-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #342 Topic 1
A transaction processing company has weekly scripted batch jobs that run on Amazon EC2 instances. The EC2 instances are in an Auto Scaling
group. The number of transactions can vary, but the baseline CPU utilization that is noted on each run is at least 60%. The company needs to provision capacity 30 minutes before the batch jobs run.
Currently, engineers complete this task by manually modifying the Auto Scaling group parameters. The company does not have the resources to
analyze the required capacity trends for the Auto Scaling group counts. The company needs an automated way to modify the Auto Scaling group’s
desired capacity.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a dynamic scaling policy for the Auto Scaling group. Configure the policy to scale based on the CPU utilization metric. Set the target
B. Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum
capacity. Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.
C. Create a predictive scaling policy for the Auto Scaling group. Configure the policy to scale based on forecast. Set the scaling metric to CPU
utilization. Set the target value for the metric to 60%. In the policy, set the instances to pre-launch 30 minutes before the jobs run.
D. Create an Amazon EventBridge event to invoke an AWS Lambda function when the CPU utilization metric value for the Auto Scaling group
reaches 60%. Configure the Lambda function to increase the Auto Scaling group’s desired capacity and maximum capacity by 20%.
Correct Answer: C
Selected Answer: C
B is NOT correct. The question says "The company does not have the resources to analyze the required capacity trends for the Auto Scaling group
counts."
Answer B says "Set the appropriate desired capacity, minimum capacity, and maximum capacity".
How can someone set the desired capacity if they have no resources to analyze the required capacity?
Read carefully, Amigo.
upvoted 19 times
Selected Answer: B
A scheduled scaling policy allows you to set up specific times for your Auto Scaling group to scale out or scale in. By creating a scheduled scaling
policy for the Auto Scaling group, you can set the appropriate desired capacity, minimum capacity, and maximum capacity, and set the recurrence
to weekly. You can then set the start time to 30 minutes before the batch jobs run, ensuring that the required capacity is provisioned before the
jobs run.
Option C, creating a predictive scaling policy for the Auto Scaling group, is not necessary in this scenario since the company does not have the
resources to analyze the required capacity trends for the Auto Scaling group counts. This would require analyzing the required capacity trends for
the Auto Scaling group counts to determine the appropriate scaling policy.
upvoted 5 times
(typo above) C is correct..
upvoted 1 times
Selected Answer: C
B or C.
I think C because the company needs an automated way to modify the autoscaling desired capacity
upvoted 1 times
Selected Answer: C
C per https://docs.aws.amazon.com/autoscaling/ec2/userguide/predictive-scaling-create-policy.html.
B is out because it wants the company to 'set the desired/minimum/maximum capacity' but "the company does not have the resources to analyze
the required capacity".
upvoted 5 times
From GPT4:
Among the provided options, creating a scheduled scaling policy (Option B) is the most direct and efficient way to ensure that the necessary capacity
is provisioned 30 minutes before the weekly batch jobs run, with the least operational overhead. Here's a breakdown of Option B:
B. Create a scheduled scaling policy for the Auto Scaling group. Set the appropriate desired capacity, minimum capacity, and maximum capacity.
Set the recurrence to weekly. Set the start time to 30 minutes before the batch jobs run.
Scheduled scaling allows you to change the desired capacity of your Auto Scaling group based on a schedule. In this case, setting the recurrence to
weekly and adjusting the start time to 30 minutes before the batch jobs run will ensure that the necessary capacity is available when needed,
without requiring manual intervention.
upvoted 5 times
Upon reviewing the question again, it appears that the requirements emphasize the need to provision capacity 30 minutes before the batch
jobs run and the company's constraint of not having resources to analyze capacity trends. In this context, the most suitable solution is C.
Predictive Scaling can use historical data to forecast future capacity needs.
Configuring the policy to scale based on CPU utilization with a target value of 60% aligns with the baseline CPU utilization mentioned in the
scenario.
Setting instances to pre-launch 30 minutes before the jobs run provides the desired capacity just in time.
upvoted 1 times
Predictive scaling: increases the number of EC2 instances in your Auto Scaling group in advance of daily and weekly patterns in traffic flows. If you
have regular patterns of traffic increases, use predictive scaling to help you scale faster by launching capacity in advance of forecasted load. You
don't have to spend time reviewing your application's load patterns and trying to schedule the right amount of capacity using scheduled scaling.
Predictive scaling uses machine learning to predict capacity requirements based on historical data from CloudWatch. The machine learning
algorithm consumes the available historical data and calculates capacity that best fits the historical load pattern, and then continuously learns
based on new data to make future forecasts more accurate.
upvoted 1 times
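A hedged boto3 sketch of such a predictive scaling policy, matching option C's 60% CPU target and 30-minute pre-launch window; the Auto Scaling group and policy names are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Predictive scaling on CPU at a 60% target, pre-launching instances 30 minutes
# (1800 seconds) ahead of the forecasted load.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-asg",
    PolicyName="weekly-batch-predictive",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 60.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ASGCPUUtilization"
                },
            }
        ],
        "Mode": "ForecastAndScale",
        "SchedulingBufferTime": 1800,
    },
)
```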
Selected Answer: C
C is correct!
upvoted 1 times
Selected Answer: C
If the baseline CPU utilization is 60%, that's enough information to predict some aspect of future usage. So the key word is "predictive":
judging by past usage.
upvoted 1 times
Selected Answer: B
B.
You can make a vague estimate based on the resources used; you don't need machine-learning models for that. You only need
common sense.
upvoted 3 times
Selected Answer: C
Use predictive scaling to increase the number of EC2 instances in your Auto Scaling group in advance of daily and weekly patterns in traffic flows.
Cyclical traffic, such as high use of resources during regular business hours and low use of resources during evenings and weekends
Recurring on-and-off workload patterns, such as batch processing, testing, or periodic data analysis
Applications that take a long time to initialize, causing a noticeable latency impact on application performance during scale-out events
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 1 times
The second part of the question invalidates option B: they don't have the resources to work out the required capacity and need something to do it for them,
therefore C.
upvoted 1 times
Selected Answer: C
In general, if you have regular patterns of traffic increases and applications that take a long time to initialize, you should consider using predictive
scaling. Predictive scaling can help you scale faster by launching capacity in advance of forecasted load, compared to using only dynamic scaling,
which is reactive in nature.
upvoted 2 times
Selected Answer: C
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-predictive-scaling.html
upvoted 3 times
Question #343 Topic 1
A solutions architect is designing a company’s disaster recovery (DR) architecture. The company has a MySQL database that runs on an Amazon
EC2 instance in a private subnet with scheduled backup. The DR design needs to include multiple AWS Regions.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the MySQL database to multiple EC2 instances. Configure a standby EC2 instance in the DR Region. Turn on replication.
B. Migrate the MySQL database to Amazon RDS. Use a Multi-AZ deployment. Turn on read replication for the primary DB instance in the
C. Migrate the MySQL database to an Amazon Aurora global database. Host the primary DB cluster in the primary Region. Host the secondary
D. Store the scheduled backup of the MySQL database in an Amazon S3 bucket that is configured for S3 Cross-Region Replication (CRR). Use
Correct Answer: C
Selected Answer: C
C: Migrate MySQL database to an Amazon Aurora global database is the best solution because it requires minimal operational overhead. Aurora is
a managed service that provides automatic failover, so standby instances do not need to be manually configured. The primary DB cluster can be
hosted in the primary Region, and the secondary DB cluster can be hosted in the DR Region. This approach ensures that the data is always available
and up-to-date in multiple Regions, without requiring significant manual intervention.
upvoted 7 times
Hello friends, the question requires that the DR design include multiple AWS Regions, so how can the answer be B? The DR there is across AZs, not
different Regions, so I would go with D.
upvoted 1 times
Amazon Aurora global database can span and replicate DB servers across multiple AWS Regions, and it is also compatible with MySQL.
upvoted 1 times
Selected Answer: C
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
upvoted 1 times
A company has a Java application that uses Amazon Simple Queue Service (Amazon SQS) to parse messages. The application cannot parse
messages that are larger than 256 KB in size. The company wants to implement a solution to give the application the ability to parse messages as
large as 50 MB.
Which solution will meet these requirements with the FEWEST changes to the code?
A. Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.
B. Use Amazon EventBridge to post large messages from the application instead of Amazon SQS.
C. Change the limit in Amazon SQS to handle messages that are larger than 256 KB.
D. Store messages that are larger than 256 KB in Amazon Elastic File System (Amazon EFS). Configure Amazon SQS to reference this location
in the messages.
Correct Answer: A
Selected Answer: A
A. Use the Amazon SQS Extended Client Library for Java to host messages that are larger than 256 KB in Amazon S3.
Amazon SQS has a limit of 256 KB for the size of messages. To handle messages larger than 256 KB, the Amazon SQS Extended Client Library for
Java can be used. This library allows messages larger than 256 KB to be stored in Amazon S3 and provides a way to retrieve and process them.
Using this solution, the application code can remain largely unchanged while still being able to process messages up to 50 MB in size.
upvoted 15 times
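The Extended Client Library is Java-only, but the pattern it implements (a claim-check: store the big payload in S3 and send only a pointer through SQS) is easy to see in plain boto3. This sketch illustrates that pattern and is not the library's actual API; the bucket name and queue URL are hypothetical.

```python
import json
import uuid
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

BUCKET = "large-payload-bucket"   # hypothetical bucket for oversized payloads
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/parser-queue"  # hypothetical

def send_large_message(payload: bytes) -> None:
    """Store the payload in S3 and send only a small pointer through SQS."""
    key = f"payloads/{uuid.uuid4()}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"s3_bucket": BUCKET, "s3_key": key}),
    )

def receive_large_message() -> bytes:
    """Read the pointer from SQS and fetch the real payload from S3.

    Assumes at least one message is available on the queue.
    """
    messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1)
    pointer = json.loads(messages["Messages"][0]["Body"])
    obj = s3.get_object(Bucket=pointer["s3_bucket"], Key=pointer["s3_key"])
    return obj["Body"].read()
```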
A
For messages > 256 KB, use Amazon SQS Extended Client Library for Java
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/quotas-messages.html
upvoted 6 times
Selected Answer: A
To send messages larger than 256 KiB, you can use the Amazon SQS Extended Client Library for Java...
upvoted 1 times
Selected Answer: A
The Amazon SQS Extended Client Library for Java enables you to manage Amazon SQS message payloads with Amazon S3. This is especially useful
for storing and retrieving messages with a message payload size greater than the current SQS limit of 256 KB, up to a maximum of 2 GB.
upvoted 3 times
The SQS Extended Client Library enables storing large payloads in S3 while referenced via SQS. The application code can stay almost entirely
unchanged - it sends/receives SQS messages normally. The library handles transparently routing the large payloads to S3 behind the scenes
upvoted 1 times
Selected Answer: A
Quote "The Amazon SQS Extended Client Library for Java enables you to manage Amazon SQS message payloads with Amazon S3." and "An
extension to the Amazon SQS client that enables sending and receiving messages up to 2GB via Amazon S3." at
https://github.com/awslabs/amazon-sqs-java-extended-client-lib
upvoted 1 times
Selected Answer: A
To handle messages larger than 256 KB, the Amazon SQS Extended Client Library for Java can be used.
upvoted 1 times
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-s3-messages.html
upvoted 1 times
https://github.com/awslabs/amazon-sqs-java-extended-client-lib
upvoted 3 times
Selected Answer: A
To send messages larger than 256 KiB, you can use the Amazon SQS Extended Client Library for Java. This library allows you to send an Amazon
SQS message that contains a reference to a message payload in Amazon S3. The maximum payload size is 2 GB.
upvoted 4 times
Question #345 Topic 1
A company wants to restrict access to the content of one of its main web applications and to protect the content by using authorization
techniques available on AWS. The company wants to implement a serverless architecture and an authentication solution for fewer than 100 users.
The solution needs to integrate with the main web application and serve web content globally. The solution must also scale as the company's user base grows.
A. Use Amazon Cognito for authentication. Use Lambda@Edge for authorization. Use Amazon CloudFront to serve the web application
globally.
B. Use AWS Directory Service for Microsoft Active Directory for authentication. Use AWS Lambda for authorization. Use an Application Load
C. Use Amazon Cognito for authentication. Use AWS Lambda for authorization. Use Amazon S3 Transfer Acceleration to serve the web
application globally.
D. Use AWS Directory Service for Microsoft Active Directory for authentication. Use Lambda@Edge for authorization. Use AWS Elastic
Correct Answer: A
Selected Answer: A
CloudFront=globally
Lambda@edge = Authorization/ Latency
Cognito=Authentication for Web apps
upvoted 13 times
Selected Answer: A
https://aws.amazon.com/blogs/networking-and-content-delivery/external-server-authorization-with-lambdaedge/
upvoted 1 times
Selected Answer: A
Use Amazon Cognito for authentication. Use Lambda@Edge for authorization. Use Amazon CloudFront to serve the web application globally
upvoted 2 times
Selected Answer: A
Amazon Cognito is a serverless authentication service that can be used to easily add user sign-up and authentication to web and mobile apps. It is
a good choice for this scenario because it is scalable and can handle a small number of users without any additional costs.
Lambda@Edge is a serverless compute service that can be used to run code at the edge of the AWS network. It is a good choice for this scenario
because it can be used to perform authorization checks at the edge, which can improve the login latency.
Amazon CloudFront is a content delivery network (CDN) that can be used to serve web content globally. It is a good choice for this scenario
because it can cache web content closer to users, which can improve the performance of the web application.
upvoted 3 times
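A minimal sketch of what the Lambda@Edge authorization piece can look like as a CloudFront viewer-request handler. It only checks that an Authorization header is present; a real deployment would validate the Cognito-issued JWT (signature, expiry, audience) instead.

```python
# Minimal Lambda@Edge viewer-request sketch: reject requests without an Authorization
# header. Token validation against Cognito is intentionally omitted here.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    if "authorization" not in headers:
        # Short-circuit at the edge with a 401 response.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "headers": {
                "www-authenticate": [{"key": "WWW-Authenticate", "value": "Bearer"}]
            },
        }

    # Authorized: let CloudFront continue to the origin.
    return request
```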
A is perfect.
upvoted 1 times
Selected Answer: A
Amazon CloudFront is a global content delivery network (CDN) service that can securely deliver web content, videos, and APIs at scale. It integrates
with Cognito for authentication and with Lambda@Edge for authorization, making it an ideal choice for serving web content globally.
Lambda@Edge is a service that lets you run AWS Lambda functions globally closer to users, providing lower latency and faster response times. It
can also handle authorization logic at the edge to secure content in CloudFront. For this scenario, Lambda@Edge can provide authorization for the
web application while leveraging the low-latency benefit of running at the edge.
upvoted 2 times
Selected Answer: A
A company has an aging network-attached storage (NAS) array in its data center. The NAS array presents SMB shares and NFS shares to client
workstations. The company does not want to purchase a new NAS array. The company also does not want to incur the cost of renewing the NAS
array’s support contract. Some of the data is accessed frequently, but much of the data is inactive.
A solutions architect needs to implement a solution that migrates the data to Amazon S3, uses S3 Lifecycle policies, and maintains the same look
and feel for the client workstations. The solutions architect has identified AWS Storage Gateway as part of the solution.
Which type of storage gateway should the solutions architect provision to meet these requirements?
A. Volume Gateway
B. Tape Gateway
Correct Answer: D
Selected Answer: D
Amazon S3 File Gateway provides on-premises applications with access to virtually unlimited cloud storage using NFS and SMB file interfaces. It
seamlessly moves frequently accessed data to a low-latency cache while storing colder data in Amazon S3, using S3 Lifecycle policies to transition
data between storage classes over time.
In this case, the company's aging NAS array can be replaced with an Amazon S3 File Gateway that presents the same NFS and SMB shares to the
client workstations. The data can then be migrated to Amazon S3 and managed using S3 Lifecycle policies
upvoted 15 times
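The S3 Lifecycle part of the solution is a plain lifecycle configuration on the bucket behind the File Gateway. A hedged boto3 sketch with a hypothetical bucket name and illustrative transition ages:

```python
import boto3

s3 = boto3.client("s3")

# Move objects that have not been touched for 90 days to Glacier Flexible Retrieval,
# and to Deep Archive after a year. Day counts are illustrative only.
s3.put_bucket_lifecycle_configuration(
    Bucket="file-gateway-share-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-inactive-files",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```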
Selected Answer: D
Amazon S3 File Gateway provides a file interface to objects stored in S3. It can be used for a file-based interface with S3, which allows the company
to migrate their NAS array data to S3 while maintaining the same look and feel for client workstations. Amazon S3 File Gateway supports SMB and
NFS protocols, which will allow clients to continue to access the data using these protocols. Additionally, Amazon S3 Lifecycle policies can be used
to automate the movement of data to lower-cost storage tiers, reducing the storage cost of inactive data.
upvoted 6 times
Selected Answer: D
The Amazon S3 File Gateway enables you to store and retrieve objects in Amazon Simple Storage Service (S3) using file protocols such as Network
File System (NFS) and Server Message Block (SMB).
upvoted 3 times
Selected Answer: D
It provides an easy way to lift-and-shift file data from the existing NAS to Amazon S3. The S3 File Gateway presents SMB and NFS file shares that
client workstations can access just like the NAS shares.
Behind the scenes, it moves the file data to S3 storage, storing it durably and cost-effectively.
S3 Lifecycle policies can be used to transition less frequently accessed data to lower-cost S3 storage tiers like S3 Glacier.
From the client workstation perspective, access to files feels seamless and unchanged after migration to S3. The S3 File Gateway handles the
underlying data transfers.
It is a simple, low-cost gateway option tailored for basic file share migration use cases.
upvoted 3 times
Selected Answer: D
- Volume Gateway: https://aws.amazon.com/storagegateway/volume/ (Remove A, related iSCSI)
- Why not choose C? Because we need to work with Amazon S3. (Answer D is the correct answer.) https://aws.amazon.com/storagegateway/file/s3/
upvoted 3 times
Selected Answer: D
https://aws.amazon.com/blogs/storage/how-to-create-smb-file-shares-with-aws-storage-gateway-using-hyper-v/
upvoted 2 times
Selected Answer: D
https://aws.amazon.com/about-aws/whats-new/2018/06/aws-storage-gateway-adds-smb-support-to-store-objects-in-amazon-s3/
upvoted 3 times
Question #347 Topic 1
A company has an application that is running on Amazon EC2 instances. A solutions architect has standardized the company on a particular
instance family and various instance sizes based on the current needs of the company.
The company wants to maximize cost savings for the application over the next 3 years. The company needs to be able to change the instance
family and sizes in the next 6 months based on application popularity and usage.
Correct Answer: A
Selected Answer: A
Read carefully, guys: they need to be able to change FAMILY, and although an EC2 Instance Savings Plan has a higher discount, it's clearly documented as not
allowed >
EC2 Instance Savings Plans provide savings up to 72 percent off On-Demand, in exchange for a commitment to a specific instance family in a
chosen AWS Region (for example, M5 in Virginia). These plans automatically apply to usage regardless of size (for example, m5.xlarge, m5.2xlarge,
etc.), OS (for example, Windows, Linux, etc.), and tenancy (Host, Dedicated, Default) within the specified family in a Region.
upvoted 20 times
Selected Answer: A
https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-reservation-models/savings-plans.html
Compute Savings Plans provide the most flexibility and help to reduce your costs by up to 66% (just like Convertible RIs). These plans automatically
apply to EC2 instance usage regardless of instance family...
EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% (just like Standard RIs) in exchange for commitment to usage of
individual instance families
Instance Savings "locks" you in that instance family which is not desired by the company hence A is the best plan as they can change the instance
family anytime
upvoted 7 times
Selected Answer: A
EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% in exchange for commitment to usage of individual instance
families in a region (e.g. M5 usage in N. Virginia). This automatically reduces your cost on the selected instance family in that region regardless of
AZ, size, OS or tenancy. ***EC2 Instance Savings Plans give you the flexibility to change your usage between instances within a family in that
region.*** For example, you can move from c5.xlarge running Windows to c5.2xlarge running Linux and automatically benefit from the Savings
Plans prices.
https://aws.amazon.com/savingsplans/faq/#:~:text=EC2%20Instance%20Savings%20Plans%20give,from%20the%20Savings%20Plans%20prices.
upvoted 3 times
B does not allow changing the instance family, despite all the ChatGPT-based answers claiming the opposite
upvoted 2 times
Selected Answer: A
While EC2 Instance Savings Plans also provide cost savings over On-Demand pricing, they offer less flexibility in terms of changing instance
families: they provide a discount in exchange for a commitment to a specific instance family in a chosen Region.
upvoted 1 times
EC2 Instance Savings Plans save the most, and they are enough for the required flexibility.
EC2 Instance Savings Plans provide the lowest prices, offering savings up to 72% (just like Standard RIs) in exchange for commitment to usage of
individual instance families in a Region (for example, M5 usage in N. Virginia). This automatically reduces your cost on the selected instance family
in that region regardless of AZ, size, operating system, or tenancy. EC2 Instance Savings Plans give you the flexibility to change your usage between
instances within a family in that Region. For example, you can move from c5.xlarge running Windows to c5.2xlarge running Linux and automatically
benefit from the Savings Plans prices.
upvoted 1 times
Selected Answer: A
https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-reservation-models/savings-plans.html
upvoted 1 times
Selected Answer: B
The most cost-effective solution that meets the company's requirements would be B. EC2 Instance Savings Plan.
EC2 Instance Savings Plans provide significant cost savings, allowing the company to commit to a consistent amount of usage (measured in $/hour)
for a 1- or 3-year term, and in return receive a discount on the hourly rate for the instances that match the attributes of the plan.
With EC2 Instance Savings Plans, the company can benefit from the flexibility to change the instance family and sizes over the next 3 years, which
aligns with their requirement to adjust based on application popularity and usage.
This option provides the best balance of cost savings and flexibility, making it the most suitable choice for the company's needs.
upvoted 2 times
"EC2 Instance Savings Plans provide savings up to 72 percent off On-Demand, in exchange for a commitment to a specific instance family (!) in
chosen AWS Region ... With an EC2 Instance Savings Plan, you can change your instance size within the instance family (!)".
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
upvoted 3 times
Selected Answer: A
D is not right. D (Standard Reserved Instances) would need to be Convertible Reserved Instances if you need additional flexibility, such as the ability to use
different instance families or operating systems.
upvoted 2 times
Selected Answer: B
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 1 times
Selected Answer: A
Savings Plans offer a flexible pricing model that provides savings on AWS usage. You can save up to 72 percent on your AWS compute workloads.
Compute Savings Plans provide lower prices on Amazon EC2 instance usage regardless of instance family, size, OS, tenancy, or AWS Region. This
also applies to AWS Fargate and AWS Lambda usage. SageMaker Savings Plans provide you with lower prices for your Amazon SageMaker instance
usage, regardless of your instance family, size, component, or AWS Region.
https://docs.aws.amazon.com/savingsplans/latest/userguide/what-is-savings-plans.html
upvoted 2 times
A company collects data from a large number of participants who use wearable devices. The company stores the data in an Amazon DynamoDB
table and uses applications to analyze the data. The data workload is constant and predictable. The company wants to stay at or below its
A. Use provisioned mode and DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA). Reserve capacity for the forecasted workload.
B. Use provisioned mode. Specify the read capacity units (RCUs) and write capacity units (WCUs).
C. Use on-demand mode. Set the read capacity units (RCUs) and write capacity units (WCUs) high enough to accommodate changes in the
workload.
D. Use on-demand mode. Specify the read capacity units (RCUs) and write capacity units (WCUs) with reserved capacity.
Correct Answer: B
Selected Answer: B
I think it is not possible to set Read Capacity Units(RCU)/Write Capacity Units(WCU) in on-demand mode.
upvoted 5 times
Selected Answer: B
C and D are impossible because you don't set or specify RCUs and WCUs in on-demand mode.
A is wrong because there is no indication of "infrequent access", and "the data workload is constant", so there is no difference between the current and
the "forecasted" workload.
upvoted 2 times
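For reference, switching a table to provisioned mode with fixed RCUs/WCUs is a single update_table call. The table name and capacity numbers below are illustrative, not taken from the question.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Pin read/write capacity to the steady, predictable workload by using provisioned mode.
dynamodb.update_table(
    TableName="wearable-telemetry",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,
        "WriteCapacityUnits": 500,
    },
)
```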
The use case is not for Standard-IA which is described here: https://aws.amazon.com/dynamodb/standard-ia/
=> Option B
upvoted 3 times
I rule out A because of this 'Standard-Infrequent Access ', clearly the company uses applications to analyze the data.
The data workload is constant and predictable making provisioned mode the best option.
upvoted 1 times
Selected Answer: A
Option D does not actually allow reserving capacity with on-demand mode.
So option A leverages provisioned mode, Standard-IA, and reserved capacity to meet the requirements in a cost-optimal way.
upvoted 1 times
A is correct!
upvoted 1 times
Sorry, A will not work, since Reserved Capacity can only be used with DynamoDB Standard table class. So, B is right for this case.
upvoted 2 times
Selected Answer: B
Predictable..
upvoted 4 times
"With provisioned capacity you pay for the provision of read and write capacity units for your DynamoDB tables. Whereas with DynamoDB on-
demand you pay per request for the data reads and writes that your application performs on your tables."
upvoted 1 times
Selected Answer: B
The data workload is constant and predictable, so on-demand mode isn't the right fit.
DynamoDB Standard-IA is not necessary in this context.
upvoted 1 times
The problem with (A) is: “Standard-Infrequent Access“. In the question, they say the company has to analyze the Data.
That’s why the Correct answer is (B)
upvoted 3 times
Selected Answer: A
workload is constant
upvoted 2 times
Selected Answer: B
A company stores confidential data in an Amazon Aurora PostgreSQL database in the ap-southeast-3 Region. The database is encrypted with an
AWS Key Management Service (AWS KMS) customer managed key. The company was recently acquired and must securely share a backup of the database with the acquiring company's AWS account, which is in the same Region.
A. Create a database snapshot. Copy the snapshot to a new unencrypted snapshot. Share the new snapshot with the acquiring company’s
AWS account.
B. Create a database snapshot. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring company’s AWS account.
C. Create a database snapshot that uses a different AWS managed KMS key. Add the acquiring company’s AWS account to the KMS key alias.
D. Create a database snapshot. Download the database snapshot. Upload the database snapshot to an Amazon S3 bucket. Update the S3
bucket policy to allow access from the acquiring company’s AWS account.
Correct Answer: B
Selected Answer: B
A. - "So let me get this straight, with the current company the data is protected and encrypted. However, for the acquiring company the data is
unencrypted? How is that fair?"
C - Wouldn't recommend this option because using a different AWS managed KMS key will not allow the acquiring company's AWS account to
access the encrypted data.
D. - Don't risk it for a biscuit and get fired!!!! - by downloading the database snapshot and uploading it to an Amazon S3 bucket. This will increase
the risk of data leakage or loss of confidentiality during the transfer process.
B - CORRECT
upvoted 13 times
I believe the reason why option C is not the correct answer is that adding the acquiring company's AWS account to the KMS key alias doesn't
directly control access to the encrypted data. KMS key aliases are simply alternative names for KMS keys and do not affect access control. Access to
encrypted data is governed by KMS key policies, which define who can use the key for encryption and decryption.
upvoted 1 times
Selected Answer: B
Create a database snapshot. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring company’s
AWS account.
upvoted 1 times
B. Create a database snapshot. Add the acquiring company’s AWS account to the KMS key policy. Share the snapshot with the acquiring company’s AWS account.
upvoted 1 times
To securely share a backup of the database with the acquiring company's AWS account in the same Region, a solutions architect should create a
database snapshot, add the acquiring company's AWS account to the AWS KMS key policy, and share the snapshot with the acquiring company's
AWS account.
Option A, creating an unencrypted snapshot, is not recommended as it will compromise the confidentiality of the data. Option C, creating a
snapshot that uses a different AWS managed KMS key, does not provide any additional security and will unnecessarily complicate the solution.
Option D, downloading the database snapshot and uploading it to an S3 bucket, is not secure as it can expose the data during transit.
Therefore, the correct option is B: Create a database snapshot. Add the acquiring company's AWS account to the KMS key policy. Share the
snapshot with the acquiring company's AWS account.
upvoted 1 times
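As a rough sketch of option B (the account ID, key ID, and snapshot identifier below are placeholders, not values from the question), the acquiring account is added to the customer managed key policy and the manual cluster snapshot is then shared:

import boto3, json

kms = boto3.client("kms", region_name="ap-southeast-3")
rds = boto3.client("rds", region_name="ap-southeast-3")

ACQUIRER = "111122223333"  # placeholder account ID

# Allow the acquiring account to use the customer managed key for decryption/copy.
policy = json.loads(kms.get_key_policy(KeyId="key-id", PolicyName="default")["Policy"])
policy["Statement"].append({
    "Sid": "AllowAcquirerUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{ACQUIRER}:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
})
kms.put_key_policy(KeyId="key-id", PolicyName="default", Policy=json.dumps(policy))

# Share the encrypted Aurora cluster snapshot with the acquiring account.
rds.modify_db_cluster_snapshot_attribute(
    DBClusterSnapshotIdentifier="confidential-db-snapshot",  # placeholder
    AttributeName="restore",
    ValuesToAdd=[ACQUIRER],
)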
Selected Answer: B
Selected Answer: B
https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html
upvoted 2 times
Then:
Copy and share the DB cluster snapshot
upvoted 2 times
https://aws.amazon.com/premiumsupport/knowledge-center/aurora-share-encrypted-snapshot/
upvoted 2 times
A company uses a 100 GB Amazon RDS for Microsoft SQL Server Single-AZ DB instance in the us-east-1 Region to store customer transactions.
The company needs high availability and automatic recovery for the DB instance.
The company must also run reports on the RDS database several times a year. The report process causes transactions to take longer than usual to
post to the customers’ accounts. The company needs a solution that will improve the performance of the report process.
B. Take a snapshot of the current DB instance. Restore the snapshot to a new RDS deployment in another Availability Zone.
C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for reports to the read replica.
Correct Answer: AC
Selected Answer: AC
Create a Multi-AZ deployment, create a read replica of the DB instance in the second Availability Zone, point all requests for reports to the read
replica
upvoted 3 times
A. Modify the DB instance from a Single-AZ DB instance to a Multi-AZ deployment. This will provide high availability and automatic recovery for
the DB instance. If the primary DB instance fails, the standby DB instance will automatically become the primary DB instance. This will ensure that
the database is always available.
C. Create a read replica of the DB instance in a different Availability Zone. Point all requests for reports to the read replica. This will improve the performance of the report process by offloading the read traffic from the primary DB instance to the read replica. The read replica is an asynchronously replicated copy of the primary DB instance, so reports run against near-current data without slowing down transaction processing.
upvoted 3 times
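A minimal boto3 sketch of A + C (the instance identifiers and Availability Zone are placeholders): convert the instance to Multi-AZ and add a read replica for the reporting workload.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# A: enable Multi-AZ for high availability and automatic failover.
rds.modify_db_instance(
    DBInstanceIdentifier="customer-txn-db",  # placeholder
    MultiAZ=True,
    ApplyImmediately=True,
)

# C: create a read replica in another AZ and point report queries at its endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="customer-txn-db-reports",
    SourceDBInstanceIdentifier="customer-txn-db",
    AvailabilityZone="us-east-1b",  # placeholder
)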
Selected Answer: AC
A and C.
upvoted 2 times
Selected Answer: AC
Selected Answer: AC
https://medium.com/awesome-cloud/aws-difference-between-multi-az-and-read-replicas-in-amazon-rds-60fe848ef53a
upvoted 3 times
A company is moving its data management application to AWS. The company wants to transition to an event-driven architecture. The architecture
needs to be more distributed and to use serverless concepts while performing the different aspects of the workflow. The company also wants to
A. Build out the workflow in AWS Glue. Use AWS Glue to invoke AWS Lambda functions to process the workflow steps.
B. Build out the workflow in AWS Step Functions. Deploy the application on Amazon EC2 instances. Use Step Functions to invoke the workflow
C. Build out the workflow in Amazon EventBridge. Use EventBridge to invoke AWS Lambda functions on a schedule to process the workflow
steps.
D. Build out the workflow in AWS Step Functions. Use Step Functions to create a state machine. Use the state machine to invoke AWS Lambda functions to process the workflow steps.
Correct Answer: D
Selected Answer: D
This is why I’m voting D…..QUESTION ASKED FOR IT TO: use serverless concepts while performing the different aspects of the workflow. Is option D
utilizing Serverless concepts?
upvoted 11 times
Selected Answer: D
While considering this requirement: The architecture needs to be more distributed and to use serverless concepts while performing the different
aspects of the workflow
And checking the following link : https://aws.amazon.com/step-functions/?nc1=h_ls, Answer D is the best for this use case
upvoted 2 times
Selected Answer: D
One of the use cases for step functions is to Automate extract, transform, and load (ETL) processes.
https://aws.amazon.com/step-functions/#:~:text=for%20modern%20applications.-,Use%20cases,-Automate%20extract%2C%20transform
upvoted 1 times
Selected Answer: D
Selected Answer: D
Step Functions is based on state machines and tasks. A state machine is a workflow. A task is a state in a workflow that represents a single unit of
work that another AWS service performs. Each step in a workflow is a state.
Depending on your use case, you can have Step Functions call AWS services, such as Lambda, to perform tasks.
https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
upvoted 2 times
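A minimal Amazon States Language sketch of option D (state names, Lambda ARNs, and the role ARN are placeholders): a state machine whose Task states invoke Lambda functions for the workflow steps.

import boto3, json

sfn = boto3.client("stepfunctions")

definition = {
    "Comment": "Event-driven, serverless data management workflow (sketch)",
    "StartAt": "Validate",
    "States": {
        "Validate": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:validate",  # placeholder
            "Next": "Transform",
        },
        "Transform": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:transform",  # placeholder
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="data-management-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExecutionRole",  # placeholder
)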
Selected Answer: C
There are two main types of routers used in event-driven architectures: event buses and event topics. At AWS, we offer Amazon EventBridge to
build event buses and Amazon Simple Notification Service (SNS) to build event topics. https://aws.amazon.com/event-driven-architecture/
upvoted 1 times
Selected Answer: D
Selected Answer: D
Distributed****
upvoted 1 times
Selected Answer: C
Selected Answer: D
A company is designing the network for an online multi-player game. The game uses the UDP networking protocol and will be deployed in eight
AWS Regions. The network architecture needs to minimize latency and packet loss to give end users a high-quality gaming experience.
A. Setup a transit gateway in each Region. Create inter-Region peering attachments between each transit gateway.
B. Set up AWS Global Accelerator with UDP listeners and endpoint groups in each Region.
C. Set up Amazon CloudFront with UDP turned on. Configure an origin in each Region.
D. Set up a VPC peering mesh between each Region. Turn on UDP for each VPC.
Correct Answer: B
Selected Answer: B
Selected Answer: B
Set up AWS Global Accelerator with UDP listeners and endpoint groups in each Region.
upvoted 2 times
Selected Answer: B
Connect to up to 10 regions within the AWS global network using the AWS Global Accelerator.
upvoted 1 times
A: AWS Global Accelerator is a networking service that helps you improve the availability and performance of the applications that you offer to your global users. AWS Global Accelerator is easy to set up, configure, and manage. It provides static IP addresses that provide a fixed entry point to your applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and Availability Zones. AWS Global Accelerator always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, your user's location, and policies that you configure. You can test the performance benefits from your location with a speed comparison tool. Like other AWS services, AWS Global Accelerator is a self-service, pay-per-use offering, requiring no long-term commitments or minimum fees.
https://aws.amazon.com/global-accelerator/faqs/
upvoted 4 times
Global Accelerator supports the User Datagram Protocol (UDP) and Transmission Control Protocol (TCP), making it an excellent choice for an online
multi-player game using UDP networking protocol. By setting up Global Accelerator with UDP listeners and endpoint groups in each Region, the
network architecture can minimize latency and packet loss, giving end users a high-quality gaming experience.
upvoted 4 times
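A sketch of option B with boto3, assuming each Region fronts its game servers with a Network Load Balancer (names, the game port, and the NLB ARN are placeholders; the Global Accelerator API endpoint lives in us-west-2):

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

nlb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/0123456789abcdef"  # placeholder

acc = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 7000, "ToPort": 7000}],  # placeholder game port
)

# One endpoint group per Region, each pointing at that Region's NLB.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 100}],
)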
AWS Global Accelerator is a service that improves the availability and performance of applications with local or global users. Global Accelerator
improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more
AWS Regions. Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use
cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS
protection.
upvoted 1 times
Selected Answer: B
Selected Answer: B
Global Accelerator
upvoted 1 times
A company hosts a three-tier web application on Amazon EC2 instances in a single Availability Zone. The web application uses a self-managed
MySQL database that is hosted on an EC2 instance to store data in an Amazon Elastic Block Store (Amazon EBS) volume. The MySQL database
currently uses a 1 TB Provisioned IOPS SSD (io2) EBS volume. The company expects traffic of 1,000 IOPS for both reads and writes at peak traffic.
The company wants to minimize any disruptions, stabilize performance, and reduce costs while retaining the capacity for double the IOPS. The
company wants to move the database tier to a fully managed solution that is highly available and fault tolerant.
A. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with an io2 Block Express EBS volume.
B. Use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a General Purpose SSD (gp2) EBS volume.
D. Use two large EC2 instances to host the database in active-passive mode.
Correct Answer: B
Selected Answer: B
RDS does not support io2 or io2 Block Express. gp2 can do the required IOPS
Selected Answer: B
RDS now supports io2 but it might still be an overkill given Gp2 is enough and we are looking for the most cost effective solution.
upvoted 5 times
I tried on the portal and only gp3 and io1 are supported, as of 11 May 2023.
upvoted 3 times
The most cost-effective solution that meets the requirements is to use a Multi-AZ deployment of an Amazon RDS for MySQL DB instance with a
General Purpose SSD (gp2) EBS volume. This solution will provide high availability and fault tolerance while minimizing disruptions and stabilizing
performance. The gp2 EBS volume can handle up to 16,000 IOPS. You can also scale up to 64 TiB of storage.
Amazon RDS for MySQL provides automated backups, software patching, and automatic host replacement. It also provides Multi-AZ deployments
that automatically replicate data to a standby instance in another Availability Zone. This ensures that data is always available even in the event of a
failure.
upvoted 1 times
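A short boto3 sketch of option B (the identifier, instance class, and credentials are placeholders): a 1,000 GiB gp2 volume has a baseline of 3 IOPS per GiB, roughly 3,000 IOPS, which covers double the 1,000-IOPS peak.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",  # placeholder
    Engine="mysql",
    DBInstanceClass="db.m6g.large",  # placeholder
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # placeholder
    MultiAZ=True,  # highly available and fault tolerant
    StorageType="gp2",
    AllocatedStorage=1000,  # ~3,000 baseline IOPS at 3 IOPS/GiB
)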
Selected Answer: B
I thought the answer here is A. But when I found the link from Amazon website; as per AWS:
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1), and
magnetic (also known as standard). They differ in performance characteristics and price, which means that you can tailor your storage performance
and cost to the needs of your database workload. You can create MySQL, MariaDB, Oracle, and PostgreSQL RDS DB instances with up to 64
tebibytes (TiB) of storage. You can create SQL Server RDS DB instances with up to 16 TiB of storage. For this amount of storage, use the Provisioned
IOPS SSD and General Purpose SSD storage types.
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
Selected Answer: B
for DB instances between 1 TiB and 4 TiB, storage is striped across four Amazon EBS volumes providing burst performance of up to 12,000 IOPS.
from "https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html"
upvoted 1 times
Selected Answer: B
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1), and
magnetic (also known as standard)
B - MOST cost-effectively
upvoted 3 times
Selected Answer: B
Selected Answer: A
Amazon RDS provides three storage types: General Purpose SSD (also known as gp2 and gp3), Provisioned IOPS SSD (also known as io1)
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 1 times
Question #354 Topic 1
A company hosts a serverless application on AWS. The application uses Amazon API Gateway, AWS Lambda, and an Amazon RDS for PostgreSQL
database. The company notices an increase in application errors that result from database connection timeouts during times of peak traffic or
unpredictable traffic. The company needs a solution that reduces the application failures with the least amount of change to the code.
Correct Answer: B
Selected Answer: B
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server
and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows
applications to pool and share connections established with the database, improving database efficiency and application scalability. With RDS
Proxy, failover times for Aurora and RDS databases are reduced by up to 66%.
https://aws.amazon.com/rds/proxy/
upvoted 9 times
Selected Answer: B
A. Reduce the Lambda concurrency rate? That has nothing to do with reducing connection timeouts.
B. Enable RDS Proxy on the RDS DB instance. Correct answer
C. Resize the RDS DB instance class to accept more connections? More connections means worse performance. Therefore, not correct.
D. Migrate the database to Amazon DynamoDB with on-demand scaling? DynamoDB is a noSQL database. Not correct.
upvoted 4 times
RDS Proxy is a fully managed, highly available, and scalable proxy for Amazon Relational Database Service (RDS) that makes it easy to connect to
your RDS instances from applications running on AWS Lambda. RDS Proxy offloads the management of connections to the database, which can
help to improve performance and reliability.
upvoted 3 times
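A rough boto3 sketch of putting RDS Proxy in front of the PostgreSQL instance (the proxy name, secret ARN, role ARN, subnet IDs, and instance identifier are placeholders); the Lambda function keeps its code and only switches its connection string to the proxy endpoint.

import boto3

rds = boto3.client("rds")

proxy = rds.create_db_proxy(
    DBProxyName="app-pg-proxy",  # placeholder
    EngineFamily="POSTGRESQL",
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds"}],  # placeholder
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-role",  # placeholder
    VpcSubnetIds=["subnet-0aaa0aaa", "subnet-0bbb0bbb"],  # placeholders
)

# Point the proxy at the existing RDS for PostgreSQL instance.
rds.register_db_proxy_targets(
    DBProxyName="app-pg-proxy",
    DBInstanceIdentifiers=["app-postgres"],  # placeholder
)

# The Lambda function then connects to proxy["DBProxy"]["Endpoint"] instead of the DB host.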
To reduce application failures resulting from database connection timeouts, the best solution is to enable RDS Proxy on the RDS DB instance
upvoted 1 times
Selected Answer: B
RDS Proxy
upvoted 3 times
Selected Answer: B
Selected Answer: B
RDS proxy
upvoted 1 times
A company is migrating an old application to AWS. The application runs a batch job every hour and is CPU intensive. The batch job takes 15
minutes on average with an on-premises server. The server has 64 virtual CPU (vCPU) and 512 GiB of memory.
Which solution will run the batch job within 15 minutes with the LEAST operational overhead?
B. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
Correct Answer: D
The amount of CPU and memory resources required by the batch job exceeds the capabilities of AWS Lambda and Amazon Lightsail with AWS
Auto Scaling, which offer limited compute resources. AWS Fargate offers containerized application orchestration and scalable infrastructure, but
may require additional operational overhead to configure and manage the environment. AWS Batch is a fully managed service that automatically
provisions the required infrastructure for batch jobs, with options to use different instance types and launch modes.
Therefore, the solution that will run the batch job within 15 minutes with the LEAST operational overhead is D. Use AWS Batch on Amazon EC2.
AWS Batch can handle all the operational aspects of job scheduling, instance management, and scaling while using Amazon EC2
instances with the right amount of CPU and memory resources to meet the job's requirements.
upvoted 19 times
Selected Answer: D
AWS Batch is a fully-managed service that can launch and manage the compute resources needed to execute batch jobs. It can scale the compute
environment based on the size and timing of the batch jobs.
upvoted 11 times
Selected Answer: D
The question needs to be phrased differently. At first I assumed Lambda, because the 15-minute runtime mentioned in the question is achievable. Yes, it also says CPU intensive, but the server specs are given in a separate sentence, and it never says the job actually uses that much of those specs, so the question really should be rephrased.
upvoted 2 times
AWS Batch can easily schedule and run batch jobs on EC2 instances. It can scale up to the required vCPUs and memory to match the on-premises
server.
Using EC2 provides full control over the instance type to meet the resource needs.
No servers or clusters to manage like with ECS/Fargate or Lightsail. AWS Batch handles this automatically.
More cost effective and operationally simple compared to Lambda which is not ideal for long running batch jobs.
upvoted 4 times
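A sketch of option D (the queue, job definition, container image, and command are placeholders): register a job definition sized like the on-premises server and submit it to an AWS Batch queue backed by an EC2 compute environment.

import boto3

batch = boto3.client("batch")

batch.register_job_definition(
    jobDefinitionName="hourly-batch",  # placeholder
    type="container",
    containerProperties={
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/batch-app:latest",  # placeholder
        "command": ["python", "run_job.py"],  # placeholder
        "resourceRequirements": [
            {"type": "VCPU", "value": "64"},
            {"type": "MEMORY", "value": "524288"},  # 512 GiB expressed in MiB
        ],
    },
)

batch.submit_job(
    jobName="hourly-batch-run",
    jobQueue="ec2-batch-queue",  # placeholder queue on an EC2 compute environment
    jobDefinition="hourly-batch",
)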
Selected Answer: A
On-Prem was avg 15 min, but target state architecture is expected to finish within 15 min
upvoted 1 times
Selected Answer: D
Not Lambda, "average 15 minutes" means there are jobs with running more and less than 15 minutes. Lambda max is 15 minutes.
upvoted 2 times
This is for certain a tough one. I do see that they have thrown a curve ball in making it about Lambda functional scaling; however, what we don't know is whether this application makes many requests or one large one. It looks like Lambda can scale and reuse the same Lambda environment, but this job seems too intensive, so I will go with D.
upvoted 4 times
Selected Answer: D
AWS Batch
upvoted 2 times
Selected Answer: D
Not A because: "AWS Lambda now supports up to 10 GB of memory and 6 vCPU cores for Lambda Functions." https://aws.amazon.com/about-
aws/whats-new/2020/12/aws-lambda-supports-10gb-memory-6-vcpu-cores-lambda-functions/ vs. "The server has 64 virtual CPU (vCPU) and 512
GiB of memory" in the question.
upvoted 6 times
Selected Answer: D
A company stores its data objects in Amazon S3 Standard storage. A solutions architect has found that 75% of the data is rarely accessed after
30 days. The company needs all the data to remain immediately accessible with the same high availability and resiliency, but the company wants
B. Move the data objects to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
C. Move the data objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
D. Move the data objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately.
Correct Answer: B
One Zone-Infrequent Access cannot be the answer because the workload requires high availability, so Standard-Infrequent Access should be the answer
upvoted 5 times
Option B
upvoted 1 times
Selected Answer: B
S3 Standard-IA is a storage class that is designed for infrequently accessed data. It offers lower storage costs than S3 Standard while still providing millisecond access; the trade-off is a per-GB retrieval fee.
upvoted 3 times
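A minimal sketch of option B (the bucket name is a placeholder): an S3 Lifecycle rule that transitions objects to Standard-IA after 30 days.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="company-data-objects",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            }
        ]
    },
)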
Selected Answer: B
S3 Glacier Deep Archive is intended for data that is rarely accessed and can tolerate retrieval times measured in hours. Moving data to S3 One
Zone-IA immediately would not meet the requirement of immediate accessibility with the same high availability and resiliency.
upvoted 1 times
https://aws.amazon.com/s3/storage-classes/#:~:text=S3%20One%20Zone%2DIA%20is,less%20than%20S3%20Standard%2DIA.
upvoted 1 times
A gaming company is moving its public scoreboard from a data center to the AWS Cloud. The company uses Amazon EC2 Windows Server
instances behind an Application Load Balancer to host its dynamic application. The company needs a highly available storage solution for the
application. The application consists of static files and dynamic server-side code.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)
A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
B. Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
C. Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS volume on each EC2 instance to share the files.
D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows File Server volume on each EC2 instance to
E. Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on
Correct Answer: AD
Selected Answer: AD
A because Elasticache, despite being ideal for leaderboards per Amazon, doesn't cache at edge locations. D because FSx has higher performance
for low latency needs.
https://www.techtarget.com/searchaws/tip/Amazon-FSx-vs-EFS-Compare-the-AWS-file-services
"FSx is built for high performance and submillisecond latency using solid-state drive storage volumes. This design enables users to select storage
capacity and latency independently. Thus, even a subterabyte file system can have 256 Mbps or higher throughput and support volumes up to 64
TB."
upvoted 6 times
Selected Answer: AD
Storing static files in S3 with CloudFront provides durability, high availability, and low latency by caching at edge locations.
FSx for Windows File Server provides a fully managed Windows native file system that can be accessed from the Windows EC2 instances to share
server-side code. It is designed for high availability and scales up to 10s of GBPS throughput.
EBS volumes are tied to a single AZ, and EFS serves NFS, which is not supported on Windows EC2 instances. FSx for Windows File Server (Multi-AZ) and S3 are replicated across AZs for high availability.
upvoted 5 times
Selected Answer: AD
A because Elasticache doesn't cache at edge locations. D because FSx has higher performance for low latency needs.
upvoted 1 times
A social media company runs its application on Amazon EC2 instances behind an Application Load Balancer (ALB). The ALB is the origin for an
Amazon CloudFront distribution. The application has more than a billion images stored in an Amazon S3 bucket and processes thousands of
images each second. The company wants to resize the images dynamically and serve appropriate formats to clients.
Which solution will meet these requirements with the LEAST operational overhead?
A. Install an external image management library on an EC2 instance. Use the image management library to process the images.
B. Create a CloudFront origin request policy. Use the policy to automatically resize images and to serve the appropriate format based on the
C. Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors that serve the images.
D. Create a CloudFront response headers policy. Use the policy to automatically resize images and to serve the appropriate format based on
Correct Answer: C
Use a Lambda@Edge function with an external image management library. Associate the Lambda@Edge function with the CloudFront behaviors
that serve the images.
Using a Lambda@Edge function with an external image management library is the best solution to resize the images dynamically and serve
appropriate formats to clients. Lambda@Edge is a serverless computing service that allows running custom code in response to CloudFront events
such as viewer requests and origin requests. By using a Lambda@Edge function, it's possible to process images on the fly and modify the
CloudFront response before it's sent back to the client. Additionally, Lambda@Edge has built-in support for external libraries that can be used to
process images. This approach will reduce operational overhead and scale automatically with traffic.
upvoted 20 times
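As a rough illustration only (not code from the question or the linked blog), an origin-request Lambda@Edge handler might pick a target format from the Accept header and rewrite the request so a resizing origin or image library returns the right variant; the query parameters and the 800-pixel width are assumptions.

def lambda_handler(event, context):
    """Sketch of a Lambda@Edge origin-request handler for dynamic image formats."""
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    accept = ""
    if "accept" in headers:
        accept = headers["accept"][0]["value"]

    # Choose the best format the client advertises (assumed precedence).
    if "image/avif" in accept:
        fmt = "avif"
    elif "image/webp" in accept:
        fmt = "webp"
    else:
        fmt = "jpeg"

    # Pass the desired format (and an example size) to the resizing origin.
    request["querystring"] = f"format={fmt}&width=800"
    return request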
Selected Answer: C
The moment there is a need to implement some logic at the CDN think Lambda@Edge.
upvoted 7 times
Selected Answer: C
A Lambda@Edge function is a serverless function that runs at the edge of the CloudFront network. This means that the function is executed close
to the user, which can improve performance.
An external image management library can be used to resize images and to serve the appropriate format.
Associating the Lambda@Edge function with the CloudFront behaviors that serve the images ensures that the function is executed for all requests
that are served by those behaviors.
upvoted 3 times
Selected Answer: B
If the user asks for the most optimized image format (JPEG, WebP, or AVIF) using the directive format=auto, a CloudFront Function will select the best format based on the Accept header present in the request.
Selected Answer: C
https://aws.amazon.com/cn/blogs/networking-and-content-delivery/resizing-images-with-amazon-cloudfront-lambdaedge-aws-cdn-blog/
upvoted 4 times
Question #359 Topic 1
A hospital needs to store patient records in an Amazon S3 bucket. The hospital’s compliance team must ensure that all protected health
information (PHI) is encrypted in transit and at rest. The compliance team must administer the encryption key for data at rest.
A. Create a public SSL/TLS certificate in AWS Certificate Manager (ACM). Associate the certificate with Amazon S3. Configure default
encryption for each S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Assign the compliance team to manage the KMS
keys.
B. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Configure default
encryption for each S3 bucket to use server-side encryption with S3 managed encryption keys (SSE-S3). Assign the compliance team to
C. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Configure default
encryption for each S3 bucket to use server-side encryption with AWS KMS keys (SSE-KMS). Assign the compliance team to manage the KMS
keys.
D. Use the aws:SecureTransport condition on S3 bucket policies to allow only encrypted connections over HTTPS (TLS). Use Amazon Macie to
protect the sensitive data that is stored in Amazon S3. Assign the compliance team to manage Macie.
Correct Answer: C
Option C is correct because it allows the compliance team to manage the KMS keys used for server-side encryption, thereby providing the
necessary control over the encryption keys. Additionally, the use of the "aws:SecureTransport" condition on the bucket policy ensures that all
connections to the S3 bucket are encrypted in transit.
Option B might be misleading, but with SSE-S3 the encryption keys are managed by AWS and not by the compliance team.
upvoted 24 times
Selected Answer: C
Selected Answer: C
Macie does not encrypt the data like the question is asking
https://docs.aws.amazon.com/macie/latest/user/what-is-macie.html
Also, SSE-S3 encryption is fully managed by AWS so the Compliance Team can't administer this.
upvoted 2 times
Selected Answer: C
D - Can't be because - Amazon Macie is a data security service that uses machine learning (ML) and pattern matching to discover and help protect
your sensitive data.
Macie discovers sensitive information and can help with protection, but it does not itself encrypt or protect the data.
upvoted 2 times
Selected Answer: C
Selected Answer: A
Option A proposes creating a public SSL/TLS certificate in AWS Certificate Manager and associating it with Amazon S3. This step ensures that data
is encrypted in transit. Then, the default encryption for each S3 bucket will be configured to use server-side encryption with AWS KMS keys (SSE-
KMS), which will provide encryption at rest for the data stored in S3. In this solution, the compliance team will manage the KMS keys, ensuring that
they control the encryption keys for data at rest.
upvoted 1 times
Selected Answer: C
Option C seems to be the correct answer. Option A is also close, but ACM cannot be integrated with an Amazon S3 bucket directly, so you cannot attach a TLS certificate to S3. You can only attach an ACM certificate to an ALB, API Gateway, CloudFront, and possibly Global Accelerator, but definitely not to an EC2 instance or an S3 bucket.
upvoted 1 times
Selected Answer: C
D makes no sense.
upvoted 2 times
Selected Answer: C
Selected Answer: C
Explanation:
The compliance team needs to administer the encryption key for data at rest in order to ensure that protected health information (PHI) is
encrypted in transit and at rest. Therefore, we need to use server-side encryption with AWS KMS keys (SSE-KMS). The default encryption for each
S3 bucket can be configured to use SSE-KMS to ensure that all new objects in the bucket are encrypted with KMS keys.
Additionally, we can configure the S3 bucket policies to allow only encrypted connections over HTTPS (TLS) using the aws:SecureTransport
condition. This ensures that the data is encrypted in transit.
upvoted 1 times
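A sketch of option C (the bucket name and KMS key ARN are placeholders): default SSE-KMS encryption plus a bucket policy that denies any request not made over TLS.

import boto3, json

s3 = boto3.client("s3")
BUCKET = "hospital-phi-records"  # placeholder

# Encrypt at rest with a customer managed KMS key that the compliance team administers.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # placeholder
            }
        }]
    },
)

# Encrypt in transit: deny any access that is not over HTTPS/TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))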
Selected Answer: C
We must provide encryption in transit and at rest. Macie is needed to discover and recognize any PII or protected health information. We already know that the hospital is working with sensitive data, so protect it with KMS and SSL. Answer D is unnecessary.
upvoted 1 times
C [Correct]: Ensures HTTPS-only traffic (encrypted in transit) and enables the compliance team to govern the encryption key.
D [Incorrect]: Misleading; PHI is required to be encrypted, not discovered. Macie is a discovery service. (https://aws.amazon.com/macie/)
upvoted 4 times
Selected Answer: D
Correct answer should be D. "Use Amazon Macie to protect the sensitive data..."
As the requirement says, "The hospital's compliance team must ensure that all protected health information (PHI) is encrypted in transit and at rest."
Macie protects personal record such as PHI. Macie provides you with an inventory of your S3 buckets, and automatically evaluates and monitors
the buckets for security and access control. If Macie detects a potential issue with the security or privacy of your data, such as a bucket that
becomes publicly accessible, Macie generates a finding for you to review and remediate as necessary.
upvoted 4 times
A company uses Amazon API Gateway to run a private gateway with two REST APIs in the same VPC. The BuyStock RESTful web service calls the
CheckFunds RESTful web service to ensure that enough funds are available before a stock can be purchased. The company has noticed in the
VPC flow logs that the BuyStock RESTful web service calls the CheckFunds RESTful web service over the internet instead of through the VPC. A
solutions architect must implement a solution so that the APIs communicate through the VPC.
Which solution will meet these requirements with the FEWEST changes to the code?
D. Add an Amazon Simple Queue Service (Amazon SQS) queue between the two REST APIs.
Correct Answer: B
Selected Answer: B
an interface endpoint is a horizontally scaled, redundant VPC endpoint that provides private connectivity to a service. It is an elastic network
interface with a private IP address that serves as an entry point for traffic destined to the AWS service. Interface endpoints are used to connect
VPCs with AWS services
upvoted 20 times
Selected Answer: B
C. Use a gateway endpoint is wrong because gateway endpoints only support for S3 and dynamoDB, so B is correct
upvoted 9 times
Selected Answer: B
Interface Endpoint (Option B): An interface endpoint (also known as VPC endpoint) allows communication between resources in your VPC and
services without traversing the public internet. In this case, you can create an interface endpoint for API Gateway in your VPC. This enables the
communication between the BuyStock and CheckFunds RESTful web services within the VPC, and it doesn't require significant changes to the code
X-API-Key header (Option A): Adding an X-API-Key header for authorization doesn't address the issue of ensuring that the APIs communicate
through the VPC. It's more related to authentication and authorization mechanisms.
upvoted 3 times
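A minimal sketch of option B (the VPC, subnet, and security group IDs are placeholders): create an execute-api interface endpoint so the private REST APIs call each other over the VPC instead of the internet.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",  # placeholder
    ServiceName="com.amazonaws.us-east-1.execute-api",
    SubnetIds=["subnet-0aaa0aaa", "subnet-0bbb0bbb"],  # placeholders
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder
    PrivateDnsEnabled=True,  # resolve the API Gateway hostname to private IPs
)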
Selected Answer: B
I select C because it's the solution with the " FEWEST changes to the code"
upvoted 1 times
Selected Answer: B
An interface endpoint is powered by PrivateLink, and uses an elastic network interface (ENI) as an entry point for traffic destined to the service
upvoted 2 times
Selected Answer: B
BBBBBB
upvoted 1 times
https://www.linkedin.com/pulse/aws-interface-endpoint-vs-gateway-alex-chang
upvoted 1 times
Selected Answer: B
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html
upvoted 4 times
Selected Answer: C
The only time where an Interface Endpoint may be preferable (for S3 or DynamoDB) over a Gateway Endpoint is if you require access from on-
premises, for example you want private access from your on-premise data center
upvoted 2 times
Selected Answer: B
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-private-apis.html - Interface EP
upvoted 3 times
Question #361 Topic 1
A company hosts a multiplayer gaming application on AWS. The company wants the application to read data with sub-millisecond latency and run
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon RDS for data that is frequently accessed. Run a periodic custom script to export the data to an Amazon S3 bucket.
B. Store the data directly in an Amazon S3 bucket. Implement an S3 Lifecycle policy to move older data to S3 Glacier Deep Archive for long-
term storage. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
C. Use Amazon DynamoDB with DynamoDB Accelerator (DAX) for data that is frequently accessed. Export the data to an Amazon S3 bucket by
using DynamoDB table export. Run one-time queries on the data in Amazon S3 by using Amazon Athena.
D. Use Amazon DynamoDB for data that is frequently accessed. Turn on streaming to Amazon Kinesis Data Streams. Use Amazon Kinesis
Data Firehose to read the data from Kinesis Data Streams. Store the records in an Amazon S3 bucket.
Correct Answer: C
Selected Answer: C
Selected Answer: C
Selected Answer: C
upvoted 1 times
Selected Answer: C
Selected Answer: C
Selected Answer: C
Amazon DynamoDB with DynamoDB Accelerator (DAX) is a fully managed, in-memory caching solution for DynamoDB. DAX can improve the
performance of DynamoDB by up to 10x. This makes it a good choice for data that needs to be accessed with sub-millisecond latency.
DynamoDB table export allows you to export data from DynamoDB to an S3 bucket. This can be useful for running one-time queries on historical
data.
Amazon Athena is a serverless, interactive query service that makes it easy to analyze data in Amazon S3. Athena can be used to run one-time
queries on the data in the S3 bucket.
upvoted 4 times
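A sketch of the export half of option C (the table ARN and bucket name are placeholders; DynamoDB table export requires point-in-time recovery to be enabled), after which Athena can query the exported files in S3.

import boto3

dynamodb = boto3.client("dynamodb")

# Export the table to S3 without consuming table read capacity.
dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/game-data",  # placeholder
    S3Bucket="game-data-exports",  # placeholder
    ExportFormat="DYNAMODB_JSON",
)

# One-time analysis then runs in Athena against the exported objects in S3,
# while frequently accessed items are served through DAX for sub-millisecond reads.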
Selected Answer: C
C is correct
A does not meet the requirement (LEAST operational overhead) because it relies on a custom script.
B: does not address the requirement.
D: Kinesis is for near-real-time streaming, not for sub-millisecond reads.
-> C is correct
upvoted 2 times
Selected Answer: C
Option C is the right one. The questions clearly states "sub-millisecond latency "
upvoted 2 times
https://aws.amazon.com/dynamodb/dax/?nc1=h_ls
upvoted 3 times
Selected Answer: C
Cccccccccccc
upvoted 2 times
Question #362 Topic 1
A company uses a payment processing system that requires messages for a particular payment ID to be received in the same order that they were
Which actions should a solutions architect take to meet this requirement? (Choose two.)
A. Write the messages to an Amazon DynamoDB table with the payment ID as the partition key.
B. Write the messages to an Amazon Kinesis data stream with the payment ID as the partition key.
C. Write the messages to an Amazon ElastiCache for Memcached cluster with the payment ID as the key.
D. Write the messages to an Amazon Simple Queue Service (Amazon SQS) queue. Set the message attribute to use the payment ID.
E. Write the messages to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Set the message group to use the payment ID.
Correct Answer: BE
Selected Answer: BE
Option B is preferred over A because Amazon Kinesis Data Streams inherently maintain the order of records within a shard, which is crucial for the
given requirement of preserving the order of messages for a particular payment ID. When you use the payment ID as the partition key, all
messages for that payment ID will be sent to the same shard, ensuring that the order of messages is maintained.
On the other hand, Amazon DynamoDB is a NoSQL database service that provides fast and predictable performance with seamless scalability.
While it can store data with partition keys, it does not guarantee the order of records within a partition, which is essential for the given use case.
Hence, using Kinesis Data Streams is more suitable for this requirement.
As DynamoDB does not keep the order, I think BE is the correct answer here.
upvoted 27 times
I don't understand the question. The only requirement is: "system that requires messages for a particular payment ID to be received in the same order that they were sent."
Why would you "write the messages" to Kinesis or DynamoDB at all? There is no streaming or DB storage requirement in the question. Between A/B, B is better logically, but it doesn't map to any stated requirement.
Selected Answer: BE
Both Kinesis and SQS FIFO queue guarantee the order, other answers don't.
upvoted 3 times
SQS FIFO (First-In-First-Out) queues preserve the order of messages within a message group.
upvoted 3 times
Selected Answer: BE
Technically both B and E will ensure processing order, but SQS FIFO was specifically built to handle this requirement.
There is no ask on how to store the data so A and C are out.
upvoted 1 times
options D and E are better because they mimic a real-world queue system and ensure that payments are processed in the correct order, just like
customers in a store would be served in the order they arrived. This is crucial for a payment processing system where order matters to avoid
mistakes in payment processing.
upvoted 2 times
Selected Answer: AE
If the question were "Choose all the solutions that fulfill these requirements," I would have chosen BE.
But it is:
"Which actions should a solutions architect take to meet this requirement?"
For this reason I chose AE, because we don't need both Kinesis AND SQS for this solution. Both choices complement order processing: the order is stored in the DB, and the work item goes to the queue.
upvoted 3 times
Selected Answer: BE
E --> no doubt
B --> see https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html
upvoted 1 times
Selected Answer: BE
1) SQS FIFO queues guarantee that messages are received in the exact order they are sent. Using the payment ID as the message group ensures all messages for a payment ID are received sequentially.
2) Kinesis data streams can also enforce ordering on a per partition key basis. Using the payment ID as the partition key will ensure strict ordering
of messages for each payment ID.
upvoted 2 times
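A sketch of the two ordering mechanisms from B and E (the stream name, queue URL, and payment ID are placeholders): the partition key keeps records for one payment ID on one shard, and the message group ID keeps FIFO messages for one payment ID in order.

import boto3, json

payment_id = "pay-12345"  # placeholder
kinesis = boto3.client("kinesis")
sqs = boto3.client("sqs")

# B: same partition key -> same shard -> per-payment ordering in Kinesis.
kinesis.put_record(
    StreamName="payment-events",  # placeholder
    Data=json.dumps({"payment_id": payment_id, "event": "authorized"}).encode("utf-8"),
    PartitionKey=payment_id,
)

# E: same message group ID -> strict per-payment ordering in an SQS FIFO queue.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/111122223333/payments.fifo",  # placeholder
    MessageBody=json.dumps({"payment_id": payment_id, "event": "authorized"}),
    MessageGroupId=payment_id,
    MessageDeduplicationId="pay-12345-authorized-1",  # or enable content-based deduplication
)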
Selected Answer: BE
BE no doubt.
upvoted 1 times
Option A, writing the messages to an Amazon DynamoDB table, would not necessarily preserve the order of messages for a particular payment ID
upvoted 1 times
I don't understand A. How can you guarantee the order with DynamoDB? The order is guaranteed with SQS FIFO and with a Kinesis data stream within one shard...
upvoted 4 times
Selected Answer: BE
No doubt )
upvoted 3 times
Question #363 Topic 1
A company is building a game system that needs to send unique events to separate leaderboard, matchmaking, and authentication services
concurrently. The company needs an AWS event-driven system that guarantees the order of the events.
Correct Answer: B
Selected Answer: B
I honestly can't understand why people go to ChatGPT to ask for the answers... if I recall correctly, its knowledge only goes up to 2021...
upvoted 17 times
Selected Answer: B
The answer is B. SNS FIFO topics should be used combined with SQS FIFO queues in this case. The question asks for correct ordering of events delivered to different services, so the SNS fan-out pattern is needed here to send to individual SQS queues.
https://docs.aws.amazon.com/sns/latest/dg/fifo-example-use-case.html
upvoted 13 times
Selected Answer: B
SNS can have many-to-many relations, while SQS supports only one consumer at a time (many-to-one).
upvoted 1 times
Selected Answer: D
AWS does not currently offer FIFO topics for SNS. SNS only supports standard topics, which do not guarantee message order.
upvoted 1 times
Selected Answer: B
Selected Answer: B
Yes, you can technically do this with an SQS FIFO queue by giving separate group IDs to leaderboard, matchmaking, etc., but this is not as useful as SNS FIFO and is overkill since there is no need for storage. B is the more elegant and concise solution.
upvoted 3 times
Selected Answer: B
Just know that SNS FIFO can also send events or messages concurrently to many subscribers while maintaining the order in which it receives them. The SNS fan-out pattern also exists with standard SNS topics, which are commonly used to fan out events to a large number of subscribers, usually where duplicate messages are acceptable.
upvoted 1 times
SQS looks like a good idea at first, but since we have to send the same message to multiple destinations, even if SQS could do it, SNS is much more dedicated to this kind of usage.
https://docs.aws.amazon.com/sns/latest/dg/sns-fifo-topics.html
You can use Amazon SNS FIFO (first in, first out) topics with Amazon SQS FIFO queues to provide strict message ordering and message
deduplication. The FIFO capabilities of each of these services work together to act as a fully managed service to integrate distributed applications
that require data consistency in near-real time. Subscribing Amazon SQS standard queues to Amazon SNS FIFO topics provides best-effort
ordering and at least once delivery.
upvoted 2 times
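A sketch of the SNS FIFO fan-out described above (the topic and queue names are placeholders): one FIFO topic delivering ordered events to per-service SQS FIFO queues.

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic = sns.create_topic(
    Name="game-events.fifo",
    Attributes={"FifoTopic": "true", "ContentBasedDeduplication": "true"},
)

for service in ("leaderboard", "matchmaking", "authentication"):
    queue = sqs.create_queue(
        QueueName=f"{service}.fifo",
        Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
    )
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]
    sns.subscribe(TopicArn=topic["TopicArn"], Protocol="sqs", Endpoint=queue_arn)

# Note: each queue also needs an access policy allowing the topic to send messages (omitted here).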
bbbbbbbbbbbbbbb
upvoted 1 times
It should be the fan-out pattern, and the pattern starts with Amazon SNS FIFO for the orders.
upvoted 2 times
Selected Answer: D
Answer is D. You are so lazy because instead of searching in documentation or your notes, you are asking ChatGPT. Do you really think you will
take this exam ? Hint: ask ChatGPT
upvoted 5 times
Selected Answer: B
Amazon SNS is a highly available and durable publish-subscribe messaging service that allows applications to send messages to multiple
subscribers through a topic. SNS FIFO topics are designed to ensure that messages are delivered in the order in which they are sent. This makes
them ideal for situations where message order is important, such as in the case of the company's game system.
Option A, Amazon EventBridge event bus, is a serverless event bus service that makes it easy to build event-driven applications. While it supports
ordering of events, it does not provide guarantees on the order of delivery.
upvoted 3 times
Question #364 Topic 1
A hospital is designing a new application that gathers symptoms from patients. The hospital has decided to use Amazon Simple Queue Service
(Amazon SQS) and Amazon Simple Notification Service (Amazon SNS) in the architecture.
A solutions architect is reviewing the infrastructure design. Data must be encrypted at rest and in transit. Only authorized personnel of the
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Turn on server-side encryption on the SQS components. Update the default key policy to restrict key usage to a set of authorized principals.
B. Turn on server-side encryption on the SNS components by using an AWS Key Management Service (AWS KMS) customer managed key.
C. Turn on encryption on the SNS components. Update the default key policy to restrict key usage to a set of authorized principals. Set a
condition in the topic policy to allow only encrypted connections over TLS.
D. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key.
Apply a key policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
E. Turn on server-side encryption on the SQS components by using an AWS Key Management Service (AWS KMS) customer managed key.
Apply an IAM policy to restrict key usage to a set of authorized principals. Set a condition in the queue policy to allow only encrypted connections over TLS.
Correct Answer: BD
Selected Answer: BD
read this:
https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html
upvoted 14 times
Important
All requests to topics with SSE enabled must use HTTPS and Signature Version 4.
For information about compatibility of other services with encrypted topics, see your service documentation.
Amazon SNS only supports symmetric encryption KMS keys. You cannot use any other type of KMS key to encrypt your service resources. For
help determining whether a KMS key is a symmetric encryption key, see Identifying asymmetric KMS keys.
upvoted 3 times
My god! Every other question is about SQS! I thought this was AWS Solution Architect test not "How to solve any problem in AWS using SQS" test!
upvoted 13 times
Selected Answer: BD
A and C involve 'updating the default key policy', which is not something you do. Either you create a key policy, OR AWS assigns THE "default key policy".
E 'applies an IAM policy to restrict key usage to a set of authorized principals' which is not how IAM policies work. You can 'apply an IAM policy to
restrict key usage', but it would be restricted to the principals who have the policy attached; you can't specify them in the policy.
Leaves B and D. That B lacks the TLS statement is irrelevant because "all requests to topics with SSE enabled must use HTTPS" anyway.
upvoted 5 times
"All requests to queues with SSE enabled must use HTTPS and Signature Version 4." -> valid for SNS and SQS alike:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html
https://docs.aws.amazon.com/sns/latest/dg/sns-server-side-encryption.html
"Set a condition in the queue policy to allow only encrypted connections over TLS." refers to the "aws:SecureTransport" condition, but it's
actually redundant.
upvoted 1 times
It's only options C and D that cover encryption in transit, encryption at rest, and a restriction policy.
upvoted 3 times
"IAM policies you can't specify the principal in an identity-based policy because it applies to the user or role to which it is attached"
reference: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/security_iam_service-with-iam.html
that excludes E
upvoted 1 times
Selected Answer: CD
-> C, D left, one for SNS, one for SQS. TLS: checked, encryption on components: checked
upvoted 4 times
You can protect data in transit using Secure Sockets Layer (SSL) or client-side encryption. You can protect data at rest by requesting Amazon
SQS to encrypt your messages before saving them to disk in its data centers and then decrypt them when the messages are received.
https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html
A key policy is a resource policy for an AWS KMS key. Key policies are the primary way to control access to KMS keys. Every KMS key must have
exactly one key policy. The statements in the key policy determine who has permission to use the KMS key and how they can use it. You can als
use IAM policies and grants to control access to the KMS key, but every KMS key must have a key policy.
upvoted 1 times
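A sketch of option D for the SQS side (the queue name, key alias, and account ID are placeholders): server-side encryption with a customer managed KMS key plus a queue policy that rejects non-TLS connections.

import boto3, json

sqs = boto3.client("sqs")

queue_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyNonTLS",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "sqs:*",
        "Resource": "arn:aws:sqs:us-east-1:111122223333:patient-symptoms",  # placeholder
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

sqs.create_queue(
    QueueName="patient-symptoms",  # placeholder
    Attributes={
        "KmsMasterKeyId": "alias/phi-cmk",  # customer managed key (placeholder alias)
        "Policy": json.dumps(queue_policy),
    },
)

# Access to the data itself is then restricted through the KMS key policy,
# which lists only the authorized principals.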
B: To encrypt data at rest, we can use a customer-managed key stored in AWS KMS to encrypt the SNS components.
E: To restrict access to the data and allow only authorized personnel to access the data, we can apply an IAM policy to restrict key usage to a set of
authorized principals. We can also set a condition in the queue policy to allow only encrypted connections over TLS to encrypt data in transit.
upvoted 2 times
Selected Answer: BD
For a customer managed KMS key, you must configure the key policy to add permissions for each queue producer and consumer.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-key-management.html
upvoted 3 times
bebebe
upvoted 1 times
A company runs a web application that is backed by Amazon RDS. A new database administrator caused data loss by accidentally editing
information in a database table. To help recover from this type of incident, the company wants the ability to restore the database to its state from
Which feature should the solutions architect include in the design to meet this requirement?
A. Read replicas
B. Manual snapshots
C. Automated backups
D. Multi-AZ deployments
Correct Answer: C
Selected Answer: C
Amazon RDS provides automated backups, which can be configured to take regular snapshots of the database instance. By enabling automated
backups and setting the retention period to 30 days, the company can ensure that it retains backups for up to 30 days. Additionally, Amazon RDS
allows for point-in-time recovery within the retention period, enabling the restoration of the database to its state from any point within the last 30
days, including 5 minutes before any change. This feature provides the required capability to recover from accidental data loss incidents.
upvoted 3 times
Automated backups allow you to recover your database to any point in time within your specified retention period, which can be up to 35 days.
The recovery process creates a new Amazon RDS instance with a new endpoint, and the process takes time proportional to the size of the
database. Automated backups are enabled by default and occur daily during the backup window. This feature provides an easy and convenient way
to recover from data loss incidents such as the one described in the scenario.
upvoted 3 times
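A sketch of how automated backups translate into a point-in-time restore (identifiers and the timestamp are placeholders): set the retention period, then restore a new instance to the moment just before the bad edit.

import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

# Keep automated backups for 30 days to allow restores anywhere within that window.
rds.modify_db_instance(
    DBInstanceIdentifier="webapp-db",  # placeholder
    BackupRetentionPeriod=30,
    ApplyImmediately=True,
)

# Restore to 5 minutes before the accidental change (this creates a new instance).
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="webapp-db",
    TargetDBInstanceIdentifier="webapp-db-restored",
    RestoreTime=datetime(2024, 1, 15, 9, 55, tzinfo=timezone.utc),  # example timestamp
)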
Selected Answer: C
Option C, Automated backups, will meet the requirement. Amazon RDS allows you to automatically create backups of your DB instance. Automated
backups enable point-in-time recovery (PITR) for your DB instance down to a specific second within the retention period, which can be up to 35
days. By setting the retention period to 30 days, the company can restore the database to its state from up to 5 minutes before any change within
the last 30 days.
upvoted 3 times
C: Automated Backups
https://aws.amazon.com/rds/features/backup/
upvoted 2 times
Automated Backups...
upvoted 2 times
ccccccccc
upvoted 1 times
Question #366 Topic 1
A company’s web application consists of an Amazon API Gateway API in front of an AWS Lambda function and an Amazon DynamoDB database.
The Lambda function handles the business logic, and the DynamoDB table hosts the data. The application uses Amazon Cognito user pools to
identify the individual users of the application. A solutions architect needs to update the application so that only users who have a subscription
Which solution will meet this requirement with the LEAST operational overhead?
B. Set up AWS WAF on the API Gateway API. Create a rule to filter users who have a subscription.
C. Apply fine-grained IAM permissions to the premium content in the DynamoDB table.
D. Implement API usage plans and API keys to limit the access of users who do not have a subscription.
Correct Answer: D
Selected Answer: D
Implementing API usage plans and API keys is a straightforward way to restrict access to specific users or groups based on subscriptions. It allows
you to control access at the API level and doesn't require extensive changes to your existing architecture. This solution provides a clear and
manageable way to enforce access restrictions without complicating other parts of the application
upvoted 9 times
Selected Answer: C
Selected Answer: C
D: API keys cannot be used to limit access and this can only be done via methods defined in above link
upvoted 2 times
In the same document at the bottom, it says "If you're using a developer portal to publish your APIs, note that all APIs in a given usage plan are
subscribable, even if you haven't made them visible to your customers."
I go with C
upvoted 1 times
Correct link
upvoted 1 times
Selected Answer: D
After you create, test, and deploy your APIs, you can use API Gateway usage plans to make them available as product offerings for your customers
You can configure usage plans and API keys to allow customers to access selected APIs, and begin throttling requests to those APIs based on
defined limits and quotas. These can be set at the API, or API method level.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-
plans.html#:~:text=Creating%20and%20using-,usage%20plans,-with%20API%20keys
upvoted 1 times
Selected Answer: D
A. This would not actually limit access based on subscriptions. It helps optimize and control API usage, but does not address the core requirement.
B. This could work by checking user subscription status in the WAF rule, but would require ongoing management of WAF and increases operational overhead.
C. This is a good approach, using IAM permissions to control DynamoDB access at a granular level based on subscriptions. However, it would
require managing IAM permissions which adds some operational overhead.
D. This option uses API Gateway mechanisms to limit API access based on subscription status. It would require the least amount of ongoing
management and changes, minimizing operational overhead. API keys could be easily revoked/changed as subscription status changes.
upvoted 3 times
Selected Answer: D
The solution that will meet the requirement with the least operational overhead is to implement API Gateway usage plans and API keys to limit
access to premium content for users who do not have a subscription.
Option A is incorrect because API caching and throttling are not designed for authentication or authorization purposes, and it does not provide
access control.
Option B is incorrect because although AWS WAF is a useful tool to protect web applications from common web exploits, it is not designed for
authorization purposes, and it might require additional configuration, which increases the operational overhead.
Option C is incorrect because although IAM permissions can restrict access to data stored in a DynamoDB table, it does not provide a mechanism
for limiting access to specific content based on the user subscription. Moreover, it might require a significant amount of additional IAM
permissions configuration, which increases the operational overhead.
upvoted 3 times
Selected Answer: D
To meet the requirement with the least operational overhead, you can implement API usage plans and API keys to limit the access of users who do
not have a subscription. This way, you can control access to your API Gateway APIs by requiring clients to submit valid API keys with requests. You
can associate usage plans with API keys to configure throttling and quota limits on individual client accounts.
upvoted 2 times
Selected Answer: D
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-usage-plans.html
upvoted 3 times
ccccccccc
upvoted 1 times
Question #367 Topic 1
A company is using Amazon Route 53 latency-based routing to route requests to its UDP-based application for users around the world. The
application is hosted on redundant servers in the company's on-premises data centers in the United States, Asia, and Europe. The company’s
compliance requirements state that the application must be hosted on premises. The company wants to improve the performance and availability
of the application.
A. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator by
using AWS Global Accelerator, and register the NLBs as its endpoints. Provide access to the application by using a CNAME that points to the
accelerator DNS.
B. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. Create an accelerator
by using AWS Global Accelerator, and register the ALBs as its endpoints. Provide access to the application by using a CNAME that points to
C. Configure three Network Load Balancers (NLBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a
latency-based record that points to the three NLBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the
D. Configure three Application Load Balancers (ALBs) in the three AWS Regions to address the on-premises endpoints. In Route 53, create a
latency-based record that points to the three ALBs, and use it as an origin for an Amazon CloudFront distribution. Provide access to the
Correct Answer: A
Selected Answer: A
UDP = NLB
UDP = GLOBAL ACCELERATOR
UDP DOES NOT WORK WITH CLOUDFRONT
ANS IS A
upvoted 4 times
UDP == NLB
Must be hosted on-premises != CloudFront
upvoted 3 times
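To make option A more concrete, here is a rough boto3 sketch that fronts an existing Regional NLB with an AWS Global Accelerator UDP listener. The NLB ARN, Region, and port are placeholder assumptions; the same pattern would be repeated for each of the three Regions.

```python
# Hypothetical sketch: front an existing NLB with AWS Global Accelerator for UDP traffic.
# The NLB ARN, Region, and port are placeholder values.
import boto3

# The Global Accelerator control-plane API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="udp-app-accelerator", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5000, "ToPort": 5000}],
)

# Register the Region's NLB (which targets the on-premises endpoints) with the accelerator.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/abc123",
        "Weight": 100,
    }],
)
```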
"A custom origin is an HTTP server, for example, a web server. The HTTP server can be an Amazon EC2 instance or an HTTP server that you host
somewhere else. "
upvoted 1 times
Selected Answer: A
aaaaaaaa
upvoted 3 times
Question #368 Topic 1
A solutions architect wants all new users to have specific complexity requirements and mandatory rotation periods for IAM user passwords.
B. Set a password policy for each IAM user in the AWS account.
D. Attach an Amazon CloudWatch rule to the Create_newuser event to set the password with the appropriate requirements.
Correct Answer: A
The question is about new users, so answer A is not exact for that case.
upvoted 7 times
Selected Answer: A
I get confused: the question says "NEW" users... if you apply this password policy it would affect all the users in the AWS account....
upvoted 7 times
Selected Answer: B
You can set a custom password policy on your AWS account to specify complexity requirements and mandatory rotation periods for your IAM
users' passwords. When you create or change a password policy, most of the password policy settings are enforced the next time your users
change their passwords. However, some of the settings are enforced immediately.
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-
policy.html#:~:text=Setting%20an%20account-,password%20policy,-for%20IAM%20users
upvoted 3 times
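For reference, an account-wide IAM password policy can be set with a single API call. This is a minimal sketch; the specific lengths and rotation values below are example assumptions, not requirements from the question.

```python
# Hypothetical sketch: account-wide IAM password policy with complexity and rotation rules.
# The lengths, ages, and reuse count below are example values.
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    MaxPasswordAge=90,           # mandatory rotation every 90 days
    PasswordReusePrevention=5,   # cannot reuse the last 5 passwords
    AllowUsersToChangePassword=True,
)
```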
Selected Answer: A
To accomplish this, the solutions architect should set an overall password policy for the entire AWS account. This policy will apply to all IAM users in
the account, including new users.
upvoted 3 times
A is correct
upvoted 1 times
Selected Answer: A
aaaaaaa
upvoted 4 times
Question #369 Topic 1
A company has migrated an application to Amazon EC2 Linux instances. One of these EC2 instances runs several 1-hour tasks on a schedule.
These tasks were written by different teams and have no common programming language. The company is concerned about performance and
scalability while these tasks run on a single instance. A solutions architect needs to implement a solution to resolve these concerns.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Batch to run the tasks as jobs. Schedule the jobs by using Amazon EventBridge (Amazon CloudWatch Events).
B. Convert the EC2 instance to a container. Use AWS App Runner to create the container on demand to run the tasks as jobs.
C. Copy the tasks into AWS Lambda functions. Schedule the Lambda functions by using Amazon EventBridge (Amazon CloudWatch Events).
D. Create an Amazon Machine Image (AMI) of the EC2 instance that runs the tasks. Create an Auto Scaling group with the AMI to run multiple
Correct Answer: A
Selected Answer: C
question said "These tasks were written by different teams and have no common programming language", and key word "scalable". Only Lambda
can fulfil these. Lambda can be done in different programming languages, and it is scalable
upvoted 10 times
Selected Answer: A
aaaaaaaa
upvoted 6 times
Selected Answer: D
Answer = D
"performance and scalability while these tasks run on a single instance" They gave me a legacy application and want it to autoscale for performace
They dont want it to run on a single EC2 instance. Shouldn't I make an AMI and provision multiple EC2 instances in an autoscaling group ? I could
put an ALB in front of it. I wont have to deal with "uncommon programming languages" inside the application... Just a thought..
upvoted 2 times
Selected Answer: A
AWS Batch: AWS Batch is a fully managed service for running batch computing workloads. It dynamically provisions the optimal quantity and type
of compute resources based on the volume and specific resource requirements of the batch jobs. It allows you to run tasks written in different
programming languages with minimal operational overhead.
upvoted 3 times
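As a rough sketch of option A, an existing Batch job definition can be triggered on a schedule from EventBridge. The job queue and definition ARNs, the IAM role, and the cron expression are placeholder assumptions.

```python
# Hypothetical sketch: run an existing AWS Batch job definition on a schedule via EventBridge.
# The ARNs, role, and cron expression are placeholder values.
import boto3

events = boto3.client("events")

events.put_rule(
    Name="hourly-task-a",
    ScheduleExpression="cron(0 * * * ? *)",  # top of every hour
    State="ENABLED",
)

events.put_targets(
    Rule="hourly-task-a",
    Targets=[{
        "Id": "batch-task-a",
        "Arn": "arn:aws:batch:us-east-1:111122223333:job-queue/tasks-queue",
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-batch-submit",
        "BatchParameters": {
            "JobDefinition": "arn:aws:batch:us-east-1:111122223333:job-definition/task-a:1",
            "JobName": "task-a-scheduled",
        },
    }],
)
```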
The tasks run for an hour, but the Lambda function timeout is 15 minutes. So vote A.
upvoted 1 times
Selected Answer: A
It can run heterogeneous workloads and tasks without needing to convert them to a common format.
AWS Batch manages the underlying compute resources - no need to manage containers, Lambda functions or Auto Scaling groups.
upvoted 5 times
Selected Answer: A
Selected Answer: A
I also go with A.
upvoted 1 times
B and D out!
A and C let's think!
AWS Lambda functions are time limited.
So, Option A
upvoted 1 times
Answer is A.
Could have been C, but AWS Lambda functions can only be configured to run up to 15 minutes per execution, while the tasks in question need an
hour to run.
upvoted 4 times
Selected Answer: D
question is asking for the LEAST operational overhead. With batch, you have to create the compute environment, create the job queue, create the
job definition and create the jobs --> more operational overhead than creating an ASG
upvoted 2 times
Question #370 Topic 1
A company runs a public three-tier web application in a VPC. The application runs on Amazon EC2 instances across multiple Availability Zones.
The EC2 instances that run in private subnets need to communicate with a license server over the internet. The company needs a managed solution that minimizes operational maintenance.
A. Provision a NAT instance in a public subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
B. Provision a NAT instance in a private subnet. Modify each private subnet's route table with a default route that points to the NAT instance.
C. Provision a NAT gateway in a public subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
D. Provision a NAT gateway in a private subnet. Modify each private subnet's route table with a default route that points to the NAT gateway.
Correct Answer: C
Selected Answer: C
As the company needs a managed solution that minimizes operational maintenance, a NAT gateway in a public subnet is the answer.
upvoted 8 times
C
https://docs.aws.amazon.com/appstream2/latest/developerguide/managing-network-internet-NAT-gateway.html
...and a NAT gateway in a public subnet.
upvoted 1 times
Selected Answer: C
This meets the requirements for a managed, low maintenance solution for private subnets to access the internet:
NAT gateway provides automatic scaling, high availability, and fully managed service without admin overhead.
Placing the NAT gateway in a public subnet with proper routes allows private instances to use it for internet access.
Minimal operational maintenance compared to NAT instances.
upvoted 2 times
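To illustrate option C, the sketch below creates a NAT gateway in a public subnet and adds a default route for one private subnet. The subnet, Elastic IP allocation, and route table IDs are placeholder assumptions.

```python
# Hypothetical sketch: NAT gateway in a public subnet plus a default route for a private subnet.
# Subnet, allocation, and route table IDs are placeholder values.
import boto3

ec2 = boto3.client("ec2")

nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public123",       # public subnet
    AllocationId="eipalloc-0abc123",    # Elastic IP for the NAT gateway
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before adding routes.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# Point the private subnet's route table at the NAT gateway for internet-bound traffic.
ec2.create_route(
    RouteTableId="rtb-0private123",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```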
Placing a NAT gateway in a private subnet (D) would not allow internet access.
upvoted 3 times
Selected Answer: C
ccccc is the best
upvoted 1 times
ccccccccc
upvoted 2 times
Question #371 Topic 1
A company needs to create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to host a digital media streaming application. The EKS
cluster will use a managed node group that is backed by Amazon Elastic Block Store (Amazon EBS) volumes for storage. The company must
encrypt all data at rest by using a customer managed key that is stored in AWS Key Management Service (AWS KMS).
Which combination of actions will meet this requirement with the LEAST operational overhead? (Choose two.)
A. Use a Kubernetes plugin that uses the customer managed key to perform data encryption.
B. After creation of the EKS cluster, locate the EBS volumes. Enable encryption by using the customer managed key.
C. Enable EBS encryption by default in the AWS Region where the EKS cluster will be created. Select the customer managed key as the default
key.
D. Create the EKS cluster. Create an IAM role that has a policy that grants permission to the customer managed key. Associate the role with
E. Store the customer managed key as a Kubernetes secret in the EKS cluster. Use the customer managed key to encrypt the EBS volumes.
Correct Answer: CD
Selected Answer: CD
https://docs.aws.amazon.com/eks/latest/userguide/managed-node-
groups.html#:~:text=encrypted%20Amazon%20EBS%20volumes%20without%20using%20a%20launch%20template%2C%20encrypt%20all%20new
%20Amazon%20EBS%20volumes%20created%20in%20your%20account.
upvoted 15 times
Selected Answer: BD
Quickly rule out A (which plugin? > overhead) and E because of bad practice
Among B,C,D: B and C are functionally similar > choice must be between B or C, D is fixed
Between B and C: C is out since it sets the default for all EBS volumes in the region, which is more than required and even wrong; what if other EBS
volumes of other applications in the region have different requirements?
upvoted 10 times
Selected Answer: CD
Selected Answer: CD
It says: 'The company must encrypt ALL data at rest', so there is nothing wrong with 'enabling EBS encryption by default' . C & D
upvoted 3 times
Selected Answer: BD
B&D are correct. C is wrong because when you turn on encryption by default, AWS uses its own key while the requirement is using the customer key.
Selected Answer: BD
Not A (avoid 3rd party plugins when there are native services)
Not C ("encryption by default" would impact other services)
Not E (Keys belong in KMS, not in EKS cluster)
upvoted 2 times
I am just a bit concerned that the question does not put any limits on not encrypting all the EBS by default in the account. Both B and C can
work. C is a hack but it is definitely LEAST operational overhead. Also, we don't know if there are other services or not that may be impacted.
What do you think?
upvoted 1 times
Selected Answer: CD
EBS encryption is set regionally. The AWS account is global, but that does not mean EBS encryption is enabled by default at the account level. Default EBS
encryption is a regional setting within your AWS account. Enabling it in a specific region ensures that all new EBS volumes created in that region are
encrypted by default, using either the default AWS managed key or a customer managed key that you specify.
upvoted 1 times
Selected Answer: CD
So assuming they won't use this account for anything else, we can use C: enable EBS encryption by default in the AWS Region where the EKS cluster
will be created, and select the customer managed key as the default key.
upvoted 1 times
C) Setting the KMS key as the regional EBS encryption default automatically encrypts new EKS node EBS volumes.
D) The IAM role grants the EKS nodes access to use the key for encryption/decryption operations.
upvoted 1 times
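As a minimal sketch of option C, the two calls below turn on Regional default EBS encryption and make a customer managed KMS key the default, so the managed node group's new volumes are encrypted with it. The Region and key ARN are placeholder assumptions.

```python
# Hypothetical sketch of option C: make a customer managed KMS key the Regional default
# for EBS encryption before the EKS managed node group is created. Key ARN is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# All new EBS volumes in this Region (including the node group's volumes) are encrypted.
ec2.enable_ebs_encryption_by_default()

# Use the customer managed key instead of the AWS managed aws/ebs key.
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
)
```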
Selected Answer: CD
D - Provides key access permission just to the EKS cluster without changing broader IAM permissions
upvoted 1 times
Selected Answer: BD
Selected Answer: CD
B. Manually enable encryption on the intended EBS volumes after ensuring no default changes. Requires manually enabling encryption on the
nodes but ensures minimum impact.
D. Create an IAM role with access to the key to associate with the EKS cluster. This provides key access permission just to the EKS cluster without
changing broader IAM permissions.
upvoted 2 times
Question #372 Topic 1
A company wants to migrate an Oracle database to AWS. The database consists of a single table that contains millions of geographic information
systems (GIS) images that are high resolution and are identified by a geographic code.
When a natural disaster occurs, tens of thousands of images get updated every few minutes. Each geographic code has a single image or row that
is associated with it. The company wants a solution that is highly available and scalable during such events.
A. Store the images and geographic codes in a database table. Use Oracle running on an Amazon RDS Multi-AZ DB instance.
B. Store the images in Amazon S3 buckets. Use Amazon DynamoDB with the geographic code as the key and the image S3 URL as the value.
C. Store the images and geographic codes in an Amazon DynamoDB table. Configure DynamoDB Accelerator (DAX) during times of high load.
D. Store the images in Amazon S3 buckets. Store geographic codes and image S3 URLs in a database table. Use Oracle running on an Amazon
Correct Answer: B
Selected Answer: B
Amazon prefers people to move from Oracle to its own services like DynamoDB and S3.
upvoted 13 times
Selected Answer: D
Selected Answer: B
DynamoDB with its HA and built-in scalability. The nature of the table also resonates more with NoSQL than a SQL DB such as Oracle. Only 1 table, so
migration is just a script from Oracle to DynamoDB.
D is workable but more expensive with Oracle licenses and other setups for HA and scalability.
upvoted 2 times
Selected Answer: B
Selected Answer: B
They are currently using Oracle, but only for one simple table with a single key-value pair. This is a typical use case for a NoSQL database like
DynamoDB (and whoever decided to use Oracle for this in the first place should be fired). Oracle is expensive as hell, so options A and D might
work but are surely not cost-effective. C won't work because the images are too big for the database. Leaves B which would be the ideal solution
and meet the availability and scalability requirements.
upvoted 5 times
Selected Answer: D
Cost effective, D
upvoted 2 times
B option offers a cost-effective solution for storing and accessing high-resolution GIS images during natural disasters. Storing the images in
Amazon S3 buckets provides scalable and durable storage, while using Amazon DynamoDB allows for quick and efficient retrieval of images based
on geographic codes. This solution leverages the strengths of both S3 and DynamoDB to meet the requirements of high availability, scalability, and
cost-effectiveness.
upvoted 1 times
Selected Answer: B
What was the company thinking, using the most expensive DB on the planet FOR ONE SINGLE TABLE???
Migrate a single table from SQL to NoSQL should be easy enough I guess...
upvoted 2 times
Question #373 Topic 1
A company has an application that collects data from IoT sensors on automobiles. The data is streamed and stored in Amazon S3 through
Amazon Kinesis Data Firehose. The data produces trillions of S3 objects each year. Each morning, the company uses the data from the previous 30 days to retrain a suite of ML models.
Four times each year, the company uses the data from the previous 12 months to perform analysis and train other ML models. The data must be
available with minimal delay for up to 1 year. After 1 year, the data must be retained for archival purposes.
A. Use the S3 Intelligent-Tiering storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 1 year.
B. Use the S3 Intelligent-Tiering storage class. Configure S3 Intelligent-Tiering to automatically move objects to S3 Glacier Deep Archive after
1 year.
C. Use the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Create an S3 Lifecycle policy to transition objects to S3 Glacier
D. Use the S3 Standard storage class. Create an S3 Lifecycle policy to transition objects to S3 Standard-Infrequent Access (S3 Standard-IA)
Correct Answer: D
Selected Answer: D
1. Each morning, the company uses the data from the previous 30 days
2. Four times each year, the company uses the data from the previous 12 months to perform analysis and train other ML models
3. The data must be available with minimal delay for up to 1 year. After 1 year, the data must be retained for archival purposes
The data ingestion happens 4 times a year, that means that after the initial 30 days it still needs to be pulled 3 more times, why would you put the
data in standard infrequent if you were going to use it 3 more times and speed is a requirement? Makes more sense to put it in S3 standard, or
intelligent then straight to glacier.
upvoted 1 times
Selected Answer: D
Clear access pattern. Data in Standard-Infrequent Access is for data that still requires rapid access when needed.
upvoted 1 times
Selected Answer: D
Selected Answer: D
Selected Answer: A
The data is used every day (typical use case for Standard) for 30 days, for the remaining 12 months it is used 3 or 4 times (typical use case for IA),
after 12 months it is not used at all but must be kept (typical use case for Glacier Deep Archive).
upvoted 1 times
Selected Answer: D
This option optimizes costs while meeting the data access requirements:
Option A meets the requirements most cost-effectively. The S3 Intelligent-Tiering storage class provides automatic tiering of objects between the
S3 Standard and S3 Standard-Infrequent Access (S3 Standard-IA) tiers based on changing access patterns, which helps optimize costs. The S3
Lifecycle policy can be used to transition objects to S3 Glacier Deep Archive after 1 year for archival purposes. This solution also meets the
requirement for minimal delay in accessing data for up to 1 year. Option B is not cost-effective because it does not include the transition of data to
S3 Glacier Deep Archive after 1 year. Option C is not the best solution because S3 Standard-IA is not designed for long-term archival purposes and
incurs higher storage costs. Option D is also not the most cost-effective solution as it transitions objects to the S3 Standard-IA tier after 30 days,
which is unnecessary for the requirement to retrain the suite of ML models each morning using data from the previous 30 days.
upvoted 1 times
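For reference, option D maps to a single S3 Lifecycle configuration: objects start in S3 Standard, transition to Standard-IA after 30 days, and move to Glacier Deep Archive after a year. This is a minimal sketch; the bucket name and rule ID are placeholder assumptions.

```python
# Hypothetical sketch of option D: Standard for 30 days, Standard-IA until 1 year, then Deep Archive.
# The bucket name is a placeholder value.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="iot-sensor-data",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-then-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```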
Selected Answer: D
Bbbbbbbbb
upvoted 1 times
Selected Answer: D
ddddddd
upvoted 4 times
Question #374 Topic 1
A company is running several business applications in three separate VPCs within the us-east-1 Region. The applications must be able to
communicate between VPCs. The applications also must be able to consistently send hundreds of gigabytes of data each day to a latency-
A solutions architect needs to design a network connectivity solution that maximizes cost-effectiveness.
A. Configure three AWS Site-to-Site VPN connections from the data center to AWS. Establish connectivity by configuring one VPN connection
B. Launch a third-party virtual network appliance in each VPC. Establish an IPsec VPN tunnel between the data center and each virtual
appliance.
C. Set up three AWS Direct Connect connections from the data center to a Direct Connect gateway in us-east-1. Establish connectivity by
D. Set up one AWS Direct Connect connection from the data center to AWS. Create a transit gateway, and attach each VPC to the transit
gateway. Establish connectivity between the Direct Connect connection and the transit gateway.
Correct Answer: D
Selected Answer: D
AWS Transit Gateway connects your Amazon Virtual Private Clouds (VPCs) and on-premises networks through a central hub. This connection
simplifies your network and puts an end to complex peering relationships. Transit Gateway acts as a highly scalable cloud router—each new
connection is made only once.
https://aws.amazon.com/transit-gateway/#:~:text=AWS-,Transit%20Gateway,-connects%20your%20Amazon
upvoted 5 times
Selected Answer: D
AWS Direct connect is costly but the saving comes from less data transfer cost with Direct Connect and Transit gateway
upvoted 3 times
Selected Answer: D
This option leverages a single Direct Connect for consistent, private connectivity between the data center and AWS. The transit gateway allows each
VPC to share the Direct Connect while keeping the VPCs isolated. This provides a cost-effective architecture to meet the requirements.
upvoted 4 times
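A rough sketch of option D follows: one transit gateway with a VPC attachment per VPC, then an association between the Direct Connect gateway and the transit gateway. All IDs (VPCs, subnets, Direct Connect gateway) are placeholder assumptions, and the Direct Connect connection and gateway themselves are assumed to already exist.

```python
# Hypothetical sketch of option D: a transit gateway hub shared by three VPCs and a DX gateway.
# VPC, subnet, and Direct Connect gateway IDs are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(Description="hub for three app VPCs")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach each VPC so they can reach each other and on premises through the hub.
for vpc_id, subnet_id in [("vpc-0aaa", "subnet-0aaa"), ("vpc-0bbb", "subnet-0bbb"), ("vpc-0ccc", "subnet-0ccc")]:
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=[subnet_id],
    )

# Associate the existing Direct Connect gateway with the transit gateway.
dx = boto3.client("directconnect")
dx.create_direct_connect_gateway_association(
    directConnectGatewayId="11112222-3333-4444-5555-666677778888",
    gatewayId=tgw_id,
)
```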
Selected Answer: D
Option D
upvoted 3 times
cost-effectiveness
D
upvoted 1 times
Selected Answer: D
maximizes cost-effectiveness
upvoted 2 times
ddddddddd
upvoted 2 times
Question #375 Topic 1
An ecommerce company is building a distributed application that involves several serverless functions and AWS services to complete order-
processing tasks. These tasks require manual approvals as part of the workflow. A solutions architect needs to design an architecture for the
order-processing application. The solution must be able to combine multiple AWS Lambda functions into responsive serverless applications. The
solution also must orchestrate data and services that run on Amazon EC2 instances, containers, or on-premises servers.
Which solution will meet these requirements with the LEAST operational overhead?
C. Use Amazon Simple Queue Service (Amazon SQS) to build the application.
D. Use AWS Lambda functions and Amazon EventBridge events to build the application.
Correct Answer: A
Selected Answer: A
AWS Step Functions is a fully managed service that makes it easy to build applications by coordinating the components of distributed applications
and microservices using visual workflows. With Step Functions, you can combine multiple AWS Lambda functions into responsive serverless
applications and orchestrate data and services that run on Amazon EC2 instances, containers, or on-premises servers. Step Functions also allows for
manual approvals as part of the workflow. This solution meets all the requirements with the least operational overhead.
upvoted 14 times
Selected Answer: A
Approval is explicit for the solution. -> "A common use case for AWS Step Functions is a task that requires human intervention (for example, an
approval process). Step Functions makes it easy to coordinate the components of distributed applications as a series of steps in a visual workflow
called a state machine. You can quickly build and run state machines to execute the steps of your application in a reliable and scalable fashion.
(https://aws.amazon.com/pt/blogs/compute/implementing-serverless-manual-approval-steps-in-aws-step-functions-and-amazon-api-gateway/)"
upvoted 5 times
Selected Answer: A
involves several serverless functions and AWS services, require manual approvals as part of the workflow, combine the Lambda functions into
responsive serverless applications, orchestrate data and services = AWS Step Functions
upvoted 3 times
AWS Step Functions allow you to easily coordinate multiple Lambda functions and services into serverless workflows with visual workflows. Step
Functions are designed for building distributed applications that combine services and require human approval steps.
Using Step Functions provides a fully managed orchestration service with minimal operational overhead.
upvoted 5 times
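To illustrate the manual-approval pattern the comments describe, here is a hedged sketch of a Step Functions state machine whose middle step pauses with a task token until an approver resumes it. The Lambda names, ARNs, and role are placeholder assumptions, not the company's actual functions.

```python
# Hypothetical sketch: a Step Functions workflow with a manual-approval step.
# All ARNs and names are placeholder values. The approval task hands the task token to an
# approver (e.g. via email or a ticket) and waits until SendTaskSuccess is called.
import json
import boto3

definition = {
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process-order",
            "Next": "WaitForApproval",
        },
        "WaitForApproval": {
            "Type": "Task",
            # .waitForTaskToken pauses the execution until the token is returned.
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "notify-approver",
                "Payload": {"order.$": "$", "token.$": "$$.Task.Token"},
            },
            "Next": "FulfillOrder",
        },
        "FulfillOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:fulfill-order",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="order-processing",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/stepfunctions-order-role",
)

# An approver (or an approval UI) later resumes the workflow with the saved token:
# sfn.send_task_success(taskToken=token, output=json.dumps({"approved": True}))
```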
Selected Answer: A
Reference: https://aws.amazon.com/step-functions/#:~:text=AWS%20Step%20Functions%20is%20a,machine%20learning%20(ML)%20pipelines.
upvoted 3 times
Selected Answer: A
Option A: Use AWS Step Functions to build the application.
AWS Step Functions is a serverless workflow service that makes it easy to coordinate distributed applications and microservices using visual
workflows. It is an ideal solution for designing architectures for distributed applications that involve multiple AWS services and serverless functions
as it allows us to orchestrate the flow of our application components using visual workflows. AWS Step Functions also integrates with other AWS
services like AWS Lambda, Amazon EC2, and Amazon ECS, and it has built-in error handling and retry mechanisms. This option provides a
serverless solution with the least operational overhead for building the application.
upvoted 4 times
Question #376 Topic 1
A company has launched an Amazon RDS for MySQL DB instance. Most of the connections to the database come from serverless applications.
Application traffic to the database changes significantly at random intervals. At times of high demand, users report that their applications cannot connect to the database.
Which solution will resolve this issue with the LEAST operational overhead?
A. Create a proxy in RDS Proxy. Configure the users’ applications to use the DB instance through RDS Proxy.
B. Deploy Amazon ElastiCache for Memcached between the users’ applications and the DB instance.
C. Migrate the DB instance to a different instance class that has higher I/O capacity. Configure the users’ applications to use the new DB
instance.
D. Configure Multi-AZ for the DB instance. Configure the users’ applications to switch between the DB instances.
Correct Answer: A
Selected Answer: A
Selected Answer: A
RDS Proxy provides a proxy layer that pools and shares database connections to improve scalability. This allows the proxy to handle connection
spikes to the database gracefully.
Using RDS Proxy requires minimal operational overhead - just create the proxy and reconfigure applications to use it. No code changes needed.
upvoted 3 times
Selected Answer: A
Many applications, including those built on modern serverless architectures, can have a large number of open connections to the database server
and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows
applications to pool and share connections established with the database, improving database efficiency and application scalability.
(https://aws.amazon.com/pt/rds/proxy/)
upvoted 3 times
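As a minimal sketch of option A, the calls below create an RDS Proxy for the MySQL instance and register the instance as its target; applications then connect to the proxy endpoint. The proxy name, Secrets Manager ARN, IAM role, subnet IDs, and DB identifier are placeholder assumptions.

```python
# Hypothetical sketch of option A: an RDS Proxy in front of the MySQL instance.
# Names, ARNs, and subnet IDs are placeholder values.
import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-credentials",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0aaa", "subnet-0bbb"],
)

# Point the proxy's default target group at the existing DB instance.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    TargetGroupName="default",
    DBInstanceIdentifiers=["mysql-prod-instance"],
)
# Applications then use the proxy endpoint instead of the instance endpoint.
```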
Selected Answer: A
The correct solution for this scenario would be to create a proxy in RDS Proxy. RDS Proxy allows for managing thousands of concurrent database
connections, which can help reduce connection errors. RDS Proxy also provides features such as connection pooling, read/write splitting, and
retries. This solution requires the least operational overhead as it does not involve migrating to a different instance class or setting up a new cache
layer. Therefore, option A is the correct answer.
upvoted 4 times
Question #377 Topic 1
A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software
for Amazon EC2 instances. A solutions architect must ensure all instances provisioned through EC2 Auto Scaling groups successfully send
reports to the auditing system as soon as they are launched and terminated.
A. Use a scheduled AWS Lambda function and run a script remotely on all EC2 instances to send data to the audit system.
B. Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated.
C. Use an EC2 Auto Scaling launch configuration to run a custom script through user data to send data to the audit system when instances are
D. Run a custom script on the instance operating system to send data to the audit system. Configure the script to be invoked by the EC2 Auto
Correct Answer: B
Selected Answer: B
The most efficient solution for this scenario is to use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when
instances are launched and terminated. The lifecycle hook can be used to delay instance termination until the script has completed, ensuring that
all data is sent to the audit system before the instance is terminated. This solution is more efficient than using a scheduled AWS Lambda function,
which would require running the function periodically and may not capture all instances launched and terminated within the interval. Running a
custom script through user data is also not an optimal solution, as it may not guarantee that all instances send data to the audit system. Therefore,
option B is the correct answer.
upvoted 9 times
Selected Answer: B
Use EC2 Auto Scaling lifecycle hooks to run a custom script to send data to the audit system when instances are launched and terminated
upvoted 1 times
Selected Answer: B
EC2 Auto Scaling lifecycle hooks allow you to perform custom actions as instances launch and terminate. This is the most efficient way to trigger
the auditing script execution at instance launch and termination.
upvoted 4 times
Selected Answer: B
https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html
upvoted 3 times
Selected Answer: B
Amazon EC2 Auto Scaling offers the ability to add lifecycle hooks to your Auto Scaling groups. These hooks let you create solutions that are aware
of events in the Auto Scaling instance lifecycle, and then perform a custom action on instances when the corresponding lifecycle event occurs.
(https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html)
upvoted 4 times
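For context, option B boils down to two lifecycle hooks on the Auto Scaling group, one for launch and one for termination. This is a hedged sketch: the group name, SQS queue, role ARN, and heartbeat timeout are placeholder assumptions, and the audit script is assumed to consume the notifications.

```python
# Hypothetical sketch of option B: lifecycle hooks for launch and terminate events.
# The group name and notification target/role ARNs are placeholder values.
import boto3

autoscaling = boto3.client("autoscaling")

for hook_name, transition in [
    ("report-on-launch", "autoscaling:EC2_INSTANCE_LAUNCHING"),
    ("report-on-terminate", "autoscaling:EC2_INSTANCE_TERMINATING"),
]:
    autoscaling.put_lifecycle_hook(
        LifecycleHookName=hook_name,
        AutoScalingGroupName="app-asg",
        LifecycleTransition=transition,
        NotificationTargetARN="arn:aws:sqs:us-east-1:111122223333:audit-events",  # consumed by the audit script
        RoleARN="arn:aws:iam::111122223333:role/asg-lifecycle-hook-role",
        HeartbeatTimeout=300,  # time allowed for the audit report before the action continues
    )
```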
Question #378 Topic 1
A company is developing a real-time multiplayer game that uses UDP for communications between the client and servers in an Auto Scaling
group. Spikes in demand are anticipated during the day, so the game server platform must adapt accordingly. Developers want to store gamer
scores and other non-relational data in a database solution that will scale without intervention.
A. Use Amazon Route 53 for traffic distribution and Amazon Aurora Serverless for data storage.
B. Use a Network Load Balancer for traffic distribution and Amazon DynamoDB on-demand for data storage.
C. Use a Network Load Balancer for traffic distribution and Amazon Aurora Global Database for data storage.
D. Use an Application Load Balancer for traffic distribution and Amazon DynamoDB global tables for data storage.
Correct Answer: B
Selected Answer: B
UDP = NLB
Non-relational data = Dynamo DB
upvoted 14 times
Selected Answer: B
This option provides the most scalable and optimized architecture for the real-time multiplayer game:
Network Load Balancer efficiently distributes UDP gaming traffic to the Auto Scaling group of game servers.
DynamoDB On-Demand mode provides auto-scaling non-relational data storage for gamer scores and other game data. DynamoDB is optimized
for fast, high-scale access patterns seen in gaming.
Together, the Network Load Balancer and DynamoDB On-Demand provide an architecture that can smoothly scale up and down to match spikes in
gaming demand.
upvoted 4 times
Selected Answer: B
Option B is a good fit because a Network Load Balancer can handle UDP traffic, and Amazon DynamoDB on-demand can provide automatic scaling
without intervention
upvoted 2 times
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/29756-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Question #379 Topic 1
A company hosts a frontend application that uses an Amazon API Gateway API backend that is integrated with AWS Lambda. When the API
receives requests, the Lambda function loads many libraries. Then the Lambda function connects to an Amazon RDS database, processes the
data, and returns the data to the frontend application. The company wants to ensure that response latency is as low as possible for all its users
A. Establish a connection between the frontend application and the database to make queries faster by bypassing the API.
B. Configure provisioned concurrency for the Lambda function that handles the requests.
C. Cache the results of the queries in Amazon S3 for faster retrieval of similar datasets.
D. Increase the size of the database to increase the number of connections Lambda can establish at one time.
Correct Answer: B
Selected Answer: B
Configuring provisioned concurrency would get rid of the "cold start" of the function, therefore speeding up the process.
upvoted 16 times
Selected Answer: B
Provisioned concurrency – Provisioned concurrency initializes a requested number of execution environments so that they are prepared to respond
immediately to your function's invocations. Note that configuring provisioned concurrency incurs charges to your AWS account.
upvoted 10 times
Selected Answer: B
Provisioned concurrency pre-initializes execution environments which are prepared to respond immediately to incoming function requests.
upvoted 6 times
Selected Answer: B
Provisioned concurrency ensures a configured number of execution environments are ready to serve requests to the Lambda function. This avoids
cold starts where the function would otherwise need to load all the libraries on each invocation.
upvoted 3 times
Selected Answer: B
Answer B is correct
https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html
Answer C: need to modify the application
upvoted 4 times
Selected Answer: B
https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html
upvoted 3 times
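For reference, provisioned concurrency is a single configuration call against a published version or alias. The function name, alias, and concurrency value below are placeholder assumptions.

```python
# Hypothetical sketch of option B: provisioned concurrency on a published alias of the function.
# The function name, alias, and concurrency value are placeholder values.
import boto3

lam = boto3.client("lambda")

lam.put_provisioned_concurrency_config(
    FunctionName="api-backend",
    Qualifier="live",                     # an alias or version; $LATEST is not supported
    ProvisionedConcurrentExecutions=50,   # pre-initialized execution environments, no cold starts
)
```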
Question #380 Topic 1
A company is migrating its on-premises workload to the AWS Cloud. The company already uses several Amazon EC2 instances and Amazon RDS
DB instances. The company wants a solution that automatically starts and stops the EC2 instances and DB instances outside of business hours.
A. Scale the EC2 instances by using elastic resize. Scale the DB instances to zero outside of business hours.
B. Explore AWS Marketplace for partner solutions that will automatically start and stop the EC2 instances and DB instances on a schedule.
C. Launch another EC2 instance. Configure a crontab schedule to run shell scripts that will start and stop the existing EC2 instances and DB
instances on a schedule.
D. Create an AWS Lambda function that will start and stop the EC2 instances and DB instances. Configure Amazon EventBridge to invoke the
Correct Answer: D
Selected Answer: D
The most efficient solution for automatically starting and stopping EC2 instances and DB instances on a schedule while minimizing cost and
infrastructure maintenance is to create an AWS Lambda function and configure Amazon EventBridge to invoke the function on a schedule.
Option A, scaling EC2 instances by using elastic resize and scaling DB instances to zero outside of business hours, is not feasible as DB instances
cannot be scaled to zero.
Option B, exploring AWS Marketplace for partner solutions, may be an option, but it may not be the most efficient solution and could potentially
add additional costs.
Option C, launching another EC2 instance and configuring a crontab schedule to run shell scripts that will start and stop the existing EC2 instances
and DB instances on a schedule, adds unnecessary infrastructure and maintenance.
upvoted 16 times
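To sketch option D, the Lambda handler below stops the instances; a mirror-image function (or a branch on the event payload) would start them in the morning, with EventBridge schedules invoking each. The instance IDs, DB identifier, and cron expression are placeholder assumptions.

```python
# Hypothetical sketch of option D: a Lambda function that stops EC2 and RDS outside business hours.
# Instance IDs and the DB identifier are placeholder values.
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

def handler(event, context):
    # Stop the EC2 instances used by the workload.
    ec2.stop_instances(InstanceIds=["i-0abc123", "i-0def456"])
    # Stop the RDS DB instance (restarted later with start_db_instance).
    rds.stop_db_instance(DBInstanceIdentifier="app-mysql")

# An EventBridge schedule such as cron(0 19 ? * MON-FRI *) would invoke this function in the
# evening, and a second schedule would invoke a corresponding "start" function in the morning.
```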
Selected Answer: D
This option leverages AWS Lambda and EventBridge to automatically schedule the starting and stopping of resources.
Selected Answer: D
Create an AWS Lambda function that will start and stop the EC2 instances and DB instances. Configure Amazon EventBridge to invoke the Lambda
function on a schedule.
upvoted 3 times
Selected Answer: D
DDDDDDDDDDD
upvoted 1 times
Question #381 Topic 1
A company hosts a three-tier web application that includes a PostgreSQL database. The database stores the metadata from documents. The
company searches the metadata for key terms to retrieve documents that the company reviews in a report each month. The documents are stored
in Amazon S3. The documents are usually written only once, but they are updated frequently.
The reporting process takes a few hours with the use of relational queries. The reporting process must not prevent any document modifications or
the addition of new documents. A solutions architect needs to implement a solution to speed up the reporting process.
Which solution will meet these requirements with the LEAST amount of change to the application code?
A. Set up a new Amazon DocumentDB (with MongoDB compatibility) cluster that includes a read replica. Scale the read replica to generate the
reports.
B. Set up a new Amazon Aurora PostgreSQL DB cluster that includes an Aurora Replica. Issue queries to the Aurora Replica to generate the
reports.
C. Set up a new Amazon RDS for PostgreSQL Multi-AZ DB instance. Configure the reporting module to query the secondary RDS node so that
D. Set up a new Amazon DynamoDB table to store the documents. Use a fixed write capacity to support new document entries. Automatically
Correct Answer: B
Selected Answer: B
Aurora PostgreSQL provides native PostgreSQL compatibility, so minimal code changes would be required.
Using an Aurora Replica separates the reporting workload from the main workload, preventing any slowdown of document updates/inserts.
Aurora can auto-scale read replicas to handle the reporting load.
This allows leveraging the existing PostgreSQL database without major changes. DynamoDB would require more significant rewrite of data access
code.
RDS Multi-AZ alone would not fully separate the workloads, as the secondary is for HA/failover more than scaling read workloads.
upvoted 10 times
Selected Answer: B
Selected Answer: B
We also have a requirement for the Least amount of change to the code.
Since our DB is PostgreSQL, A & D are immediately out.
Multi-AZ won't help with offloading read requests, hence the answer is B ;)
upvoted 3 times
Selected Answer: C
D. Reporting process Must not prevent = allow modification and addition of new document.
Selected Answer: A
Why not A? :(
upvoted 1 times
B is the right one. Why doesn't the admin correct these wrong answers?
upvoted 3 times
Selected Answer: B
The reporting process queries the metadata (not the documents) and uses relational queries -> A, D out
C: wrong since the secondary RDS node in a Multi-AZ setup is in standby mode, not available for querying
B: reporting using a replica is a design pattern. Using Aurora is an exam pattern.
upvoted 4 times
Selected Answer: B
B is right..
upvoted 1 times
Selected Answer: B
Selected Answer: B
Option B (Set up a new Amazon Aurora PostgreSQL DB cluster that includes an Aurora Replica. Issue queries to the Aurora Replica to generate the
reports) is the best option for speeding up the reporting process for a three-tier web application that includes a PostgreSQL database storing
metadata from documents, while not impacting document modifications or additions, with the least amount of change to the application code.
upvoted 2 times
Selected Answer: B
Aurora is a relational database; it supports PostgreSQL, and with the help of read replicas we can offload the reporting process that takes several
hours to the replica, therefore not affecting the primary node, which can handle new writes or document modifications.
upvoted 1 times
Selected Answer: B
bbbbbbbb
upvoted 1 times
Question #382 Topic 1
A company has a three-tier application on AWS that ingests sensor data from its users’ devices. The traffic flows through a Network Load Balancer
(NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier. The application tier makes calls to a
database.
What should a solutions architect do to improve the security of the data in transit?
C. Change the load balancer to an Application Load Balancer (ALB). Enable AWS WAF on the ALB.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances by using AWS Key Management Service (AWS KMS).
Correct Answer: A
Selected Answer: A
Network Load Balancers now support TLS protocol. With this launch, you can now offload resource intensive decryption/encryption from your
application servers to a high throughput, and low latency Network Load Balancer. Network Load Balancer is now able to terminate TLS traffic and
set up connections with your targets either over TCP or TLS protocol.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html
https://exampleloadbalancer.com/nlbtls_demo.html
upvoted 19 times
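As a minimal sketch of option A, the call below adds a TLS listener to the existing NLB using an ACM certificate and forwards decrypted traffic to the web-tier target group. The load balancer, target group, and certificate ARNs are placeholder assumptions.

```python
# Hypothetical sketch of option A: a TLS listener on the existing NLB with an ACM certificate.
# All ARNs are placeholder values.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/ingest-nlb/abc123",
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/1234abcd"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tier/def456",
    }],
)
```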
Selected Answer: A
security of data in transit -> think of SSL/TLS. Check: NLB supports TLS
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-tls-listener.html
Selected Answer: A
Selected Answer: A
TLS provides encryption for data in motion over the network, protecting against eavesdropping and tampering. A valid server certificate signed by
a trusted CA will provide further security.
upvoted 5 times
Selected Answer: A
To improve the security of data in transit, you can configure a TLS listener on the Network Load Balancer (NLB) and deploy the server certificate on
it. This will encrypt traffic between clients and the NLB. You can also use AWS Certificate Manager (ACM) to provision, manage, and deploy SSL/TLS
certificates for use with AWS services and your internal connected resources1.
You can also change the load balancer to an Application Load Balancer (ALB) and enable AWS WAF on it. AWS WAF is a web application firewall
that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume
excessive resources3.
A and C could both improve security, but the need is to improve the security of the data in transit, so SSL/TLS certificates are needed.
upvoted 2 times
Selected Answer: A
Question #383 Topic 1
A company is planning to migrate a commercial off-the-shelf application from its on-premises data center to AWS. The software has a software
licensing model using sockets and cores with predictable capacity and uptime requirements. The company wants to use its existing licenses,
Correct Answer: A
Selected Answer: A
Selected Answer: A
Here's why:
License Flexibility: Dedicated Reserved Hosts allow the company to bring their existing licenses to AWS. This option enables them to continue using
their purchased licenses without any additional cost or licensing changes.
Cost Optimization: Reserved Hosts offer significant cost savings compared to On-Demand pricing. By purchasing Reserved Hosts, the company can
benefit from discounted hourly rates for the entire term of the reservation, which typically spans one or three years.
upvoted 2 times
Selected Answer: A
Actually the question is a bit ambiguous, because there ARE "software licensing models using sockets and cores" that accept virtual sockets and cores
as the base, for which C would work. But most of these license models are based on PHYSICAL sockets, thus A.
upvoted 3 times
Selected Answer: A
Dedicated Hosts give you visibility and control over how instances are placed on a physical server and also enable you to use your existing server-
bound software licenses like Windows Server
upvoted 2 times
Selected Answer: C
Dedicated Reserved Instances (DRIs) are the most cost-effective option for workloads that have predictable capacity and uptime requirements. DRIs
offer a significant discount over On-Demand Instances, and they can be used to lock in a price for a period of time.
In this case, the company has predictable capacity and uptime requirements because the software has a software licensing model using sockets
and cores. The company also wants to use its existing licenses, which were purchased earlier this year. Therefore, DRIs are the most cost-effective
option.
upvoted 3 times
Selected Answer: C
I don't agree with people voting "A". The question references that the COTS application has a licensing model based on "sockets and cores". The
question does not specify whether it means TCP sockets (= open connections) or hardware sockets, so I assume that TCP sockets are intended. If this is
the case, sockets and cores can also remain stable with reserved instances - which are cheaper than reserved hosts.
I would go with "A" only if the question clearly stated that the COTS application has some strong dependency on physical hardware.
upvoted 1 times
Selected Answer: A
Bring custom purchased licenses to AWS -> Dedicated Host -> C,D out
Need cost effective solution -> "reserved" -> A
upvoted 4 times
Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2, so
that you get the flexibility and cost effectiveness of using your own licenses, but with the resiliency, simplicity and elasticity of AWS.
upvoted 1 times
Selected Answer: A
Dedicated Host Reservations provide a billing discount compared to running On-Demand Dedicated Hosts. Reservations are available in three
payment options.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/dedicated-hosts-overview.html
upvoted 3 times
Question #384 Topic 1
A company runs an application on Amazon EC2 Linux instances across multiple Availability Zones. The application needs a storage layer that is
highly available and Portable Operating System Interface (POSIX)-compliant. The storage layer must provide maximum data durability and must be
shareable across the EC2 instances. The data in the storage layer will be accessed frequently for the first 30 days and will be accessed
A. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed data to S3 Glacier.
B. Use the Amazon S3 Standard storage class. Create an S3 Lifecycle policy to move infrequently accessed data to S3 Standard-Infrequent
C. Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a lifecycle management policy to move infrequently
D. Use the Amazon Elastic File System (Amazon EFS) One Zone storage class. Create a lifecycle management policy to move infrequently
Correct Answer: C
Selected Answer: C
Selected Answer: C
POSIX -> EFS, "maximum data durability" rules out One Zone
upvoted 3 times
Selected Answer: C
Also, EFS One Zone can work with multiple EC2 instances in different AZs, but there will be a cost involved when you are accessing the EFS from an EC2
instance in a different AZ (EC2 data access charges).
https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html
So if all EC2 instances are accessing the files frequently, there will be a storage cost + EC2 data access charges if you choose One Zone.
So I would choose C.
upvoted 1 times
https://aws.amazon.com/efs/features/infrequent-access/
upvoted 1 times
Selected Answer: C
Use the Amazon Elastic File System (Amazon EFS) Standard storage class. Create a lifecycle management policy to move infrequently accessed data
to EFS Standard-Infrequent Access (EFS Standard-IA).
upvoted 2 times
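For reference, the EFS lifecycle policy in option C is one API call. This is a hedged sketch: the file system ID is a placeholder, and the move-back-on-access policy is an optional assumption rather than something the question requires.

```python
# Hypothetical sketch of option C: transition files not read for 30 days to EFS Standard-IA.
# The file system ID is a placeholder value.
import boto3

efs = boto3.client("efs")

efs.put_lifecycle_configuration(
    FileSystemId="fs-0abc1234567890def",
    LifecyclePolicies=[
        {"TransitionToIA": "AFTER_30_DAYS"},                    # frequently accessed for 30 days, then IA
        {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},  # optional: move back to Standard when read again
    ],
)
```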
Amazon Elastic File System (Amazon EFS) Standard storage class = "maximum data durability"
upvoted 1 times
Selected Answer: D
D - It should be cost-effective
upvoted 2 times
A Linux-based system points to EFS, and the POSIX-compliant requirement is also EFS related.
upvoted 2 times
Selected Answer: C
Selected Answer: C
Question #385 Topic 1
A solutions architect is creating a new VPC design. There are two public subnets for the load balancer, two private subnets for web servers, and
two private subnets for MySQL. The web servers use only HTTPS. The solutions architect has already created a security group for the load
balancer allowing port 443 from 0.0.0.0/0. Company policy requires that each resource has the least access required to still be able to perform its
tasks.
Which additional configuration strategy should the solutions architect use to meet these requirements?
A. Create a security group for the web servers and allow port 443 from 0.0.0.0/0. Create a security group for the MySQL servers and allow port
B. Create a network ACL for the web servers and allow port 443 from 0.0.0.0/0. Create a network ACL for the MySQL servers and allow port
C. Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and
D. Create a network ACL for the web servers and allow port 443 from the load balancer. Create a network ACL for the MySQL servers and allow
Correct Answer: C
Selected Answer: C
Option C aligns with the least access principle and provides clear and granular control over the communication between different components in
the architecture.
Option D suggests using network ACLs, but security groups are more suitable for controlling access to individual instances based on their security
group membership, which is why Option C is the more appropriate choice in this context.
upvoted 2 times
Selected Answer: C
Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow
port 3306 from the web servers security group.
upvoted 2 times
C) Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow
port 3306 from the web servers security group.
This option follows the principle of least privilege by only allowing necessary access:
Web server SG allows port 443 from load balancer SG (not open to world)
MySQL SG allows port 3306 only from web server SG
upvoted 3 times
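To make the security group chaining in option C concrete, here is a minimal sketch. The security group IDs are placeholder assumptions; the load balancer group is assumed to already allow 443 from 0.0.0.0/0.

```python
# Hypothetical sketch of option C: security group chaining. Group IDs are placeholder values.
import boto3

ec2 = boto3.client("ec2")

# Web tier: HTTPS only, and only from the load balancer's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-web123",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-lb123"}],
    }],
)

# Database tier: MySQL only, and only from the web tier's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-mysql123",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-web123"}],
    }],
)
```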
Selected Answer: C
Create a security group for the web servers and allow port 443 from the load balancer. Create a security group for the MySQL servers and allow
port 3306 from the web servers security group
upvoted 1 times
Selected Answer: C
The load balancer is public facing, accepting all traffic coming towards the VPC (0.0.0.0/0). The web server needs to trust traffic originating from the
ALB. The DB will only trust traffic originating from the web server on port 3306 for MySQL.
upvoted 4 times
Selected Answer: C
Selected Answer: C
cccccc
upvoted 1 times
Question #386 Topic 1
An ecommerce company is running a multi-tier application on AWS. The front-end and backend tiers both run on Amazon EC2, and the database
runs on Amazon RDS for MySQL. The backend tier communicates with the RDS instance. There are frequent calls to return identical datasets from
D. Implement Amazon Kinesis Data Firehose to stream the calls to the database.
Correct Answer: B
Selected Answer: B
the best solution is to implement Amazon ElastiCache to cache the large datasets, which will store the frequently accessed data in memory,
allowing for faster retrieval times. This can help to alleviate the frequent calls to the database, reduce latency, and improve the overall performance
of the backend tier.
upvoted 13 times
Selected Answer: B
Answer is B. This will help reduce the frequency of calls to the database and improve overall performance by serving frequently accessed data from
the cache instead of fetching it from the database every time. It is not option C, which suggests implementing an RDS for MySQL read replica to
cache database calls. While read replicas can offload read operations from the primary database instance and improve read scalability, they are
primarily used for read scaling and high availability rather than caching. Read replicas are intended to handle read-heavy workloads by distributing
read requests across multiple instances. However, they do not inherently cache data like ElastiCache does.
upvoted 1 times
This will help reduce the frequency of calls to the database and improve overall performance by serving frequently accessed data from the cache
instead of fetching it from the database every time.
It is not option C, as it suggests implementing an RDS for MySQL read replica to cache database calls. While read replicas can offload read
operations from the primary database instance and improve read scalability, they are primarily used for read scaling and high availability rather
than caching.
Read replicas are intended to handle read-heavy workloads by distributing read requests across multiple instances. However, they do not
inherently cache data like ElastiCache does.
upvoted 1 times
Selected Answer: B
As per Amazon Q:
ElastiCache can be used to cache datasets from queries to RDS databases. Some key points:
While creating an ElastiCache cluster from the RDS console provides convenience, the application is still responsible for leveraging the cache.
Caching query results in ElastiCache can significantly improve performance by allowing high-volume read operations to be served from cache
versus hitting the database.
This is especially useful for applications with high read throughput needs, as scaling the database can become more expensive compared to scaling
the cache as needs increase. ElastiCache nodes can support up to 400,000 queries per second.
Cost savings are directly proportional to read throughput - higher throughput applications see greater savings.
upvoted 1 times
Selected Answer: B
The best scenario to implement caching: identical calls to the same datasets.
upvoted 2 times
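To show how the backend would actually use ElastiCache, here is a hedged cache-aside sketch with redis-py. The cache endpoint, key format, TTL, and the query_rds_for_dataset helper are all placeholder assumptions standing in for the application's real database access code.

```python
# Hypothetical sketch: cache-aside pattern against ElastiCache for Redis.
# The endpoint, key format, TTL, and query helper are placeholder assumptions.
import json
import redis

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

def query_rds_for_dataset(dataset_id):
    # Placeholder for the existing backend query against RDS for MySQL.
    return [{"id": dataset_id, "value": "example-row"}]

def get_dataset(dataset_id, ttl_seconds=300):
    key = f"dataset:{dataset_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit: the database is not touched
    rows = query_rds_for_dataset(dataset_id)   # cache miss: query the database once
    cache.setex(key, ttl_seconds, json.dumps(rows))  # keep the result for identical calls
    return rows
```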
Selected Answer: B
The key issue is repeated calls to return identical datasets from the RDS database causing performance slowdowns.
Implementing Amazon ElastiCache for Redis or Memcached would allow these repeated query results to be cached, improving backend
performance by reducing load on the database.
upvoted 3 times
The key term is "identical datasets from the database". It means caching can solve this issue by caching the frequently used datasets from the DB.
upvoted 4 times
Question #387 Topic 1
A new employee has joined a company as a deployment engineer. The deployment engineer will be using AWS CloudFormation templates to
create multiple AWS resources. A solutions architect wants the deployment engineer to perform job activities while following the principle of least
privilege.
Which combination of actions should the solutions architect take to accomplish this goal? (Choose two.)
A. Have the deployment engineer use AWS account root user credentials for performing AWS CloudFormation stack operations.
B. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the PowerUsers IAM policy attached.
C. Create a new IAM user for the deployment engineer and add the IAM user to a group that has the AdministratorAccess IAM policy attached.
D. Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS
E. Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch
Correct Answer: DE
Selected Answer: DE
Selected Answer: DE
A, B and C are just giving too much access, so D and E are the logical choices.
upvoted 2 times
Selected Answer: DE
Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation
actions only.
Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks
using that IAM role.
upvoted 2 times
The two actions that should be taken to follow the principle of least privilege are:
D) Create a new IAM user for the deployment engineer and add the IAM user to a group that has an IAM policy that allows AWS CloudFormation
actions only.
E) Create an IAM role for the deployment engineer to explicitly define the permissions specific to the AWS CloudFormation stack and launch stacks
using that IAM role.
The principle of least privilege states that users should only be given the minimal permissions necessary to perform their job function.
upvoted 3 times
Selected Answer: DE
Option D, creating a new IAM user and adding them to a group with an IAM policy that allows AWS CloudFormation actions only, ensures that the
deployment engineer has the necessary permissions to perform AWS CloudFormation operations while limiting access to other resources and
actions. This aligns with the principle of least privilege by providing the minimum required permissions for their job activities.
Option E, creating an IAM role with specific permissions for AWS CloudFormation stack operations and allowing the deployment engineer to
assume that role, is another valid approach. By using an IAM role, the deployment engineer can assume the role when necessary, granting them
temporary permissions to perform CloudFormation actions. This provides a level of separation and limits the permissions granted to the engineer
to only the required CloudFormation operations.
upvoted 2 times
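As a hedged sketch of option D with boto3 (the policy scope, group name, and policy name are assumptions, not the exam's exact policy):

import json
import boto3

iam = boto3.client("iam")

# Allow only CloudFormation actions; everything else stays implicitly denied.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "cloudformation:*", "Resource": "*"}],
}

policy = iam.create_policy(
    PolicyName="CloudFormationOnly",
    PolicyDocument=json.dumps(policy_doc),
)
iam.attach_group_policy(
    GroupName="deployment-engineers",      # placeholder group for the engineer
    PolicyArn=policy["Policy"]["Arn"],
)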
Selected Answer: DE
Dddd,Eeee
upvoted 1 times
Selected Answer: DE
Selected Answer: DE
I agree DE
upvoted 2 times
Question #388 Topic 1
A company is deploying a two-tier web application in a VPC. The web tier is using an Amazon EC2 Auto Scaling group with public subnets that
span multiple Availability Zones. The database tier consists of an Amazon RDS for MySQL DB instance in separate private subnets. The web tier
The web application is not working as intended. The web application reports that it cannot connect to the database. The database is confirmed to
be up and running. All configurations for the network ACLs, security groups, and route tables are still in their default states.
A. Add an explicit rule to the private subnet’s network ACL to allow traffic from the web tier’s EC2 instances.
B. Add a route in the VPC route table to allow traffic between the web tier’s EC2 instances and the database tier.
C. Deploy the web tier's EC2 instances and the database tier’s RDS instance into two separate VPCs, and configure VPC peering.
D. Add an inbound rule to the security group of the database tier’s RDS instance to allow traffic from the web tiers security group.
Correct Answer: D
Selected Answer: D
Security group defaults block all inbound traffic. Add an inbound rule to the security group of the database tier’s RDS instance to allow traffic from
the web tier’s security group.
upvoted 10 times
Selected Answer: D
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
Default NACLs allow all traffic, and in this question NACLs, SGs and route tables are in their default states.
upvoted 2 times
I think the answer should be A. Since the services are in different subnets, the NACL would by default block all the incoming traffic to the subnet.
A security group rule wouldn't be able to override a NACL rule.
upvoted 1 times
Selected Answer: D
Security groups are tied to instances, whereas network ACLs are tied to subnets.
upvoted 4 times
Selected Answer: D
By default, all inbound traffic to an RDS instance is blocked. Therefore, an inbound rule needs to be added to the security group of the RDS
instance to allow traffic from the security group of the web tier's EC2 instances.
upvoted 3 times
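A minimal boto3 sketch of option D, assuming placeholder security group IDs; the key point is referencing the web tier's security group instead of a CIDR range.

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0a1b2c3d4e5f60001",       # database tier security group (placeholder)
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # Allow MySQL only from members of the web tier security group.
        "UserIdGroupPairs": [{"GroupId": "sg-0a1b2c3d4e5f60002"}],
    }],
)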
Selected Answer: D
D is correct option
upvoted 1 times
Selected Answer: D
ddddddd
upvoted 2 times
Question #389 Topic 1
A company has a large dataset for its online advertising business stored in an Amazon RDS for MySQL DB instance in a single Availability Zone.
The company wants business reporting queries to run without impacting the write operations to the production DB instance.
A. Deploy a read replica of the DB instance, and direct the business reporting queries to the read replica.
B. Scale out the DB instance horizontally by placing it behind an Elastic Load Balancer.
C. Scale up the DB instance to a larger instance type to handle write operations and queries.
D. Deploy the DB instance in multiple Availability Zones to process the business reporting queries.
Correct Answer: A
Selected Answer: A
reporting queries to run without impacting the write operations -> read replicas
upvoted 3 times
Selected Answer: A
RDS read replicas allow read-only copies of the production DB instance to be created
Queries to the read replica don't affect the source DB instance performance
This isolates reporting queries from production traffic and write operations
So using RDS read replicas is the best way to meet the requirements of running reporting queries without impacting production write operations.
upvoted 4 times
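For illustration, a hedged boto3 sketch of creating the read replica and pointing reporting at it; the instance identifiers are placeholders.

import boto3

rds = boto3.client("rds")

replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sales-db-reporting",    # new read replica (placeholder name)
    SourceDBInstanceIdentifier="sales-db",        # existing production instance
)
# Business reporting tools connect to the replica endpoint once it is available,
# leaving the primary instance free to handle writes.
print(replica["DBInstance"]["DBInstanceIdentifier"])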
Selected Answer: A
"single AZ", "large dataset", "Amazon RDS for MySQL database". Want "business report queries". --> Solution "Read replicas", choose A.
upvoted 1 times
No doubt A.
upvoted 2 times
Selected Answer: A
Option "A" is the right answer. Read replica use cases: you have a production database that is taking on normal load, and you want to run a reporting application for some analytics.
• You create a read replica to run the new workload there.
• The production application is unaffected.
• Read replicas are used for SELECT (= read) statements only (not INSERT, UPDATE, DELETE).
upvoted 2 times
aaaaaaaaaaa
upvoted 2 times
cegama543 1 year, 6 months ago
Selected Answer: A
option A is the best solution for ensuring that business reporting queries can run without impacting write operations to the production DB
instance.
upvoted 3 times
Question #390 Topic 1
A company hosts a three-tier ecommerce application on a fleet of Amazon EC2 instances. The instances run in an Auto Scaling group behind an
Application Load Balancer (ALB). All ecommerce data is stored in an Amazon RDS for MariaDB Multi-AZ DB instance.
The company wants to optimize customer session management during transactions. The application must store session data durably.
Which solutions will meet these requirements? (Choose two.)
A. Use sticky sessions (session affinity) on the Application Load Balancer (ALB).
B. Use an Amazon DynamoDB table to store customer session information.
C. Use Amazon Cognito to manage customer session information.
D. Deploy an Amazon ElastiCache for Redis cluster to store customer session information.
E. Use AWS Systems Manager Application Manager in the application to manage user session information.
Correct Answer: AD
Selected Answer: AD
https://aws.amazon.com/caching/session-management/
upvoted 26 times
Selected Answer: BD
I did not get why A is the most voted. The question did not mention anything about a fixed routing target, so the ALB could route traffic to any
server. Then we just need a shared session store to avoid session loss, instead of using sticky sessions.
upvoted 12 times
Selected Answer: BD
Option A suggests using sticky sessions (session affinity) on the Application Load Balancer (ALB). While sticky sessions can help route requests from
the same client to the same backend server, it doesn't directly address the requirement for durable storage of session data. Sticky sessions are
typically used to maintain session state at the load balancer level, but they do not provide data durability in case of server failures or restarts.
Option A is not correct!
Selected Answer: AB
Going for AB. Sticky Sessions to "optimize customer session management during transactions" and DynamoDB to "store session data durably".
D, ElastiCache does NOT allow "durable" storage. Just because there's an article that contains both words "ElastiCache" and "durable" does not
prove the contrary.
C and E, Cognito and Systems Manager, have nothing to do with the issue.
upvoted 4 times
dkw2342 7 months ago
I agree that ElastiCache for Redis is not a durable KV store.
"Which solutions will meet these requirements? (Choose two.)" Solutions (plural) implies two ways to *independently* fulfill the requirements. If
you're supposed to select a combination of options, it's usually phrased like this: "Which combination of solutions ..."
upvoted 2 times
Amazon ElastiCache for Redis is highly suited as a session store to manage session information such as user authentication tokens, session state
and more. Simply use ElastiCache for Redis as a fast key-value store with appropriate TTL on session keys to manage your session information.
Session management is commonly required for online applications, including games, e-commerce websites, and social media platforms.
upvoted 2 times
Selected Answer: BD
I don't understand what Sticky Session has to do with session storage. For the intent of the problem, I think DynamoDB and Redis are appropriate.
upvoted 4 times
Selected Answer: BD
In that case, B and D solve the same part of the requirement (storing session data), just B is durable (as required) while D is not durable (thus
failing to meet the requirement). We still need to 'optimize customer session management'.
upvoted 3 times
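To make the "durable store" argument concrete, here is a hedged sketch of option B: session items in a DynamoDB table with a TTL attribute. The table name, key, and attribute names are assumptions.

import time
import boto3

dynamodb = boto3.resource("dynamodb")
sessions = dynamodb.Table("customer-sessions")    # partition key: session_id

def put_session(session_id, data, ttl_seconds=3600):
    # data is assumed to be a dict of strings; expires_at drives DynamoDB TTL.
    sessions.put_item(Item={
        "session_id": session_id,
        "data": data,
        "expires_at": int(time.time()) + ttl_seconds,
    })

def get_session(session_id):
    return sessions.get_item(Key={"session_id": session_id}).get("Item")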
Selected Answer: AD
Well, this documentation says it all. Option A is obvious, and D ElastiCache for Redis, can even support replication in case of node failure/session
data loss.
https://aws.amazon.com/caching/session-management/
upvoted 3 times
Selected Answer: AD
https://aws.amazon.com/caching/session-management/
upvoted 2 times
go for AD
upvoted 1 times
go with B
upvoted 2 times
For D : "Amazon ElastiCache for Redis is highly suited as a session store to manage session information such as user authentication tokens, session
state, and more."
https://aws.amazon.com/elasticache/redis/
upvoted 2 times
https://aws.amazon.com/redis/
upvoted 1 times
ElastiCache can serve as a cache for DynamoDB and provide low latency while DynamoDB (!) provides durability.
upvoted 1 times
Selected Answer: AB
Question #391 Topic 1
A company needs a backup strategy for its three-tier stateless web application. The web application runs on Amazon EC2 instances in an Auto
Scaling group with a dynamic scaling policy that is configured to respond to scaling events. The database tier runs on Amazon RDS for
PostgreSQL. The web application does not require temporary local storage on the EC2 instances. The company’s recovery point objective (RPO) is
2 hours.
The backup strategy must maximize scalability and optimize resource utilization for this environment.
A. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances and database every 2 hours to meet the RPO.
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to meet the RPO.
C. Retain the latest Amazon Machine Images (AMIs) of the web and application tiers. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
D. Take snapshots of Amazon Elastic Block Store (Amazon EBS) volumes of the EC2 instances every 2 hours. Enable automated backups in Amazon RDS and use point-in-time recovery to meet the RPO.
Correct Answer: C
Selected Answer: C
The point is that if there is no temporary local storage on the EC2 instances, then snapshots of EBS volumes are not necessary. Therefore, if your application
does not require temporary storage on EC2 instances, using AMIs to back up the web and application tiers is sufficient to restore the system after a
failure.
Snapshots of EBS volumes would be necessary if you want to back up the entire EC2 instance, including any applications and temporary data
stored on the EBS volumes attached to the instances. When you take a snapshot of an EBS volume, it backs up the entire contents of that volume.
This ensures that you can restore the entire EC2 instance to a specific point in time more quickly. However, if there is no temporary data stored on
the EBS volumes, then snapshots of EBS volumes are not necessary.
upvoted 32 times
Selected Answer: C
The web application does not require temporary local storage on the EC2 instances => No EBS snapshot is required, retaining the latest AMI is
enough.
upvoted 14 times
Selected Answer: C
The web application does not require temporary local storage on the EC2 instances, so we do not care about EBS snapshots.
We only need two things here: the image of the instance (AMI) and a database backup.
C
upvoted 2 times
"The web application does not require temporary local storage on the EC2 instances" rules out any option to back up the EC2 EBS volumes.
upvoted 1 times
Selected Answer: C
Since the application has no local data on instances, AMIs alone can meet the RPO by restoring instances from the most recent AMI backup. When
combined with automated RDS backups for the database, this provides a complete backup solution for this environment.
The other options involving EBS snapshots would be unnecessary given the stateless nature of the instances. AMIs provide all the backup needed
for the app tier.
This uses native, automated AWS backup features that require minimal ongoing management:
- AMI automated backups provide point-in-time recovery for the stateless app tier.
- RDS automated backups provide point-in-time recovery for the database.
upvoted 3 times
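A hedged sketch of the two moving parts of option C with boto3; the instance ID, AMI name, DB identifier, and retention period are placeholders.

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Capture an AMI of a representative stateless web/app instance.
ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="web-tier-golden-ami",
    NoReboot=True,
)

# Automated backups enable point-in-time recovery for the PostgreSQL tier.
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",
    BackupRetentionPeriod=7,       # days of automated backups
    ApplyImmediately=True,
)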
Selected Answer: B
BBBBBBBBBB
upvoted 1 times
I vote for D
upvoted 1 times
Selected Answer: C
Selected Answer: C
Selected Answer: C
Why B? I mean, "stateless" and "does not require temporary local storage" indicate that we don't need to take snapshots of the EC2 volumes.
upvoted 3 times
With this solution, a snapshot lifecycle policy can be created to take Amazon Elastic Block Store (Amazon EBS) snapshots periodically, which will
ensure that EC2 instances can be restored in the event of an outage. Additionally, automated backups can be enabled in Amazon RDS for
PostgreSQL to take frequent backups of the database tier. This will help to minimize the RPO to 2 hours.
Taking snapshots of Amazon EBS volumes of the EC2 instances and database every 2 hours (Option A) may not be cost-effective and efficient, as
this approach would require taking regular backups of all the instances and volumes, regardless of whether any changes have occurred or not.
Retaining the latest Amazon Machine Images (AMIs) of the web and application tiers (Option C) would provide only an image backup and not a
data backup, which is required for the database tier. Taking snapshots of Amazon EBS volumes of the EC2 instances every 2 hours and enabling
automated backups in Amazon RDS and using point-in-time recovery (Option D) would result in higher costs and may not be necessary to meet
the RPO requirement of 2 hours.
upvoted 4 times
"Retaining the latest Amazon Machine Images (AMIs) of the web and application tiers (Option C) would provide only an image backup and not
a data backup, which is required for the database tier." False because option C also includes "automated backups in Amazon RDS".
upvoted 1 times
B. Configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots. Enable automated backups in Amazon RDS to
meet the RPO.
The best solution is to configure a snapshot lifecycle policy to take Amazon Elastic Block Store (Amazon EBS) snapshots, and enable automated
backups in Amazon RDS to meet the RPO. An RPO of 2 hours means that the company needs to ensure that the backup is taken every 2 hours to
minimize data loss in case of a disaster. Using a snapshot lifecycle policy to take Amazon EBS snapshots will ensure that the web and application
tier can be restored quickly and efficiently in case of a disaster. Additionally, enabling automated backups in Amazon RDS will ensure that the
database tier can be restored quickly and efficiently in case of a disaster. This solution maximizes scalability and optimizes resource utilization
because it uses automated backup solutions built into AWS.
upvoted 3 times
Question #392 Topic 1
A company wants to deploy a new public web application on AWS. The application includes a web server tier that uses Amazon EC2 instances.
The application also includes a database tier that uses an Amazon RDS for MySQL DB instance.
The application must be secure and accessible for global customers that have dynamic IP addresses.
How should a solutions architect configure the security groups to meet these requirements?
A. Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB
instance to allow inbound traffic on port 3306 from the security group of the web servers.
B. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the
security group for the DB instance to allow inbound traffic on port 3306 from the security group of the web servers.
C. Configure the security group for the web servers to allow inbound traffic on port 443 from the IP addresses of the customers. Configure the
security group for the DB instance to allow inbound traffic on port 3306 from the IP addresses of the customers.
D. Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0. Configure the security group for the DB instance to allow inbound traffic on port 3306 from 0.0.0.0/0.
Correct Answer: A
Selected Answer: A
"The application must be secure and accessible for global customers that have dynamic IP addresses." This just means "anyone" so BC are wrong a
you cannot know in advance about the dynamic IP addresses. D is just opening the DB to the internet.
The keyword is dynamic IPs from the customer, then B, C out, D out due to 0.0.0.0/0
upvoted 5 times
Selected Answer: A
It allows HTTPS access from any public IP address, meeting the requirement for global customer access.
HTTPS provides encryption for secure communication.
And for the database security group, only allowing inbound port 3306 from the web server security group properly restricts access to only the
resources that need it.
upvoted 3 times
Selected Answer: A
Selected Answer: A
A no doubt.
upvoted 2 times
Selected Answer: A
dynamic source ips = allow all traffic - Configure the security group for the web servers to allow inbound traffic on port 443 from 0.0.0.0/0.
Configure the security group for the DB instance to allow inbound traffic on port 3306 from the security group of the web servers.
upvoted 2 times
Selected Answer: A
If the customers have dynamic IP addresses, option A would be the most appropriate solution for allowing global access while maintaining security
upvoted 4 times
Selected Answer: B
Keyword dynamic ...A is the right answer. If the IP were static and specific, B would be the right answer
upvoted 4 times
aaaaaaa
upvoted 1 times
Ans - A
upvoted 1 times
Selected Answer: A
aaaaaa
upvoted 1 times
Question #393 Topic 1
A payment processing company records all voice communication with its customers and stores the audio files in an Amazon S3 bucket. The
company needs to capture the text from the audio files. The company must remove from the text any personally identifiable information (PII) that
belongs to customers.
A. Process the audio files by using Amazon Kinesis Video Streams. Use an AWS Lambda function to scan for known PII patterns.
B. When an audio file is uploaded to the S3 bucket, invoke an AWS Lambda function to start an Amazon Textract task to analyze the call
recordings.
C. Configure an Amazon Transcribe transcription job with PII redaction turned on. When an audio file is uploaded to the S3 bucket, invoke an
AWS Lambda function to start the transcription job. Store the output in a separate S3 bucket.
D. Create an Amazon Connect contact flow that ingests the audio files with transcription turned on. Embed an AWS Lambda function to scan
for known PII patterns. Use Amazon EventBridge to start the contact flow when an audio file is uploaded to the S3 bucket.
Correct Answer: C
Selected Answer: C
Selected Answer: C
Amazon Transcribe is a service provided by Amazon Web Services (AWS) that converts speech to text using automatic speech recognition (ASR)
technology
upvoted 4 times
Selected Answer: C
AWS Transcribe https://aws.amazon.com/transcribe/ . Redacting or identifying (Personally identifiable instance) PII in real-time stream
https://docs.aws.amazon.com/transcribe/latest/dg/pii-redaction-stream.html .
upvoted 1 times
Selected Answer: C
Option C is the most suitable solution as it suggests using Amazon Transcribe with PII redaction turned on. When an audio file is uploaded to the
S3 bucket, an AWS Lambda function can be used to start the transcription job. The output can be stored in a separate S3 bucket to ensure that the
PII redaction is applied to the transcript. Amazon Transcribe can redact PII such as credit card numbers, social security numbers, and phone
numbers.
upvoted 3 times
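For illustration, a hedged boto3 sketch of option C: a Transcribe job with PII redaction, started (for example) from the S3-triggered Lambda function. Bucket names, the job name, and the language code are assumptions.

import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="call-recording-1234",
    Media={"MediaFileUri": "s3://call-recordings-bucket/call-1234.wav"},
    LanguageCode="en-US",
    OutputBucketName="redacted-transcripts-bucket",   # separate output bucket
    ContentRedaction={
        "RedactionType": "PII",
        "RedactionOutput": "redacted",    # store only the redacted transcript
    },
)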
C for sure.....
upvoted 1 times
Selected Answer: C
ccccccccc
upvoted 1 times
Selected Answer: C
Option C is correct..
upvoted 1 times
Question #394 Topic 1
A company is running a multi-tier ecommerce web application in the AWS Cloud. The application runs on Amazon EC2 instances with an Amazon
RDS for MySQL Multi-AZ DB instance. Amazon RDS is configured with the latest generation DB instance with 2,000 GB of storage in a General
Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume. The database performance affects the application during periods of high
demand.
A database administrator analyzes the logs in Amazon CloudWatch Logs and discovers that the application performance always degrades when the total number of read and write IOPS is higher than 20,000.
A. Replace the volume with a magnetic volume.
B. Increase the number of IOPS that are provisioned for the gp3 volume.
C. Replace the volume with a Provisioned IOPS SSD (io2) volume.
D. Replace the 2,000 GB gp3 volume with two 1,000 GB gp3 volumes.
Correct Answer: D
Selected Answer: D
*Striping* is something that RDS does automatically depending on storage class and volume size: "When you select General Purpose SSD or
Provisioned IOPS SSD, depending on the engine selected and the amount of storage requested, Amazon RDS automatically stripes across
multiple volumes to enhance performance (...)"
For MariaDB with 400 to 64,000 GiB of gp3 storage, RDS automatically provisions 4 volumes. This gives us 12,000 IOPS *baseline* and can be
increased up to 64,000 *provisioned* IOPS.
Therefore: Option B
upvoted 3 times
Selected Answer: B
It cannot be option C, as RDS does not support the io2 storage type (only io1).
Here is a link to the RDS storage documentation: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
Also, magnetic storage is not the best option, as it supports a maximum of 1,000 IOPS.
I vote for option B, as the gp3 storage type supports up to 64,000 IOPS, while the question mentions the problem occurs at the 20,000 level.
upvoted 15 times
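If the gp3 route (option B) is taken, a hedged boto3 sketch of raising the provisioned IOPS on the existing volume could look like this; the identifier and numbers are placeholders, and StorageThroughput assumes a recent boto3 version.

import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="ecommerce-mysql",
    StorageType="gp3",
    Iops=30000,              # above the ~20,000 IOPS level where performance degrades
    StorageThroughput=1000,  # MiB/s (optional)
    ApplyImmediately=True,
)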
The per-volume gp3 limits on EBS are 16,000 IOPS, 1,000 MiB/s of throughput, and a 16-TiB volume size.
upvoted 1 times
Selected Answer: B
B for sure
upvoted 1 times
Selected Answer: C
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 3 times
Selected Answer: C
Selected Answer: C
Provisioned IOPS SSDs (io2) are specifically designed to deliver sustained high performance and low latency (RDS is supported in IO2). They can
handle more than 20,000 IOPS.
upvoted 5 times
Selected Answer: C
It should be "C" right, now.
https://aws.amazon.com/blogs/aws/amazon-rds-now-supports-io2-block-express-volumes-for-mission-critical-database-workloads/
upvoted 3 times
Selected Answer: C
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html
upvoted 2 times
Selected Answer: C
io2 is now supported by RDS as of 2024. It wasn't at one point, but people need to check the docs when they start saying it's not supported. Just
because it was once true does not mean that it still is.
upvoted 8 times
https://aws.amazon.com/blogs/aws/amazon-rds-now-supports-io2-block-express-volumes-for-mission-critical-database-workloads/
upvoted 6 times
Selected Answer: D
Answer D
upvoted 1 times
Selected Answer: C
Option C. Replace the volume with a Provisioned IOPS SSD (io2) volume.
Provisioned IOPS SSD (io2) volumes allow you to specify a consistent level of IOPS to meet performance requirements. By provisioning the
necessary IOPS, you can ensure that the database performance remains stable even during periods of high demand. This solution addresses the
issue of performance degradation when the number of read and write IOPS exceeds 20,000.
upvoted 1 times
Question #395 Topic 1
An IAM user made several configuration changes to AWS resources in their company's account during a production deployment last week. A
solutions architect learned that a couple of security group rules are not configured as desired. The solutions architect wants to confirm which IAM user was responsible for making those changes.
Which service should the solutions architect use to find the desired information?
A. Amazon GuardDuty
B. Amazon Inspector
C. AWS CloudTrail
D. AWS Config
Correct Answer: C
Selected Answer: C
C. AWS CloudTrail
The best option is to use AWS CloudTrail to find the desired information. AWS CloudTrail is a service that enables governance, compliance,
operational auditing, and risk auditing of AWS account activities. CloudTrail can be used to log all changes made to resources in an AWS account,
including changes made by IAM users, EC2 instances, AWS management console, and other AWS services. By using CloudTrail, the solutions
architect can identify the IAM user who made the configuration changes to the security group rules.
upvoted 12 times
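A hedged sketch of the lookup with boto3; the event name and the time window are examples only.

from datetime import datetime, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

resp = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "AuthorizeSecurityGroupIngress",
    }],
    StartTime=datetime(2024, 5, 1, tzinfo=timezone.utc),
    EndTime=datetime(2024, 5, 8, tzinfo=timezone.utc),
)
for event in resp["Events"]:
    # Username identifies the IAM user that made the API call.
    print(event["EventTime"], event["EventName"], event.get("Username"))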
Selected Answer: C
I was initially a bit confused on what Config and CloudTrail actually do, as both can be used to track configuration changes.
However, this explanation is probably the best one I have come across so far:
"Config reports on what has changed, whereas CloudTrail reports on who made the change, when, and from which location"
Since the question is which IAM user was responsible for making the changes, the answer is CloudTrail.
upvoted 3 times
Selected Answer: C
CloudTrail = which user made which api calls. This is used for audit purpose.
upvoted 2 times
Selected Answer: C
AWS CloudTrail is the correct service to use here to identify which user was responsible for the security group configuration changes
upvoted 1 times
Selected Answer: C
AWS CloudTrail
upvoted 1 times
AWS CloudTrail
upvoted 1 times
C. AWS CloudTrail
upvoted 2 times
Selected Answer: C
Question #396 Topic 1
A company has implemented a self-managed DNS service on AWS. The solution consists of Amazon EC2 instances that are fronted by a standard accelerator in AWS Global Accelerator. The company wants to protect the solution against DDoS attacks.
A. Subscribe to AWS Shield Advanced. Add the accelerator as a resource to protect.
B. Subscribe to AWS Shield Advanced. Add the EC2 instances as resources to protect.
C. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the accelerator.
D. Create an AWS WAF web ACL that includes a rate-based rule. Associate the web ACL with the EC2 instances.
Correct Answer: A
Selected Answer: A
Selected Answer: A
Global Accelerator is what is exposed to the Internet = where DDoS attacks could land = what must be protected by Shield Advanced
upvoted 2 times
Selected Answer: B
B. Subscribe to AWS Shield Advanced. Add the EC2 instances as resources to protect.
A. While you can add the accelerator as a resource to protect with AWS Shield Advanced, it's generally more effective to protect the individual
resources (in this case, the EC2 instances) because AWS Shield Advanced will automatically protect resources associated with Global Accelerator
upvoted 1 times
Selected Answer: A
Answer is A
https://docs.aws.amazon.com/waf/latest/developerguide/ddos-event-mitigation-logic-gax.html
upvoted 1 times
Selected Answer: A
AWS Shield is a managed service that provides protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS.
AWS Shield Standard is automatically enabled to all AWS customers at no additional cost. AWS Shield Advanced is an optional paid service. AWS
Shield Advanced provides additional protections against more sophisticated and larger attacks for your applications running on Amazon Elastic
Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53.
upvoted 3 times
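For illustration, a hedged boto3 sketch of option A; it assumes the account is already subscribed to Shield Advanced, and the accelerator ARN is a placeholder.

import boto3

shield = boto3.client("shield")

shield.create_protection(
    Name="dns-service-accelerator",
    ResourceArn=(
        "arn:aws:globalaccelerator::123456789012:accelerator/"
        "abcd1234-abcd-1234-abcd-1234abcd1234"
    ),
)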
aaaaa
an accelerator cannot be attached to Shield
upvoted 2 times
Sorry I meant A
upvoted 2 times
Question #397 Topic 1
An ecommerce company needs to run a scheduled daily job to aggregate and filter sales records for analytics. The company stores the sales
records in an Amazon S3 bucket. Each object can be up to 10 GB in size. Based on the number of sales events, the job can take up to an hour to
complete. The CPU and memory usage of the job are constant and are known in advance.
A solutions architect needs to minimize the amount of operational effort that is needed for the job to run.
A. Create an AWS Lambda function that has an Amazon EventBridge notification. Schedule the EventBridge event to run once a day.
B. Create an AWS Lambda function. Create an Amazon API Gateway HTTP API, and integrate the API with the function. Create an Amazon
EventBridge scheduled event that calls the API and invokes the function.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge
scheduled event that launches an ECS task on the cluster to run the job.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type and an Auto Scaling group with at least
one EC2 instance. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job.
Correct Answer: C
Selected Answer: C
The requirement is to run a daily scheduled job to aggregate and filter sales records for analytics in the most efficient way possible. Based on the
requirement, we can eliminate option A and B since they use AWS Lambda which has a limit of 15 minutes of execution time, which may not be
sufficient for a job that can take up to an hour to complete.
Between options C and D, option C is the better choice since it uses AWS Fargate which is a serverless compute engine for containers that
eliminates the need to manage the underlying EC2 instances, making it a low operational effort solution. Additionally, Fargate also provides instant
scale-up and scale-down capabilities to run the scheduled job as per the requirement.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge scheduled
event that launches an ECS task on the cluster to run the job.
upvoted 23 times
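A hedged sketch of option C with boto3 (an EventBridge schedule that runs a Fargate task); the ARNs, subnet, and names are placeholders.

import boto3

events = boto3.client("events")

events.put_rule(
    Name="daily-sales-aggregation",
    ScheduleExpression="cron(0 3 * * ? *)",   # every day at 03:00 UTC
)
events.put_targets(
    Rule="daily-sales-aggregation",
    Targets=[{
        "Id": "sales-aggregation-task",
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/analytics",
        "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
        "EcsParameters": {
            "TaskDefinitionArn": (
                "arn:aws:ecs:us-east-1:123456789012:task-definition/sales-aggregation:1"
            ),
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    }],
)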
Selected Answer: C
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch type. Create an Amazon EventBridge scheduled
event that launches an ECS task on the cluster to run the job
upvoted 1 times
Selected Answer: C
Selected Answer: C
"1-hour job" -> A, B out since max duration for Lambda is 15 min
Selected Answer: C
The solution that meets the requirements with the least operational overhead is to create a **Regional AWS WAF web ACL with a rate-based rule**
and associate the web ACL with the API Gateway stage. This solution will protect the application from HTTP flood attacks by monitoring incoming
requests and blocking requests from IP addresses that exceed the predefined rate.
Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint is also a good solution but it requires more
operational overhead than the previous solution.
Using Amazon CloudWatch metrics to monitor the Count metric and alerting the security team when the predefined rate is reached is not a
solution that can protect against HTTP flood attacks.
Creating an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours is not a solution
that can protect against HTTP flood attacks.
upvoted 1 times
The solution that meets these requirements is C. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an AWS Fargate launch
type. Create an Amazon EventBridge scheduled event that launches an ECS task on the cluster to run the job. This solution will minimize the
amount of operational effort that is needed for the job to run.
Question #398 Topic 1
A company needs to transfer 600 TB of data from its on-premises network-attached storage (NAS) system to the AWS Cloud. The data transfer
must be complete within 2 weeks. The data is sensitive and must be encrypted in transit. The company’s internet connection can support an upload speed of 100 Mbps.
A. Use Amazon S3 multi-part upload functionality to transfer the files over HTTPS.
B. Create a VPN connection between the on-premises NAS system and the nearest AWS Region. Transfer the data over the VPN connection.
C. Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to
Amazon S3.
D. Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection.
Correct Answer: C
Selected Answer: C
With the existing data link the transfer takes ~ 600 days in the best case. Thus, (A) and (B) are not applicable. Solution (D) could meet the target
with a transfer time of 6 days, but the lead time for the direct connect deployment can take weeks! Thus, (C) is the only valid solution.
upvoted 11 times
Selected Answer: C
Use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices. Use the devices to transfer the data to Amazon
S3.
upvoted 1 times
Selected Answer: C
We need the admin in here to tell us how they plan on achieving this over such a slow connection, lol.
It's C, folks.
upvoted 2 times
Selected Answer: C
Best option is to use multiple AWS Snowball Edge Storage Optimized devices. Option "C" is the correct one.
upvoted 1 times
Selected Answer: C
The best option is to use the AWS Snow Family console to order several AWS Snowball Edge Storage Optimized devices and use the devices to
transfer the data to Amazon S3. Snowball Edge is a petabyte-scale data transfer device that can help transfer large amounts of data securely and
quickly. Using Snowball Edge can be the most cost-effective solution for transferring large amounts of data over long distances and can help meet
the requirement of transferring 600 TB of data within two weeks.
upvoted 3 times
Question #399 Topic 1
A financial company hosts a web application on AWS. The application uses an Amazon API Gateway Regional API endpoint to give users the
ability to retrieve current stock prices. The company’s security team has noticed an increase in the number of API requests. The security team is
concerned that HTTP flood attacks might take the application offline.
A solutions architect must design a solution to protect the application from this type of attack.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an Amazon CloudFront distribution in front of the API Gateway Regional API endpoint with a maximum TTL of 24 hours.
B. Create a Regional AWS WAF web ACL with a rate-based rule. Associate the web ACL with the API Gateway stage.
C. Use Amazon CloudWatch metrics to monitor the Count metric and alert the security team when the predefined rate is reached.
D. Create an Amazon CloudFront distribution with Lambda@Edge in front of the API Gateway Regional API endpoint. Create an AWS Lambda
function to block requests from IP addresses that exceed the predefined rate.
Correct Answer: B
Selected Answer: B
Regional AWS WAF web ACL is a managed web application firewall that can be used to protect your API Gateway API from a variety of attacks,
including HTTP flood attacks.
Rate-based rule is a type of rule that can be used to limit the number of requests that can be made from a single IP address within a specified
period of time.
API Gateway stage is a logical grouping of API resources that can be used to control access to your API.
upvoted 8 times
Selected Answer: B
A rate-based rule in AWS WAF allows the security team to configure thresholds that trigger rate-based rules, which enable AWS WAF to track the
rate of requests for a specified time period and then block them automatically when the threshold is exceeded. This provides the ability to prevent
HTTP flood attacks with minimal operational overhead.
upvoted 5 times
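A hedged boto3 sketch of option B; the ACL name, rate limit, and stage ARN are placeholders.

import boto3

wafv2 = boto3.client("wafv2")

acl = wafv2.create_web_acl(
    Name="stock-api-flood-protection",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIp",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "StockApiWebAcl",
    },
)

# Associate the web ACL with the API Gateway stage (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod",
)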
Selected Answer: B
Answer is B
upvoted 1 times
Selected Answer: B
https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html
upvoted 1 times
Selected Answer: B
bbbbbbbb
upvoted 3 times
Question #400 Topic 1
A meteorological startup company has a custom web application to sell weather data to its users online. The company uses Amazon DynamoDB
to store its data and wants to build a new service that sends an alert to the managers of four internal teams every time a new weather event is
recorded. The company does not want this new service to affect the performance of the current application.
What should a solutions architect do to meet these requirements with the LEAST amount of operational overhead?
A. Use DynamoDB transactions to write new event data to the table. Configure the transactions to notify internal teams.
B. Have the current application publish a message to four Amazon Simple Notification Service (Amazon SNS) topics. Have each team subscribe to one topic.
C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe.
D. Add a custom attribute to each record to flag new items. Write a cron job that scans the table every minute for items that are new and
notifies an Amazon Simple Queue Service (Amazon SQS) queue to which the teams can subscribe.
Correct Answer: C
Selected Answer: C
The best solution to meet these requirements with the least amount of operational overhead is to enable Amazon DynamoDB Streams on the table
and use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which the teams can subscribe. This solution
requires minimal configuration and infrastructure setup, and Amazon DynamoDB Streams provide a low-latency way to capture changes to the
DynamoDB table. The triggers automatically capture the changes and publish them to the SNS topic, which notifies the internal teams.
upvoted 13 times
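To make the trigger concrete, a hedged sketch of the Lambda function that the DynamoDB stream would invoke; the topic ARN is a placeholder.

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:new-weather-events"   # placeholder

def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] == "INSERT":            # only newly recorded events
            new_item = record["dynamodb"]["NewImage"]
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="New weather event recorded",
                Message=json.dumps(new_item),
            )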
Answer B is not the best solution because it requires changes to the current application, which may affect its performance, and it creates
additional work for the teams to subscribe to multiple topics.
Answer D is not a good solution because it requires a cron job to scan the table every minute, which adds additional operational overhead to
the system.
Therefore, the correct answer is C. Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon SNS topic to which
the teams can subscribe.
upvoted 5 times
Selected Answer: C
Enable Amazon DynamoDB Streams on the table. Use triggers to write to a single Amazon Simple Notification Service (Amazon SNS) topic to which
the teams can subscribe.
upvoted 3 times
Selected Answer: C
Question keywords: "sends an alert", "a new weather event is recorded". Answer keywords for C: "Amazon DynamoDB Streams on the table", "Amazon
Simple Notification Service" (Amazon SNS). Choose C. Easy question.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
https://aws.amazon.com/blogs/database/dynamodb-streams-use-cases-and-design-patterns/
upvoted 3 times
Best answer is C
upvoted 2 times
DynamoDB Streams
upvoted 3 times
Selected Answer: C
Answer : C
upvoted 1 times
Selected Answer: C
cccccccc
upvoted 1 times
Question #401 Topic 1
A company wants to use the AWS Cloud to make an existing application highly available and resilient. The current version of the application
resides in the company's data center. The application recently experienced data loss after a database server crashed because of an unexpected
power outage.
The company needs a solution that avoids any single points of failure. The solution must give the application the ability to scale to meet user
demand.
A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB instance in a Multi-AZ configuration.
B. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group in a single Availability Zone. Deploy the database
C. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon
RDS DB instance with a read replica in a single Availability Zone. Promote the read replica to replace the primary DB instance if the primary DB
instance fails.
D. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Deploy the
primary and secondary database servers on EC2 instances across multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS)
Correct Answer: A
Selected Answer: A
Selected Answer: A
Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB
instance in a Multi-AZ configuration
upvoted 2 times
Selected Answer: A
A most def.
upvoted 2 times
Selected Answer: A
Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones. Use an Amazon RDS DB
instance in a Multi-AZ configuration.
upvoted 2 times
The correct answer is A. Deploy the application servers by using Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones.
Use an Amazon RDS DB instance in a Multi-AZ configuration.
To make an existing application highly available and resilient while avoiding any single points of failure and giving the application the ability to
scale to meet user demand, the best solution would be to deploy the application servers using Amazon EC2 instances in an Auto Scaling group
across multiple Availability Zones and use an Amazon RDS DB instance in a Multi-AZ configuration.
By using an Amazon RDS DB instance in a Multi-AZ configuration, the database is automatically replicated across multiple Availability Zones,
ensuring that the database is highly available and can withstand the failure of a single Availability Zone. This provides fault tolerance and avoids
any single points of failure.
upvoted 2 times
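For the database half of option A, a hedged boto3 sketch of a Multi-AZ RDS for MySQL instance; the identifier, class, and sizes are placeholders, and ManageMasterUserPassword assumes a recent boto3 version.

import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,   # RDS stores the password in Secrets Manager
    MultiAZ=True,                    # synchronous standby in another Availability Zone
)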
Selected Answer: D
Why not D?
upvoted 1 times
Selected Answer: A
Selected Answer: A
Answer is A
upvoted 1 times
Selected Answer: A
Option A is the correct solution. Deploying the application servers in an Auto Scaling group across multiple Availability Zones (AZs) ensures high
availability and fault tolerance. An Auto Scaling group allows the application to scale horizontally to meet user demand. Using Amazon RDS DB
instance in a Multi-AZ configuration ensures that the database is automatically replicated to a standby instance in a different AZ. This provides
database redundancy and avoids any single point of failure.
upvoted 1 times
Highly available
upvoted 1 times
Selected Answer: A
Selected Answer: A
Question #402 Topic 1
A company needs to ingest and handle large amounts of streaming data that its application generates. The application runs on Amazon EC2
instances and sends data to Amazon Kinesis Data Streams, which is configured with default settings. Every other day, the application consumes
the data and writes the data to an Amazon S3 bucket for business intelligence (BI) processing. The company observes that Amazon S3 is not
receiving all the data that the application sends to Kinesis Data Streams.
A. Update the Kinesis Data Streams default settings by modifying the data retention period.
B. Update the application to use the Kinesis Producer Library (KPL) to send the data to Kinesis Data Streams.
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
D. Turn on S3 Versioning within the S3 bucket to preserve every version of every object that is ingested in the S3 bucket.
Correct Answer: A
Selected Answer: A
"A Kinesis data stream stores records from 24 hours by default, up to 8760 hours (365 days)."
https://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
The question mentioned the Kinesis data stream default settings and "every other day". After 24 hours, the data is no longer in the data stream if the
default retention setting is not modified to store data for more than 24 hours.
upvoted 27 times
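A hedged sketch of option A with boto3; the stream name and the 72-hour value are placeholders chosen to cover the every-other-day consumer.

import boto3

kinesis = boto3.client("kinesis")

kinesis.increase_stream_retention_period(
    StreamName="application-events",
    RetentionPeriodHours=72,     # default is 24, which the consumer schedule outruns
)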
Selected Answer: C
C. Update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams.
The best option is to update the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams. Kinesis Data
Streams scales horizontally by increasing or decreasing the number of shards, which controls the throughput capacity of the stream. By increasing
the number of shards, the application will be able to send more data to Kinesis Data Streams, which can help ensure that S3 receives all the data.
upvoted 17 times
- Answer C updates the number of Kinesis shards to handle the throughput of the data that is sent to Kinesis Data Streams. By increasing the
number of shards, the data is distributed across multiple shards, which allows for increased throughput and ensures that all data is ingested and
processed by Kinesis Data Streams.
- Monitoring the Kinesis Data Streams and adjusting the number of shards as needed to handle changes in data throughput can ensure that the
application can handle large amounts of streaming data.
upvoted 2 times
Thanks.
upvoted 2 times
Answer is C
The issue with option A (updating the Kinesis Data Streams default settings by modifying the data retention period) is below:
Limitation: Modifying the data retention period affects how long data is kept in the stream, but it does not address the issue of the stream's
capacity to ingest data. If the stream is unable to handle the incoming data volume, extending the retention period will not resolve the data loss
issue.
upvoted 1 times
Selected Answer: A
Selected Answer: A
Selected Answer: A
And since the question does not mention which capacity mode is used, I would go with on-demand. Therefore, A is the correct answer.
upvoted 2 times
Selected Answer: A
Data records are stored in shards in a kinesis data stream temporarily. The time period from when a record is added, to when it is no longer
accessible is called the retention period. This time period is 24 hours by default, but could be adjusted to 365 days.
Kinesis Data Streams automatically scales the number of shards in response to changes in data volume and traffic, so this rules out option C.
https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html
upvoted 1 times
I only voted A because it mentions the default settings in Kinesis; if it did not mention that, then I would look at increasing the shards. By default
the retention is 24 hours and can go up to 365 days. I think the question should be rephrased slightly; I had trouble deciding between A and C. Also,
apparently the most voted answer is the correct answer, as per some advice I was given.
upvoted 2 times
Default retention is 24 hours, but the data is read every other day, so S3 will never receive all the data. Change the default retention period to 48
hours.
upvoted 2 times
Selected Answer: C
By default, a Kinesis data stream is created with one shard. If the data throughput to the stream is higher than the capacity of the single shard, the
data stream may not be able to handle all the incoming data, and some data may be lost.
Therefore, to handle the high volume of data that the application sends to Kinesis Data Streams, the number of Kinesis shards should be increased
to handle the required throughput.
Kinesis Data Streams shards are the basic units of scalability and availability. Each shard can process up to 1,000 records per second with a
maximum of 1 MB of data per second. If the application is sending more data to Kinesis Data Streams than the shards can handle, then some of the
data will be dropped.
upvoted 1 times
the default retention period is 24 hours "The default retention period of 24 hours covers scenarios where intermittent lags in processing require
catch-up with the real-time data. "
so we should increase this
upvoted 1 times
Selected Answer: A
keyword here is - default settings and every other day and since "A Kinesis data stream stores records from 24 hours by default, up to 8760 hours
(365 days)."
https://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html
Will go with A
upvoted 1 times
Selected Answer: A
C is wrong because even if you update the number of Kinesis shards, you still need to change the default data retention period first. Otherwise, you
would lose data after 24 hours.
upvoted 2 times
Selected Answer: C
Therefore, to handle the high volume of data that the application sends to Kinesis Data Streams, the number of Kinesis shards should be increased
to handle the required throughput
upvoted 2 times
Question #403 Topic 1
A developer has an application that uses an AWS Lambda function to upload files to Amazon S3 and needs the required permissions to perform
the task. The developer already has an IAM user with valid IAM credentials required for Amazon S3.
A. Add required IAM permissions in the resource policy of the Lambda function.
B. Create a signed request using the existing IAM credentials in the Lambda function.
C. Create a new IAM user and use the existing IAM credentials in the Lambda function.
D. Create an IAM execution role with the required permissions and attach the IAM role to the Lambda function.
Correct Answer: D
Selected Answer: D
Create a Lambda execution role with the required S3 permissions and attach it to the Lambda function.
upvoted 3 times
Therefore, the correct answer is D. Create an IAM execution role with the required permissions and attach the IAM role to the Lambda function.
upvoted 4 times
The solutions architect must create an IAM execution role that has the permissions needed to access Amazon S3 and perform the required
operations (for example, uploading files). The role must then be attached to the Lambda function, so that the function can assume this role and
have the permissions needed to interact with Amazon S3.
upvoted 3 times
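A hedged boto3 sketch of option D; the role name, bucket, and function name are placeholders.

import json
import boto3

iam = boto3.client("iam")
lam = boto3.client("lambda")

# Trust policy that lets the Lambda service assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
role = iam.create_role(RoleName="upload-fn-role",
                       AssumeRolePolicyDocument=json.dumps(trust))

# Least-privilege inline policy: only PutObject on the target bucket.
iam.put_role_policy(
    RoleName="upload-fn-role",
    PolicyName="s3-upload",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::upload-bucket/*",
        }],
    }),
)

lam.update_function_configuration(FunctionName="upload-fn",
                                  Role=role["Role"]["Arn"])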
Selected Answer: D
Answer is D
upvoted 2 times
Selected Answer: D
D - correct ans
upvoted 2 times
Selected Answer: D
Create a Lambda execution role with the required S3 permissions and attach it to the Lambda function.
upvoted 2 times
Definitely D
upvoted 2 times
ddddddd
upvoted 1 times
Selected Answer: D
dddddddd
upvoted 1 times
Question #404 Topic 1
A company has deployed a serverless application that invokes an AWS Lambda function when new documents are uploaded to an Amazon S3
bucket. The application uses the Lambda function to process the documents. After a recent marketing campaign, the company noticed that the application did not process many of the documents.
B. Configure an S3 bucket replication policy. Stage the documents in the S3 bucket for later processing.
C. Deploy an additional Lambda function. Load balance the processing of the documents across the two Lambda functions.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the queue. Configure the queue as an event source for
Lambda.
Correct Answer: D
Selected Answer: D
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Send the requests to the queue. Configure the queue as an event source for
Lambda.
upvoted 2 times
Selected Answer: D
Selected Answer: D
To improve the architecture of this application, the best solution would be to use Amazon Simple Queue Service (Amazon SQS) to buffer the
requests and decouple the S3 bucket from the Lambda function. This will ensure that the documents are not lost and can be processed at a later
time if the Lambda function is not available.
By using Amazon SQS, the architecture is decoupled and the Lambda function can process the documents in a scalable and fault-tolerant manner.
upvoted 4 times
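A hedged boto3 sketch of option D; the queue and function names are placeholders, and the visibility timeout is just an example sized above the function timeout.

import boto3

sqs = boto3.client("sqs")
lam = boto3.client("lambda")

queue = sqs.create_queue(
    QueueName="document-processing",
    Attributes={"VisibilityTimeout": "900"},
)
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Lambda polls the queue and processes documents in batches.
lam.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-documents",
    BatchSize=10,
)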
This solution handles load spikes efficiently and prevents document loss when traffic increases suddenly. When new documents are uploaded to
the Amazon S3 bucket, the requests are sent to the Amazon SQS queue, which acts as a buffer. The Lambda function is triggered by events from
the queue, which allows balanced processing and prevents the application from being overwhelmed by a large number of simultaneous documents.
upvoted 1 times
Selected Answer: D
Selected Answer: D
D is correct
upvoted 1 times
D is correct
upvoted 1 times
dddddddd
upvoted 2 times
Question #405 Topic 1
A solutions architect is designing the architecture for a software demonstration environment. The environment will run on Amazon EC2 instances
in an Auto Scaling group behind an Application Load Balancer (ALB). The system will experience significant increases in traffic during working
Which combination of actions should the solutions architect take to ensure that the system can scale to meet demand? (Choose two.)
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
B. Use AWS Auto Scaling to scale the capacity of the VPC internet gateway.
C. Launch the EC2 instances in multiple AWS Regions to distribute the load across Regions.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization.
E. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default values at the start of the week.
Correct Answer: DE
What does "ALB capacity" even mean anyway? It should be "Target Group capacity", no?
Answer should be DE, as D is a more comprehensive answer (and more practical in real life)
upvoted 13 times
Selected Answer: DE
"The system will experience significant increases in traffic during working hours" -> addressed by D
"But is not required to operate on weekends" -> addressed by E
upvoted 10 times
AD
E - the question doesn't ask about cost. Also, shutting it down during the weekend does nothing to improve scaling during the week. It doesn't
address the requirements.
upvoted 2 times
Selected Answer: DE
D) Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization. This will allow the Auto Scaling group to
dynamically scale in and out based on demand.
E) Use scheduled scaling to change the Auto Scaling group capacity to zero on weekends when traffic is expected to be low. This will minimize
costs by terminating unused instances.
upvoted 6 times
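A hedged boto3 sketch of D and E together; the group name, target value, schedule, and capacities are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# D: target tracking on average CPU utilization across the group.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)

# E: scale to zero for the weekend, restore the defaults on Monday (UTC cron).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="demo-asg",
    ScheduledActionName="weekend-shutdown",
    Recurrence="0 0 * * 6",
    MinSize=0, MaxSize=0, DesiredCapacity=0,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="demo-asg",
    ScheduledActionName="weekday-restore",
    Recurrence="0 6 * * 1",
    MinSize=1, MaxSize=8, DesiredCapacity=2,
)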
Based on the requirements, this is the option needed to optimize costs by running zero operations on the weekends.
upvoted 1 times
A&D. Is not possible, way you can put an ALB capacity based in cpu and in request rate???? You need to select one or another option (and this is
for all questions here guys!)
upvoted 3 times
Selected Answer: AE
It is possible to set to zero. "is not required to operate on weekends" means the instances are not required during the weekends.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-capacity-limits.html
upvoted 3 times
Any one of these, or a combination of them, will meet the need:
Use AWS Auto Scaling to adjust the ALB capacity based on request rate.
Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization.
Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default
values at the start of the week.
upvoted 2 times
Selected Answer: DE
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#target-tracking-choose-metrics
Based on docs, ASG can't track ALB's request rate, so the answer is D&E
meanwhile ASG can track CPU rates.
upvoted 4 times
Selected Answer: DE
Scaling should be at the ASG not ALB. So, not sure about "Use AWS Auto Scaling to adjust the ALB capacity based on request rate"
upvoted 5 times
Selected Answer: AD
A. Use AWS Auto Scaling to adjust the ALB capacity based on request rate: This will allow the system to scale up or down based on incoming traffic
demand. The solutions architect should use AWS Auto Scaling to monitor the request rate and adjust the ALB capacity as needed.
D. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization: This will allow the system to scale up or
down based on the CPU utilization of the EC2 instances in the Auto Scaling group. The solutions architect should use a target tracking scaling
policy to maintain a specific CPU utilization target and adjust the number of EC2 instances in the Auto Scaling group accordingly.
upvoted 9 times
Selected Answer: AD
A. Use a target tracking scaling policy to scale the Auto Scaling group based on instance CPU utilization. This approach allows the Auto Scaling
group to automatically adjust the number of instances based on the specified metric, ensuring that the system can scale to meet demand during
working hours.
D. Use scheduled scaling to change the Auto Scaling group minimum, maximum, and desired capacity to zero for weekends. Revert to the default
values at the start of the week. This approach allows the Auto Scaling group to reduce the number of instances to zero during weekends when
traffic is expected to be low. It will help the organization to save costs by not paying for instances that are not needed during weekends.
Therefore, options A and D are the correct answers. Options B and C are not relevant to the scenario, and option E is not a scalable solution as it
would require manual intervention to adjust the group capacity every week.
upvoted 1 times
Selected Answer: DE
This is why I don't believe A is correct: you don't use Auto Scaling to adjust the ALB. D&E.
upvoted 3 times
AD
There is no requirement for cost minimization in the scenario; therefore, A & D are the answers.
upvoted 3 times
Question #406 Topic 1
A solutions architect is designing a two-tiered architecture that includes a public subnet and a database subnet. The web servers in the public
subnet must be open to the internet on port 443. The Amazon RDS for MySQL DB instance in the database subnet must be accessible only to the web servers in the public subnet.
Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)
A. Create a network ACL for the public subnet. Add a rule to deny outbound traffic to 0.0.0.0/0 on port 3306.
B. Create a security group for the DB instance. Add a rule to allow traffic from the public subnet CIDR block on port 3306.
C. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
D. Create a security group for the DB instance. Add a rule to allow traffic from the web servers’ security group on port 3306.
E. Create a security group for the DB instance. Add a rule to deny all traffic except traffic from the web servers’ security group on port 3306.
Correct Answer: CD
Selected Answer: CD
Remember, guys, that security groups cannot be used for Deny actions, only Allow.
upvoted 7 times
Selected Answer: CD
The following are the default rules for a security group that you create: no inbound traffic is allowed, and all outbound traffic is allowed.
Selected Answer: CD
Selected Answer: CD
Remember, guys, that security groups cannot be used for Deny actions, only Allow.
upvoted 4 times
Selected Answer: CD
To meet the requirements of allowing access to the web servers in the public subnet on port 443 and the Amazon RDS for MySQL DB instance in
the database subnet on port 3306, the best solution would be to create a security group for the web servers and another security group for the DB
instance, and then define the appropriate inbound and outbound rules for each security group.
1. Create a security group for the web servers in the public subnet. Add a rule to allow traffic from 0.0.0.0/0 on port 443.
2. Create a security group for the DB instance. Add a rule to allow traffic from the web servers' security group on port 3306.
This will allow the web servers in the public subnet to receive traffic from the internet on port 443, and the Amazon RDS for MySQL DB instance in
the database subnet to receive traffic only from the web servers on port 3306.
upvoted 2 times
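As an illustration of the CD pattern only (the VPC ID and group names below are placeholder assumptions, not part of the question), a minimal boto3 sketch that opens 443 to the world on the web tier and allows 3306 only from the web tier's security group:

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC ID

# C: web tier security group, HTTPS open to the internet.
web_sg = ec2.create_security_group(
    GroupName="web-sg", Description="Web servers", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# D: database security group, MySQL allowed only from the web tier SG (an SG reference, not a CIDR).
db_sg = ec2.create_security_group(
    GroupName="db-sg", Description="RDS for MySQL", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": web_sg}],
    }],
)

Referencing the web servers' security group (option D) is tighter than allowing the whole public subnet CIDR (option B), because only instances that actually carry that SG can reach the database.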
CD - Correct ans.
upvoted 2 times
Selected Answer: CD
cdcdcdcdcdc
upvoted 2 times
Question #407 Topic 1
A company is implementing a shared storage solution for a gaming application that is hosted in the AWS Cloud. The company needs the ability to
use Lustre clients to access data. The solution must be fully managed.
A. Create an AWS DataSync task that shares the data as a mountable file system. Mount the file system to the application server.
B. Create an AWS Storage Gateway file gateway. Create a file share that uses the required client protocol. Connect the application server to the
file share.
C. Create an Amazon Elastic File System (Amazon EFS) file system, and configure it to support Lustre. Attach the file system to the origin
D. Create an Amazon FSx for Lustre file system. Attach the file system to the origin server. Connect the application server to the file system.
Correct Answer: D
Selected Answer: D
To meet the requirements of a shared storage solution for a gaming application that can be accessed using Lustre clients and is fully managed, the
best solution would be to use Amazon FSx for Lustre.
Amazon FSx for Lustre is a fully managed file system that is optimized for compute-intensive workloads, such as high-performance computing,
machine learning, and gaming. It provides a POSIX-compliant file system that can be accessed using Lustre clients and offers high performance,
scalability, and data durability.
This solution provides a highly available, scalable, and fully managed shared storage solution that can be accessed using Lustre clients. Amazon FSx
for Lustre is optimized for compute-intensive workloads and provides high performance and durability.
upvoted 5 times
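For anyone curious what option D looks like in code, a minimal sketch; the storage capacity, throughput, deployment type, and subnet ID are assumptions, not values from the question:

import boto3

fsx = boto3.client("fsx")

# Create a persistent FSx for Lustre file system (capacity, throughput, and subnet are assumptions).
fs = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 125,  # MB/s per TiB
    },
)["FileSystem"]

# Lustre clients on the application servers then mount it, roughly like:
#   sudo mount -t lustre <DNSName>@tcp:/<MountName> /mnt/fsx
print(fs["DNSName"], fs["LustreConfiguration"]["MountName"])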
Answer B, creating an AWS Storage Gateway file gateway and connecting the application server to the file share, may not provide the required
performance and scalability for a gaming application.
Answer C, creating an Amazon Elastic File System (Amazon EFS) file system and configuring it to support Lustre, may not provide the required
performance and scalability for a gaming application and may require additional configuration and management overhead.
upvoted 2 times
Selected Answer: D
Selected Answer: D
D - correct ans
upvoted 2 times
Selected Answer: D
Selected Answer: D
Option D is the best solution because Amazon FSx for Lustre is a fully managed, high-performance file system that is designed to support
compute-intensive workloads, such as those required by gaming applications. FSx for Lustre provides sub-millisecond access to petabyte-scale file
systems, and supports Lustre clients natively. This means that the gaming application can access the shared data directly from the FSx for Lustre file
system without the need for additional configuration or setup.
Additionally, FSx for Lustre is a fully managed service, meaning that AWS takes care of all maintenance, updates, and patches for the file system,
which reduces the operational overhead required by the company.
upvoted 1 times
Selected Answer: D
dddddddddddd
upvoted 1 times
Question #408 Topic 1
A company runs an application that receives data from thousands of geographically dispersed remote devices that use UDP. The application
processes the data immediately and sends a message back to the device if necessary. No data is stored.
The company needs a solution that minimizes latency for the data transmission from the devices. The solution also must provide rapid failover to
A. Configure an Amazon Route 53 failover routing policy. Create a Network Load Balancer (NLB) in each of the two Regions. Configure the NLB
B. Use AWS Global Accelerator. Create a Network Load Balancer (NLB) in each of the two Regions as an endpoint. Create an Amazon Elastic
Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the target
C. Use AWS Global Accelerator. Create an Application Load Balancer (ALB) in each of the two Regions as an endpoint. Create an Amazon
Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS service as the
D. Configure an Amazon Route 53 failover routing policy. Create an Application Load Balancer (ALB) in each of the two Regions. Create an
Amazon Elastic Container Service (Amazon ECS) cluster with the Fargate launch type. Create an ECS service on the cluster. Set the ECS
service as the target for the ALB. Process the data in Amazon ECS.
Correct Answer: B
Selected Answer: B
Geographically dispersed (related to UDP) - Global Accelerator - multiple entrances worldwide to the AWS network to provide better transfer rates
UDP - NLB (Network Load Balancer).
upvoted 13 times
Selected Answer: B
if its UDP it has to be Global Accelarator + NLB package, plus it has the provision for rapid failover as well, piece of cake.
upvoted 2 times
Global Accelerator provides UDP support and minimizes latency using the AWS global network.
Using NLBs allows the UDP traffic to be load balanced across Availability Zones.
ECS Fargate provides rapid scaling and failover across Regions.
NLB endpoints allow rapid failover if one Region goes down.
upvoted 1 times
Selected Answer: B
Selected Answer: B
Global accelerator for multi region automatic failover. NLB for UDP.
upvoted 2 times
Selected Answer: B
To meet the requirements of minimizing latency for data transmission from the devices and providing rapid failover to another AWS Region, the
best solution would be to use AWS Global Accelerator in combination with a Network Load Balancer (NLB) and Amazon Elastic Container Service
(Amazon ECS).
AWS Global Accelerator is a service that improves the availability and performance of applications by using static IP addresses (Anycast) to route
traffic to optimal AWS endpoints. With Global Accelerator, you can direct traffic to multiple Regions and endpoints, and provide automatic failover
to another AWS Region.
upvoted 3 times
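A rough boto3 sketch of the Global Accelerator + NLB pattern described above; the application port, account ID, Regions, and NLB ARNs are placeholders, not values from the question:

import boto3

# The Global Accelerator API is served from us-west-2, regardless of where the NLBs live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc_arn = ga.create_accelerator(Name="udp-ingest", IpAddressType="IPV4", Enabled=True)[
    "Accelerator"]["AcceleratorArn"]

listener_arn = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 5000, "ToPort": 5000}],  # assumed device port
)["Listener"]["ListenerArn"]

# One endpoint group per Region, each pointing at that Region's NLB (ARNs are placeholders).
nlbs = {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/a/1",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/b/2",
}
for region, nlb_arn in nlbs.items():
    ga.create_endpoint_group(
        ListenerArn=listener_arn,
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 100}],
    )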
bbbbbbbb
upvoted 1 times
Question #409 Topic 1
A solutions architect must migrate a Windows Internet Information Services (IIS) web application to AWS. The application currently relies on a file
share hosted in the user's on-premises network-attached storage (NAS). The solutions architect has proposed migrating the IIS web servers to
Amazon EC2 instances in multiple Availability Zones that are connected to the storage solution, and configuring an Elastic Load Balancer attached
to the instances.
Which replacement to the on-premises file share is MOST resilient and durable?
C. Migrate the file share to Amazon FSx for Windows File Server.
D. Migrate the file share to Amazon Elastic File System (Amazon EFS).
Correct Answer: C
Selected Answer: C
Selected Answer: C
The most resilient and durable replacement for the on-premises file share in this scenario would be Amazon FSx for Windows File Server.
Amazon FSx is a fully managed Windows file system service that is built on Windows Server and provides native support for the SMB protocol. It is
designed to be highly available and durable, with built-in backup and restore capabilities. It is also fully integrated with AWS security services,
providing encryption at rest and in transit, and it can be configured to meet compliance standards.
upvoted 6 times
Migrating the file share to Amazon EFS (Linux ONLY) could be an option, but Amazon FSx for Windows File Server would be more appropriate in
this case because it is specifically designed for Windows file shares and provides better performance for Windows applications.
upvoted 5 times
Selected Answer: C
Selected Answer: C
Selected Answer: C
Selected Answer: C
Selected Answer: D
Amazon EFS is a scalable and fully-managed file storage service that is designed to provide high availability and durability. It can be accessed by
multiple EC2 instances across multiple Availability Zones simultaneously. Additionally, it offers automatic and instantaneous data replication across
different availability zones within a region, which makes it resilient to failures.
upvoted 1 times
Selected Answer: C
Amazon FSx
upvoted 1 times
FSx for Windows is a fully managed Windows file system share drive . Hence C is the correct answer.
upvoted 2 times
Selected Answer: C
ccccccccc
upvoted 1 times
Question #410 Topic 1
A company is deploying a new application on Amazon EC2 instances. The application writes data to Amazon Elastic Block Store (Amazon EBS)
volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is
active.
Correct Answer: B
Selected Answer: B
The solution that will meet the requirement of ensuring that all data that is written to the EBS volumes is encrypted at rest is B. Create the EBS
volumes as encrypted volumes and attach the encrypted EBS volumes to the EC2 instances.
When you create an EBS volume, you can specify whether to encrypt the volume. If you choose to encrypt the volume, all data written to the
volume is automatically encrypted at rest using AWS-managed keys. You can also use customer-managed keys (CMKs) stored in AWS KMS to
encrypt and protect your EBS volumes. You can create encrypted EBS volumes and attach them to EC2 instances to ensure that all data written to
the volumes is encrypted at rest.
Answer A is incorrect because attaching an IAM role to the EC2 instances does not automatically encrypt the EBS volumes.
Answer C is incorrect because adding an EC2 instance tag does not ensure that the EBS volumes are encrypted.
upvoted 11 times
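A minimal boto3 sketch of option B, for illustration; the AZ, size, instance ID, and device name are assumed placeholders:

import boto3

ec2 = boto3.client("ec2")

# B: create the volume encrypted from the start; omitting KmsKeyId uses the default AWS-managed key.
vol = ec2.create_volume(
    AvailabilityZone="ap-southeast-2a",  # assumed AZ
    Size=100,                            # GiB, assumed size
    VolumeType="gp3",
    Encrypted=True,
    # KmsKeyId="arn:aws:kms:...",        # optionally a customer-managed KMS key
)

ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",    # placeholder instance ID
    Device="/dev/sdf",
)

# Optional extra safety: make every future volume in this Region encrypted by default.
ec2.enable_ebs_encryption_by_default()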
B is the answer
upvoted 1 times
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
upvoted 1 times
The other options either do not meet the requirement of encrypting data at rest (A and C) or do so in a more complex or less efficient manner (D).
upvoted 1 times
Create encrypted EBS volumes and attach encrypted EBS volumes to EC2 instances..
upvoted 2 times
Selected Answer: B
bbbbbbbb
upvoted 1 times
Question #411 Topic 1
A company has a web application with sporadic usage patterns. There is heavy usage at the beginning of each month, moderate usage at the start
of each week, and unpredictable usage during the week. The application consists of a web server and a MySQL database server running inside the
data center. The company would like to move the application to the AWS Cloud, and needs to select a cost-effective database platform that will not require database modifications.
A. Amazon DynamoDB
Correct Answer: C
Selected Answer: C
C: Aurora Serverless is a MySQL-compatible relational database engine that automatically scales compute and memory resources based on
application usage. No upfront costs or commitments are required.
A: DynamoDB is NoSQL.
B: Fixed cost based on the RDS instance class.
D: Requires more operational effort.
upvoted 10 times
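For illustration, a minimal boto3 sketch of a MySQL-compatible Aurora cluster using the Serverless v2 flavor of the scaling described above; the identifiers, credentials, and ACU range are assumptions, not values from the question:

import boto3

rds = boto3.client("rds")

# MySQL-compatible Aurora cluster with Serverless v2 scaling (identifiers and ACU range are assumptions).
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",  # prefer Secrets Manager in real deployments
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# A db.serverless instance lets the writer scale within the ACU range configured above.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-writer",
    DBClusterIdentifier="app-aurora",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)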
Selected Answer: C
There is a huge demand for auto scaling, which Amazon RDS cannot do. Aurora Serverless would scale down at low-peak times, which contributes
to cost savings.
upvoted 3 times
Selected Answer: C
Answer C, MySQL-compatible Amazon Aurora Serverless, would be the best solution to meet the company's requirements.
upvoted 1 times
Selected Answer: C
Since we have sporadic and unpredictable usage for the DB, Aurora Serverless is a better and more cost-efficient fit for this scenario than RDS for MySQL.
https://www.techtarget.com/searchcloudcomputing/answer/When-should-I-use-Amazon-RDS-vs-Aurora-Serverless
upvoted 1 times
Selected Answer: C
C for sure.
upvoted 2 times
Selected Answer: C
Answer C, MySQL-compatible Amazon Aurora Serverless, would be the best solution to meet the company's requirements.
Aurora Serverless can be a cost-effective option for databases with sporadic or unpredictable usage patterns since it automatically scales up or
down based on the current workload. Additionally, Aurora Serverless is compatible with MySQL, so it does not require any modifications to the
application's database code.
upvoted 4 times
Selected Answer: B
Amazon RDS for MySQL is a cost-effective database platform that will not require database modifications. It makes it easier to set up, operate, and
scale MySQL deployments in the cloud. With Amazon RDS, you can deploy scalable MySQL servers in minutes with cost-efficient and resizable
hardware capacity.
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability.
DynamoDB is a good choice for applications that require low-latency data access.
MySQL-compatible Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition), where
the database will automatically start up, shut down, and scale capacity up or down based on your application's needs.
So, Amazon RDS for MySQL is the best option for your requirements.
upvoted 2 times
Amazon RDS for MySQL is a fully-managed relational database service that makes it easy to set up, operate, and scale MySQL deployments in
the cloud. Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition), where the
database will automatically start up, shut down, and scale capacity up or down based on your application’s needs. It is a simple, cost-effective
option for infrequent, intermittent, or unpredictable workloads.
upvoted 2 times
Selected Answer: C
Amazon Aurora Serverless : a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads
upvoted 3 times
cccccccccccccccccccc
upvoted 2 times
Question #412 Topic 1
An image-hosting company stores its objects in Amazon S3 buckets. The company wants to avoid accidental exposure of the objects in the S3
buckets to the public. All S3 objects in the entire AWS account need to remain private.
A. Use Amazon GuardDuty to monitor S3 bucket policies. Create an automatic remediation action rule that uses an AWS Lambda function to
B. Use AWS Trusted Advisor to find publicly accessible S3 buckets. Configure email notifications in Trusted Advisor when a change is
C. Use AWS Resource Access Manager to find publicly accessible S3 buckets. Use Amazon Simple Notification Service (Amazon SNS) to
invoke an AWS Lambda function when a change is detected. Deploy a Lambda function that programmatically remediates the change.
D. Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents
IAM users from changing the setting. Apply the SCP to the account.
Correct Answer: D
The answer is D, ladies and gentlemen. While GuardDuty helps to monitor S3 for potential threats, it is a reactive control. We should always be
proactive rather than reactive in our solutions, so D: block public access to avoid any possibility of the information becoming publicly accessible.
upvoted 17 times
Selected Answer: D
Answer D is the correct solution that meets the requirements. The S3 Block Public Access feature allows you to restrict public access to S3 buckets
and objects within the account. You can enable this feature at the account level to prevent any S3 bucket from being made public, regardless of the
bucket policy settings. AWS Organizations can be used to apply a Service Control Policy (SCP) to the account to prevent IAM users from changing
this setting, ensuring that all S3 objects remain private. This is a straightforward and effective solution that requires minimal operational overhead.
upvoted 8 times
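A rough boto3 sketch of option D, for illustration; the account ID, policy name, and attachment target are placeholder assumptions:

import boto3
import json

account_id = "111122223333"  # placeholder account ID

# Part 1: turn on all four Block Public Access settings at the account level.
boto3.client("s3control").put_public_access_block(
    AccountId=account_id,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Part 2: SCP that stops IAM users from turning the account-level setting back off.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "s3:PutAccountPublicAccessBlock",
        "Resource": "*",
    }],
}
org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-disabling-s3-bpa",
    Description="Prevent changes to account-level S3 Block Public Access",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=account_id,  # attach to the account (or an OU/root)
)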
Selected Answer: D
Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users
from changing the setting. Apply the SCP to the account.
upvoted 1 times
Use the S3 Block Public Access feature on the account level. Use AWS Organizations to create a service control policy (SCP) that prevents IAM users
from changing the setting. Apply the SCP to the account.
upvoted 1 times
Selected Answer: A
A is correct!
upvoted 1 times
Selected Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html
upvoted 3 times
Selected Answer: D
Selected Answer: D
Option D provides a real solution by restricting public access at the account level. The other options focus on detection, which wasn't what was
being asked.
upvoted 2 times
Question #413 Topic 1
An ecommerce company is experiencing an increase in user traffic. The company’s store is deployed on Amazon EC2 instances as a two-tier web
application consisting of a web tier and a separate database tier. As traffic increases, the company notices that the architecture is causing
significant delays in sending timely marketing and order confirmation email to users. The company wants to reduce the time it spends resolving complex email delivery issues and to minimize operational overhead.
A. Create a separate application tier using EC2 instances dedicated to email processing.
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES).
C. Configure the web instance to send email through Amazon Simple Notification Service (Amazon SNS).
D. Create a separate application tier using EC2 instances dedicated to email processing. Place the instances in an Auto Scaling group.
Correct Answer: B
Selected Answer: B
Amazon SES is a cost-effective and scalable email service that enables businesses to send and receive email using their own email addresses and
domains. Configuring the web instance to send email through Amazon SES is a simple and effective solution that can reduce the time spent
resolving complex email delivery issues and minimize operational overhead.
upvoted 9 times
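For illustration, a minimal boto3 sketch of option B; the Region and email addresses are placeholder assumptions, and the sender address/domain would need to be verified in SES before this succeeds:

import boto3

ses = boto3.client("ses", region_name="us-east-1")  # assumed Region

ses.send_email(
    Source="orders@example.com",                        # placeholder verified sender
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order confirmation"},
        "Body": {"Text": {"Data": "Thanks for your order!"}},
    },
)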
Selected Answer: B
The best option for addressing the company's needs of minimizing operational overhead and reducing time spent resolving email delivery issues is
to use Amazon Simple Email Service (Amazon SES).
Answer A of creating a separate application tier for email processing may add additional complexity to the architecture and require more
operational overhead.
Answer C of using Amazon Simple Notification Service (Amazon SNS) is not an appropriate solution for sending marketing and order confirmation
emails since Amazon SNS is a messaging service that is designed to send messages to subscribed endpoints or clients.
Answer D of creating a separate application tier using EC2 instances dedicated to email processing placed in an Auto Scaling group is a more
complex solution than necessary and may result in additional operational overhead.
upvoted 5 times
Selected Answer: B
Amazon Simple Email Service (Amazon SES) lets you reach customers confidently without an on-premises Simple Mail Transfer Protocol (SMTP)
email server using the Amazon SES API or SMTP interface.
upvoted 2 times
Selected Answer: B
B. Configure the web instance to send email through Amazon Simple Email Service (Amazon SES)
upvoted 1 times
Selected Answer: B
bbbbbbbb
upvoted 2 times
Question #414 Topic 1
A company has a business system that generates hundreds of reports each day. The business system saves the reports to a network share in CSV
format. The company needs to store this data in the AWS Cloud in near-real time for analysis.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the end of each day.
B. Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.
C. Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation workflow.
D. Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the network share and uploads the new files by
using SFTP.
Correct Answer: B
Selected Answer: B
Both Amazon S3 File Gateway and AWS DataSync are suitable for this scenario.
But there is a requirement for 'LEAST administrative overhead'.
Option C involves the creation of an entirely new application to consume the DataSync API, which rules out that option.
upvoted 12 times
Selected Answer: B
Key words:
1. near-real-time (A is out)
2. LEAST administrative (C and D are out)
upvoted 7 times
Selected Answer: C
Using DataSync avoids having to rewrite the business system to use a new file gateway or SFTP endpoint.
Calling the DataSync API from an application allows automating the data transfer instead of running scheduled tasks or scripts.
DataSync directly transfers files from the network share to S3 without needing an intermediate server
upvoted 2 times
Selected Answer: B
Selected Answer: B
B. Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.
- It presents a simple network file share interface that the business system can write to, just like a standard network share. This requires minimal
changes to the business system.
- The S3 File Gateway automatically uploads all files written to the share to an S3 bucket in the background. This handles the transfer and upload to
S3 without requiring any scheduled tasks, scripts or automation.
- All ongoing management like monitoring, scaling, patching etc. is handled by AWS for the S3 File Gateway.
upvoted 4 times
The other options would require more ongoing administrative effort:
A) AWS DataSync would require creating and managing scheduled tasks and monitoring them.
C) Using the DataSync API would require developing an application and then managing and monitoring it.
D) The SFTP option would require creating scripts, managing SFTP access and keys, and monitoring the file transfer process.
So overall, the S3 File Gateway requires the least amount of ongoing management and administration as it presents a simple file share interface
but handles the upload to S3 in a fully managed fashion. The business system can continue writing to a network share as is, while the files are
transparently uploaded to S3.
The S3 File Gateway is the most hands-off, low-maintenance solution in this scenario.
upvoted 3 times
Selected Answer: B
Selected Answer: B
It's B. DataSync has a scheduler, and it runs at hourly intervals at minimum, so it cannot be used in near-real time.
upvoted 1 times
The correct answer is C. Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation
workflow.
To store the CSV reports generated by the business system in the AWS Cloud in near-real time for analysis, the best solution with the least
administrative overhead would be to use AWS DataSync to transfer the files to Amazon S3 and create an application that uses the DataSync API in
the automation workflow.
AWS DataSync is a fully managed service that makes it easy to automate and accelerate data transfer between on-premises storage systems and
AWS Cloud storage, such as Amazon S3. With DataSync, you can quickly and securely transfer large amounts of data to the AWS Cloud, and you
can automate the transfer process using the DataSync API.
upvoted 4 times
Answer B, creating an Amazon S3 File Gateway and updating the business system to use a new network share from the S3 File Gateway, is not
the best solution because it requires additional configuration and management overhead.
Answer D, deploying an AWS Transfer Family SFTP endpoint and creating a script to check for new files on the network share and upload the
new files using SFTP, is not the best solution because it requires additional scripting and management overhead.
upvoted 2 times
Selected Answer: B
Question #415 Topic 1
A company is storing petabytes of data in Amazon S3 Standard. The data is stored in multiple S3 buckets and is accessed with varying frequency.
The company does not know access patterns for all the data. The company needs to implement a solution for each S3 bucket to optimize the cost
of S3 usage.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
B. Use the S3 storage class analysis tool to determine the correct tier for each object in the S3 bucket. Move each object to the identified
storage tier.
C. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Glacier Instant Retrieval.
D. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 One Zone-Infrequent Access (S3 One Zone-
IA).
Correct Answer: A
Selected Answer: A
Selected Answer: A
Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
upvoted 2 times
Key words: 'The company does not know access patterns for all the data', so A.
upvoted 4 times
Creating an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering would be the most efficient
solution to optimize the cost of S3 usage. S3 Intelligent-Tiering is a storage class that automatically moves objects between two access tiers
(frequent and infrequent) based on changing access patterns. It is a cost-effective solution that does not require any manual intervention to move
data to different storage classes, unlike the other options.
upvoted 4 times
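For illustration, a minimal boto3 sketch of option A; the bucket name and rule ID are placeholder assumptions:

import boto3

s3 = boto3.client("s3")

# Transition every object to S3 Intelligent-Tiering; an empty prefix applies the rule to the
# whole bucket. Run once per bucket.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }]
    },
)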
Answer C, Transitioning objects to S3 Glacier Instant Retrieval would be appropriate for data that is accessed less frequently and does not
require immediate access.
Answer D, S3 One Zone-IA would be appropriate for data that can be recreated if lost and does not require the durability of S3 Standard or S3
Standard-IA.
upvoted 2 times
Selected Answer: A
For me is A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
Why?
"S3 Intelligent-Tiering is the ideal storage class for data with unknown, changing, or unpredictable access patterns"
https://aws.amazon.com/s3/storage-classes/intelligent-tiering/
upvoted 2 times
Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
upvoted 1 times
Question #416 Topic 1
A rapidly growing global ecommerce company is hosting its web application on AWS. The web application includes static content and dynamic
content. The website stores online transaction processing (OLTP) data in an Amazon RDS database. The website's users are experiencing slow
page loads.
Which combination of actions should a solutions architect take to resolve this issue? (Choose two.)
Correct Answer: BD
Selected Answer: BD
To resolve the issue of slow page loads for a rapidly growing e-commerce website hosted on AWS, a solutions architect can take the following two
actions:
Configuring an Amazon Redshift cluster is not relevant to this issue since Redshift is a data warehousing service and is typically used for the
analytical processing of large amounts of data.
Hosting the dynamic web content in Amazon S3 may not necessarily improve performance since S3 is an object storage service, not a web
application server. While S3 can be used to host static web content, it may not be suitable for hosting dynamic web content since S3 doesn't
support server-side scripting or processing.
Configuring a Multi-AZ deployment for the RDS DB instance will improve high availability but may not necessarily improve performance.
upvoted 13 times
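For the read-offloading half of the answer, a minimal boto3 sketch; the instance identifiers and class are placeholder assumptions, not values from the question:

import boto3

rds = boto3.client("rds")

# Create a read replica of the OLTP instance and point read-heavy queries at its endpoint.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="store-db-read-1",
    SourceDBInstanceIdentifier="store-db",   # placeholder primary identifier
    DBInstanceClass="db.r6g.large",          # assumed instance class
)["DBInstance"]
print(replica["DBInstanceIdentifier"])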
Selected Answer: BD
Selected Answer: BD
The two options that will best help resolve the slow page loads are:
CloudFront can cache static content globally and improve latency for static content delivery.
Multi-AZ RDS improves performance and availability of the database driving dynamic content.
upvoted 3 times
Selected Answer: BD
BD is correct.
upvoted 3 times
Selected Answer: BD
Resolve latency = Amazon CloudFront distribution and read replica for the RDS DB
upvoted 4 times
B and D
upvoted 2 times
To resolve this issue, a solutions architect should take the following two actions:
Create a read replica for the RDS DB instance. This will help to offload read traffic from the primary database instance and improve performance.
upvoted 2 times
Selected Answer: BD
Question asked about performance improvements, not HA. Cloudfront & Read Replica
upvoted 2 times
Selected Answer: BE
https://aws.amazon.com/rds/features/read-replicas/?nc1=h_ls
upvoted 1 times
Selected Answer: BE
Amazon CloudFront can handle both static and dynamic content, hence there is no need for option C, i.e. hosting the static data on Amazon S3.
An RDS read replica will reduce the number of reads on the RDS instance, leading to better performance. Multi-AZ is for disaster recovery, which
means D is also out.
upvoted 1 times
Selected Answer: BC
CloudFront with S3
upvoted 1 times
B and E
upvoted 2 times
Question #417 Topic 1
A company uses Amazon EC2 instances and AWS Lambda functions to run its application. The company has VPCs with public subnets and private
subnets in its AWS account. The EC2 instances run in a private subnet in one of the VPCs. The Lambda functions need direct network access to the EC2 instances for the application to work.
The application will run for at least 1 year. The company expects the number of Lambda functions that the application uses to increase during that
time. The company wants to maximize its savings on all application resources and to keep network latency between the services low.
A. Purchase an EC2 Instance Savings Plan Optimize the Lambda functions’ duration and memory usage and the number of invocations.
Connect the Lambda functions to the private subnet that contains the EC2 instances.
B. Purchase an EC2 Instance Savings Plan Optimize the Lambda functions' duration and memory usage, the number of invocations, and the
amount of data that is transferred. Connect the Lambda functions to a public subnet in the same VPC where the EC2 instances run.
C. Purchase a Compute Savings Plan. Optimize the Lambda functions’ duration and memory usage, the number of invocations, and the
amount of data that is transferred. Connect the Lambda functions to the private subnet that contains the EC2 instances.
D. Purchase a Compute Savings Plan. Optimize the Lambda functions’ duration and memory usage, the number of invocations, and the
amount of data that is transferred. Keep the Lambda functions in the Lambda service VPC.
Correct Answer: C
Selected Answer: C
By purchasing a Compute Savings Plan, the company can save on the costs of running both EC2 instances and Lambda functions. The Lambda
functions can be connected to the private subnet that contains the EC2 instances through a VPC endpoint for AWS services or a VPC peering
connection. This provides direct network access to the EC2 instances while keeping the traffic within the private network, which helps to minimize
network latency.
Optimizing the Lambda functions’ duration, memory usage, number of invocations, and amount of data transferred can help to further minimize
costs and improve performance. Additionally, using a private subnet helps to ensure that the EC2 instances are not directly accessible from the
public internet, which is a security best practice.
upvoted 16 times
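The Compute Savings Plan itself is purchased through billing, but the networking half of option C can be shown in a short boto3 sketch; the function name, subnet, and security group IDs are placeholder assumptions:

import boto3

lam = boto3.client("lambda")

# Attach the function to the private subnet that holds the EC2 instances so traffic stays
# inside the VPC.
lam.update_function_configuration(
    FunctionName="app-worker",  # placeholder function name
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)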
Answer B is not the best solution because connecting the Lambda functions to a public subnet may not be as secure as connecting them to a
private subnet. Also, keeping the EC2 instances in a private subnet helps to ensure that they are not directly accessible from the public internet.
Answer D is not the best solution because keeping the Lambda functions in the Lambda service VPC may not provide direct network access to
the EC2 instances, which may impact the performance of the application.
upvoted 7 times
Selected Answer: C
Implement Compute Savings Plan because it applies to Lambda usage as well, then connect the Lambda functions to the private subnet that
contains the EC2 instances
upvoted 5 times
"Savings Plans are a flexible pricing model that offer low prices on Amazon EC2, AWS Lambda, and AWS Fargate usage, in exchange for a
commitment to a consistent amount of usage (measured in $/hour) for a 1 or 3 year term."
- That already excludes A and B.
The question requires to "keep network latency between the services low", which can be achieved by connecting the Lambda functions to the
private subnet that contains the EC2 instances.
C is the answer.
upvoted 1 times
A Compute Savings Plan covers both EC2 and Lambda and allows maximizing savings on all resources.
Optimizing Lambda configuration reduces costs.
Connecting the Lambda functions to the private subnet with the EC2 instances provides direct network access between them, keeping latency low.
The Lambda functions are isolated in the private subnet rather than public, improving security.
upvoted 3 times
Selected Answer: C
Selected Answer: C
The Lambda functions need direct network access to the EC2 instances for the application to work, and these EC2 instances are in the private
subnet. So the correct answer is C.
upvoted 2 times
Question #418 Topic 1
A solutions architect needs to allow team members to access Amazon S3 buckets in two different AWS accounts: a development account and a
production account. The team currently has access to S3 buckets in the development account by using unique IAM users that are assigned to an
The solutions architect has created an IAM role in the production account. The role has a policy that grants access to an S3 bucket in the
production account.
Which solution will meet these requirements while complying with the principle of least privilege?
B. Add the development account as a principal in the trust policy of the role in the production account.
C. Turn off the S3 Block Public Access feature on the S3 bucket in the production account.
D. Create a user in the production account with unique credentials for each team member.
Correct Answer: B
well, if you made it this far, it means you are persistent :) Good luck with your exam!
upvoted 70 times
Selected Answer: B
By adding the development account as a principal in the trust policy of the IAM role in the production account, you are allowing users from the
development account to assume the role in the production account. This allows the team members to access the S3 bucket in the production
account without granting them unnecessary privileges.
upvoted 7 times
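For illustration, a minimal sketch of what that trust policy could look like and how it could be applied; the role name and the 111111111111 development account ID are placeholders, not values from the question:

import boto3
import json

# Trust policy on the production-account role: only principals from the development account
# may assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # placeholder dev account
        "Action": "sts:AssumeRole",
    }],
}

iam = boto3.client("iam")
iam.update_assume_role_policy(
    RoleName="prod-s3-access",  # placeholder name for the role already created in production
    PolicyDocument=json.dumps(trust_policy),
)

# Development-account users who are allowed sts:AssumeRole on this role then call
# sts.assume_role(RoleArn="arn:aws:iam::<prod-account>:role/prod-s3-access", RoleSessionName="s3")
# to obtain temporary credentials for the production bucket.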
Selected Answer: B
Add the development account as a principal in the trust policy of the role in the production account
upvoted 2 times
The best solution is B) Add the development account as a principal in the trust policy of the role in the production account.
This allows cross-account access to the S3 bucket in the production account by assuming the IAM role. The development account users can assume
the role to gain temporary access to the production bucket.
upvoted 4 times
Selected Answer: B
https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/
An AWS account accesses another AWS account – This use case is commonly referred to as a cross-account role pattern. It allows human or
machine IAM principals from one AWS account to assume this role and act on resources within a second AWS account. A role is assumed to enable
this behavior when the resource in the target account doesn’t have a resource-based policy that could be used to grant cross-account access.
upvoted 2 times
Selected Answer: B
About Trust policy – The trust policy defines which principals can assume the role, and under which conditions. A trust policy is a specific type of
resource-based policy for IAM roles.
Answer A, attaching the Administrator Access policy to development account users, provides too many permissions and violates the principle of
least privilege. This would give users more access than they need, which could lead to security issues if their credentials are compromised.
Answer C, turning off the S3 Block Public Access feature, is not a recommended solution as it is a security best practice to enable S3 Block Public
Access to prevent accidental public access to S3 buckets.
Answer D, creating a user in the production account with unique credentials for each team member, is also not a recommended solution as it can
be difficult to manage and scale for large teams. It is also less secure, as individual user credentials can be more easily compromised.
upvoted 2 times
The solution that will meet these requirements while complying with the principle of least privilege is to add the development account as a
principal in the trust policy of the role in the production account. This will allow team members to access Amazon S3 buckets in two different AWS
accounts while complying with the principle of least privilege.
Option A is not recommended because it grants too much access to development account users. Option C is not relevant to this scenario. Option D
is not recommended because it does not comply with the principle of least privilege.
upvoted 1 times
Question #419 Topic 1
A company uses AWS Organizations with all features enabled and runs multiple Amazon EC2 workloads in the ap-southeast-2 Region. The
company has a service control policy (SCP) that prevents any resources from being created in any other Region. A security policy requires the
company to encrypt all data at rest.
An audit discovers that employees have created Amazon Elastic Block Store (Amazon EBS) volumes for EC2 instances without encrypting the
volumes. The company wants any new EC2 instances that any IAM user or root user launches in ap-southeast-2 to use encrypted EBS volumes.
The company wants a solution that will have minimal effect on employees who create EBS volumes.
A. In the Amazon EC2 console, select the EBS encryption account attribute and define a default encryption key.
B. Create an IAM permission boundary. Attach the permission boundary to the root organizational unit (OU). Define the boundary to deny the
ec2:CreateVolume action when the ec2:Encrypted condition equals false.
C. Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the ec2:CreateVolume action when the
ec2:Encrypted condition equals false.
D. Update the IAM policies for each account to deny the ec2:CreateVolume action when the ec2:Encrypted condition equals false.
E. In the Organizations management account, specify the Default EBS volume encryption setting.
Correct Answer: CE
Selected Answer: CE
Option (C): Creating an SCP and attaching it to the root organizational unit (OU) will deny the ec2:CreateVolume action when the ec2:Encrypted
condition equals false. This means that any IAM user or root user in any account in the organization will not be able to create an EBS volume
without encrypting it.
Option (E): Specifying the Default EBS volume encryption setting in the Organizations management account will ensure that all new EBS volumes
created in any account in the organization are encrypted by default.
upvoted 9 times
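A rough boto3 sketch of C plus E as described above (E shown here as the per-account/per-Region default encryption call, which is one reading debated in the comments below); the policy name and root/OU ID are placeholder assumptions:

import boto3
import json

# E: turn on account-level default EBS encryption (run once per account, here in ap-southeast-2).
boto3.client("ec2", region_name="ap-southeast-2").enable_ebs_encryption_by_default()

# C: SCP that rejects creation of any unencrypted volume across the organization.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireEbsEncryption",
        "Effect": "Deny",
        "Action": "ec2:CreateVolume",
        "Resource": "*",
        "Condition": {"Bool": {"ec2:Encrypted": "false"}},
    }],
}
org = boto3.client("organizations")
policy = org.create_policy(
    Name="require-ebs-encryption",
    Description="Deny creation of unencrypted EBS volumes",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder root/OU ID
)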
Selected Answer: CE
CE
Prevent future issues by creating a SCP and set a default encryption.
upvoted 8 times
The problem here is that we don't know which account the workload is in. Is the account in ap-southeast-2 the management account or a member
account? That will decide whether to select A or E. C is certainly correct.
Selected Answer: CE
(A) is incorrect because, absent an SCP or the Organizations management account, the scope of the EC2 console setting is too narrow to apply to
'any IAM user or root user'.
upvoted 1 times
https://repost.aws/knowledge-center/ebs-automatic-encryption
Newly created Amazon EBS volumes aren't encrypted by default. However, you can turn on default encryption for new EBS volumes and snapshot
copies that are created within a specified Region. To turn on encryption by default, use the Amazon Elastic Compute Cloud (Amazon EC2) console.
upvoted 2 times
A: will enforce automatic encryption in an account. This will have no effect on employees. Do this in every account.
B: a permission boundary is not appropriate here.
C: an SCP will force employees to create encrypted volumes in every account.
D: this would work but is too much maintenance.
E: setting EBS volume encryption in the Organizations management account will only have an impact on volumes in that account, not on other
accounts.
upvoted 2 times
The solution should "have minimal effect on employees who create EBS volumes". Thus new volumes should automatically be encrypted. Options
B, C and D do NOT automatically encrypt volumes, they simply cause requests to create non-encrypted volumes to fail.
upvoted 2 times
In the Amazon EC2 console, select the EBS encryption account attribute and define a default encryption key.
-> This has to be done in every AWS account separately.
Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the ec2:CreateVolume action whenthe ec2:Encrypted
condition equals false.
-> This will just act as a safeguard in case an admin would disable default encryption in the member account, so it should not have any effect
on employees who create EBS volumes.
Create an SCP. Attach the SCP to the root organizational unit (OU). Define the SCP to deny the ec2:DisableEbsEncryptionByDefault action.
-> This will prevent disabling default encryption once is has been enabled.
upvoted 1 times
Selected Answer: AE
Option A: By default, EBS encryption is not enabled for EC2 instances. However, you can set an EBS encryption by default in your AWS account in
the Amazon EC2 console. This ensures that every new EBS volume that is created is encrypted.
Option E: With AWS Organizations, you can centrally set the default EBS encryption for your organization's accounts. This helps in enforcing a
consistent encryption policy across your organization.
Option B, C and D are not correct because while you can use IAM policies or SCPs to restrict the creation of unencrypted EBS volumes, this could
potentially impact employees' ability to create necessary resources if not properly configured. They might require additional permissions
management, which is not mentioned in the requirements. By setting the EBS encryption by default at the account or organization level (Options A
and E), you can ensure all new volumes are encrypted without affecting the ability of employees to create resources.
upvoted 3 times
Selected Answer: CE
SCPs are a great way to enforce policies across an entire AWS Organization, preventing users from creating resources that do not comply with the
set policies.
In AWS Management Console, one can go to EC2 dashboard -> Settings -> Data encryption -> Check "Always encrypt new EBS volumes" and
choose a default KMS key. This ensures that every new EBS volume created will be encrypted by default, regardless of how it is created.
upvoted 2 times
Selected Answer: CE
I think C and E are correct.
upvoted 3 times
CE for me as well
upvoted 2 times
SCP that denies the ec2:CreateVolume action when the ec2:Encrypted condition equals false. This will prevent users and service accounts in
member accounts from creating unencrypted EBS volumes in the ap-southeast-2 Region.
upvoted 2 times
Question #420 Topic 1
A company wants to use an Amazon RDS for PostgreSQL DB cluster to simplify time-consuming database administrative tasks for production
database workloads. The company wants to ensure that its database is highly available and will provide automatic failover support in most
scenarios in less than 40 seconds. The company wants to offload reads off of the primary instance and keep costs as low as possible.
A. Use an Amazon RDS Multi-AZ DB instance deployment. Create one read replica and point the read workload to the read replica.
B. Use an Amazon RDS Multi-AZ DB cluster deployment. Create two read replicas and point the read workload to the read replicas.
C. Use an Amazon RDS Multi-AZ DB instance deployment. Point the read workload to the secondary instances in the Multi-AZ pair.
D. Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader endpoint.
Correct Answer: D
Selected Answer: D
Selected Answer: D
Explanation:
The company wants high availability, automatic failover support in less than 40 seconds, read offloading from the primary instance, and cost-
effectiveness.
1. Amazon RDS Multi-AZ deployments provide high availability and automatic failover support.
2. In a Multi-AZ DB cluster, Amazon RDS automatically provisions and maintains a standby in a different Availability Zone. If a failure occurs,
Amazon RDS performs an automatic failover to the standby, minimizing downtime.
3. The "Reader endpoint" for an Amazon RDS DB cluster provides load-balancing support for read-only connections to the DB cluster. Directing
read traffic to the reader endpoint helps in offloading read operations from the primary instance.
upvoted 11 times
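For illustration, a minimal boto3 sketch of a Multi-AZ DB cluster (option D) and where the reader endpoint comes from; the identifiers, instance class, storage numbers, and credentials are assumptions, not values from the question:

import boto3

rds = boto3.client("rds")

# Multi-AZ DB *cluster* (one writer plus two readable standbys), not a Multi-AZ DB instance.
rds.create_db_cluster(
    DBClusterIdentifier="pg-multiaz-cluster",
    Engine="postgres",
    DBClusterInstanceClass="db.m6gd.large",  # assumed class
    AllocatedStorage=200,                    # GiB, assumed
    StorageType="io1",
    Iops=3000,
    MasterUsername="postgres",
    MasterUserPassword="REPLACE_ME",         # prefer Secrets Manager in real deployments
)

# The cluster exposes a reader endpoint that load-balances reads across the two standbys.
cluster = rds.describe_db_clusters(DBClusterIdentifier="pg-multiaz-cluster")["DBClusters"][0]
print("writer endpoint:", cluster["Endpoint"])
print("reader endpoint:", cluster["ReaderEndpoint"])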
To offload reads we use read replicas; also, there is no such thing as a reader endpoint in RDS, it is only on Aurora.
upvoted 1 times
https://aws.amazon.com/rds/features/multi-az/
Amazon RDS Multi-AZ with two readable standbys
upvoted 1 times
Selected Answer: D
A would be cheapest but "failover times are typically 60–120 seconds" which does not meet our requirements. We need Multi-AZ DB cluster (not
instance). This has a reader endpoint by default, thus no need for additional read replicas (to "keep costs as low as possible").
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html
upvoted 6 times
Selected Answer: A
In a Multi-AZ configuration, the DB instances and EBS storage volumes are deployed across two Availability Zones.
It provides high availability and failover support for DB instances.
This setup is primarily for disaster recovery.
It involves a primary DB instance and a standby replica, which is a copy of the primary DB instance.
The standby replica is not accessible directly; instead, it serves as a failover target in case the primary instance fails.
upvoted 1 times
It is D.
A is not correct. A Multi-AZ DB instance deployment creates a primary instance and a standby instance to provide failover support. However,
the standby instance does not serve traffic.
upvoted 1 times
https://aws.amazon.com/blogs/database/choose-the-right-amazon-rds-deployment-option-single-az-instance-multi-az-instance-or-multi-az-
database-cluster/#:~:text=Unlike%20Multi%2DAZ%20instance%20deployment,different%20AZs%20serving%20read%20traffic.
You don't have to create read replicas with cluster deployment so B is out.
upvoted 1 times
D. Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader endpoint
upvoted 1 times
Selected Answer: D
Use an Amazon RDS Multi-AZ DB cluster deployment. Point the read workload to the reader endpoint.
upvoted 1 times
Selected Answer: A
The solutions architect should use an Amazon RDS Multi-AZ DB instance deployment. The company can create one read replica and point the read
workload to the read replica. Amazon RDS provides high availability and failover support for DB instances using Multi-AZ deployments.
upvoted 1 times
Multi-AZ DB clusters typically have lower write latency when compared to Multi-AZ DB instance deployments. They also allow read-only workloads
to run on reader DB instances.
upvoted 1 times
Selected Answer: D
This is a case where both option A and option D can work, but option D gives 2 DB instances for reads compared to only 1 given by option A.
Cost-wise they are the same, as both options use 3 DB instances.
upvoted 1 times
Selected Answer: D
It's D. Read well: "A company wants to use an Amazon RDS for PostgreSQL DB CLUSTER".
upvoted 3 times
Question #421 Topic 1
A company runs a highly available SFTP service. The SFTP service uses two Amazon EC2 Linux instances that run with elastic IP addresses to
accept traffic from trusted IP sources on the internet. The SFTP service is backed by shared storage that is attached to the instances. User
accounts are created and managed as Linux users in the SFTP servers.
The company wants a serverless option that provides high IOPS performance and highly configurable security. The company also wants to maintain control over user permissions.
A. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume. Create an AWS Transfer Family SFTP service with a public endpoint
that allows only trusted IP addresses. Attach the EBS volume to the SFTP service endpoint. Grant users access to the SFTP service.
B. Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS Transfer Family SFTP service with elastic IP
addresses and a VPC endpoint that has internet-facing access. Attach a security group to the endpoint that allows only trusted IP addresses.
Attach the EFS volume to the SFTP service endpoint. Grant users access to the SFTP service.
C. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a public endpoint that
allows only trusted IP addresses. Attach the S3 bucket to the SFTP service endpoint. Grant users access to the SFTP service.
D. Create an Amazon S3 bucket with default encryption enabled. Create an AWS Transfer Family SFTP service with a VPC endpoint that has
internal access in a private subnet. Attach a security group that allows only trusted IP addresses. Attach the S3 bucket to the SFTP service
Correct Answer: B
Selected Answer: B
Selected Answer: B
Option B best meets the company's requirements by leveraging AWS Transfer Family with an EFS volume, ensuring high availability, security, and
performance.
upvoted 1 times
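A rough boto3 sketch of option B, for illustration; every ID, ARN, and key below is a placeholder assumption, and the IAM role would need permissions on the EFS file system:

import boto3

transfer = boto3.client("transfer")

# Internet-facing VPC endpoint with Elastic IPs, so a security group can restrict trusted source IPs.
server_id = transfer.create_server(
    Protocols=["SFTP"],
    Domain="EFS",                            # store files on Amazon EFS
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "AddressAllocationIds": ["eipalloc-0123456789abcdef0"],  # Elastic IP allocation
        "SecurityGroupIds": ["sg-0123456789abcdef0"],            # allows only trusted IPs
    },
)["ServerId"]

# A service-managed user mapped to a directory on the EFS file system.
transfer.create_user(
    ServerId=server_id,
    UserName="analyst1",
    Role="arn:aws:iam::111122223333:role/transfer-efs-access",   # role granting EFS access
    HomeDirectory="/fs-0123456789abcdef0/analyst1",
    PosixProfile={"Uid": 1001, "Gid": 1001},
    SshPublicKeyBody="ssh-ed25519 AAAAC3Nza... analyst1",         # placeholder public key
)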
Selected Answer: B
Selected Answer: B
B
EFS has lower latency and higher throughput than S3 when accessed from within the same availability zone.
upvoted 2 times
Create an encrypted Amazon Elastic File System (Amazon EFS) volume. Create an AWS Transfer Family SFTP service with elastic IP addresses and a
VPC endpoint that has internet-facing access. Attach a security group to the endpoint that allows only trusted IP addresses. Attach the EFS volume
to the SFTP service endpoint. Grant users access to the SFTP service.
upvoted 1 times
A) Transfer Family SFTP doesn't support EBS, which is not for shared data and not serverless: infeasible.
B) EFS mounts via ENIs, not endpoints: infeasible.
D) A public endpoint for internet access is missing: infeasible.
upvoted 4 times
Selected Answer: B
Selected Answer: B
Option D is incorrect because it suggests using an S3 bucket in a private subnet with a VPC endpoint, which may not meet the requirement of
maintaining control over user permissions as effectively as the EFS-based solution.
upvoted 2 times
Question #422 Topic 1
A company is developing a new machine learning (ML) model solution on AWS. The models are developed as independent microservices that
fetch approximately 1 GB of model data from Amazon S3 at startup and load the data into memory. Users access the models through an
asynchronous API. Users can send a request or a batch of requests and specify where the results should be sent.
The company provides models to hundreds of users. The usage patterns for the models are irregular. Some models could be unused for days or weeks.
A. Direct the requests from the API to a Network Load Balancer (NLB). Deploy the models as AWS Lambda functions that are invoked by the
NLB.
B. Direct the requests from the API to an Application Load Balancer (ALB). Deploy the models as Amazon Elastic Container Service (Amazon
ECS) services that read from an Amazon Simple Queue Service (Amazon SQS) queue. Use AWS App Mesh to scale the instances of the ECS
C. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as AWS Lambda functions
that are invoked by SQS events. Use AWS Auto Scaling to increase the number of vCPUs for the Lambda functions based on the SQS queue
size.
D. Direct the requests from the API into an Amazon Simple Queue Service (Amazon SQS) queue. Deploy the models as Amazon Elastic
Container Service (Amazon ECS) services that read from the queue. Enable AWS Auto Scaling on Amazon ECS for both the cluster and copies
Correct Answer: D
asynchronous=SQS, microservices=ECS.
Use AWS Auto Scaling to adjust the number of ECS services.
upvoted 14 times
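For illustration, a minimal boto3 sketch of scaling an ECS service on SQS queue depth (option D); the cluster/service/queue names, capacity bounds, and backlog target are assumptions. In practice a backlog-per-task metric built with metric math is common; this simpler sketch tracks the raw visible-message count:

import boto3

aas = boto3.client("application-autoscaling")

# Register one ECS service per model as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/ml-cluster/model-a",   # placeholder cluster/service
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=0,
    MaxCapacity=50,
)

# Target tracking on queue depth: aim to keep roughly 10 visible messages in the backlog.
aas.put_scaling_policy(
    PolicyName="scale-on-queue-depth",
    ServiceNamespace="ecs",
    ResourceId="service/ml-cluster/model-a",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 10.0,  # assumed backlog target
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "model-a-requests"}],
            "Statistic": "Average",
        },
        "ScaleOutCooldown": 30,
        "ScaleInCooldown": 120,
    },
)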
Selected Answer: D
Selected Answer: D
ALB is mentioned in other options to distract you; you don't need an ALB for scaling here, you need ECS auto scaling. They play with that idea in
option B a bit, however D gets it in a completely optimized way. A and C both use Lambda, which will not fly for machine learning models with
workloads on the heavy side.
I go with everyone D.
upvoted 2 times
D. There is no need for an Application Load Balancer like C says; it's nowhere in the text.
SQS is needed to ensure all requests get routed properly in a microservices architecture and that they wait until they are picked up.
ECS with Auto Scaling will scale based on the unknown usage pattern, as mentioned.
upvoted 1 times
Selected Answer: D
Question #423 Topic 1
A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions:
Which IAM principals can the solutions architect attach this policy to? (Choose two.)
A. Role
B. Group
C. Organization
Correct Answer: AB
Selected Answer: AB
Selected Answer: AB
Isn't the content of the policy completely irrelevant? IAM policies are applied to users, groups or roles ...
upvoted 6 times
AB is correct, but the question is misleading because, according to the AWS IAM documentation, groups are not considered principals:
https://docs.aws.amazon.com/IAM/latest/UserGuide/intro-structure.html#intro-structure-principal."
upvoted 2 times
Selected Answer: AB
A. Role
B. Group
upvoted 2 times
Role or group
upvoted 2 times
Question #424 Topic 1
A company is running a custom application on Amazon EC2 On-Demand Instances. The application has frontend nodes that need to run 24 hours
a day, 7 days a week and backend nodes that need to run only for a short time based on workload. The number of backend nodes varies during the
day.
The company needs to scale out and scale in more instances based on workload.
A. Use Reserved Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
B. Use Reserved Instances for the frontend nodes. Use Spot Instances for the backend nodes.
C. Use Spot Instances for the frontend nodes. Use Reserved Instances for the backend nodes.
D. Use Spot Instances for the frontend nodes. Use AWS Fargate for the backend nodes.
Correct Answer: B
Selected Answer: B
Reserved+ spot .
Fargate for serverless
upvoted 15 times
Selected Answer: A
Has to be A. It can scale down if required, and with Fargate you are charged only for what you use. Secondly, they have not said that the backend
can tolerate timeouts or be down for a short period of time, so that rules out Spot Instances even though they are cheaper.
upvoted 14 times
Selected Answer: B
Selected Answer: B
Not A because Fargate runs containers, not EC2 instances. But we have no indication that the workload would be containerized; it runs "on EC2
instances".
Not C and D because frontend must run 24/7, can't use Spot.
Thus B, yes, Spot instances are risky, but as they need to run "only for a short time" it seems acceptable.
Technically ideal option would be Reserved Instances for frontend nodes and On-demand instances for backend nodes, but that is not an option
here.
upvoted 6 times
it is safe
upvoted 1 times
Selected Answer: B
Reserved Instances (RIs) for Frontend Nodes: Since the frontend nodes need to run continuously (24/7), using Reserved Instances for them makes
sense. RIs provide significant cost savings compared to On-Demand Instances for steady-state workloads.
Spot Instances for Backend Nodes: Spot Instances are suitable for short-duration workloads and can be significantly cheaper than On-Demand
Instances. Since the number of backend nodes varies during the day, Spot Instances can help you take advantage of spare capacity at a lower cost.
Keep in mind that Spot Instances may be interrupted if the capacity is needed elsewhere, so they are best suited for stateless and fault-tolerant
workloads.
upvoted 1 times
Selected Answer: B
AWS Fargate is a serverless compute engine for containers that allows you to run containers without having to manage the underlying
infrastructure. It simplifies the process of deploying and managing containerized applications by abstracting away the complexities of server
management, scaling, and cluster orchestration.
No containerized application requirements are mentioned in the question. Plain EC2 instances. So Fargate is not actually an option
upvoted 2 times
Selected Answer: A
(B) would take chance, though unlikely (A) is server-less auto-scaling. In case backend is idle, it might scale down, save money but no need to worr
for interruption by Spot instance.
upvoted 3 times
Selected Answer: A
If you use Spot Instances, you must assume that any job in progress can be lost. This scenario makes no explicit mention that the application can
tolerate such interruptions, so in my opinion option A is the most suitable.
upvoted 3 times
Selected Answer: B
Question keyword "scale out and scale in more instances". Therefore not related Kubernetes. Choose B, reserved instance for front-end and spot
instance for back-end.
upvoted 1 times
I'm on the fence about Spot because you could lose the instance during a workload, and the question doesn't mention that this is acceptable. The
business needs to define requirements and document what is acceptable here, or you lose your job.
upvoted 1 times
Frontend nodes that need to run 24 hours a day, 7 days a week = Reserved Instances
Backend nodes run only for a short time = Spot Instances
upvoted 2 times
Question #425 Topic 1
A company uses high block storage capacity to run its workloads on premises. The company's daily peak input and output transactions per
second are not more than 15,000 IOPS. The company wants to migrate the workloads to Amazon EC2 and to provision disk performance
independent of storage capacity.
Which Amazon Elastic Block Store (Amazon EBS) volume type will meet these requirements MOST cost-effectively?
Correct Answer: C
Selected Answer: C
Selected Answer: C
Both gp2 and gp3 have a maximum of 16,000 IOPS, but gp3 is more cost-effective.
https://aws.amazon.com/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/
upvoted 9 times
Selected Answer: C
Selected Answer: C
The GP3 (General Purpose SSD) volume type in Amazon Elastic Block Store (EBS) is the most cost-effective option for the given requirements. GP3
volumes offer a balance of price and performance and are suitable for a wide range of workloads, including those with moderate I/O needs.
GP3 volumes allow you to provision performance independently from storage capacity, which means you can adjust the baseline performance
(measured in IOPS) and throughput (measured in MiB/s) separately from the volume size. This flexibility allows you to optimize your costs while
meeting the workload requirements.
In this case, since the company's daily peak input and output transactions per second are not more than 15,000 IOPS, GP3 volumes provide a
suitable and cost-effective option for their workloads.
upvoted 1 times
It is not C, pals. The company wants to migrate the workloads to Amazon EC2 and to provision disk performance independent of storage capacity.
With gp3 we have to increase storage capacity to increase IOPS over the baseline.
You can only choose IOPS independently with the io family, and io2 is in general better than io1.
upvoted 2 times
Selected Answer: C
Therefore, the most suitable and cost-effective option in this scenario is the GP3 volume type (option C).
upvoted 1 times
Selected Answer: C
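To make the accepted answer concrete: gp3 lets you provision IOPS (up to 16,000) and throughput separately from volume size. A hedged boto3 sketch follows; the Availability Zone, size, and throughput values are placeholder examples only.
```python
import boto3

ec2 = boto3.client("ec2")

# gp3 decouples performance from size: 15,000 IOPS on a modest volume,
# without over-provisioning capacity the way gp2 would require.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # placeholder AZ
    Size=500,                        # GiB, placeholder size
    VolumeType="gp3",
    Iops=15000,                      # within gp3's 16,000 IOPS ceiling
    Throughput=500,                  # MiB/s, optional (125 MiB/s is the baseline)
)
print(volume["VolumeId"])
```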
A company needs to store data from its healthcare application. The application’s data frequently changes. A new regulation requires audit access
at all levels of the stored data.
The company hosts the application on an on-premises infrastructure that is running out of storage capacity. A solutions architect must securely
migrate the existing data to AWS while satisfying the new regulation.
A. Use AWS DataSync to move the existing data to Amazon S3. Use AWS CloudTrail to log data events.
B. Use AWS Snowcone to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
C. Use Amazon S3 Transfer Acceleration to move the existing data to Amazon S3. Use AWS CloudTrail to log data events.
D. Use AWS Storage Gateway to move the existing data to Amazon S3. Use AWS CloudTrail to log management events.
Correct Answer: A
A is better because:
- DataSync is used for migration; Storage Gateway is used to connect on-premises storage to AWS.
- Data events log access to the data; management events log configuration and administrative actions.
upvoted 12 times
Selected Answer: A
We need to log "access at all levels" aka "data events", thus B and D are out (logging only "management events" like granting permissions or
changing the access tier).
C, S3 Transfer Acceleration is to increase upload performance from widespread sources or over unreliable networks, but it just provides an
endpoint, it does not upload anything itself.
upvoted 9 times
Selected Answer: A
Selected Answer: D
Selected Answer: A
B and D don't solve the problem: management events cover administrative actions only (tracking account creation, user security actions, etc.).
A uses DataSync to move all the data and logs data events, which include S3 file uploads and downloads.
Management events: User logs into an EC2 instance, creates an S3 IAM role
Data events: User uploads a file to S3
upvoted 3 times
AWS DataSync is designed for fast, simple, and secure data transfer, but it focuses more on data synchronization rather than on-premises
migration.
upvoted 1 times
Selected Answer: A
Option D (Use AWS Storage Gateway to move the existing data to Amazon S3. Use AWS CloudTrail to log management events): AWS Storage
Gateway is typically used for hybrid cloud storage solutions and may introduce additional complexity for a one-time data migration task. It might
not be as straightforward as using AWS Snowcone for this specific scenario.
upvoted 1 times
Selected Answer: A
Both DataSync and Storage Gateway are fine for syncing data, but to "audit access at all levels of the stored data" you need data events (data plane
operations); management events cover account-level actions.
So the answer should be A.
upvoted 2 times
Selected Answer: D
While both DataSync and Storage Gateway allow syncing of data between on-premise and cloud, DataSync is built for rapid shifting of data into a
cloud environment, not specifically for continued use in on-premise servers.
upvoted 2 times
AWS DataSync is an online data transfer service that simplifies, automates, and accelerates the process of copying large amounts of data to and
from AWS storage services over the Internet or over AWS Direct Connect.
upvoted 1 times
Selected Answer: A
tabbyDolly is right. Also, DataSync is designed to handle data that changes.
upvoted 2 times
The company hosts applications on on-premises infrastructure, so they should use a Storage Gateway solution.
upvoted 2 times
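Whichever transfer tool is used, the audit requirement hinges on CloudTrail data events, which are not recorded by default and must be enabled on a trail. A minimal boto3 sketch, assuming a hypothetical trail and bucket name:
```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Enable object-level (data plane) logging for every object in the bucket,
# so reads and writes of the healthcare data are auditable.
cloudtrail.put_event_selectors(
    TrailName="healthcare-audit-trail",            # hypothetical trail name
    EventSelectors=[
        {
            "ReadWriteType": "All",                # log both reads and writes
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Trailing slash scopes this to all objects in the bucket.
                    "Values": ["arn:aws:s3:::healthcare-data-bucket/"],
                }
            ],
        }
    ],
)
```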
A solutions architect is implementing a complex Java application with a MySQL database. The Java application must be deployed on Apache
Tomcat.
A. Deploy the application in AWS Lambda. Configure an Amazon API Gateway API to connect with the Lambda functions.
B. Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment and a rolling deployment policy.
C. Migrate the database to Amazon ElastiCache. Configure the ElastiCache security group to allow access from the application.
D. Launch an Amazon EC2 instance. Install a MySQL server on the EC2 instance. Configure the application on the server. Create an AMI. Use
Correct Answer: B
B
AWS Elastic Beanstalk provides an easy and quick way to deploy, manage, and scale applications. It supports a variety of platforms, including Java
and Apache Tomcat. By using Elastic Beanstalk, the solutions architect can upload the Java application and configure the environment to run
Apache Tomcat.
upvoted 9 times
Selected Answer: B
Selected Answer: B
The key phrase in the question is "The Java application must be DEPLOYED...", hence Elastic Beanstalk. It is a managed deployment service that
supports a variety of platforms (Apache Tomcat in our situation), and it scales automatically with less operational overhead (unlike option D,
which has a lot of operational overhead).
upvoted 1 times
https://aws.amazon.com/elasticbeanstalk/details/
upvoted 1 times
Selected Answer: B
B. Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment and a rolling deployment policy.
upvoted 3 times
Selected Answer: B
Keyword "AWS Elastic Beanstalk" for re-architecture from Java web-app inside Apache Tomcat to AWS Cloud.
upvoted 2 times
Selected Answer: B
Definitely B
upvoted 1 times
Clearly B.
upvoted 2 times
Selected Answer: B
Selected Answer: B
B
upvoted 1 times
Question #428 Topic 1
A serverless application uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. The Lambda function needs permissions to read and
write data in the DynamoDB table.
Which solution will give the Lambda function access to the DynamoDB table MOST securely?
A. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the
DynamoDB table. Store the access_key_id and secret_access_key parameters as part of the Lambda environment variables. Ensure that other
AWS users do not have read and write access to the Lambda function configuration.
B. Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that allows read and write access to the
DynamoDB table. Update the configuration of the Lambda function to use the new role as the execution role.
C. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the
DynamoDB table. Store the access_key_id and secret_access_key parameters in AWS Systems Manager Parameter Store as secure string
parameters. Update the Lambda function code to retrieve the secure string parameters before connecting to the DynamoDB table.
D. Create an IAM role that includes DynamoDB as a trusted service. Attach a policy to the role that allows read and write access from the
Lambda function. Update the code of the Lambda function to attach to the new role as an execution role.
Correct Answer: B
Selected Answer: B
The role must trust Lambda, not the other way around, so Lambda must be configured as the trusted service. A role for a service narrows it to
options B and D. D sets up the role (somehow?) with DynamoDB as the trusted service, which makes no sense for a Lambda execution role.
upvoted 6 times
Selected Answer: B
Keyword B. " IAM role that includes Lambda as a trusted service", not "IAM role that includes DynamoDB as a trusted service" in D. It is IAM role,
not IAM user.
upvoted 5 times
Selected Answer: B
B sounds better.
upvoted 2 times
vote B
upvoted 1 times
Selected Answer: B
B is right
Role key word and trusted service lambda
upvoted 4 times
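To see why B's wording matters: the role's trust policy names Lambda as the principal that may assume it, and the attached permissions policy grants the DynamoDB access. A hedged boto3 sketch; the role, table, function names, and account ID are hypothetical.
```python
import json
import boto3

iam = boto3.client("iam")
lambda_client = boto3.client("lambda")

# Trust policy: Lambda is the trusted service that may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="orders-fn-execution-role",               # hypothetical name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Permissions policy: read/write access to a single DynamoDB table only.
iam.put_role_policy(
    RoleName="orders-fn-execution-role",
    PolicyName="orders-table-access",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem",
                       "dynamodb:UpdateItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
        }],
    }),
)

# Point the function at the new execution role.
lambda_client.update_function_configuration(
    FunctionName="orders-fn",                          # hypothetical function
    Role=role["Role"]["Arn"],
)
```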
Question #429 Topic 1
The following IAM policy is attached to an IAM group. This is the only policy applied to the group.
What are the effective IAM permissions of this policy for group members?
A. Group members are permitted any Amazon EC2 action within the us-east-1 Region. Statements after the Allow permission are not applied.
B. Group members are denied any Amazon EC2 permissions in the us-east-1 Region unless they are logged in with multi-factor authentication
(MFA).
C. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions when logged in with multi-
factor authentication (MFA). Group members are permitted any other Amazon EC2 action.
D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in
with multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region.
Correct Answer: D
Selected Answer: D
One of the few situations where the official answer is the same as the most-voted answer, lol.
upvoted 1 times
Not sure why everyone votes D. I think the valid option has to be C: in the second statement, the one about MFA, there is nothing that restricts it
to a specific Region, so basically it applies to all Regions.
upvoted 2 times
D. Group members are allowed the ec2:StopInstances and ec2:TerminateInstances permissions for the us-east-1 Region only when logged in with
multi-factor authentication (MFA). Group members are permitted any other Amazon EC2 action within the us-east-1 Region
upvoted 2 times
Selected Answer: D
A. "Statements after the Allow permission are not applied." --> Wrong.
B. "denied any Amazon EC2 permissions in the us-east-1 Region" --> Wrong. Just deny 2 items.
C. "allowed the ec2:StopInstances and ec2:TerminateInstances permissions for all Regions" --> Wrong. Just region us-east-1.
D. ok.
upvoted 1 times
Selected Answer: D
D is correct
upvoted 2 times
Selected Answer: D
D is right
upvoted 2 times
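The policy JSON is not reproduced in this dump, but answer D corresponds to a policy of roughly the shape sketched below as a Python dict: a broad EC2 allow scoped to us-east-1, plus a deny on stop/terminate whenever MFA is absent. This is illustrative only; the exact condition keys in the exam policy may differ.
```python
# Illustrative reconstruction only -- the actual exam policy is not shown here.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Statement 1: any EC2 action, but only in us-east-1.
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:RequestedRegion": "us-east-1"}},
        },
        {
            # Statement 2: stop/terminate are denied unless MFA is present,
            # which limits those two actions to MFA-authenticated sessions.
            "Effect": "Deny",
            "Action": ["ec2:StopInstances", "ec2:TerminateInstances"],
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}
            },
        },
    ],
}
```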
Question #430 Topic 1
A manufacturing company has machine sensors that upload .csv files to an Amazon S3 bucket. These .csv files must be converted into images
and must be made available as soon as possible for the automatic generation of graphical reports.
The images become irrelevant after 1 month, but the .csv files must be kept to train machine learning (ML) models twice a year. The ML trainings
and audits are planned weeks in advance.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A. Launch an Amazon EC2 Spot Instance that downloads the .csv files every hour, generates the image files, and uploads the images to the S3
bucket.
B. Design an AWS Lambda function that converts the .csv files into images and stores the images in the S3 bucket. Invoke the Lambda
function when a .csv file is uploaded.
C. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Glacier 1 day after
they are uploaded. Expire the image files after 30 days.
D. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 One Zone-Infrequent
Access (S3 One Zone-IA) 1 day after they are uploaded. Expire the image files after 30 days.
E. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Standard-Infrequent
Access (S3 Standard-IA) 1 day after they are uploaded. Keep the image files in Reduced Redundancy Storage (RRS).
Correct Answer: BC
Selected Answer: BC
B for processing the images via Lambda as it's more cost efficient than EC2 spot instances
C for expiring images after 30 days and because the ML trainings are planned weeks in advance so S3 glacier is ideal for slow retrieval and cheap
storage.
Selected Answer: BC
Not A, we need the images "as soon as possible", A runs every hour
"ML trainings and audits are planned weeks in advance" thus Glacier (C) is ok.
upvoted 2 times
Answer is B&C. For D, you must store data for 30 days in s3 standard before move to IA tiers, glacier is fine
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-
considerations.html#:~:text=Before%20you%20transition%20objects%20to%20S3%20Standard%2DIA%20or%20S3%20One%20Zone%2DIA%2C%2
0you%20must%20store%20them%20for%20at%20least%2030%20days%20in%20Amazon%20S3
upvoted 3 times
Selected Answer: BC
Definitely B & C
upvoted 2 times
Selected Answer: BC
The key phrase is "weeks in advance": even if you save the data in S3 Glacier, it is OK for retrieval to take a couple of days.
upvoted 2 times
Definitely B & C
upvoted 1 times
B. CORRECT
C. CORRECT
D. Why store the files in S3 One Zone-Infrequent Access (S3 One Zone-IA) when they are going to be irrelevant after 1 month? (Availability 99.99% -
consider cost.)
E. Again, why use Reduced Redundancy Storage (RRS) when the files are irrelevant after 1 month? (Availability 99.99% - consider cost.)
upvoted 3 times
https://docs.aws.amazon.com/amazonglacier/latest/dev/introduction.html
upvoted 4 times
Selected Answer: BC
https://aws.amazon.com/jp/about-aws/whats-new/2021/11/amazon-s3-glacier-storage-class-amazon-s3-glacier-flexible-retrieval/
upvoted 2 times
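The two lifecycle rules implied by B + C can live in one configuration. A hedged boto3 sketch, assuming the .csv files and generated images are kept under separate prefixes; the bucket and prefix names are placeholders.
```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="sensor-data-bucket",                     # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                # Raw .csv files: cheap archival; trainings are planned
                # weeks in advance, so slow retrieval is acceptable.
                "ID": "csv-to-glacier",
                "Filter": {"Prefix": "csv/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
            },
            {
                # Generated images: irrelevant after a month, so expire them.
                "ID": "expire-images",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            },
        ]
    },
)
```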
A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for
MySQL in the database layer. Several players will compete concurrently online. The game’s developers want to display a top-10 scoreboard in near-
real time and offer the ability to stop and restore the game while preserving the current scores.
A. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
C. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
D. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
Correct Answer: B
Selected Answer: B
Redis provides fast in-memory data storage and processing. It can compute the top 10 scores and update the cache in milliseconds.
ElastiCache Redis supports sorting and ranking operations needed for the top 10 leaderboard.
The cached leaderboard can be retrieved from Redis vs hitting the MySQL database for every read. This reduces load on the database.
Redis supports persistence, so scores are preserved if the cache stops/restarts
upvoted 9 times
Selected Answer: B
Real-time gaming leaderboards are easy to create with Amazon ElastiCache for Redis. Just use the Redis Sorted Set data structure, which provides
uniqueness of elements while maintaining the list sorted by their scores. Creating a real-time ranked list is as simple as updating a user's score each
time it changes. You can also use Sorted Sets to handle time series data by using timestamps as the score.
https://aws.amazon.com/elasticache/redis/#:~:text=ElastiCache%20for%20Redis.-,Gaming,-Leaderboards
upvoted 6 times
Selected Answer: B
concurrently = memcached
upvoted 1 times
Selected Answer: B
See the case study of a leaderboard with Redis at https://redis.io/docs/data-types/sorted-sets/ ; it relies on the "sorted sets" feature. See the comparison between Redis
and Memcached at https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/SelectEngine.html ; the difference is the "Sorted sets" feature.
upvoted 3 times
Selected Answer: B
If you need advanced data structures, complex querying, pub/sub messaging, or persistence, Redis may be a better fit.
upvoted 1 times
https://aws.amazon.com/jp/blogs/news/building-a-real-time-gaming-leaderboard-with-amazon-elasticache-for-redis/
upvoted 3 times
Selected Answer: B
B is right
upvoted 1 times
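The sorted-set mechanics behind answer B look roughly like this with the redis-py client; the endpoint hostname and key names are placeholders. Memcached has no equivalent ranked data type, which is why it loses out here.
```python
import redis

# Connect to the ElastiCache for Redis endpoint (placeholder hostname).
r = redis.Redis(host="scores.xxxxxx.use1.cache.amazonaws.com", port=6379)

def record_score(player: str, score: float) -> None:
    # ZADD keeps the set ordered by score; re-adding a player updates their rank.
    r.zadd("leaderboard", {player: score})

def top_10() -> list:
    # Highest scores first, including the score values.
    return r.zrevrange("leaderboard", 0, 9, withscores=True)

record_score("alice", 4200)
record_score("bob", 3900)
print(top_10())
```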
An ecommerce company wants to use machine learning (ML) algorithms to build and train models. The company will use the models to visualize
complex scenarios and to detect trends in customer data. The architecture team wants to integrate its ML models with a reporting platform to
analyze the augmented data and use the data directly in its business intelligence dashboards.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Glue to create an ML transform to build and train models. Use Amazon OpenSearch Service to visualize the data.
B. Use Amazon SageMaker to build and train models. Use Amazon QuickSight to visualize the data.
C. Use a pre-built ML Amazon Machine Image (AMI) from the AWS Marketplace to build and train models. Use Amazon OpenSearch Service to
visualize the data.
D. Use Amazon QuickSight to build and train models by using calculated fields. Use Amazon QuickSight to visualize the data.
Correct Answer: B
Selected Answer: B
Use Amazon SageMaker to build and train models. Use Amazon QuickSight to visualize the data.
upvoted 2 times
Selected Answer: B
Question keyword "machine learning", answer keyword "Amazon SageMaker". Choose B. Use Amazon QuickSight for visualization. See "Gaining
insights with machine learning (ML) in Amazon QuickSight" at https://docs.aws.amazon.com/quicksight/latest/user/making-data-driven-decisions-
with-ml-in-quicksight.html
upvoted 2 times
Sagemaker.
upvoted 1 times
Selected Answer: B
Most likely B.
upvoted 1 times
ML== SageMaker
upvoted 1 times
A company is running its production and nonproduction environment workloads in multiple AWS accounts. The accounts are in an organization in
AWS Organizations. The company needs to design a solution that will prevent the modification of cost usage tags.
A. Create a custom AWS Config rule to prevent tag modification except by authorized principals.
C. Create a service control policy (SCP) to prevent tag modification except by authorized principals.
Correct Answer: C
Selected Answer: C
Tip: AWS Organizations + service control policy (SCP). For any question where you see the two together, that combination is the answer.
C. Create a service control policy (SCP) to prevent tag modification except by authorized principals.
upvoted 5 times
Selected Answer: C
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html
D "Amazon CloudWatch" just for logging, not for prevent tag modification
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies-cwe.html
Amazon Organziaton has "Service Control Policy (SCP)" with "tag policy"
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html . Choose C.
AWS Config for technical stuff, not for tag policies. Not A.
upvoted 3 times
Selected Answer: C
Service control policies (SCPs) are a type of organization policy that you can use to manage permissions in your organization.
upvoted 1 times
Selected Answer: C
I'd say C.
upvoted 2 times
Selected Answer: C
https://docs.aws.amazon.com/ja_jp/organizations/latest/userguide/orgs_manage_policies_scps_examples_tagging.html
upvoted 3 times
Selected Answer: C
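AWS documents an example SCP for exactly this pattern. The gist, sketched here as a hedged Python dict, is a Deny on the tagging actions whenever the request touches the protected tag key and the caller is not the approved role; the tag key, actions, and role ARN below are placeholders.
```python
# Illustrative SCP shape only -- adapt the tag key, actions, and role ARN.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectCostTags",
            "Effect": "Deny",
            "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
            "Resource": "*",
            "Condition": {
                # Only the protected cost-allocation tag key is locked down.
                "ForAnyValue:StringEquals": {"aws:TagKeys": ["CostCenter"]},
                # Everyone except the authorized tagging role is denied.
                "StringNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/TagAdminRole"
                },
            },
        }
    ],
}
```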
A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto
Scaling group and with an Amazon DynamoDB table. The company wants to ensure the application can be made available in another AWS Region.
What should a solutions architect do to meet these requirements with the LEAST amount of downtime?
A. Create an Auto Scaling group and a load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table.
Configure DNS failover to point to the new disaster recovery Region's load balancer.
B. Create an AWS CloudFormation template to create EC2 instances, load balancers, and DynamoDB tables to be launched when needed
Configure DNS failover to point to the new disaster recovery Region's load balancer.
C. Create an AWS CloudFormation template to create EC2 instances and a load balancer to be launched when needed. Configure the
DynamoDB table as a global table. Configure DNS failover to point to the new disaster recovery Region's load balancer.
D. Create an Auto Scaling group and load balancer in the disaster recovery Region. Configure the DynamoDB table as a global table. Create an
Amazon CloudWatch alarm to trigger an AWS Lambda function that updates Amazon Route 53 pointing to the disaster recovery load balancer.
Correct Answer: A
Selected Answer: A
A and D would both work, but Route 53 has a DNS failover feature for when instances are down, so we don't need CloudWatch and Lambda to
trigger the change.
-> A is correct.
upvoted 14 times
Selected Answer: C
They are not asking for automatic failover, they want to "ensure the application can (!) be made available in another AWS Region with minimal
downtime". This works with C; they would just execute the template and it would be available in short time.
A would create a DR environment that IS already available, which is not what the question asks for.
D is like A, just abusing Lambda to update the DNS record (which doesn't make sense).
B would create a separate, empty database
upvoted 8 times
ChatGPT:
Option C involves creating an AWS CloudFormation template to create EC2 instances and a load balancer only when needed, and configuring the
DynamoDB table as a global table. This approach might introduce more downtime because the infrastructure in the disaster recovery region is not
pre-deployed and ready to take over immediately. The process of launching instances and configuring the load balancer can take some time,
leading to delays during the failover.
Option A, on the other hand, ensures that the necessary infrastructure (Auto Scaling group, load balancer, and DynamoDB global table) is already
set up and running in the disaster recovery region. This pre-deployment reduces downtime since the failover can be handled quickly by updating
DNS to point to the disaster recovery region's load balancer.
upvoted 1 times
Selected Answer: A
There are two parts: the DB and the application. DynamoDB recovery in another Region is not possible without a global table, so option B is out.
A will make the infrastructure available in two Regions, which is not required. The question is about DR, not scaling.
D uses Lambda to modify Route 53 to point to the new Region. This will cause delays but is possible, and it also keeps scaled EC2 instances running
in the passive Region.
C creates a CloudFormation template that can launch the infrastructure when needed. The DB is a global table, so it will be available.
upvoted 3 times
AWS CloudFormation Template: Use CloudFormation to define the infrastructure components (EC2 instances, load balancer, etc.) in a template. This
allows for consistent and repeatable infrastructure deployment.
EC2 Instances and Load Balancer: Launch the EC2 instances and load balancer in the disaster recovery (DR) Region using the CloudFormation
template. This enables the deployment of the application in the DR Region when needed.
DynamoDB Global Table: Configure the DynamoDB table as a global table. DynamoDB Global Tables provide automatic multi-region, multi-master
replication, ensuring that the data is available in both the primary and DR Regions.
DNS Failover: Configure DNS failover to point to the new DR Region's load balancer. This allows for seamless failover of traffic to the DR Region
when needed.
Option A is close, but it introduces an Auto Scaling group in the disaster recovery Region, which might introduce unnecessary complexity and
potential scaling delays. Option D introduces a Lambda function triggered by CloudWatch alarms, which might add latency and complexity
compared to the more direct approach in Option C.
upvoted 1 times
Selected Answer: A
Only B and C take care of the EC2 instances. But since B does not take care of the data in DynamoDB, C is the only correct answer.
upvoted 1 times
Selected Answer: C
I think CloudFormation is easier than manual provision of Auto Scaling group and load balancer in DR region.
upvoted 2 times
Selected Answer: A
Creating Auto Scaling group and load balancer in DR region allows fast launch of capacity when needed.
Configuring DynamoDB as a global table provides continuous data replication.
Using DNS failover via Route 53 to point to the DR region's load balancer enables rapid traffic shifting.
upvoted 2 times
By leveraging an Amazon CloudWatch alarm, Option D allows for an automated failover mechanism. When triggered, the CloudWatch alarm can
execute an AWS Lambda function, which in turn can update the DNS records in Amazon Route 53 to redirect traffic to the disaster recovery load
balancer in the new Region. This automation helps reduce the potential for human error and further minimizes downtime.
Answer is D
upvoted 2 times
Selected Answer: C
The company wants to ensure the application 'CAN' be made available in another AWS Region with minimal downtime. Meaning they want to be
able to launch infra on need basis.
Best answer is C.
upvoted 2 times
D
upvoted 1 times
Selected Answer: C
C suits best
upvoted 3 times
Selected Answer: A
A uses DNS failover.
upvoted 1 times
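Whichever of A or C you prefer, the DynamoDB piece is the same: the existing table becomes a global table by adding a replica Region. A hedged boto3 sketch with placeholder table and Region names, assuming the table meets the global-table prerequisites (on-demand mode or provisioned capacity with auto scaling):
```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in the disaster recovery Region, converting the existing
# table into a global table (version 2019.11.21).
dynamodb.update_table(
    TableName="app-sessions",                        # placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Replication is asynchronous; check the replica status before relying on it.
desc = dynamodb.describe_table(TableName="app-sessions")
print([r.get("ReplicaStatus") for r in desc["Table"].get("Replicas", [])])
```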
Question #435 Topic 1
A company needs to migrate a MySQL database from its on-premises data center to AWS within 2 weeks. The database is 20 TB in size. The
A. Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion
Tool (AWS SCT) to migrate the database with replication of ongoing changes. Send the Snowball Edge device to AWS to finish the migration
B. Order an AWS Snowmobile vehicle. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to
migrate the database with ongoing changes. Send the Snowmobile vehicle back to AWS to finish the migration and continue the ongoing
replication.
C. Order an AWS Snowball Edge Compute Optimized with GPU device. Use AWS Database Migration Service (AWS DMS) with AWS Schema
Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowball device to AWS to finish the migration and
D. Order a 1 GB dedicated AWS Direct Connect connection to establish a connection with the data center. Use AWS Database Migration
Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes.
Correct Answer: A
Selected Answer: A
Selected Answer: A
Selected Answer: A
But I didn't understand why we are using the Schema Conversion Tool, because AWS already has a managed MySQL engine (RDS for MySQL or
Aurora MySQL is on the table).
upvoted 5 times
Selected Answer: D
To estimate the time it would take to transfer 20 TB of data over a 1 Gbps dedicated AWS Direct Connect connection:
20 TB is about 20,000 GB, or roughly 160,000 gigabits. At a transfer rate of 1 Gbps, that is about 160,000 seconds.
Therefore, it would take approximately 44 hours, or just under 2 days, to transfer 20 TB over the 1 Gbps Direct Connect connection (ignoring protocol overhead).
upvoted 3 times
Has to be A. the option for D would only work if they said they have like 6 Months plus. It would take too long to set up.
upvoted 2 times
Selected Answer: A
I agreed with A.
Why not D.?
When you initiate the process by requesting an AWS Direct Connect connection, it typically starts with the AWS Direct Connect provider. This
provider may need to coordinate with AWS to allocate the necessary resources. This initial setup phase can take anywhere from a few days to a
couple of weeks.
Couple of weeks? No Good
upvoted 3 times
Keyword "20 TB", choose "AWS Snowball", there are A or C. C has word "GPU" what is not related, therefore choose A.
upvoted 2 times
Answer A
upvoted 1 times
D is correct
upvoted 1 times
Selected Answer: A
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.Process.html
upvoted 2 times
Selected Answer: A
D: Direct Connect needs a long time to set up, plus you need to deal with network and security changes to the existing environment, and then add
the data transfer time on top. There is no way it can be done in 2 weeks.
upvoted 4 times
Selected Answer: D
Overall, option D combines the reliability and cost-effectiveness of AWS Direct Connect, AWS DMS, and AWS SCT to migrate the database
efficiently and minimize downtime.
upvoted 2 times
D - Direct Connect takes at least a month to set up! The requirement is to finish within 2 weeks.
upvoted 4 times
AWS Snowball Edge Storage Optimized device is used for large-scale data transfers, but the lead time for delivery, data transfer, and return
shipping would likely exceed the 2-week time frame. Also, ongoing database changes wouldn't be replicated while the device is in transit.
upvoted 1 times
Selected Answer: A
https://docs.aws.amazon.com/ja_jp/snowball/latest/developer-guide/device-differences.html#device-options
It's A.
upvoted 3 times
A company moved its on-premises PostgreSQL database to an Amazon RDS for PostgreSQL DB instance. The company successfully launched a
new product. The workload on the database has increased. The company wants to accommodate the larger workload without adding
infrastructure.
A. Buy reserved DB instances for the total workload. Make the Amazon RDS for PostgreSQL DB instance larger.
C. Buy reserved DB instances for the total workload. Add another Amazon RDS for PostgreSQL DB instance.
Correct Answer: A
Selected Answer: A
A.
"without adding infrastructure" means scaling vertically and choosing larger instance.
"MOST cost-effectively" reserved instances
upvoted 14 times
Selected Answer: A
B - Multi-AZ is for HA, does not help 'accommodating the larger workload'
C - Adding "another instance" will not help, we can't split the workload between two instances
D - On-demand instance is a good choice for unknown workload, but here we know the workload, it's just higher than before
upvoted 2 times
Not A : "launched a new product", reserved instances are for known workloads, a new product doesn't have known workload.
Not B : "accommodate the larger workload", while Multi-AZ can help with larger workloads, they are more for higher availability.
Not C : "without adding infrastructure", adding a PostGresQL instance is new infrastructure.
upvoted 3 times
Making the RDS PostgreSQL instance Multi-AZ adds a standby replica to handle larger workloads and provides high availability.
Even though it adds infrastructure, the cost is less than doubling the infrastructure with a separate DB instance.
It provides better performance, availability, and disaster recovery than a single larger instance.
upvoted 2 times
Selected Answer: A
Keyword "Amazon RDS for PostgreSQL instance large" . See list of size of instance at https://aws.amazon.com/rds/instance-types/
upvoted 1 times
A.
Not C: without adding infrastructure
upvoted 2 times
Therefore, the recommended solution is Option C: Buy reserved DB instances for the workload and add another Amazon RDS for PostgreSQL DB
instance to accommodate the increased workload in a cost-effective manner.
upvoted 1 times
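The accepted answer A ("make the DB instance larger") is a single vertical-scaling call. A hedged boto3 sketch; the instance identifier and target class are example values, and ApplyImmediately controls whether the change waits for the next maintenance window.
```python
import boto3

rds = boto3.client("rds")

# Scale the existing PostgreSQL instance up to a larger class (example class).
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",       # placeholder identifier
    DBInstanceClass="db.r6g.2xlarge",          # example larger instance class
    ApplyImmediately=True,                     # or False to wait for the window
)
```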
A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The
site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security
team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a
minimal impact on legitimate users.
B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
C. Deploy rules to the network ACLs associated with the ALB to block the incomingtraffic.
D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
Correct Answer: B
Selected Answer: B
Selected Answer: B
Best solution Shield Advanced, not listed here, thus second-best solution, WAF with rate limiting
upvoted 6 times
Selected Answer: B
A. Amazon Inspector = software vulnerabilities such as missing OS patches. Not fit for purpose.
C. The IP addresses keep changing in a DDoS, so you don't know which incoming traffic to write rules for (even if network ACLs made that practical).
D. GuardDuty is for workload and AWS account threat monitoring, so it can't block a DDoS.
B is correct, as AWS WAF associated with the ALB can apply rate limiting even when the source IPs change.
upvoted 5 times
Selected Answer: B
Selected Answer: A
This case is A
upvoted 1 times
AWS Web Application Firewall (WAF) + ALB (Application Load Balancer). See the diagram at https://aws.amazon.com/waf/ and
https://docs.aws.amazon.com/waf/latest/developerguide/ddos-responding.html .
Question keyword "high request rate", answer keyword "rate-limiting rule": https://docs.aws.amazon.com/waf/latest/developerguide/waf-rate-
based-example-limit-login-page-keys.html
Amazon GuardDuty is for threat detection ( https://aws.amazon.com/guardduty/ ), not for DDoS mitigation.
upvoted 2 times
Selected Answer: B
B in swahili 'ba' :)
external systems, incoming requests = AWS WAF
upvoted 1 times
Selected Answer: B
B no doubt.
upvoted 1 times
Selected Answer: B
AWS WAF (Web Application Firewall) is a service that provides protection for web applications against common web exploits. By associating AWS
WAF with the Application Load Balancer (ALB), you can inspect incoming traffic and define rules to allow or block requests based on various
criteria.
upvoted 4 times
In this scenario, the company is facing performance issues due to a high request rate from illegitimate external systems with changing IP addresses.
By configuring a rate-limiting rule in AWS WAF, the company can restrict the number of requests coming from each IP address, preventing
excessive traffic from overwhelming the website. This will help mitigate the impact of potential DDoS attacks and ensure that legitimate users can
access the site without interruption.
upvoted 4 times
https://aws.amazon.com/blogs/security/how-to-use-amazon-guardduty-and-aws-web-application-firewall-to-automatically-block-suspicious-
hosts/
upvoted 2 times
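For reference, a rate-based rule on a regional web ACL associated with the ALB looks roughly like the boto3 sketch below. The names, request limit, and ALB ARN are placeholders; the limit is evaluated per source IP over a rolling five-minute window.
```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ecommerce-acl",                      # placeholder web ACL name
    Scope="REGIONAL",                          # REGIONAL is required for ALBs
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "rate-limit-per-ip",
            "Priority": 0,
            # Block any source IP exceeding ~2,000 requests per 5 minutes.
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "RateLimitPerIP",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "EcommerceAcl",
    },
)

# Attach the web ACL to the Application Load Balancer (placeholder ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "loadbalancer/app/ecommerce-alb/0123456789abcdef",
)
```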
A company wants to share accounting data with an external auditor. The data is stored in an Amazon RDS DB instance that resides in a private
subnet. The auditor has its own AWS account and requires its own copy of the database.
What is the MOST secure way for the company to share the database with the auditor?
A. Create a read replica of the database. Configure IAM standard database authentication to grant the auditor access.
B. Export the database contents to text files. Store the files in an Amazon S3 bucket. Create a new IAM user for the auditor. Grant the user
C. Copy a snapshot of the database to an Amazon S3 bucket. Create an IAM user. Share the user's keys with the auditor to grant access to the
D. Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access to the AWS Key Management Service
(AWS KMS) encryption key.
Correct Answer: D
Selected Answer: D
The most secure way for the company to share the database with the auditor is option D: Create an encrypted snapshot of the database, share the
snapshot with the auditor, and allow access to the AWS Key Management Service (AWS KMS) encryption key.
By creating an encrypted snapshot, the company ensures that the database data is protected at rest. Sharing the encrypted snapshot with the
auditor allows them to have their own copy of the database securely.
In addition, granting access to the AWS KMS encryption key ensures that the auditor has the necessary permissions to decrypt and access the
encrypted snapshot. This allows the auditor to restore the snapshot and access the data securely.
This approach provides both data protection and access control, ensuring that the database is securely shared with the auditor while maintaining
the confidentiality and integrity of the data.
upvoted 19 times
why not A ?
upvoted 2 times
Selected Answer: D
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html
With Amazon RDS you can share snapshots across accounts, so there is no need to go through S3 or replication. Option D is the more secure
approach, using encryption and sharing access to the encryption key.
upvoted 1 times
Selected Answer: D
Selected Answer: D
Selected Answer: D
Most likely D.
upvoted 3 times
D for me
upvoted 2 times
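Answer D boils down to two calls plus a KMS permission: share the manual snapshot's "restore" attribute with the auditor's account, and give that account decrypt access to the customer managed key. The identifiers and account IDs below are placeholders; note that snapshots encrypted with the default aws/rds key cannot be shared this way.
```python
import boto3

rds = boto3.client("rds")
kms = boto3.client("kms")

AUDITOR_ACCOUNT = "999988887777"                 # placeholder account ID

# Share the encrypted manual snapshot with the auditor's AWS account.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="accounting-db-final",  # placeholder snapshot ID
    AttributeName="restore",
    ValuesToAdd=[AUDITOR_ACCOUNT],
)

# Grant the auditor's account the KMS operations needed to copy and
# restore the encrypted snapshot on their side.
kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    GranteePrincipal=f"arn:aws:iam::{AUDITOR_ACCOUNT}:root",
    Operations=["Decrypt", "DescribeKey", "CreateGrant"],
)
```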
Question #439 Topic 1
A solutions architect configured a VPC that has a small range of IP addresses. The number of Amazon EC2 instances that are in the VPC is
Which solution resolves this issue with the LEAST operational overhead?
A. Add an additional IPv4 CIDR block to increase the number of IP addresses and create additional subnets in the VPC. Create new resources
B. Create a second VPC with additional subnets. Use a peering connection to connect the second VPC with the first VPC Update the routes
C. Use AWS Transit Gateway to add a transit gateway and connect a second VPC with the first VPC. Update the routes of the transit gateway and
D. Create a second VPC. Create a Site-to-Site VPN connection between the first VPC and the second VPC by using a VPN-hosted solution on
Amazon EC2 and a virtual private gateway. Update the route between VPCs to the traffic through the VPN. Create new resources in the
Correct Answer: A
Selected Answer: A
A is correct: You assign a single CIDR IP address range as the primary CIDR block when you create a VPC and can add up to four secondary CIDR
blocks after creation of the VPC.
upvoted 6 times
Selected Answer: A
best option
upvoted 2 times
Selected Answer: A
After you've created your VPC, you can associate additional IPv4 CIDR blocks with the VPC
upvoted 2 times
Selected Answer: A
A is best
upvoted 2 times
A valid
upvoted 1 times
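Answer A in API terms: associate a secondary IPv4 CIDR block with the VPC, then carve new subnets out of it. A hedged boto3 sketch with placeholder IDs and ranges; the secondary block must not overlap the existing one, and a few ranges are restricted.
```python
import boto3

ec2 = boto3.client("ec2")

VPC_ID = "vpc-0123456789abcdef0"          # placeholder VPC ID

# Attach a secondary CIDR block to the existing VPC.
ec2.associate_vpc_cidr_block(
    VpcId=VPC_ID,
    CidrBlock="10.1.0.0/16",              # example non-overlapping range
)

# Create an additional subnet from the new block for new resources.
ec2.create_subnet(
    VpcId=VPC_ID,
    CidrBlock="10.1.0.0/24",
    AvailabilityZone="us-east-1a",        # placeholder AZ
)
```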
Question #440 Topic 1
A company used an Amazon RDS for MySQL DB instance during application testing. Before terminating the DB instance at the end of the test
cycle, a solutions architect created two backups. The solutions architect created the first backup by using the mysqldump utility to create a
database dump. The solutions architect created the second backup by enabling the final DB snapshot option on RDS termination.
The company is now planning for a new test cycle and wants to create a new DB instance from the most recent backup. The company has chosen
B. Upload the RDS snapshot to Amazon S3. Then import the RDS snapshot into Aurora.
C. Upload the database dump to Amazon S3. Then import the database dump into Aurora.
D. Use AWS Database Migration Service (AWS DMS) to import the RDS snapshot into Aurora.
E. Upload the database dump to Amazon S3. Then use AWS Database Migration Service (AWS DMS) to import the database dump into Aurora.
Correct Answer: AC
Selected Answer: AC
A,C
A because the snapshot is already stored in AWS.
C because you don't need a migration tool going from MySQL to MySQL. You would use the MySQL utility.
upvoted 11 times
Selected Answer: AC
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Import.html
upvoted 7 times
Selected Answer: CE
AWS DMS does not support migrating data directly from an RDS snapshot. DMS can migrate data from a live RDS instance or from a database
dump, but not from a snapshot.
Also dump can be migrated to aurora using sql client.
upvoted 1 times
A per https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.RDSMySQL.Snapshot.html
C per https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.ExtMySQL.html
upvoted 6 times
A and B
upvoted 1 times
Selected Answer: AC
Similar : https://repost.aws/knowledge-center/aurora-postgresql-migrate-from-rds
upvoted 2 times
A and C
upvoted 1 times
Either import the RDS snapshot directly into Aurora or upload the database dump to Amazon S3, then import the database dump into Aurora.
upvoted 2 times
C and E are the solutions that can restore the backups into Amazon Aurora.
The RDS DB snapshot contains backup data in a proprietary format that cannot be directly imported into Aurora.
The mysqldump database dump contains SQL statements that can be imported into Aurora after uploading to S3.
AWS DMS can migrate the dump file from S3 into Aurora.
upvoted 3 times
Exclude B, because there is no need to upload the DB snapshot to Amazon S3. Exclude D, because there is no need for the migration service. Exclude E,
because there is no need for the migration service either. The exclusion method is easier for this question.
Related links:
- Amazon RDS create database snapshot https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateSnapshot.html
- https://aws.amazon.com/rds/aurora/
upvoted 2 times
The RDS DB snapshot contains backup data in a proprietary format that cannot be directly imported into Aurora.
The mysqldump database dump contains SQL statements that can be imported into Aurora after uploading to S3.
AWS DMS can migrate the dump file from S3 into Aurora.
upvoted 2 times
You can copy the full and incremental backup files from your source MySQL version 5.7 database to an Amazon S3 bucket, and then restore an
Amazon Aurora MySQL DB cluster from those files.
This option can be considerably faster than migrating data using mysqldump, because using mysqldump replays all of the commands to recreate
the schema and data from your source database in your new Aurora MySQL DB cluster.
By copying your source MySQL data files, Aurora MySQL can immediately use those files as the data for an Aurora MySQL DB cluster.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Migrating.ExtMySQL.html
upvoted 3 times
c>- Because Amazon Aurora MySQL is a MySQL-compatible database, you can use the mysqldump utility to copy data from your MySQL or
MariaDB database to an existing Amazon Aurora MySQL DB cluster.
B.- You can copy the source files from your source MySQL version 5.5, 5.6, or 5.7 database to an Amazon S3 bucket, and then restore an Amazon
Aurora MySQL DB cluster from those files.
upvoted 2 times
Selected Answer: BE
A company hosts a multi-tier web application on Amazon Linux Amazon EC2 instances behind an Application Load Balancer. The instances run in
an Auto Scaling group across multiple Availability Zones. The company observes that the Auto Scaling group launches more On-Demand
Instances when the application's end users access high volumes of static web content. The company wants to optimize cost.
A. Update the Auto Scaling group to use Reserved Instances instead of On-Demand Instances.
B. Update the Auto Scaling group to scale by launching Spot Instances instead of On-Demand Instances.
C. Create an Amazon CloudFront distribution to host the static web contents from an Amazon S3 bucket.
D. Create an AWS Lambda function behind an Amazon API Gateway API to host the static website contents.
Correct Answer: C
Selected Answer: C
Selected Answer: C
Selected Answer: C
implementing CloudFront to serve static content is the most cost-optimal architectural change for this use case.
upvoted 3 times
Selected Answer: C
c for me
upvoted 1 times
A company stores several petabytes of data across multiple AWS accounts. The company uses AWS Lake Formation to manage its data lake. The
company's data science team wants to securely share selective data from its accounts with the company's engineering team for analytical
purposes.
Which solution will meet these requirements with the LEAST operational overhead?
A. Copy the required data to a common account. Create an IAM access role in that account. Grant access by specifying a permission policy
that includes users from the engineering team accounts as trusted entities.
B. Use the Lake Formation permissions Grant command in each account where the data is stored to allow the required engineering team users
C. Use AWS Data Exchange to privately publish the required data to the required engineering team accounts.
D. Use Lake Formation tag-based access control to authorize and grant cross-account permissions for the required data to the engineering
team accounts.
Correct Answer: D
Selected Answer: D
By utilizing Lake Formation's tag-based access control, you can define tags and tag-based policies to grant selective access to the required data for
the engineering team accounts. This approach allows you to control access at a granular level without the need to copy or move the data to a
common account or manage permissions individually in each account. It provides a centralized and scalable solution for securely sharing data
across accounts with minimal operational overhead.
upvoted 16 times
Selected Answer: D
(B) uses the CLI Grant command, which has many options (principal, TableName, ColumnNames, LFTag, etc.), providing a way to manage granular access
permissions for different users at the table and column level, so you don't give full access to all the data. The problem with (B) is that
implementing this in each account has a lot more operational overhead than (D).
upvoted 1 times
Selected Answer: D
Using Lake Formation tag-based access control allows granting cross-account permissions to access data in other accounts based on tags, without
having to copy data or configure individual permissions in each account.
This provides a centralized, tag-based way to share selective data across accounts to authorized users with least operational overhead.
upvoted 2 times
Selected Answer: D
https://aws.amazon.com/blogs/big-data/securely-share-your-data-across-aws-accounts-using-aws-lake-formation/
upvoted 3 times
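Tag-based access control in D comes down to LF-Tags plus a cross-account grant on a tag expression rather than on individual tables. A hedged boto3 sketch; the tag key, tag value, and engineering account ID are placeholders, and the tag is assumed to already be attached to the relevant tables.
```python
import boto3

lf = boto3.client("lakeformation")

ENGINEERING_ACCOUNT = "444455556666"          # placeholder account ID

# Grant the engineering account SELECT on every table carrying the LF-Tag
# team=engineering, instead of granting table-by-table in each account.
lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": ENGINEERING_ACCOUNT},
    Resource={
        "LFTagPolicy": {
            "ResourceType": "TABLE",
            "Expression": [
                {"TagKey": "team", "TagValues": ["engineering"]},
            ],
        }
    },
    Permissions=["SELECT", "DESCRIBE"],
    PermissionsWithGrantOption=["SELECT", "DESCRIBE"],
)
```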
Question #443 Topic 1
A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the
world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective
solution to minimize upload and download latency and maximize performance.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
Correct Answer: A
Selected Answer: A
The question asks for "a cost-effective solution [ONLY] to minimize upload and download latency and maximize performance", not for the
actual application. And the cost-effective solution to minimize upload and download latency and maximize performance is S3 Transfer
Acceleration. Obviously more is required to host the app, but that is not what is asked.
upvoted 12 times
Selected Answer: A
The question is focused on large downloads and uploads. S3 Transfer Acceleration is what fits. CloudFront is for caching which cannot be used
when the data is unique. They aren't as concerned with regular web traffic.
Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of
larger objects.
upvoted 5 times
Selected Answer: A
A for sure
upvoted 1 times
Selected Answer: A
Selected Answer: A
Selected Answer: A
Application users will be able to download and upload UNIQUE data up to gigabytes in size
Selected Answer: A
Downloading data up to gigabytes in size - CloudFront is a content delivery service that acts as an edge caching layer for images and other data,
not a service that minimizes upload and download latency.
upvoted 1 times
The question is focused on large downloads and uploads. S3 Transfer Acceleration is what fits. CloudFront is for caching which cannot be used
when the data is unique. They aren't as concerned with regular web traffic.
Selected Answer: C
Amazon S3 with Transfer Acceleration (option A) is designed for speeding up uploads to Amazon S3, and it's not used for hosting scalable web
applications. It doesn't mention using EC2 instances for hosting the application.
upvoted 4 times
My answer is C
upvoted 1 times
Selected Answer: C
Selected Answer: C
Amazon CloudFront is a global content delivery network (CDN) that delivers web content to users with low latency and high transfer speeds. It
does this by caching content at edge locations around the world, which are closer to the users than the origin server.
By using Amazon EC2 with Auto Scaling and Amazon CloudFront, the company can create a scalable and high-performance web application that is
accessible to users from different geographic regions of the world.
upvoted 1 times
Selected Answer: A
I believe it would be A. My thinking may be wrong, but I'm thinking specifically of the fact that a single S3 PUT allows up to 5 GB; I'm not sure about
CloudFront. My second line of thinking is that content is cached at edge locations, but wouldn't CloudFront still have to go back to the origin if
another person in a different part of the world wants to download that content?
upvoted 2 times
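Answer A is two steps in practice: enable acceleration on the bucket once, then have clients route their transfers through the accelerate endpoint. A hedged boto3 sketch; the bucket and file names are placeholders.
```python
import boto3
from botocore.config import Config

# One-time setup: enable Transfer Acceleration on the bucket.
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="game-assets-uploads",                       # placeholder bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then send transfers through the accelerate (edge) endpoint.
s3_accel = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True}),
)
s3_accel.upload_file(
    "big-save-file.bin",                                # local file (placeholder)
    "game-assets-uploads",
    "saves/big-save-file.bin",
)
```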
A company has hired a solutions architect to design a reliable architecture for its application. The application consists of one Amazon RDS DB
instance and two manually provisioned Amazon EC2 instances that run web servers. The EC2 instances are located in a single Availability Zone.
An employee recently deleted the DB instance, and the application was unavailable for 24 hours as a result. The company is concerned with the
What should the solutions architect do to maximize reliability of the application's infrastructure?
A. Delete one EC2 instance and enable termination protection on the other EC2 instance. Update the DB instance to be Multi-AZ, and enable
deletion protection.
B. Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and
run them in an EC2 Auto Scaling group across multiple Availability Zones.
C. Create an additional DB instance along with an Amazon API Gateway and an AWS Lambda function. Configure the application to invoke the
Lambda function through API Gateway. Have the Lambda function write the data to the two DB instances.
D. Place the EC2 instances in an EC2 Auto Scaling group that has multiple subnets located in multiple Availability Zones. Use Spot Instances
instead of On-Demand Instances. Set up Amazon CloudWatch alarms to monitor the health of the instances Update the DB instance to be
Correct Answer: B
Selected Answer: B
A: Delete one EC2 instance, why? Although it takes care of the reliability of the DB instance, it does not address EC2.
B: Seems perfect, as it takes care of the reliability of both EC2 and the DB.
C: The DB instance's reliability is not taken care of.
D: Seems to be trying to address cost alongside the reliability of EC2 and the DB.
upvoted 2 times
A: Deleting one EC2 instance makes no sense. Why would you do that?
C: API Gateway, Lambda etc are all nice but they don't solve the problem of DB instance deletion
D: EC2 subnet blah blah, what? The problem is reliability, not networking!
B is correct as it solves the DB deletion issue and increases reliability by Multi AZ scaling of EC2 instances
upvoted 4 times
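For illustration, a minimal boto3 sketch of the RDS part of option B, assuming a placeholder instance identifier named orders-db; the ALB and Auto Scaling group would be created separately:

    import boto3

    rds = boto3.client("rds")

    # Convert the existing instance to Multi-AZ and block accidental deletion.
    # "orders-db" is a placeholder identifier; ApplyImmediately applies the
    # change now instead of waiting for the next maintenance window.
    rds.modify_db_instance(
        DBInstanceIdentifier="orders-db",
        MultiAZ=True,
        DeletionProtection=True,
        ApplyImmediately=True,
    )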
Selected Answer: B
Update the DB instance to be Multi-AZ, and enable deletion protection. Place the EC2 instances behind an Application Load Balancer, and run
them in an EC2 Auto Scaling group across multiple Availability Zones
upvoted 1 times
B for sure.
upvoted 1 times
Selected Answer: B
Selected Answer: B
A company is storing 700 terabytes of data on a large network-attached storage (NAS) system in its corporate data center. The company has a 10 Gbps AWS Direct Connect connection to AWS.
After an audit from a regulator, the company has 90 days to move the data to the cloud. The company needs to move the data efficiently and
without disruption. The company still needs to be able to access and update the data during the transfer window.
A. Create an AWS DataSync agent in the corporate data center. Create a data transfer task Start the transfer to an Amazon S3 bucket.
B. Back up the data to AWS Snowball Edge Storage Optimized devices. Ship the devices to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
C. Use rsync to copy the data directly from local storage to a designated Amazon S3 bucket over the Direct Connect connection.
D. Back up the data on tapes. Ship the tapes to an AWS data center. Mount a target Amazon S3 bucket on the on-premises file system.
Correct Answer: A
For those who wonder why not B: a single Snowball Edge Storage Optimized device only supports up to 100 TB for data transfer.
https://docs.aws.amazon.com/snowball/latest/developer-guide/device-differences.html
upvoted 10 times
Selected Answer: A
Selected Answer: A
Selected Answer: A
(B) is incorrect because although Mountpoint for S3 is possible for an on-premises NAS, it is not as efficient as AWS DataSync. Data updates made during
the transfer window would have to be resolved later.
upvoted 1 times
You can estimate the transfer time easily if you keep these numbers in your head: divide the link speed in bits by 8 to get bytes. A 10 Gbps Direct Connect link moves roughly 1.25 GB per second, so 700 TB (about 700,000 GB) takes around 560,000 seconds, or roughly 6.5 days of continuous transfer. If the question had said a 1 Gbps connection, everything would take about 10 times longer.
upvoted 1 times
Critical requirement: "The company needs to move the data efficiently and without disruption."
B: Causes disruption
C: I don't think that is possible without a gateway kind of thing
D: Tape backups? " Mount a target Amazon S3 bucket on the on-premises file system"? This requires some gateway which is not mentioned
A is the answer as DataSync allows transfer without disruption and with 10Gbps, it can be done in 90 days.
upvoted 2 times
Selected Answer: A
AWS DataSync can efficiently transfer large datasets from on-premises NAS to Amazon S3 over Direct Connect.
DataSync allows accessing and updating the data continuously during the transfer process.
upvoted 4 times
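For illustration, a minimal boto3 sketch of option A, assuming the NAS is exported over NFS and that the agent ARN, hostname, bucket, and IAM role below are placeholders:

    import boto3

    datasync = boto3.client("datasync")

    # Assumes a DataSync agent has already been activated in the data center.
    src = datasync.create_location_nfs(
        ServerHostname="nas.example.corp",
        Subdirectory="/export/data",
        OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0example"]},
    )
    dst = datasync.create_location_s3(
        S3BucketArn="arn:aws:s3:::migration-bucket",
        S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/datasync-s3-access"},
    )
    task = datasync.create_task(
        SourceLocationArn=src["LocationArn"],
        DestinationLocationArn=dst["LocationArn"],
        Name="nas-to-s3",
    )
    # Each execution copies only changed files, so it can be rerun during the window.
    datasync.start_task_execution(TaskArn=task["TaskArn"])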
Selected Answer: A
AWS DataSync is a secure, online service that automates and accelerates moving data between on premises and AWS Storage services.
upvoted 2 times
Selected Answer: A
By leveraging AWS DataSync in combination with AWS Direct Connect, the company can efficiently and securely transfer its 700 terabytes of data to
an Amazon S3 bucket without disruption. The solution allows continued access and updates to the data during the transfer window, ensuring
business continuity throughout the migration process.
upvoted 3 times
Selected Answer: A
A company stores data in PDF format in an Amazon S3 bucket. The company must follow a legal requirement to retain all new and existing data in Amazon S3 for 7 years.
Which solution will meet these requirements with the LEAST operational overhead?
A. Turn on the S3 Versioning feature for the S3 bucket. Configure S3 Lifecycle to delete the data after 7 years. Configure multi-factor authentication (MFA) delete.
B. Turn on S3 Object Lock with governance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
C. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
D. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Use S3 Batch Operations to bring the existing data into compliance.
Correct Answer: D
Selected Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
upvoted 2 times
Selected Answer: D
https://repost.aws/questions/QUGKrl8XRLTEeuIzUHq0Ikew/s3-object-lock-on-existing-s3-objects
upvoted 4 times
Selected Answer: A
To enable Object Lock on an Amazon S3 bucket, you must first enable versioning on that bucket. other 3 option did not enable versioning first
upvoted 1 times
Selected Answer: D
Recopying offers more control but requires users to manage the process. S3 Batch Operations automates the process at scale but with less granular
control - the LEAST operational overhead.
upvoted 2 times
It's C, because you only need to recopy all existing objects one time, so why use S3 Batch Operations if new data is going to be in compliance
retention mode anyway? I can see why it's C, although my initial gut answer was D.
upvoted 2 times
Selected Answer: D
Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Use S3 Batch Operations
to bring the existing data into compliance.
upvoted 1 times
To bring existing objects/data in the S3 bucket into compliance, we use S3 Batch Operations, so option D is the most
appropriate, especially if we have a lot of data in S3.
upvoted 1 times
Selected Answer: D
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-retention-date.html
upvoted 4 times
Operational complexity: Option C has a straightforward process of recopying existing objects. It is a well-known operation in S3 and doesn't
require additional setup or management. Option D introduces the need to set up and configure S3 Batch Operations, which can involve creating
job definitions, specifying job parameters, and monitoring the progress of batch operations. This additional complexity may increase the
operational overhead.
upvoted 2 times
You need S3 Batch Operations to re-apply certain configuration to objects that were already in S3, like encryption or retention settings.
upvoted 4 times
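For illustration, this is the per-object call that an S3 Batch Operations Object Lock retention job applies to every existing object; the bucket, key, and retain-until date are placeholders:

    import boto3
    from datetime import datetime, timezone

    s3 = boto3.client("s3")

    # Apply compliance-mode retention to one existing object; a Batch Operations
    # job performs the same action across every object listed in its manifest.
    s3.put_object_retention(
        Bucket="legal-pdf-archive",
        Key="contracts/2024/example.pdf",
        Retention={
            "Mode": "COMPLIANCE",
            "RetainUntilDate": datetime(2031, 1, 1, tzinfo=timezone.utc),
        },
    )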
Question #447 Topic 1
A company has a stateless web application that runs on AWS Lambda functions that are invoked by Amazon API Gateway. The company wants to
deploy the application across multiple AWS Regions to provide Regional failover capabilities.
A. Create Amazon Route 53 health checks for each Region. Use an active-active failover configuration.
B. Create an Amazon CloudFront distribution with an origin for each Region. Use CloudFront health checks to route traffic.
C. Create a transit gateway. Attach the transit gateway to the API Gateway endpoint in each Region. Configure the transit gateway to route
requests.
D. Create an Application Load Balancer in the primary Region. Set the target group to point to the API Gateway endpoint hostnames in each
Region.
Correct Answer: A
Selected Answer: A
Selected Answer: A
A. I'm not an expert in this area, but I still want to express my opinion. After carefully reviewing the question and thinking about it for a long time,
I actually don't know the reason. As I mentioned at the beginning, I'm not an expert in this field.
upvoted 18 times
Selected Answer: A
Correct me if I'm wrong but CloudFront DOES NOT have health check capabilities out of the box. Route 53 and Global Accelerator do.
upvoted 1 times
A for sure
upvoted 1 times
B: Caching solution. Not ideal for failover although it will work. Would have been a correct answer if A wasn't an option
C: Transit gateway is for VPC connectivity not AWS API or Lambda
D: Even if it was possible, there is a primary region dependency of ALB
A: correct because R53 health checks can failover across regions
Selected Answer: B
We can set primary and secondary regions in CloudFront for failover.
upvoted 2 times
Application is serverless, it doesn't matter where it runs, so can be active-active setup and run wherever the request comes in. Route 53 with health
checks will route to a healthy region.
B, could work too, but CloudFront is for caching which does not seem to help with an API. The goal here is "failover capabilities", not
caching/performance/latency etc.
upvoted 3 times
Selected Answer: A
In an active-active failover configuration, Route 53 continuously monitors its endpoints, and if one of them is unhealthy, it excludes that region/endpoint from its valid traffic routes - the only sensible option.
CloudFront is a content delivery network - not used to route traffic.
Transit gateway for traffic routing - AWS devs will hit us with a stick on hearing this option.
You can't use a load balancer for cross-region load balancing - invalid.
upvoted 1 times
Selected Answer: A
Global, reduce latency, health checks, failover, route traffic = Amazon Route 53
upvoted 1 times
Selected Answer: B
"Stateless applications provide one service or function and use content delivery network (CDN), web, or print servers to process these short-term
requests.
https://docs.aws.amazon.com/architecture-diagrams/latest/multi-region-api-gateway-with-cloudfront/multi-region-api-gateway-with-cloudfront.html
upvoted 1 times
Selected Answer: A
Selected Answer: B
By creating an Amazon CloudFront distribution with origins in each AWS Region where the application is deployed, you can leverage CloudFront's
global edge network to route traffic to the closest available Region. CloudFront will automatically route the traffic based on the client's location and
the health of the origins using CloudFront health checks.
Option A (creating Amazon Route 53 health checks with an active-active failover configuration) is not suitable for this scenario as it is primarily
used for failover between different endpoints within the same Region, rather than routing traffic to different Regions.
upvoted 2 times
https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/
upvoted 3 times
that is from 2017.. i wonder if it is still relevant..
upvoted 1 times
Selected Answer: A
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
upvoted 1 times
To route traffic to multiple AWS Regions and provide regional failover capabilities for a stateless web application running on AWS Lambda
functions invoked by Amazon API Gateway, you can use Amazon Route 53 with an active-active failover configuration.
By creating Amazon Route 53 health checks for each Region and configuring an active-active failover configuration, Route 53 can monitor the
health of the endpoints in each Region and route traffic to healthy endpoints. In the event of a failure in one Region, Route 53 automatically routes
traffic to the healthy endpoints in other Regions.
This setup ensures high availability and failover capabilities for your web application across multiple AWS Regions.
upvoted 2 times
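For illustration, a boto3 sketch of option A, assuming a placeholder hosted zone, a /health path on each regional API, and latency-based records that behave as active-active failover once each record has a health check attached:

    import boto3

    r53 = boto3.client("route53")
    HOSTED_ZONE_ID = "Z123EXAMPLE"                      # placeholder
    REGIONAL_APIS = {                                   # placeholder regional API Gateway domains
        "us-east-1": "abc123.execute-api.us-east-1.amazonaws.com",
        "eu-west-1": "def456.execute-api.eu-west-1.amazonaws.com",
    }

    for region, domain in REGIONAL_APIS.items():
        hc = r53.create_health_check(
            CallerReference=f"api-{region}",            # must be unique per request
            HealthCheckConfig={
                "Type": "HTTPS",
                "FullyQualifiedDomainName": domain,
                "ResourcePath": "/health",              # assumed health endpoint
                "RequestInterval": 30,
                "FailureThreshold": 3,
            },
        )
        r53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": region,
                    "Region": region,                   # latency routing, all regions active
                    "TTL": 60,
                    "HealthCheckId": hc["HealthCheck"]["Id"],
                    "ResourceRecords": [{"Value": domain}],
                },
            }]},
        )

Unhealthy regions are simply dropped from DNS answers, so traffic shifts to the remaining healthy regions without manual intervention.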
Question #448 Topic 1
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a
single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The
Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.
What should a solutions architect do to mitigate any single point of failure in this architecture?
B. Add a second virtual private gateway and attach it to the Management VPC.
C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
D. Add a second VPC peering connection between the Management VPC and the Production VPC.
Correct Answer: C
Selected Answer: C
The Management VPC currently has a single VPN connection through one customer gateway device. This is a single point of failure.
Adding a second set of VPN connections from the Management VPC to a second customer gateway device provides redundancy and eliminates
this single point of failure.
upvoted 6 times
C,
Selected Answer: C
Option D is not a valid solution for mitigating single points of failure in the architecture. I apologize for the confusion caused by the incorrect
information.
To mitigate single points of failure in the architecture, you can consider implementing option C: adding a second set of VPNs to the Management
VPC from a second customer gateway device. This introduces redundancy at the VPN connection level for the Management VPC, ensuring that if
one customer gateway or VPN connection fails, the other connection can still provide connectivity to the data center.
upvoted 3 times
Selected Answer: C
Redundant VPN connections: Instead of relying on a single device in the data center, the Management VPC should have redundant VPN
connections established through multiple customer gateways. This will ensure high availability and fault tolerance in case one of the VPN
connections or customer gateways fails.
upvoted 4 times
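For illustration, a minimal boto3 sketch of option C, assuming the Management VPC's VPNs terminate on a virtual private gateway and using placeholder IP, ASN, and gateway IDs for the second on-premises device:

    import boto3

    ec2 = boto3.client("ec2")

    # Register the second on-premises device as a new customer gateway.
    cgw = ec2.create_customer_gateway(
        BgpAsn=65010,                     # placeholder ASN of the second device
        PublicIp="203.0.113.20",          # placeholder public IP of the second device
        Type="ipsec.1",
    )

    # Build a second VPN connection to the Management VPC's existing virtual private gateway.
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
        VpnGatewayId="vgw-0example",      # placeholder existing VGW
        Options={"StaticRoutesOnly": False},
    )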
Selected Answer: C
https://www.examtopics.com/discussions/amazon/view/53908-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 2 times
Question #449 Topic 1
A company runs its application on an Oracle database. The company plans to quickly migrate to AWS because of limited resources for the
database, backup administration, and data center maintenance. The application uses third-party database features that require privileged access.
Which solution will help the company migrate the database to AWS MOST cost-effectively?
A. Migrate the database to Amazon RDS for Oracle. Replace third-party features with cloud services.
B. Migrate the database to Amazon RDS Custom for Oracle. Customize the database settings to support third-party features.
C. Migrate the database to an Amazon EC2 Amazon Machine Image (AMI) for Oracle. Customize the database settings to support third-party
features.
D. Migrate the database to Amazon RDS for PostgreSQL by rewriting the application code to remove dependency on Oracle APEX.
Correct Answer: B
Selected Answer: B
Key constraints: Limited resources for DB admin and cost. 3rd party db features with privileged access.
A: Won't work due to 3rd party features
C: An AMI with Oracle may work, but again there is the overhead of backups, maintenance, etc.
D: Too much overhead in rewrite
B: Actually supports Oracle 3rd party features
Caution: If this is only about APEX as suggested in option D, then A is also a possible answer:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.Oracle.Options.APEX.html
upvoted 2 times
Selected Answer: B
"Amazon RDS Custom is a managed database service for applications that require customization of the underlying operating system and database
environment. Benefits of RDS automation with the access needed for legacy, packaged, and custom applications."
Migrate the database to Amazon RDS Custom for Oracle. Customize the database settings to support third-party features.
upvoted 2 times
Selected Answer: B
Selected Answer: B
Most likely B.
upvoted 1 times
Selected Answer: B
Selected Answer: B
https://aws.amazon.com/about-aws/whats-new/2021/10/amazon-rds-custom-oracle/
upvoted 2 times
Selected Answer: B
https://docs.aws.amazon.com/ja_jp/AmazonRDS/latest/UserGuide/Oracle.Resources.html
upvoted 1 times
Selected Answer: C
Shouldn't it be C? With EC2, the company will have full control over the database. But isn't that at odds with the reason they are moving to AWS in the first
place: "The company plans to quickly migrate to AWS because of limited resources for the database, backup administration, and data center
maintenance"?
upvoted 1 times
A company has a three-tier web application that is in a single server. The company wants to migrate the application to the AWS Cloud. The
company also wants the application to align with the AWS Well-Architected Framework and to be consistent with AWS recommended best practices.
A. Create a VPC across two Availability Zones with the application's existing architecture. Host the application with existing architecture on an
Amazon EC2 instance in a private subnet in each Availability Zone with EC2 Auto Scaling groups. Secure the EC2 instance with security groups
B. Set up security groups and network access control lists (network ACLs) to control access to the database layer. Set up a single Amazon RDS database in a private subnet.
C. Create a VPC across two Availability Zones. Refactor the application to host the web tier, application tier, and database tier. Host each tier
on its own private subnet with Auto Scaling groups for the web tier and application tier.
D. Use a single Amazon RDS database. Allow database access only from the application tier security group.
E. Use Elastic Load Balancers in front of the web tier. Control access by using security groups containing references to each layer's security
groups.
F. Use an Amazon RDS database Multi-AZ cluster deployment in private subnets. Allow database access only from application tier security
groups.
The wording on this question makes things ambiguous for C. But, remember well-architected so:
A: Not ideal as it is suggesting using existing architecture but with autoscaling EC2. Doesn't leave room for improvement on scaling or reliability on
each tier.
B: Single RDS, not well-architected
D: Again, single RDS
E and F are good options, and C is the only remaining good one.
upvoted 7 times
CEF is best
upvoted 1 times
CEF
A: application's existing architecture is wrong (single AZ)
B: single AZ
D: Single AZ
upvoted 2 times
A company is migrating its applications and databases to the AWS Cloud. The company will use Amazon Elastic Container Service (Amazon ECS), AWS Direct Connect, and Amazon RDS.
Which activities will be managed by the company's operational team? (Choose three.)
A. Management of the Amazon RDS infrastructure layer, operating system, and platforms
B. Creation of an Amazon RDS DB instance and configuring the scheduled maintenance window
C. Configuration of additional software components on Amazon ECS for monitoring, patch management, log management, and host intrusion
detection
D. Installation of patches for all minor and major database versions for Amazon RDS
E. Ensure the physical security of the Amazon RDS infrastructure in the data center
Just to clarify on F: Direct Connect is an ISP-and-AWS offering; I consider it a physical connection just like the one you get from your ISP at home. There is
no security on it until you build security on top of the connection. AWS provides Direct Connect, but it does not provide encryption-level security for data
moving through it by default. That is the customer's responsibility.
upvoted 7 times
B: Creating an RDS instance and configuring the maintenance window is done by the customer.
The question has three keywords: "Amazon ECS", "AWS Direct Connect", and "Amazon RDS". Pick one answer per service: there are six items, and you need to pick three.
For RDS: exclude A (keyword "infrastructure layer"), choose B, exclude D (keyword "patches for all minor and major database versions for
Amazon RDS"), and exclude E (keyword "ensure the physical security of the Amazon RDS infrastructure"). Easy question.
upvoted 3 times
Amazon ECS is a fully managed service; the ops team only focuses on building their applications, not the environment.
Only options B and F make sense.
upvoted 1 times
100% BCF.
upvoted 1 times
BCF
B: Mentioned RDS
C: Mentioned ECS
F: Mentioned Direct connect
upvoted 4 times
Yes BCF
upvoted 1 times
Bcf for me
upvoted 2 times
Question #452 Topic 1
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled
interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum
CPU available. The company wants to optimize the costs to run the job.
A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each
hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.
Correct Answer: B
Selected Answer: B
Never done it myself but apparently you can run Java in Lambda all the way to latest version
https://docs.aws.amazon.com/lambda/latest/dg/lambda-java.html
upvoted 4 times
This question is intended for Lambda. Just search for Lambda with EventBridge.
upvoted 2 times
Lambda allows you to allocate memory for your functions in increments of 1 MB, ranging from a minimum of 128 MB to a maximum of 10,240 MB
(10 GB).
upvoted 1 times
Selected Answer: B
Remember: an AWS Lambda function can be allocated up to 10 GB of memory, so the 1 GB this job needs is well within the limit.
upvoted 3 times
Selected Answer: B
10 seconds to run, optimize the costs, consumes 1 GB of memory = AWS Lambda function.
upvoted 1 times
AWS Lambda automatically scales resources to handle the workload, so you don't have to worry about managing the underlying infrastructure. It
provisions the necessary compute resources based on the configured memory size (1 GB in this case) and executes the job in a serverless
environment.
By using Amazon EventBridge, you can create a scheduled rule to trigger the Lambda function every hour, ensuring that the job runs on the
desired interval.
upvoted 1 times
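For illustration, a boto3 sketch of option B's scheduling piece, assuming the Lambda function (here called hourly-job) already exists; the function ARN below is a placeholder:

    import boto3

    events = boto3.client("events")
    awslambda = boto3.client("lambda")

    FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:hourly-job"  # placeholder

    # Fire the rule once per hour and point it at the Lambda function.
    rule = events.put_rule(Name="hourly-job-schedule", ScheduleExpression="rate(1 hour)", State="ENABLED")
    events.put_targets(Rule="hourly-job-schedule", Targets=[{"Id": "hourly-job", "Arn": FUNCTION_ARN}])

    # Allow EventBridge to invoke the function.
    awslambda.add_permission(
        FunctionName="hourly-job",
        StatementId="allow-eventbridge",
        Action="lambda:InvokeFunction",
        Principal="events.amazonaws.com",
        SourceArn=rule["RuleArn"],
    )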
Selected Answer: B
Agreed, B Lambda
upvoted 2 times
Question #453 Topic 1
A company wants to implement a backup strategy for Amazon EC2 data and multiple Amazon S3 buckets. Because of regulatory requirements,
the company must retain backup files for a specific time period. The company must not alter the files for the duration of the retention period.
A. Use AWS Backup to create a backup vault that has a vault lock in governance mode. Create the required backup plan.
B. Use Amazon Data Lifecycle Manager to create the required automated snapshot policy.
C. Use Amazon S3 File Gateway to create the backup. Configure the appropriate S3 Lifecycle management.
D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required backup plan.
Correct Answer: D
D. Governance is like the government: they can do things you cannot, like delete files or backups :D With compliance, nobody can!
upvoted 35 times
Selected Answer: D
D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required backup plan
upvoted 2 times
D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required backup plan
upvoted 2 times
Selected Answer: D
Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required backup plan
upvoted 2 times
Selected Answer: D
Compliance mode
upvoted 1 times
Selected Answer: D
Must not alter the files for the duration of the retention period = Compliance Mode
upvoted 1 times
D for sure.
upvoted 1 times
https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html
upvoted 2 times
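For illustration, a minimal boto3 sketch of option D, using a placeholder vault name and a one-year minimum retention; once the ChangeableForDays window passes, compliance mode makes the lock immutable:

    import boto3

    backup = boto3.client("backup")

    backup.create_backup_vault(BackupVaultName="regulated-backups")

    # ChangeableForDays starts the compliance-mode cooling-off period; after it
    # elapses the lock can never be removed, and MinRetentionDays prevents any
    # recovery point from being deleted or altered before retention ends.
    backup.put_backup_vault_lock_configuration(
        BackupVaultName="regulated-backups",
        MinRetentionDays=365,
        ChangeableForDays=3,
    )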
compliance mode
upvoted 3 times
A company has resources across multiple AWS Regions and accounts. A newly hired solutions architect discovers a previous employee did not
provide details about the resources inventory. The solutions architect needs to build and map the relationship details of the various workloads across the AWS Regions and accounts.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Systems Manager Inventory to generate a map view from the detailed view report.
B. Use AWS Step Functions to collect workload details. Build architecture diagrams of the workloads manually.
C. Use Workload Discovery on AWS to build architecture diagrams of the workloads.
D. Use AWS X-Ray to view the workload details. Build architecture diagrams with relationships.
Correct Answer: C
Selected Answer: C
Workload Discovery on AWS (formerly called AWS Perspective) is a tool to visualize AWS Cloud workloads. Use Workload Discovery on AWS to
build, customize, and share detailed architecture diagrams of your workloads based on live data from AWS.
upvoted 2 times
Selected Answer: C
Workload Discovery is purpose-built to automatically generate visual mappings of architectures across accounts and Regions. This makes it the
most operationally efficient way to meet the requirements.
upvoted 3 times
Selected Answer: C
Option A: AWS SSM offers "Software inventory": Collect software catalog and configuration for your instances.
Option C: Workload Discovery on AWS: is a tool for maintaining an inventory of the AWS resources across your accounts and various Regions and
mapping relationships between them, and displaying them in a web UI.
upvoted 4 times
Selected Answer: A
https://aws.amazon.com/blogs/mt/visualizing-resources-with-workload-discovery-on-aws/
upvoted 1 times
AWS Workload Discovery - create diagram, map and visualise AWS resources across AWS accounts and Regions
upvoted 2 times
Selected Answer: C
https://aws.amazon.com/jp/builders-flash/202209/workload-discovery-on-aws/?awsf.filter-name=*all
upvoted 2 times
Selected Answer: C
Workload Discovery on AWS is a service that helps visualize and understand the architecture of your workloads across multiple AWS accounts and
Regions. It automatically discovers and maps the relationships between resources, providing an accurate representation of the architecture.
upvoted 2 times
To efficiently build and map the relationship details of various workloads across multiple AWS Regions and accounts, you can use the AWS Systems
Manager Inventory feature in combination with AWS Resource Groups. Here's a solution that can help you achieve this:
Selected Answer: C
A company uses AWS Organizations. The company wants to operate some of its AWS accounts with different budgets. The company wants to
receive alerts and automatically prevent provisioning of additional resources on AWS accounts when the allocated budget threshold is met during
a specific period.
A. Use AWS Budgets to create a budget. Set the budget amount under the Cost and Usage Reports section of the required AWS accounts.
B. Use AWS Budgets to create a budget. Set the budget amount under the Billing dashboards of the required AWS accounts.
C. Create an IAM user for AWS Budgets to run budget actions with the required permissions.
D. Create an IAM role for AWS Budgets to run budget actions with the required permissions.
E. Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity
created with the appropriate config rule to prevent provisioning of additional resources.
F. Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity
created with the appropriate service control policy (SCP) to prevent provisioning of additional resources.
I don't see why ADF has the most votes when almost everyone has chosen BDF, smh.
https://acloudguru.com/videos/acg-fundamentals/how-to-set-up-an-aws-billing-and-budget-alert?utm_source=google&utm_medium=paid-search&utm_campaign=cloud-transformation&utm_term=ssi-global-acg-core-dsa&utm_content=free-trial&gclid=Cj0KCQjwmtGjBhDhARIsAEqfDEcDfXdLul2NxgSMxKracIITZimWOtDBRpsJPpx8lS9T4NndKhbUqPIaAlzhEALw_wcB
upvoted 12 times
Currently, AWS does not have a specific feature called "AWS Billing Dashboards."
upvoted 6 times
IN MY EXAM
upvoted 6 times
Selected Answer: DF
https://docs.aws.amazon.com/ja_jp/awsaccountbilling/latest/aboutv2/view-billing-dashboard.html
upvoted 4 times
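For illustration, a boto3 sketch of the budget-action piece behind D and F, assuming the budget, the deny-provisioning SCP, the execution role, and the account IDs below already exist and are placeholders:

    import boto3

    budgets = boto3.client("budgets")

    ACCOUNT_ID = "123456789012"   # management account (placeholder)

    # When 100% of the budgeted amount is reached, automatically attach a
    # pre-created "deny provisioning" SCP to the member account and notify the team.
    budgets.create_budget_action(
        AccountId=ACCOUNT_ID,
        BudgetName="member-account-monthly",            # placeholder existing budget
        NotificationType="ACTUAL",
        ActionType="APPLY_SCP_POLICY",
        ActionThreshold={"ActionThresholdValue": 100.0, "ActionThresholdType": "PERCENTAGE"},
        Definition={"ScpActionDefinition": {
            "PolicyId": "p-exampleid",                   # placeholder SCP
            "TargetIds": ["111122223333"],               # member account to restrict
        }},
        ExecutionRoleArn="arn:aws:iam::123456789012:role/budgets-action-role",
        ApprovalModel="AUTOMATIC",
        Subscribers=[{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
    )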
A company runs applications on Amazon EC2 instances in one AWS Region. The company wants to back up the EC2 instances to a second
Region. The company also wants to provision EC2 resources in the second Region and manage the EC2 instances centrally from one AWS
account.
A. Create a disaster recovery (DR) plan that has a similar number of EC2 instances in the second Region. Configure data replication.
B. Create point-in-time Amazon Elastic Block Store (Amazon EBS) snapshots of the EC2 instances. Copy the snapshots to the second Region
periodically.
C. Create a backup plan by using AWS Backup. Configure cross-Region backup to the second Region for the EC2 instances.
D. Deploy a similar number of EC2 instances in the second Region. Use AWS DataSync to transfer the data from the source Region to the
second Region.
Correct Answer: C
Selected Answer: C
Using AWS Backup, you can create backup plans that automate the backup process for your EC2 instances. By configuring cross-Region backup,
you can ensure that backups are replicated to the second Region, providing a disaster recovery capability. This solution is cost-effective as it
leverages AWS Backup's built-in features and eliminates the need for manual snapshot management or deploying and managing additional EC2
instances in the second Region.
upvoted 6 times
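For illustration, a boto3 sketch of option C, assuming placeholder vault names, a destination vault already created in the second Region, and tag-based selection of the EC2 instances:

    import boto3

    backup = boto3.client("backup")

    # Daily EC2 backups, automatically copied to a vault in a second Region.
    plan = backup.create_backup_plan(BackupPlan={
        "BackupPlanName": "ec2-cross-region",
        "Rules": [{
            "RuleName": "daily",
            "TargetBackupVaultName": "primary-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:123456789012:backup-vault:dr-vault",
            }],
        }],
    })

    # Select the EC2 instances to protect, e.g. anything tagged backup=true.
    backup.create_backup_selection(
        BackupPlanId=plan["BackupPlanId"],
        BackupSelection={
            "SelectionName": "ec2-by-tag",
            "IamRoleArn": "arn:aws:iam::123456789012:role/aws-backup-default-role",
            "ListOfTags": [{"ConditionType": "STRINGEQUALS",
                            "ConditionKey": "backup", "ConditionValue": "true"}],
        },
    )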
Selected Answer: B
Option B (EBS snapshots with cross-Region copy) is the most cost-effective solution for backing up EC2 instances to a second Region while
allowing for centralized management and easy recovery when needed.
upvoted 1 times
Selected Answer: D
How does AWS Backup address that "The company also wants to provision EC2 resources in the second Region"?
upvoted 3 times
https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-resource.html
upvoted 1 times
AWS Backup provides automated backups across Regions for EC2 instances. This handles the backup requirement.
AWS Backup is more cost-effective for cross-Region EC2 backups than using EBS snapshots manually or DataSync.
upvoted 4 times
Selected Answer: C
AWS backup
upvoted 1 times
A company that uses AWS is building an application to transfer data to a product manufacturer. The company has its own identity provider (IdP).
The company wants the IdP to authenticate application users while the users use the application to transfer data. The company must use the Applicability Statement 2 (AS2) protocol.
A. Use AWS DataSync to transfer the data. Create an AWS Lambda function for IdP authentication.
B. Use Amazon AppFlow flows to transfer the data. Create an Amazon Elastic Container Service (Amazon ECS) task for IdP authentication.
C. Use AWS Transfer Family to transfer the data. Create an AWS Lambda function for IdP authentication.
D. Use AWS Storage Gateway to transfer the data. Create an Amazon Cognito identity pool for IdP authentication.
Correct Answer: C
Selected Answer: C
Option C stands out stronger because AWS Transfer Family securely scales your recurring business-to-business file transfers to AWS Storage
services using SFTP, FTPS, FTP, and AS2 protocols.
And AWS Lambda can be used to authenticate users with the company's IdP.
upvoted 9 times
To authenticate your users, you can use your existing identity provider with AWS Transfer Family. You integrate your identity provider using an
AWS Lambda function, which authenticates and authorizes your users for access to Amazon S3 or Amazon Elastic File System (Amazon EFS).
https://docs.aws.amazon.com/transfer/latest/userguide/custom-identity-provider-users.html
upvoted 5 times
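For illustration, a minimal boto3 sketch of the Lambda identity-provider wiring described above, with a placeholder function ARN; the AS2 partner profiles and certificates would be configured separately:

    import boto3

    transfer = boto3.client("transfer")

    # Wire the company's own IdP in through a Lambda authorizer. The Lambda
    # (placeholder ARN) validates credentials against the IdP and returns the
    # user's role and home directory on success.
    transfer.create_server(
        Protocols=["SFTP"],
        Domain="S3",
        IdentityProviderType="AWS_LAMBDA",
        IdentityProviderDetails={
            "Function": "arn:aws:lambda:us-east-1:123456789012:function:transfer-custom-idp",
        },
    )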
aws transfer family for data transfer and lamda function for idp authentication
upvoted 2 times
https://aws.amazon.com/about-aws/whats-new/2022/07/aws-transfer-family-support-applicability-statement-2-as2/
upvoted 3 times
To authenticate your users, you can use your existing identity provider with AWS Transfer Family. You integrate your identity provider using an AWS
Lambda function, which authenticates and authorizes your users for access to Amazon S3 or Amazon Elastic File System (Amazon EFS).
upvoted 2 times
Selected Answer: C
Applicability Statement 2 (AS2) is a business-to-business (B2B) messaging protocol used to exchange Electronic Data Interchange (EDI) documents
With AWS Transfer Family’s AS2 capabilities, you can securely exchange AS2 messages at scale while maintaining compliance and interoperability
with your trading partners.
upvoted 1 times
C is correct. AWS Transfer Family supports the AS2 protocol, which is required by the company. Also, AWS Lambda can be used to authenticate
users with the company's IdP, which meets the company's requirement.
upvoted 2 times
By using AWS Storage Gateway, you can set up a gateway that supports the AS2 protocol for data transfer. Additionally, you can configure
authentication using an Amazon Cognito identity pool. Amazon Cognito provides a comprehensive authentication and user management service
that integrates with various identity providers, including your own IdP.
Therefore, Option D is the correct solution as it leverages AWS Storage Gateway for AS2 data transfer and allows authentication using an Amazon
Cognito identity pool integrated with the company's IdP.
upvoted 1 times
https://repost.aws/articles/ARo2ihKKThT2Cue5j6yVUgsQ/articles/ARo2ihKKThT2Cue5j6yVUgsQ/aws-transfer-family-announces-support-for-sending-as2-messages-over-https?
upvoted 1 times
AWS Storage Gateway supports the AS2 protocol for transferring data. By using AWS Storage Gateway, the company can integrate its own IdP
authentication by creating an Amazon Cognito identity pool. Amazon Cognito provides user authentication and authorization capabilities, allowing
the company to authenticate application users using its own IdP.
AWS Transfer Family does not currently support the AS2 protocol. AS2 is a specific protocol used for secure and reliable data transfer, often used in
business-to-business (B2B) scenarios. In this case, option C, which suggests using AWS Transfer Family, would not meet the requirement of using
the AS2 protocol.
upvoted 3 times
To meet the requirements of using an identity provider (IdP) for user authentication and the AS2 protocol for data transfer, you can implement the
following solution:
AWS Transfer Family: Use AWS Transfer Family, specifically AWS Transfer for SFTP or FTPS, to handle the data transfer using the AS2 protocol. AWS
Transfer for SFTP and FTPS provide fully managed, highly available SFTP and FTPS servers in the AWS Cloud.
The Lambda authorizer authenticates the token with the third-party identity provider.
upvoted 1 times
Both options D and C are valid solutions for the given requirements. The choice between them would depend on additional factors such as
specific preferences, existing infrastructure, and overall architectural considerations.
upvoted 2 times
Question #458 Topic 1
A solutions architect is designing a REST API in Amazon API Gateway for a cash payback service. The application requires 1 GB of memory and 2
GB of storage for its computation resources. The application will require that the data is in a relational format.
Which additional combination of AWS services will meet these requirements with the LEAST administrative effort? (Choose two.)
A. Amazon EC2
B. AWS Lambda
C. Amazon RDS
D. Amazon DynamoDB
Correct Answer: BC
Selected Answer: BC
"The application will require that the data is in a relational format" so DynamoDB is out. RDS is the choice. Lambda is severless.
upvoted 14 times
Why can't it be AC? We don't know how long the job runs, right?
upvoted 2 times
"2 GB of storage for its COMPUTATION resources" the maximum for Lambda is 512MB.
upvoted 3 times
Selected Answer: BC
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources
An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are
responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the organization.
Which solution meets these requirements in the MOST operationally efficient way?
A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one
cost report in Cost Explorer grouping by tag name, and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one
cost report in Cost Explorer grouping by tag name, and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost
report in Cost Explorer grouping by the tag name, and filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost
Correct Answer: A
Selected Answer: A
By activating a user-defined cost allocation tag named "department" and creating a cost report in Cost Explorer that groups by the tag name and
filters by EC2, the accounting team will be able to track and attribute costs to specific departments across all AWS accounts within the organization.
This approach allows for consistent cost allocation and reporting regardless of the AWS account structure.
upvoted 7 times
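For illustration, a boto3 sketch of the Cost Explorer query behind option A, assuming the department tag has already been activated as a cost allocation tag and using a placeholder time period:

    import boto3

    ce = boto3.client("ce")

    # Group EC2 compute spend by the user-defined "department" cost allocation tag.
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "department"}],
        Filter={"Dimensions": {"Key": "SERVICE",
                               "Values": ["Amazon Elastic Compute Cloud - Compute"]}},
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])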
Selected Answer: A
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html
upvoted 5 times
Selected Answer: A
From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost
report in Cost Explorer grouping by tag name, and filter by EC2.
upvoted 4 times
Selected Answer: A
From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost
report in Cost Explorer grouping by tag name, and filter by EC2.
upvoted 2 times
https://docs.aws.amazon.com/ja_jp/awsaccountbilling/latest/aboutv2/activating-tags.html
upvoted 3 times
a for me
upvoted 2 times
Question #460 Topic 1
A company wants to securely exchange data between its software as a service (SaaS) application Salesforce account and Amazon S3. The
company must encrypt the data at rest by using AWS Key Management Service (AWS KMS) customer managed keys (CMKs). The company must
also encrypt the data in transit. The company has enabled API access for the Salesforce account.
A. Create AWS Lambda functions to transfer the data securely from Salesforce to Amazon S3.
B. Create an AWS Step Functions workflow. Define the task to transfer the data securely from Salesforce to Amazon S3.
C. Create Amazon AppFlow flows to transfer the data securely from Salesforce to Amazon S3.
D. Create a custom connector for Salesforce to transfer the data securely from Salesforce to Amazon S3.
Correct Answer: C
Selected Answer: C
Amazon AppFlow is a fully managed integration service that allows you to securely transfer data between different SaaS applications and AWS
services. It provides built-in encryption options and supports encryption in transit using SSL/TLS protocols. With AppFlow, you can configure the
data transfer flow from Salesforce to Amazon S3, ensuring data encryption at rest by utilizing AWS KMS CMKs.
upvoted 14 times
Selected Answer: C
° Amazon AppFlow can securely transfer data between Salesforce and Amazon S3.
° AppFlow supports encrypting data at rest in S3 using KMS CMKs.
° AppFlow supports encrypting data in transit using HTTPS/TLS.
° AppFlow provides built-in support and templates for Salesforce and S3, requiring less custom configuration than solutions like Lambda, Step
Functions, or custom connectors.
° So Amazon AppFlow is the easiest way to meet all the requirements of securely transferring data between Salesforce and S3 with encryption at
rest and in transit.
upvoted 7 times
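For illustration, a boto3 sketch of option C, assuming a Salesforce connector profile named salesforce-prod already exists and using placeholder KMS key and bucket names:

    import boto3

    appflow = boto3.client("appflow")

    # The customer managed KMS key (kmsArn) encrypts the flow data at rest;
    # transfers run over TLS. All names and ARNs are placeholders.
    appflow.create_flow(
        flowName="salesforce-accounts-to-s3",
        kmsArn="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
        triggerConfig={"triggerType": "OnDemand"},
        sourceFlowConfig={
            "connectorType": "Salesforce",
            "connectorProfileName": "salesforce-prod",
            "sourceConnectorProperties": {"Salesforce": {"object": "Account"}},
        },
        destinationFlowConfigList=[{
            "connectorType": "S3",
            "destinationConnectorProperties": {"S3": {"bucketName": "sf-export-bucket"}},
        }],
        tasks=[{
            "taskType": "Map_all",
            "sourceFields": [],
            "connectorOperator": {"Salesforce": "NO_OP"},
        }],
    )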
Selected Answer: C
With Amazon AppFlow automate bi-directional data flows between SaaS applications and AWS services in just a few clicks
upvoted 1 times
Selected Answer: C
https://docs.aws.amazon.com/appflow/latest/userguide/what-is-appflow.html
upvoted 2 times
https://docs.aws.amazon.com/appflow/latest/userguide/salesforce.html
upvoted 3 times
A company is developing a mobile gaming app in a single AWS Region. The app runs on multiple Amazon EC2 instances in an Auto Scaling group.
The company stores the app data in Amazon DynamoDB. The app communicates by using TCP traffic and UDP traffic between the users and the
servers. The application will be used globally. The company wants to ensure the lowest possible latency for all users.
A. Use AWS Global Accelerator to create an accelerator. Create an Application Load Balancer (ALB) behind an accelerator endpoint that uses
Global Accelerator integration and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB.
B. Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB) behind an accelerator endpoint that uses
Global Accelerator integration and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB.
C. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create a Network Load Balancer (NLB) behind the endpoint and
listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB. Update CloudFront to use the NLB as the
origin.
D. Create an Amazon CloudFront content delivery network (CDN) endpoint. Create an Application Load Balancer (ALB) behind the endpoint
and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the ALB. Update CloudFront to use the ALB as
the origin.
Correct Answer: B
Selected Answer: B
Selected Answer: B
UDP == NLB
NLB can't be used with CloudFront, so we have to go with AWS Global Accelerator.
upvoted 3 times
Use AWS Global Accelerator to create an accelerator. Create a Network Load Balancer (NLB) behind an accelerator endpoint that uses Global
Accelerator integration and listening on the TCP and UDP ports. Update the Auto Scaling group to register instances on the NLB
upvoted 3 times
Selected Answer: B
Selected Answer: B
Clearly B.
upvoted 1 times
Selected Answer: B
NLB + Accelerator
upvoted 3 times
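For illustration, a boto3 sketch of option B, assuming a placeholder NLB ARN in us-east-1 and game traffic on port 7777; note the Global Accelerator API itself is served from us-west-2:

    import boto3

    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    acc = ga.create_accelerator(Name="game-accelerator", Enabled=True)

    # One listener per protocol; the game uses both TCP and UDP on port 7777 (placeholder).
    for proto in ("TCP", "UDP"):
        listener = ga.create_listener(
            AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
            Protocol=proto,
            PortRanges=[{"FromPort": 7777, "ToPort": 7777}],
        )
        ga.create_endpoint_group(
            ListenerArn=listener["Listener"]["ListenerArn"],
            EndpointGroupRegion="us-east-1",
            EndpointConfigurations=[{"EndpointId":
                "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game/abc123"}],
        )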
AWS Global Accelerator is a better solution for the mobile gaming app than CloudFront
upvoted 3 times
Question #462 Topic 1
A company has an application that processes customer orders. The company hosts the application on an Amazon EC2 instance that saves the
orders to an Amazon Aurora database. Occasionally when traffic is high the workload does not process orders fast enough.
What should a solutions architect do to write the orders reliably to the database as quickly as possible?
A. Increase the instance size of the EC2 instance when traffic is high. Write orders to Amazon Simple Notification Service (Amazon SNS).
B. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application
Load Balancer to read from the SQS queue and process orders into the database.
C. Write orders to Amazon Simple Notification Service (Amazon SNS). Subscribe the database endpoint to the SNS topic. Use EC2 instances
in an Auto Scaling group behind an Application Load Balancer to read from the SNS topic.
D. Write orders to an Amazon Simple Queue Service (Amazon SQS) queue when the EC2 instance reaches CPU threshold limits. Use scheduled
scaling of EC2 instances in an Auto Scaling group behind an Application Load Balancer to read from the SQS queue and process orders into
the database.
Correct Answer: B
Selected Answer: B
By decoupling the write operation from the processing operation using SQS, you ensure that the orders are reliably stored in the queue, regardless
of the processing capacity of the EC2 instances. This allows the processing to be performed at a scalable rate based on the available EC2 instances,
improving the overall reliability and speed of order processing.
upvoted 11 times
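For illustration, a minimal boto3 sketch of option B's decoupling, with a placeholder queue URL and a hypothetical write_to_aurora callable standing in for the database write:

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

    # Producer (web tier): enqueue the order instead of writing to Aurora directly.
    def enqueue_order(order_json: str) -> None:
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=order_json)

    # Consumer (Auto Scaling worker): drain the queue and write to the database.
    def process_orders(write_to_aurora) -> None:
        while True:
            resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
            for msg in resp.get("Messages", []):
                write_to_aurora(msg["Body"])   # hypothetical DB writer
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

Because the queue absorbs traffic spikes, orders are never lost even when the workers temporarily fall behind.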
IN MY EXAM
upvoted 6 times
Selected Answer: B
Decoupling the order processing from the application using Amazon SQS and leveraging Auto Scaling to handle the processing of orders based on
the workload in the SQS queue is indeed the most efficient and scalable approach. This architecture addresses both reliability and performance
concerns during traffic spikes.
upvoted 3 times
Selected Answer: B
Write orders to an Amazon Simple Queue Service (Amazon SQS) queue. Use EC2 instances in an Auto Scaling group behind an Application Load
Balancer to read from the SQS queue and process orders into the database.
upvoted 2 times
100% B.
upvoted 2 times
An IoT company is releasing a mattress that has sensors to collect data about a user’s sleep. The sensors will send data to an Amazon S3 bucket.
The sensors collect approximately 2 MB of data every night for each mattress. The company must process and summarize the data for each
mattress. The results need to be available as soon as possible. Data processing will require 1 GB of memory and will finish within 30 seconds.
Correct Answer: C
Selected Answer: C
AWS Lambda charges you based on the number of invocations and the execution time of your function. Since the data processing job is relatively
small (2 MB of data), Lambda is a cost-effective choice. You only pay for the actual usage without the need to provision and maintain infrastructure
upvoted 6 times
Note: Lambda allocates CPU power in proportion to the amount of memory configured. You can increase or decrease the memory and CPU
power allocated to your function using the Memory (MB) setting. At 1,769 MB, a function has the equivalent of one vCPU.
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
upvoted 2 times
Selected Answer: C
"processing will require 1 GB of memory and will finish within 30 seconds", perfect for AWS Lambda.
upvoted 3 times
Selected Answer: C
The data processing is lightweight, only requiring 1 GB memory and finishing in under 30 seconds. Lambda is designed for short, transient
workloads like this.
Lambda scales automatically, invoking the function as needed when new data arrives. No servers to manage.
Lambda has a very low cost. You only pay for the compute time used to run the function, billed in 100ms increments. Much cheaper than
provisioning EMR or Glue.
Processing can begin as soon as new data hits the S3 bucket by triggering the Lambda function. Provides low latency.
upvoted 4 times
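For illustration, a sketch of a Lambda handler triggered by the S3 event notification, assuming the sensor files are JSON arrays of readings and that the key layout and field names below are placeholders:

    import json
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # One record per uploaded sensor file (~2 MB each).
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            readings = json.loads(body)                 # assumed JSON payload
            summary = {
                "mattress_id": key.split("/")[0],       # assumed key layout
                "samples": len(readings),
                "avg_value": sum(r["value"] for r in readings) / max(len(readings), 1),
            }
            # Write the nightly summary next to the raw data.
            s3.put_object(Bucket=bucket, Key=f"summaries/{key}", Body=json.dumps(summary).encode())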
A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management
wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.
B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the
snapshot.
C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute
D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53
Correct Answer: A
"minimize database downtime" so why create a new DB just modify the existing one so no time is wasted.
upvoted 5 times
Selected Answer: A
The instance doesn't automatically convert to Multi-AZ immediately. By default it will convert at the next maintenance window, but you can apply the change
immediately. Compared to B, this is much better. C and D involve too many changes overall, so they are unsuitable.
upvoted 2 times
Selected Answer: A
A. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option
upvoted 4 times
Selected Answer: A
Selected Answer: A
A) https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZSingleStandby.html#Concepts.MultiAZ.Migrating
upvoted 1 times
Compared to other solutions that involve creating new instances, restoring snapshots, or setting up replication manually, converting to a Multi-AZ
deployment is a simpler and more streamlined approach with lower overhead.
Overall, option A offers a cost-effective and efficient way to minimize database downtime without requiring significant changes or additional
complexities.
upvoted 2 times
Selected Answer: A
I guess A.
upvoted 3 times
Question #465 Topic 1
A company is developing an application to support customer demands. The company wants to deploy the application on multiple Amazon EC2
Nitro-based instances within the same Availability Zone. The company also wants to give the application the ability to write to multiple block
storage volumes in multiple EC2 Nitro-based instances simultaneously to achieve higher application availability.
A. Use General Purpose SSD (gp3) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
B. Use Throughput Optimized HDD (st1) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
D. Use General Purpose SSD (gp2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
Correct Answer: C
Selected Answer: C
Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 and io2) volumes.
upvoted 9 times
Selected Answer: C
hdd<gp2<gp3<io2
upvoted 6 times
Selected Answer: C
AWS IO2 does support Multi-Attach. Multi-Attach allows you to share access to an EBS data volume between up to 16 Nitro-based EC2 instances
within the same Availability Zone. Each attached instance has full read and write permission to the shared volume. This feature is intended to make
it easier to achieve higher application availability for customers that want to deploy applications that manage storage consistency from multiple
writers in shared storage infrastructure. However, please note that Multi-Attach on io2 is available in certain regions only.
upvoted 5 times
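For illustration, a minimal boto3 sketch of option C, with placeholder instance IDs; the volume and all attached instances must sit in the same Availability Zone for Multi-Attach to work:

    import boto3

    ec2 = boto3.client("ec2")

    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",      # all attached instances must be in this AZ
        VolumeType="io2",
        Size=100,
        Iops=3000,
        MultiAttachEnabled=True,            # only supported on io1/io2
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # Attach the same volume to two Nitro-based instances (placeholder IDs).
    for instance_id in ["i-0aaaaexample", "i-0bbbbexample"]:
        ec2.attach_volume(VolumeId=vol["VolumeId"], InstanceId=instance_id, Device="/dev/sdf")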
Selected Answer: C
C. Use Provisioned IOPS SSD (io2) EBS volumes with Amazon Elastic Block Store (Amazon EBS) Multi-Attach
upvoted 4 times
Selected Answer: C
Multi-Attach is supported exclusively on Provisioned IOPS SSD (io1 and io2) volumes.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html#:~:text=Multi%2DAttach%20is%20supported%20exclusively%20on%20Provisioned%20IOPS%20SSD%20(io1%20and%20io2)%20volumes.
upvoted 2 times
Selected Answer: C
Option D suggests using General Purpose SSD (gp2) EBS volumes with Amazon EBS Multi-Attach. While gp2 volumes support multi-attach, gp3
volumes offer a more cost-effective solution with enhanced performance characteristics.
upvoted 1 times
Multi-Attach enabled volumes can be attached to up to 16 instances built on the Nitro System that are in the same Availability Zone. Multi-
Attach is supported exclusively on Provisioned IOPS SSD (io1 or io2) volumes.
upvoted 2 times
While both option C and option D can support Amazon EBS Multi-Attach, using Provisioned IOPS SSD (io2) EBS volumes provides higher
performance and lower latency compared to General Purpose SSD (gp2) volumes. This makes io2 volumes better suited for demanding and
mission-critical applications where performance is crucial.
If the goal is to achieve higher application availability and ensure optimal performance, using Provisioned IOPS SSD (io2) EBS volumes with Multi-
Attach will provide the best results.
upvoted 2 times
Selected Answer: C
c is right
Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances that are in the same
Availability Zone.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volumes-multi.html
nothing about gp
upvoted 2 times
Selected Answer: D
Given that the scenario does not mention any specific requirements for high-performance or specific IOPS needs, using General Purpose SSD (gp2)
EBS volumes with Amazon EBS Multi-Attach (option D) is typically the more cost-effective and suitable choice. General Purpose SSD (gp2) volumes
provide a good balance of performance and cost, making them well-suited for general-purpose workloads.
upvoted 1 times
Plus, FYI, gp3 is the one that gives a good balance of performance and cost, so gp2 is wrong in every way.
Question #466 Topic 1
A company designed a stateless two-tier application that uses Amazon EC2 in a single Availability Zone and an Amazon RDS Multi-AZ DB
instance. New company management wants to ensure the application is highly available.
A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load Balancer
B. Configure the application to take snapshots of the EC2 instances and send them to a different AWS Region
C. Configure the application to use Amazon Route 53 latency-based routing to feed requests to the application
D. Configure Amazon Route 53 rules to handle incoming requests and create a Multi-AZ Application Load Balancer
Correct Answer: A
Selected Answer: A
it's A
upvoted 5 times
Selected Answer: A
A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load Balancer
upvoted 1 times
Highly available = Multi-AZ EC2 Auto Scaling and Application Load Balancer.
upvoted 2 times
Most likely A.
upvoted 1 times
By combining Multi-AZ EC2 Auto Scaling and an Application Load Balancer, you achieve high availability for the EC2 instances hosting your
stateless two-tier application.
upvoted 4 times
Question #467 Topic 1
A company uses AWS Organizations. A member account has purchased a Compute Savings Plan. Because of changes in the workloads inside the
member account, the account no longer receives the full benefit of the Compute Savings Plan commitment. The company uses less than 50% of its purchased compute power.
A. Turn on discount sharing from the Billing Preferences section of the account console in the member account that purchased the Compute
Savings Plan.
B. Turn on discount sharing from the Billing Preferences section of the account console in the company's Organizations management
account.
C. Migrate additional compute workloads from another AWS account to the account that has the Compute Savings Plan.
D. Sell the excess Savings Plan commitment in the Reserved Instance Marketplace.
Correct Answer: B
Selected Answer: B
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html
Sign in to the AWS Management Console and open the AWS Billing console at https://console.aws.amazon.com/billing/.
Selected Answer: D
I'd go with D, due to "The company uses less than 50% of its purchased compute power". Like, why are you sharing it between other accounts of
the company, if the company itself doesn't need it? If you provisioned too much you can sell the overprovisioned capacity on the market. I'd
understand B if it was about the account using about 50% of the plan and other accounts running similar workloads, but no such thing is stated.
upvoted 1 times
The question does not clarify the number of accounts the company has. If they only have one account, I think it is D.
B, it's a generic Compute Savings Plan that can be used for compute workloads in the other accounts.
A doesn't work, discount sharing must be enabled for all accounts (at least for those that provide and share the discounts).
C is not possible, there's a reason why the workloads are in different accounts.
D would be a last resort if there were no other workloads in the organization, but here there are.
upvoted 3 times
Selected Answer: D
I saw a similar question in an older exam; you can sell unused capacity on the marketplace.
upvoted 1 times
Selected Answer: B
B. Turn on discount sharing from the Billing Preferences section of the account console in the company's Organizations management account
upvoted 2 times
Selected Answer: D
"For example, you might want to sell Reserved Instances after moving instances to a new AWS Region, changing to a new instance type, ending
projects before the term expiration, when your business needs change, or if you have unneeded capacity."
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-market-general.html
upvoted 2 times
Selected Answer: B
answer is B.
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/ri-turn-off.html#:~:text=choose%20Save.-,Turning%20on%20shared%20reserved%20instances%20and%20Savings%20Plans%20discounts,-You%20can%20use
upvoted 1 times
The company uses less than 50% of its purchased compute power.
For this reason i believe D is the best solution : https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ri-market-general.html
upvoted 3 times
To summarize, option C (Migrate additional compute workloads from another AWS account to the account that has the Compute Savings Plan) is a
valid solution to address the underutilization of the Compute Savings Plan. However, it involves workload migration and may require careful
planning and coordination. Consider the feasibility and impact of migrating workloads before implementing this solution.
upvoted 2 times
A company is developing a microservices application that will provide a search catalog for customers. The company must use REST APIs to
present the frontend of the application to users. The REST APIs must access the backend services that the company hosts in containers in private
VPC subnets.
A. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a
private subnet. Create a private VPC link for API Gateway to access Amazon ECS.
B. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a private
subnet. Create a private VPC link for API Gateway to access Amazon ECS.
C. Design a WebSocket API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a
private subnet. Create a security group for API Gateway to access Amazon ECS.
D. Design a REST API by using Amazon API Gateway. Host the application in Amazon Elastic Container Service (Amazon ECS) in a private
subnet. Create a security group for API Gateway to access Amazon ECS.
Correct Answer: B
Selected Answer: B
REST API with Amazon API Gateway: REST APIs are the appropriate choice for providing the frontend of the microservices application. Amazon API
Gateway allows you to design, deploy, and manage REST APIs at scale.
Amazon ECS in a Private Subnet: Hosting the application in Amazon ECS in a private subnet ensures that the containers are securely deployed
within the VPC and not directly exposed to the public internet.
Private VPC Link: To enable the REST API in API Gateway to access the backend services hosted in Amazon ECS, you can create a private VPC link.
This establishes a private network connection between the API Gateway and ECS containers, allowing secure communication without traversing the
public internet.
upvoted 13 times
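As a rough illustration (not part of the original question), a REST API private integration uses a VPC link that points at a Network Load Balancer in front of the ECS service. A minimal boto3 sketch, assuming the NLB already exists (all names and ARNs are hypothetical):

    import boto3

    apigw = boto3.client("apigateway", region_name="us-east-1")

    vpc_link = apigw.create_vpc_link(
        name="catalog-vpc-link",  # hypothetical name
        targetArns=["arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/catalog-nlb/abc123"],
    )
    print(vpc_link["id"], vpc_link["status"])  # provisioning is asynchronous

The API's method integration would then use connectionType VPC_LINK with this link's ID, so requests reach the containers without traversing the public internet.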
Question itself says: "The company must use REST APIs", hence WebSocket APIs are not applicable and such options are eliminated straight away.
upvoted 8 times
Selected Answer: B
"VPC links enable you to create private integrations that connect your HTTP API routes to private resources in a VPC, such as Application Load
Balancers or Amazon ECS container-based applications."
upvoted 1 times
Selected Answer: B
Selected Answer: B
To allow the REST APIs to securely access the backend, a private VPC link should be created from API Gateway to the ECS containers. A private VPC
link provides private connectivity between API Gateway and the VPC without using public IP addresses or requiring an internet gateway/NAT
upvoted 3 times
Selected Answer: B
https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-private-integration.html
upvoted 1 times
A VPC link is a resource in Amazon API Gateway that allows for connecting API routes to private resources inside a VPC.
upvoted 2 times
Selected Answer: B
A company stores raw collected data in an Amazon S3 bucket. The data is used for several types of analytics on behalf of the company's
customers. The type of analytics requested determines the access pattern on the S3 objects.
The company cannot predict or control the access pattern. The company wants to reduce its S3 costs.
A. Use S3 replication to transition infrequently accessed objects to S3 Standard-Infrequent Access (S3 Standard-IA)
B. Use S3 Lifecycle rules to transition objects from S3 Standard to Standard-Infrequent Access (S3 Standard-IA)
D. Use S3 Inventory to identify and transition objects that have not been accessed from S3 Standard to S3 Intelligent-Tiering
Correct Answer: C
Selected Answer: C
Selected Answer: C
A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The applications must initiate communications with other
external applications using the internet. However, the company's security policy states that no external service can initiate a connection to the
EC2 instances.
A. Create a NAT gateway and make it the destination of the subnet's route table
B. Create an internet gateway and make it the destination of the subnet's route table
C. Create a virtual private gateway and make it the destination of the subnet's route table
D. Create an egress-only internet gateway and make it the destination of the subnet's route table
Correct Answer: D
For exam,
egress-only internet gateway: IPv6
NAT gateway: IPv4
upvoted 49 times
"An egress-only internet gateway is for use with IPv6 traffic only. To enable outbound-only internet communication over IPv4, use a NAT
gateway instead."
https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
upvoted 1 times
Selected Answer: D
An egress-only internet gateway (EIGW) is specifically designed for IPv6 traffic and provides outbound IPv6 internet access while blocking
inbound IPv6 traffic. It satisfies the requirement of preventing external services from initiating connections to the EC2 instances while allowing the
instances to initiate outbound communications.
upvoted 8 times
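A minimal boto3 sketch of option D (the VPC and route table IDs are hypothetical): create the egress-only internet gateway and add an IPv6 default route that points to it.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")
    eigw_id = eigw["EgressOnlyInternetGateway"]["EgressOnlyInternetGatewayId"]

    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",
        DestinationIpv6CidrBlock="::/0",         # all outbound IPv6 traffic
        EgressOnlyInternetGatewayId=eigw_id,     # stateful: replies allowed, inbound-initiated traffic blocked
    )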
Selected Answer: D
"An egress-only internet gateway is for use with IPv6 traffic only. To enable outbound-only internet communication over IPv4, use a NAT gateway
instead."
https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
upvoted 1 times
Selected Answer: D
D. Create an egress-only internet gateway and make it the destination of the subnet's route table
upvoted 1 times
Selected Answer: D
Outbound traffic only = Create an egress-only internet gateway and make it the destination of the subnet's route table
upvoted 1 times
Selected Answer: D
A company is creating an application that runs on containers in a VPC. The application stores and accesses data in an Amazon S3 bucket. During
the development phase, the application will store and access 1 TB of data in Amazon S3 each day. The company wants to minimize costs and prevent traffic from traversing the internet.
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in the VPC
D. Create an interface endpoint for Amazon S3 in the VPC. Associate this endpoint with all route tables in the VPC
Correct Answer: C
Amazon S3 supports both gateway endpoints and interface endpoints. With a gateway endpoint, you can access Amazon S3 from your VPC,
without requiring an internet gateway or NAT device for your VPC, and with no additional cost. However, gateway endpoints do not allow access
from on-premises networks, from peered VPCs in other AWS Regions, or through a transit gateway. For those scenarios, you must use an interface
endpoint, which is available for an additional cost.
upvoted 10 times
Selected Answer: C
Gateway VPC Endpoint: A gateway VPC endpoint enables private connectivity between a VPC and Amazon S3. It allows direct access to Amazon S3
without the need for internet gateways, NAT devices, VPN connections, or AWS Direct Connect.
Minimize Internet Traffic: By creating a gateway VPC endpoint for Amazon S3 and associating it with all route tables in the VPC, the traffic between
the VPC and Amazon S3 will be kept within the AWS network. This helps in minimizing data transfer costs and prevents the need for traffic to
traverse the internet.
Cost-Effective: With a gateway VPC endpoint, the data transfer between the application running in the VPC and the S3 bucket stays within the AWS
network, reducing the need for data transfer across the internet. This can result in cost savings, especially when dealing with large amounts of data
upvoted 6 times
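For illustration, a minimal boto3 sketch of option C (the VPC, Region, and route table IDs are hypothetical):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",      # S3 service name in the VPC's Region
        RouteTableIds=["rtb-0aaa111", "rtb-0bbb222"],  # associate with all route tables in the VPC
    )

Gateway endpoints for S3 add prefix-list routes to the associated route tables and have no hourly or data processing charge, unlike interface endpoints.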
Selected Answer: C
https://aws.amazon.com/blogs/architecture/choosing-your-vpc-endpoint-strategy-for-amazon-s3/
The only reason why option D is wrong is because "Associate this endpoint with all route tables in the VPC" makes no sense.
upvoted 1 times
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in the VPC
upvoted 1 times
Selected Answer: C
Prevent traffic from traversing the internet = Gateway VPC endpoint for S3.
upvoted 1 times
A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little
latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes.
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint.
B. Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas.
C. Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint.
D. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of
DynamoDB.
Correct Answer: A
Selected Answer: A
B and C do not reduce latency. D would reduce latency but require significant application changes.
upvoted 1 times
Selected Answer: C
Zero code changes with C.
A, B, D: in-memory cache, read replicas, ElastiCache. The chat application's content is dynamic, so a cache would still have to pull data from the production database.
upvoted 1 times
Selected Answer: A
A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint.
upvoted 1 times
A read replica does improve read throughput, but it cannot improve latency because there is always replication lag between replicas. So A works and B does not.
upvoted 1 times
Selected Answer: A
Selected Answer: A
Selected Answer: A
Amazon DynamoDB Accelerator (DAX): DAX is an in-memory cache for DynamoDB that provides low-latency access to frequently accessed data. By
configuring DAX for the new messages table, read requests for the table will be served from the DAX cache, significantly reducing the latency.
Minimal Application Changes: With DAX, the application code can be updated to use the DAX endpoint instead of the standard DynamoDB
endpoint. This change is relatively minimal and does not require extensive modifications to the application's data access logic.
Low Latency: DAX caches frequently accessed data in memory, allowing subsequent read requests for the same data to be served with minimal
latency. This ensures that new messages can be read by users with minimal delay.
upvoted 3 times
a is valid
upvoted 2 times
Question #473 Topic 1
A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website
traffic is increasing, and the company is concerned about a potential increase in cost.
B. Create an Amazon ElastiCache cluster. Connect the ALB to the ElastiCache cluster to serve cached files
C. Create an AWS WAF web ACL and associate it with the ALB. Add a rule to the web ACL to cache static files
D. Create a second ALB in an alternative AWS Region. Route user traffic to the closest Region to minimize data transfer costs
Correct Answer: A
Selected Answer: A
The problem with this question is that no sane AWS architect would choose any of these options; they would instead serve the static content from S3 behind a cache. But given the choices, A is the
only one that will solve the problem within reasonable cost.
upvoted 3 times
Selected Answer: A
Selected Answer: A
Amazon CloudFront: CloudFront is a content delivery network (CDN) service that caches content at edge locations worldwide. By creating a
CloudFront distribution, static content from the website can be cached at edge locations, reducing the load on the EC2 instances and improving
the overall performance.
Caching Static Files: Since the website serves static content, caching these files at CloudFront edge locations can significantly reduce the number of
requests forwarded to the EC2 instances. This helps to lower the overall cost by offloading traffic from the instances and reducing the data transfer
costs.
upvoted 4 times
a for me
upvoted 2 times
Question #474 Topic 1
A company has multiple VPCs across AWS Regions to support and run workloads that are isolated from workloads in other Regions. Because of a
recent application launch requirement, the company’s VPCs must communicate with all other VPCs across all Regions.
Which solution will meet these requirements with the LEAST amount of administrative effort?
A. Use VPC peering to manage VPC communication in a single Region. Use VPC peering across Regions to manage VPC communications.
B. Use AWS Direct Connect gateways across all Regions to connect VPCs across regions and manage VPC communications.
C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit Gateway peering across Regions to manage VPC
communications.
D. Use AWS PrivateLink across all Regions to connect VPCs across Regions and manage VPC communications
Correct Answer: C
The correct answer is: C. Use AWS Transit Gateway to manage VPC communication in a single Region and Transit Gateway peering across Regions
to manage VPC communications.
AWS Transit Gateway is a network hub that you can use to connect your VPCs and on-premises networks. It provides a single point of control for
managing your network traffic, and it can help you to reduce the number of connections that you need to manage.
Transit Gateway peering allows you to connect two Transit Gateways in different Regions. This can help you to create a global network that spans
multiple Regions.
To use Transit Gateway to manage VPC communication in a single Region, you would create a Transit Gateway in each Region. You would then
attach your VPCs to the Transit Gateway.
To use Transit Gateway peering to manage VPC communication across Regions, you would create a Transit Gateway peering connection between
the Transit Gateways in each Region.
upvoted 23 times
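A minimal boto3 sketch of the hub-and-spoke setup described above (all IDs, the account, and the peer Region are hypothetical); the peering attachment must still be accepted in the peer Region:

    import boto3

    ec2_use1 = boto3.client("ec2", region_name="us-east-1")

    tgw = ec2_use1.create_transit_gateway(Description="hub-us-east-1")
    tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

    # Attach a local VPC to the transit gateway
    ec2_use1.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-aaa111", "subnet-bbb222"],
    )

    # Peer with a transit gateway created the same way in another Region
    ec2_use1.create_transit_gateway_peering_attachment(
        TransitGatewayId=tgw_id,
        PeerTransitGatewayId="tgw-0fedcba9876543210",
        PeerAccountId="111122223333",
        PeerRegion="eu-west-1",
    )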
Selected Answer: C
AWS Transit Gateway: Transit Gateway is a highly scalable service that simplifies network connectivity between VPCs and on-premises networks. By
using a Transit Gateway in a single Region, you can centralize VPC communication management and reduce administrative effort.
Transit Gateway Peering: Transit Gateway supports peering connections across AWS Regions, allowing you to establish connectivity between VPCs
in different Regions without the need for complex VPC peering configurations. This simplifies the management of VPC communications across
Regions.
upvoted 7 times
Selected Answer: C
C is like a managed solution for A. A can work but with a lot of overhead (CIDR blocks uniqueness requirement). B and D are not the right products
upvoted 1 times
Selected Answer: C
Selected Answer: C
Definitely C.
Very well explained by @Felix_br
upvoted 1 times
A company is designing a containerized application that will use Amazon Elastic Container Service (Amazon ECS). The application needs to
access a shared file system that is highly durable and can recover data to another AWS Region with a recovery point objective (RPO) of 8 hours.
The file system needs to provide a mount target in each Availability Zone within a Region.
A solutions architect wants to use AWS Backup to manage the replication to another Region.
C. Amazon Elastic File System (Amazon EFS) with the Standard storage class
Correct Answer: C
Selected Answer: C
https://aws.amazon.com/efs/faq/
Q: What is Amazon EFS Replication?
EFS Replication can replicate your file system data to another Region or within the same Region without requiring additional infrastructure or a
custom process. Amazon EFS Replication automatically and transparently replicates your data to a second file system in a Region or AZ of your
choice. You can use the Amazon EFS console, AWS CLI, and APIs to activate replication on an existing file system. EFS Replication is continual and
provides a recovery point objective (RPO) and a recovery time objective (RTO) of minutes, helping you meet your compliance and business
continuity goals.
upvoted 11 times
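As a small illustration of the EFS Replication feature quoted above, a boto3 sketch for an existing file system (the file system ID and Regions are hypothetical); AWS Backup policies can separately create and copy backups of the same file system:

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    efs.create_replication_configuration(
        SourceFileSystemId="fs-0123456789abcdef0",   # hypothetical source file system
        Destinations=[{"Region": "us-west-2"}],      # replica is created automatically in the target Region
    )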
Selected Answer: C
The key thing to notice in the question is the "recovery point objective (RPO) of 8 hours". An 8-hour RPO is easily met by EFS, so there is no need for costlier options that are not built for this
shared-file-system use case, such as FSx for NetApp ONTAP (a proprietary data platform), FSx for OpenZFS, or FSx for Windows File Server (a file system for Windows-compatible workloads).
upvoted 2 times
In the absence of this information, we can only make an assumption based on the provided requirements. The requirement for a shared file system
that can recover data to another AWS Region with a recovery point objective (RPO) of 8 hours, and the need for a mount target in each Availability
Zone within a Region, are all natively supported by Amazon EFS with the Standard storage class.
While Amazon FSx for NetApp ONTAP does provide shared file systems and supports both Windows and Linux, it does not natively support
replication to another region through AWS Backup.
upvoted 2 times
Selected Answer: C
C. Amazon Elastic File System (Amazon EFS) with the Standard storage class
upvoted 1 times
Selected Answer: B
B or C, but since question didn't mention operating system type, I guess we should go with B because it is more versatile (EFS supports Linux only)
although ECS containers do support windows instances...
upvoted 1 times
https://aws.amazon.com/efs/faq/
https://aws.amazon.com/fsx/netapp-ontap/faqs/
upvoted 1 times
C: EFS
upvoted 2 times
AWS Backup can manage replication of EFS to another region as mentioned below
https://docs.aws.amazon.com/efs/latest/ug/awsbackup.html
upvoted 1 times
During a disaster or fault within an AZ affecting all copies of your data, you might experience loss of data that has not been replicated using
Amazon EFS Replication. EFS Replication is designed to meet a recovery point objective (RPO) and recovery time objective (RTO) of minutes. You
can use AWS Backup to store additional copies of your file system data and restore them to a new file system in an AZ or Region of your choice.
Amazon EFS file system backup data created and managed by AWS Backup is replicated to three AZs and is designed for 99.999999999% (11 nines) durability.
upvoted 1 times
Selected Answer: B
shared file system that is highly durable and can recover data
upvoted 2 times
A company is expecting rapid growth in the near future. A solutions architect needs to configure existing users and grant permissions to new
users on AWS. The solutions architect has decided to create IAM groups. The solutions architect will add the new users to IAM groups based on
department.
Which additional action is the MOST secure way to grant permissions to the new users?
B. Create IAM roles that have least privilege permission. Attach the roles to the IAM groups
C. Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups
D. Create IAM roles. Associate the roles with a permissions boundary that defines the maximum permissions
Correct Answer: C
Selected Answer: C
Option B is incorrect because IAM roles are not directly attached to IAM groups.
upvoted 9 times
Selected Answer: C
Agreed with C
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_attach-policy.html
Selected Answer: C
"Manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources."
"An IAM role is an identity within your AWS account that has specific permissions. It's similar to an IAM user, but isn't associated with a specific
person."
"IAM roles do not have any permanent credentials associated with them and are instead assumed by IAM users, AWS services, or applications that
need temporary security credentials to access AWS resources"
upvoted 1 times
https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html
https://blog.awsfundamentals.com/aws-iam-roles-terms-concepts-and-examples
upvoted 1 times
Selected Answer: C
A is wrong
SCPs are mainly used along with AWS Organizations organizational units (OUs). SCPs do not replace IAM Policies such that they do not provide
actual permissions. To perform an action, you would still need to grant appropriate IAM Policy permissions.
upvoted 2 times
Selected Answer: C
Create an IAM policy that grants least privilege permission. Attach the policy to the IAM groups
upvoted 1 times
Selected Answer: C
An IAM policy is an object in AWS that, when associated with an identity or resource, defines their permissions. Permissions in the policies
determine whether a request is allowed or denied. You manage access in AWS by creating policies and attaching them to IAM identities (users,
groups of users, or roles) or AWS resources.
So, option B will also work.
But Since I can only choose one, C would be it.
upvoted 2 times
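For illustration, a minimal boto3 sketch of option C (the group name, actions, and bucket are hypothetical): define a least-privilege customer managed policy and attach it to the department's IAM group.

    import json
    import boto3

    iam = boto3.client("iam")

    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],   # only what the department needs
            "Resource": ["arn:aws:s3:::dept-reports", "arn:aws:s3:::dept-reports/*"],
        }],
    }

    policy = iam.create_policy(
        PolicyName="FinanceReadReports",
        PolicyDocument=json.dumps(policy_doc),
    )

    iam.attach_group_policy(
        GroupName="finance",
        PolicyArn=policy["Policy"]["Arn"],
    )

New users added to the group inherit exactly these permissions, which is why attaching policies (not roles) to groups is the mechanism IAM provides.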
Selected Answer: C
Selected Answer: C
should be b
upvoted 2 times
IAM policies are attached to IAM roles, IAM groups or IAM users. IAM roles are used by services.
upvoted 1 times
Question #477 Topic 1
A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket. An administrator has created the following IAM
policy to provide access to the bucket and applied that policy to the group. The group is not able to delete objects in the bucket. The company follows least-privilege access rules.
Which statement should a solutions architect add to the policy to correct bucket access?
A.
B.
C.
D.
Correct Answer: D
Selected Answer: D
Option B's action is s3:*, which means all actions. The company follows least-privilege access rules. Hence option D.
upvoted 5 times
Selected Answer: D
D is the answer
upvoted 3 times
Selected Answer: D
D for sure
upvoted 1 times
Selected Answer: D
d work
upvoted 4 times
A law firm needs to share information with the public. The information includes hundreds of files that must be publicly readable. Modifications or
deletions of the files by anyone before a designated future date are prohibited.
Which solution will meet these requirements in the MOST secure way?
A. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated
date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
C. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object
modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket.
D. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object
Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the
S3 bucket.
Correct Answer: B
Selected Answer: B
Option A allows the files to be modified or deleted by anyone with read-only IAM permissions. Option C allows the files to be modified or deleted
by anyone who can trigger the AWS Lambda function.
Option D allows the files to be modified or deleted by anyone with read-only IAM permissions to the S3 bucket
upvoted 5 times
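A minimal boto3 sketch of option B (the bucket name, retention period, and Region are hypothetical); note that the account's Block Public Access settings must also permit a public bucket policy, and static website hosting would be enabled separately:

    import json
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    bucket = "lawfirm-public-files"   # hypothetical

    # Enabling Object Lock at creation time also enables S3 Versioning
    s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

    s3.put_object_lock_configuration(
        Bucket=bucket,
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},  # until the designated date
        },
    )

    s3.put_bucket_policy(
        Bucket=bucket,
        Policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",          # read-only public access
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }],
        }),
    )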
Selected Answer: B
Selected Answer: B
Selected Answer: B
S3 bucket policy
upvoted 3 times
Selected Answer: B
Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date.
Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
upvoted 2 times
Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date.
Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
upvoted 3 times
Selected Answer: B
Clearly B.
upvoted 2 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html
upvoted 4 times
Question #479 Topic 1
A company is making a prototype of the infrastructure for its new website by manually provisioning the necessary infrastructure. This
infrastructure includes an Auto Scaling group, an Application Load Balancer and an Amazon RDS database. After the configuration has been
thoroughly validated, the company wants the capability to immediately deploy the infrastructure for development and production use in two Availability Zones.
A. Use AWS Systems Manager to replicate and provision the prototype infrastructure in two Availability Zones
B. Define the infrastructure as a template by using the prototype infrastructure as a guide. Deploy the infrastructure with AWS CloudFormation.
C. Use AWS Config to record the inventory of resources that are used in the prototype infrastructure. Use AWS Config to deploy the prototype
D. Use AWS Elastic Beanstalk and configure it to use an automated reference to the prototype infrastructure to automatically deploy new
Correct Answer: B
Selected Answer: B
The difference between CloudFormation and Elastic Beanstalk might be tricky, but just for the exam think:
A: Wrong product
C: Wrong product
D: Elastic Beanstalk mainly handles the application environment (EC2), so the RDS database won't be reproduced automatically
B: CloudFormation = IaC
upvoted 2 times
Selected Answer: B
Selected Answer: B
Clearly B.
upvoted 2 times
AWS CloudFormation is a service that allows you to define and provision infrastructure as code. This means that you can create a template that
describes the resources you want to create, and then use CloudFormation to deploy those resources in an automated fashion.
In this case, the solutions architect should define the infrastructure as a template by using the prototype infrastructure as a guide. The template
should include resources for an Auto Scaling group, an Application Load Balancer, and an Amazon RDS database. Once the template is created, the
solutions architect can use CloudFormation to deploy the infrastructure in two Availability Zones.
upvoted 3 times
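For illustration, a minimal boto3 sketch that deploys one validated template to both environments (the template file name and parameter are hypothetical):

    import boto3

    cfn = boto3.client("cloudformation", region_name="us-east-1")

    with open("three-tier.yaml") as f:
        template_body = f.read()   # template defining the ASG, ALB, and RDS resources

    for env in ("development", "production"):
        cfn.create_stack(
            StackName=f"website-{env}",
            TemplateBody=template_body,
            Parameters=[{"ParameterKey": "Environment", "ParameterValue": env}],
            Capabilities=["CAPABILITY_IAM"],   # only needed if the template creates IAM resources
        )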
Selected Answer: B
b obvious
upvoted 4 times
Question #480 Topic 1
A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object storage. The chief information security officer has
directed that no application traffic between the two services should traverse the public internet.
Which capability should the solutions architect use to meet the compliance requirements?
B. VPC endpoint
C. Private subnet
Correct Answer: B
Selected Answer: B
A VPC endpoint enables you to privately access AWS services without requiring internet gateways, NAT gateways, VPN connections, or AWS Direct
Connect connections. It allows you to connect your VPC directly to supported AWS services, such as Amazon S3, over a private connection within
the AWS network.
By creating a VPC endpoint for Amazon S3, the traffic between your EC2 instances and S3 will stay within the AWS network and won't traverse the
public internet. This provides a more secure and compliant solution, as the data transfer remains within the private network boundaries.
upvoted 9 times
Selected Answer: B
Prevent traffic from traversing the internet = VPC endpoint for S3.
upvoted 3 times
B for sure
upvoted 2 times
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZAmazon RDS for MySQL server forms the database layer Amazon
ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item
to the database. The data in the cache must always match the data in the database.
Correct Answer: B
Selected Answer: B
In the write-through caching strategy, when a customer adds or updates an item in the database, the application first writes the data to the
database and then updates the cache with the same data. This ensures that the cache is always synchronized with the database, as every write
operation triggers an update to the cache.
upvoted 27 times
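A minimal write-through sketch in Python (the Redis endpoint is hypothetical, and a plain dict stands in for the Amazon RDS for MySQL write to keep the example short):

    import json
    import redis

    cache = redis.Redis(host="my-cache.abc123.0001.use1.cache.amazonaws.com", port=6379)
    database = {}   # stand-in for the RDS for MySQL table

    def add_item(item_id: str, item: dict) -> None:
        database[item_id] = item                           # 1) write to the database
        cache.set(f"item:{item_id}", json.dumps(item))     # 2) immediately write the same data to the cache

    def get_item(item_id: str) -> dict:
        cached = cache.get(f"item:{item_id}")
        return json.loads(cached) if cached is not None else database[item_id]

Because every write updates both stores, reads served from the cache always match the database, at the cost of slightly slower writes and cache space used by items that may never be read.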
Adding TTL (Time-to-Live) caching strategy (option C) involves setting an expiration time for cached data. It is useful for scenarios where the
data can be considered valid for a specific period, but it does not guarantee that the data in the cache is always in sync with the database.
AWS AppConfig caching strategy (option D) is a service that helps you deploy and manage application configurations. It is not specifically
designed for caching data synchronization between a database and cache layer.
upvoted 32 times
https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/Strategies.html#Strategies.WriteThrough
upvoted 3 times
write-through caching strategy updates the cache at the same time as the database
upvoted 2 times
Question #482 Topic 1
A company wants to migrate 100 GB of historical data from an on-premises location to an Amazon S3 bucket. The company has a 100 megabits
per second (Mbps) internet connection on premises. The company needs to encrypt the data in transit to the S3 bucket. The company will store new data directly in Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket
B. Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket
D. Set up an IPsec VPN from the on-premises location to AWS. Use the s3 cp command in the AWS CLI to move the data directly to an S3
bucket
Correct Answer: B
Selected Answer: B
AWS DataSync is a fully managed data transfer service that simplifies and automates the process of moving data between on-premises storage and
Amazon S3. It provides secure and efficient data transfer with built-in encryption, ensuring that the data is encrypted in transit.
By using AWS DataSync, the company can easily migrate the 100 GB of historical data from their on-premises location to an S3 bucket. DataSync
will handle the encryption of data in transit and ensure secure transfer.
upvoted 11 times
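For illustration, a minimal boto3 sketch of a DataSync transfer (hostnames, ARNs, and the IAM role are hypothetical; it assumes a DataSync agent has already been activated on premises):

    import boto3

    datasync = boto3.client("datasync", region_name="us-east-1")

    source = datasync.create_location_nfs(
        ServerHostname="fileserver.corp.example.com",   # hypothetical on-premises NFS server
        Subdirectory="/exports/historical",
        OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
    )

    destination = datasync.create_location_s3(
        S3BucketArn="arn:aws:s3:::historical-data-bucket",
        S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
    )

    task = datasync.create_task(
        SourceLocationArn=source["LocationArn"],
        DestinationLocationArn=destination["LocationArn"],
        Name="historical-data-migration",
    )
    datasync.start_task_execution(TaskArn=task["TaskArn"])   # data is encrypted in transit with TLS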
Selected Answer: B
Selected Answer: A
Assertions:
- needs to encrypt the data in transit to the S3 bucket.
- The company will store new data directly in Amazon S3.
Requirements:
- with the LEAST operational overhead
Even though options A and B could do the job, option B requires maintaining a DataSync agent VM, and this is a one-off migration (the company will store new data directly in Amazon S3).
NB: In my view, we should stick to the question and avoid over-interpreting.
upvoted 2 times
Selected Answer: A
By default, all data transmitted from the client computer running the AWS CLI and AWS service endpoints is encrypted by sending everything
through a HTTPS/TLS connection. You don't need to do anything to enable the use of HTTPS/TLS. It is always enabled unless you explicitly disable
it for an individual command by using the --no-verify-ssl command line option.
This is simpler compared to datasync, which will cost operational overhead to configure.
upvoted 1 times
Selected Answer: B
storage data (including metadata) is encrypted in transit, but how it's encrypted throughout the transfer depends on your source and destination
locations.
upvoted 1 times
Happy to be corrected!
upvoted 1 times
Selected Answer: B
Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket
upvoted 3 times
Selected Answer: A
B is a good option but as the volume is not large and the speed is not bad, A requires less operational overhead
upvoted 4 times
Selected Answer: B
Answers A and B are both correct and have low operational overhead, but since the question says the data is in an "on-premises location", I would go with DataSync.
upvoted 1 times
AWS DataSync is a secure, online service that automates and accelerates moving data between on premises and AWS Storage services.
upvoted 1 times
Selected Answer: A
https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html
upvoted 3 times
Selected Answer: B
Using DataSync, the company can easily migrate the 100 GB of historical data to an S3 bucket. DataSync will handle the encryption of data in
transit, so the company does not need to set up a VPN or worry about managing encryption keys.
Option A, using the s3 sync command in the AWS CLI to move the data directly to an S3 bucket, would require more operational overhead as the
company would need to manage the encryption of data in transit themselves. Option D, setting up an IPsec VPN from the on-premises location to
AWS, would also require more operational overhead and would be overkill for this scenario. Option C, using AWS Snowball, could work but would
require more time and resources to order and set up the physical device.
upvoted 4 times
A company containerized a Windows job that runs on .NET 6 Framework under a Windows container. The company wants to run this job in the
AWS Cloud. The job runs every 10 minutes. The job’s runtime varies between 1 minute and 3 minutes.
A. Create an AWS Lambda function based on the container image of the job. Configure Amazon EventBridge to invoke the function every 10
minutes.
B. Use AWS Batch to create a job that uses AWS Fargate resources. Configure the job scheduling to run every 10 minutes.
C. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a scheduled task based on the container image of the job to run the job every 10 minutes.
D. Use Amazon Elastic Container Service (Amazon ECS) on AWS Fargate to run the job. Create a standalone task based on the container image of the job. Use Windows task scheduler to run the job every 10 minutes.
Correct Answer: C
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
upvoted 12 times
Selected Answer: C
By using Amazon ECS on AWS Fargate, you can run the job in a containerized environment while benefiting from the serverless nature of Fargate,
where you only pay for the resources used during the job's execution. Creating a scheduled task based on the container image of the job ensures
that it runs every 10 minutes, meeting the required schedule. This solution provides flexibility, scalability, and cost-effectiveness.
upvoted 8 times
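A minimal boto3 sketch of option C (the cluster, task definition, role, and subnets are hypothetical): an EventBridge rule runs the Fargate task every 10 minutes, so nothing is billed between runs.

    import boto3

    events = boto3.client("events", region_name="us-east-1")

    events.put_rule(Name="dotnet-job-every-10-min", ScheduleExpression="rate(10 minutes)")

    events.put_targets(
        Rule="dotnet-job-every-10-min",
        Targets=[{
            "Id": "run-dotnet-job",
            "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/jobs-cluster",
            "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",   # lets EventBridge call ecs:RunTask
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/dotnet-job:1",
                "TaskCount": 1,
                "LaunchType": "FARGATE",
                "NetworkConfiguration": {
                    "awsvpcConfiguration": {
                        "Subnets": ["subnet-aaa111", "subnet-bbb222"],
                        "AssignPublicIp": "DISABLED",
                    }
                },
            },
        }],
    )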
AAAAAA
upvoted 1 times
Selected Answer: A
https://aws.amazon.com/about-aws/whats-new/2022/02/aws-lambda-adds-support-net6/
upvoted 2 times
I guess this is an old question from before August 2023, when AWS Batch did not support Windows containers, while ECS already did since
September 2021. Thus it would be C, though now B does also work. Since both Batch and ECS are free, we'd pay only for the Fargate resources
(which are identical in both cases), now B and C would be correct.
A doesn't work because Lambda still does not support Windows containers.
D doesn't make sense because the container would have to run 24/7
upvoted 6 times
Selected Answer: B
Selected Answer: B
Selected Answer: C
C works. For A, Lambda supports container images, but the container image must implement the Lambda Runtime API.
upvoted 1 times
As they support Batch on Fargate now (Aug 2023), the correct answer should be B?
upvoted 3 times
Selected Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/csharp-image.html#csharp-image-clients
upvoted 1 times
C is the most cost-effective solution for running a short-lived Windows container job on a schedule.
Using Amazon ECS scheduled tasks on Fargate eliminates the need to provision EC2 resources. You pay only for the duration the task runs.
Scheduled tasks handle scheduling the jobs and scaling resources automatically. This is lower cost than managing your own scaling via Lambda or
Batch.
ECS also supports Windows containers natively unlike Lambda (option A).
Option D still requires provisioning and paying for full time EC2 resources to run a task scheduler even when tasks are not running.
upvoted 2 times
https://docs.aws.amazon.com/batch/latest/userguide/fargate.html#when-to-use-fargate
upvoted 1 times
A company wants to move from many standalone AWS accounts to a consolidated, multi-account architecture. The company plans to create many
new AWS accounts for different business units. The company needs to authenticate access to these AWS accounts by using a centralized corporate directory service.
Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)
A. Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept Amazon Cognito
authentication.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS Single Sign-On) to AWS Directory
Service.
D. Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service
directly.
E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company's corporate directory service.
Correct Answer: AE
Selected Answer: AE
A. By creating a new organization in AWS Organizations, you can establish a consolidated multi-account architecture. This allows you to create and
manage multiple AWS accounts for different business units under a single organization.
E. Setting up AWS IAM Identity Center (AWS Single Sign-On) within the organization enables you to integrate it with the company's corporate
directory service. This integration allows for centralized authentication, where users can sign in using their corporate credentials and access the
AWS accounts within the organization.
Together, these actions create a centralized, multi-account architecture that leverages AWS Organizations for account management and AWS IAM
Identity Center (AWS Single Sign-On) for authentication and access control.
upvoted 10 times
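For illustration, a minimal boto3 sketch of the Organizations part of A (run from the management account; the email and account name are hypothetical). Enabling IAM Identity Center and connecting it to the corporate directory is then configured separately, for example through the console and identity federation:

    import boto3

    orgs = boto3.client("organizations")

    orgs.create_organization(FeatureSet="ALL")   # all features, not just consolidated billing

    orgs.create_account(
        Email="[email protected]",   # hypothetical root email for the new member account
        AccountName="finance-prod",
    )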
Selected Answer: AE
A) Using AWS Organizations allows centralized management of multiple AWS accounts in a single organization. New accounts can easily be created
within the organization.
E) Integrating AWS IAM Identity Center (AWS SSO) with the company's corporate directory enables federated single sign-on. Users can log in once
to access accounts and resources across AWS.
Together, Organizations and IAM Identity Center provide consolidated management and authentication for multiple accounts using existing
corporate credentials.
upvoted 2 times
A: AWS Organizations
E: Authentication, whereas option C (SCP) is about authorization
upvoted 3 times
Selected Answer: AE
Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company's
corporate directory service.
AWS IAM Identity Center (successor to AWS Single Sign-On) helps you securely create or connect your workforce identities and manage their
access centrally across AWS accounts and applications.
https://aws.amazon.com/iam/identity-center/
upvoted 1 times
A company is looking for a solution that can store video archives in AWS from old news footage. The company needs to minimize costs and will
rarely need to restore these files. When the files are needed, they must be available in a maximum of five minutes.
A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
Correct Answer: A
Selected Answer: A
By choosing Expedited retrievals in Amazon S3 Glacier, you can reduce the retrieval time to minutes, making it suitable for scenarios where quick
access is required. Expedited retrievals come with a higher cost per retrieval compared to standard retrievals but provide faster access to your
archived data.
upvoted 11 times
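A minimal boto3 sketch of an Expedited restore request (the bucket, key, and number of days are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    s3.restore_object(
        Bucket="news-footage-archive",
        Key="1998/election-night.mov",
        RestoreRequest={
            "Days": 1,                                      # how long the temporary restored copy stays available
            "GlacierJobParameters": {"Tier": "Expedited"},  # typically 1-5 minutes for most archives
        },
    )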
Selected Answer: A
The most cost-effective solution that also meets the requirement of having the files available within a maximum of five minutes when needed is:
A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
Amazon S3 Glacier is designed for long-term storage of data archives, providing a highly durable and secure solution at a low cost. With Expedited
retrievals, data can be retrieved within a few minutes, which meets the requirement of having the files available within five minutes when needed.
This option provides the balance between cost-effectiveness and retrieval speed, making it the best choice for the company's needs.
upvoted 2 times
Occasional cost for retrieval from Glacier is nothing compared to the huge storage cost savings compared to C. Still meets the five minute
requirement.
upvoted 1 times
Selected Answer: C
The retrieval price will play an important role here. I selected option C because with "Glacier and Expedited retrievals" it's around $0.004 per GB/month, and for Standard-IA it's $0.0125 per GB/month.
https://www.cloudforecast.io/blog/aws-s3-pricing-and-optimization-guide/
upvoted 1 times
Selected Answer: A
I am going with option A, but it is a poorly written question. "For all but the largest archives (more than 250 MB), data accessed by using Expedited
retrievals is typically made available within 1–5 minutes. "
upvoted 1 times
Selected Answer: A
Answer - A
Fast availability: Although retrieval times for objects stored in Amazon S3 Glacier typically range from minutes to hours, you can use the Expedited
retrievals option to expedite access to your archives. By using Expedited retrievals, the files can be made available in a maximum of five minutes
when needed. However, Expedited retrievals do incur higher costs compared to standard retrievals.
upvoted 1 times
Expedited retrievals are designed for urgent requests and can provide access to data in as little as 1-5 minutes for most archive objects. Standard
retrievals typically finish within 3-5 hours for objects stored in the S3 Glacier Flexible Retrieval storage class or S3 Intelligent-Tiering Archive Access
tier. These retrievals typically finish within 12 hours for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep
Archive Access tier. So A.
upvoted 2 times
Selected Answer: A
Expedited retrievals allow you to quickly access your data that's stored in the S3 Glacier Flexible Retrieval storage class or the S3 Intelligent-Tiering
Archive Access tier when occasional urgent requests for restoring archives are required. Data accessed by using Expedited retrievals is typically
made available within 1–5 minutes.
upvoted 1 times
Selected Answer: A
A for sure!
upvoted 1 times
Selected Answer: A
Expedited retrieval typically takes 1-5 minutes to retrieve data, making it suitable for the company's requirement of having the files available in a
maximum of five minutes.
upvoted 4 times
Glacier expedite
upvoted 2 times
A company is building a three-tier application on AWS. The presentation tier will serve a static website The logic tier is a containerized application.
This application will store data in a relational database. The company wants to simplify deployment and to reduce operational costs.
A. Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
B. Use Amazon CloudFront to host static content. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 for compute power.
C. Use Amazon S3 to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate for compute power. Use a managed Amazon RDS cluster for the database.
D. Use Amazon EC2 Reserved Instances to host static content. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 for
compute power. Use a managed Amazon RDS cluster for the database.
Correct Answer: A
Selected Answer: A
Selected Answer: A
Selected Answer: A
Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute power. Use a managed
Amazon RDS cluster for the database.
upvoted 2 times
Use Amazon S3 to host static content. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate for compute power. Use a managed
Amazon RDS cluster for the database
upvoted 2 times
Selected Answer: A
Amazon S3 is a highly scalable and cost-effective storage service that can be used to host static website content. It provides durability, high
availability, and low latency access to the static files.
Amazon ECS with AWS Fargate eliminates the need to manage the underlying infrastructure. It allows you to run containerized applications without
provisioning or managing EC2 instances. This reduces operational overhead and provides scalability.
By using a managed Amazon RDS cluster for the database, you can offload the management tasks such as backups, patching, and monitoring to
AWS. This reduces the operational burden and ensures high availability and durability of the database.
upvoted 4 times
Question #487 Topic 1
A company seeks a storage solution for its application. The solution must be highly available and scalable. The solution also must function as a
file system, be mountable by multiple Linux instances in AWS and on premises through native protocols, and have no minimum size requirements.
The company has set up a Site-to-Site VPN for access from its on-premises network to its VPC.
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points
Correct Answer: C
Selected Answer: C
A. Amazon FSx Multi-AZ deployments Amazon FSx is a managed file system service that provides access to file systems that are hosted on Amazon
EC2 instances. Amazon FSx does not support native protocols, such as NFS.
B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes Amazon EBS is a block storage service that provides durable, block-level storage
volumes for use with Amazon EC2 instances. Amazon EBS Multi-Attach volumes can be attached to multiple EC2 instances at the same time, but
they cannot be mounted by multiple Linux instances through native protocols, such as NFS.
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points A single mount target can only be used to
mount the file system on a single EC2 instance. Multiple access points are used to provide access to the file system from different VPCs.
upvoted 11 times
This is clearly wrong. You can have exactly one EFS mount target per subnet (AZ), and of course this mount target can be used by many clients
(EC2 instances, containers etc.) - see diagram here for example: https://docs.aws.amazon.com/efs/latest/ug/accessing-fs.html
Selected Answer: C
Amazon EFS is a fully managed file system service that provides scalable, shared storage for Amazon EC2 instances. It supports the Network File
System version 4 (NFSv4) protocol, which is a native protocol for Linux-based systems. EFS is designed to be highly available, durable, and scalable
upvoted 8 times
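For illustration, a minimal boto3 sketch of option C (the subnet and security group IDs are hypothetical): one mount target per Availability Zone, so clients in AWS and on premises (over the Site-to-Site VPN) can mount the file system with NFSv4.

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)

    # In practice, wait until the file system's LifeCycleState is "available"
    # before adding mount targets (omitted here for brevity); the security
    # group must allow NFS (TCP 2049).
    for subnet_id in ("subnet-az1-aaa111", "subnet-az2-bbb222", "subnet-az3-ccc333"):
        efs.create_mount_target(
            FileSystemId=fs["FileSystemId"],
            SubnetId=subnet_id,
            SecurityGroups=["sg-0123456789abcdef0"],
        )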
Selected Answer: C
Selected Answer: C
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
upvoted 2 times
A 4-year-old media company is using the AWS Organizations all features feature set to organize its AWS accounts. According to the company's
finance team, the billing information on the member accounts must not be accessible to anyone, including the root user of the member accounts.
A. Add all finance team users to an IAM group. Attach an AWS managed policy named Billing to the group.
B. Attach an identity-based policy to deny access to the billing information to all users, including the root user.
C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to the root organizational unit (OU).
D. Convert from the Organizations all features feature set to the Organizations consolidated billing feature set.
Correct Answer: C
Selected Answer: C
Service Control Policies (SCP): SCPs are an integral part of AWS Organizations and allow you to set fine-grained permissions on the organizational
units (OUs) within your AWS Organization. SCPs provide central control over the maximum permissions that can be granted to member accounts,
including the root user.
Denying Access to Billing Information: By creating an SCP and attaching it to the root OU, you can explicitly deny access to billing information for
all accounts within the organization. SCPs can be used to restrict access to various AWS services and actions, including billing-related services.
Granular Control: SCPs enable you to define specific permissions and restrictions at the organizational unit level. By denying access to billing
information at the root OU, you can ensure that no member accounts, including root users, have access to the billing information.
upvoted 6 times
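A minimal boto3 sketch of option C, run from the management account. The exact set of billing actions to deny (for example the aws-portal:View* actions) is an assumption for illustration; note that SCPs never restrict the management account itself:

    import json
    import boto3

    orgs = boto3.client("organizations")

    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["aws-portal:ViewBilling", "aws-portal:ViewAccount"],  # assumed billing actions
            "Resource": "*",
        }],
    }

    policy = orgs.create_policy(
        Name="DenyBillingAccess",
        Description="Block billing data in member accounts, including the root user",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )

    root_id = orgs.list_roots()["Roots"][0]["Id"]
    orgs.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)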
but SCPs do not apply to the management account (full admin power)?
upvoted 5 times
Selected Answer: C
Selected Answer: C
C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to the root organizational unit (OU)
upvoted 2 times
Selected Answer: C
Service control policy are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control
over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your
organization’s access control guidelines. SCPs are available only in an organization that has all features enabled.
upvoted 3 times
Selected Answer: C
c for me
upvoted 1 times
Question #489 Topic 1
An ecommerce company runs an application in the AWS Cloud that is integrated with an on-premises warehouse solution. The company uses
Amazon Simple Notification Service (Amazon SNS) to send order messages to an on-premises HTTPS endpoint so the warehouse application can
process the orders. The local data center team has detected that some of the order messages were not received.
A solutions architect needs to retain messages that are not delivered and analyze the messages for up to 14 days.
Which solution will meet these requirements with the LEAST development effort?
A. Configure an Amazon SNS dead letter queue that has an Amazon Kinesis Data Stream target with a retention period of 14 days.
B. Add an Amazon Simple Queue Service (Amazon SQS) queue with a retention period of 14 days between the application and Amazon SNS.
C. Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service (Amazon SQS) target with a retention period of 14
days.
D. Configure an Amazon SNS dead letter queue that has an Amazon DynamoDB target with a TTL attribute set for a retention period of 14
days.
Correct Answer: C
Selected Answer: C
B, an SQS queue "between the application and Amazon SNS" would change the application logic. SQS cannot push messages to the "on-premises
https endpoint", rather the destination would have to retrieve messages from the queue. Besides, option B would eventually deliver the messages
that failed on the first attempt, which is NOT what is asked for. The goal is to retain undeliverable messages for analysis (NOT to deliver them), and
this is typically achieved with a dead letter queue.
upvoted 8 times
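For illustration, a minimal boto3 sketch of option C (the subscription ARN and queue name are hypothetical); the SQS queue's access policy must also allow the SNS topic to send messages to it:

    import json
    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    sns = boto3.client("sns", region_name="us-east-1")

    dlq_url = sqs.create_queue(
        QueueName="order-delivery-dlq",
        Attributes={"MessageRetentionPeriod": "1209600"},   # 14 days, in seconds
    )["QueueUrl"]
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Attach the dead-letter queue to the HTTPS subscription on the orders topic
    sns.set_subscription_attributes(
        SubscriptionArn="arn:aws:sns:us-east-1:111122223333:orders:11111111-2222-3333-4444-555555555555",
        AttributeName="RedrivePolicy",
        AttributeValue=json.dumps({"deadLetterTargetArn": dlq_arn}),
    )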
A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to subscribers
successfully.https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html
upvoted 2 times
The problem here is that an SNS dead-letter queue is an SQS queue, so technically speaking both B and C involve SQS. But I suppose they want us to talk about
the SNS dead-letter queue, which hardly anybody uses... meh, frustrating.
So with B == you place the SQS queue between the application and the SNS topic
with C == you place the SQS queue as a DLQ for the SNS topic
Of course it's C !
upvoted 5 times
Selected Answer: C
https://docs.aws.amazon.com/sns/latest/dg/sns-configure-dead-letter-queue.html
upvoted 2 times
Selected Answer: C
I like (B) since it puts SQS before SNS so we could prepare for retention. (C), the dead-letter queue, is more of a "rescue" effort. Also, (C) should mention
reprocessing the dead-letter queue.
upvoted 1 times
Selected Answer: C
C is the answer
upvoted 1 times
Selected Answer: C
https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html
upvoted 1 times
Selected Answer: C
C. Configure an Amazon SNS dead letter queue that has an Amazon Simple Queue Service (Amazon SQS) target with a retention period of 14 days
By using an Amazon SQS queue as the target for the dead letter queue, you ensure that the undelivered messages are reliably stored in a queue
for up to 14 days. Amazon SQS allows you to specify a retention period for messages, which meets the retention requirement without additional
development effort.
upvoted 1 times
Selected Answer: B
In SNS, DLQs store the messages that failed to be delivered to subscribed endpoints. For more information, see Amazon SNS Dead-Letter Queues.
In SQS, DLQs store the messages that failed to be processed by your consumer application. This failure mode can happen when producers and
consumers fail to interpret aspects of the protocol that they use to communicate. In that case, the consumer receives the message from the queue
but fails to process it, as the message doesn’t have the structure or content that the consumer expects. The consumer can’t delete the message
from the queue either. After exhausting the receive count in the redrive policy, SQS can sideline the message to the DLQ. For more information, see
Amazon SQS Dead-Letter Queues.
https://aws.amazon.com/blogs/compute/designing-durable-serverless-apps-with-dlqs-for-amazon-sns-amazon-sqs-aws-lambda/
upvoted 2 times
A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to subscribers
successfully. "
upvoted 1 times
"A dead-letter queue is an Amazon SQS queue that an Amazon SNS subscription can target for messages that can't be delivered to subscribers
successfully. Messages that can't be delivered due to client errors or server errors are held in the dead-letter queue for further analysis or
reprocessing."
https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html
upvoted 1 times
Amazon SQS is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless
applications. Amazon SQS queues can be configured to have a retention period, which is the amount of time that messages will be kept in the
queue before they are deleted.
To meet the requirements of the company, you can configure an Amazon SNS dead letter queue that has an Amazon SQS target with a retention
period of 14 days. This will ensure that any messages that are not delivered to the on-premises warehouse application will be stored in the Amazon
SQS queue for up to 14 days. The company can then analyze the messages in the Amazon SQS queue to determine why they were not delivered.
upvoted 2 times
Selected Answer: C
https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html
upvoted 2 times
Question #490 Topic 1
A gaming company uses Amazon DynamoDB to store user information such as geographic location, player data, and leaderboards. The company
needs to configure continuous backups to an Amazon S3 bucket with a minimal amount of coding. The backups must not affect availability of the
application and must not affect the read capacity units (RCUs) that are defined for the table.
A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to Amazon S3.
B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3
bucket.
D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time
Correct Answer: B
Selected Answer: B
Continuous backups is a native feature of DynamoDB, it works at any scale without having to manage servers or clusters and allows you to export
data across AWS Regions and accounts to any point-in-time in the last 35 days at a per-second granularity. Plus, it doesn’t affect the read capacity
or the availability of your production tables.
https://aws.amazon.com/blogs/aws/new-export-amazon-dynamodb-table-data-to-data-lake-amazon-s3/
upvoted 11 times
Selected Answer: B
A: Impacts RCU
C: Requires coding of Lambda to read from stream to S3
D: More coding in Lambda
B: AWS Managed solution with no coding
upvoted 3 times
Selected Answer: B
DynamoDB export to S3 is a fully managed solution for exporting DynamoDB data to an Amazon S3 bucket at scale.
upvoted 3 times
Selected Answer: B
Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table.
upvoted 2 times
Selected Answer: C
Selected Answer: B
Using DynamoDB table export, you can export data from an Amazon DynamoDB table from any time within your point-in-time recovery window to
an Amazon S3 bucket. Exporting a table does not consume read capacity on the table, and has no impact on table performance and availability.
upvoted 1 times
https://repost.aws/knowledge-center/back-up-dynamodb-s3
https://aws.amazon.com/blogs/aws/new-amazon-dynamodb-continuous-backups-and-point-in-time-recovery-pitr/
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.html
There is no edit
upvoted 2 times
Continuous Backups: DynamoDB provides a feature called continuous backups, which automatically backs up your table data. Enabling continuous
backups ensures that your table data is continuously backed up without the need for additional coding or manual interventions.
Export to Amazon S3: With continuous backups enabled, DynamoDB can directly export the backups to an Amazon S3 bucket. This eliminates the
need for custom coding to export the data.
Minimal Coding: Option B requires the least amount of coding effort as continuous backups and the export to Amazon S3 functionality are built-in
features of DynamoDB.
No Impact on Availability and RCUs: Enabling continuous backups and exporting data to Amazon S3 does not affect the availability of your
application or the read capacity units (RCUs) defined for the table. These operations happen in the background and do not impact the table's
performance or consume additional RCUs.
upvoted 3 times
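To make option B concrete, a minimal boto3 sketch assuming a table named Players and a bucket named player-backups (both placeholders): enable point-in-time recovery, then run the managed export, which does not consume RCUs or affect table availability.

import boto3

dynamodb = boto3.client("dynamodb")

# Enable continuous backups / point-in-time recovery on the table.
dynamodb.update_continuous_backups(
    TableName="Players",  # placeholder table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Export the table to S3 with the managed export feature. The export reads
# from the PITR backup data, so it does not consume read capacity.
export = dynamodb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/Players",  # placeholder ARN
    S3Bucket="player-backups",  # placeholder bucket
    ExportFormat="DYNAMODB_JSON",
)
print(export["ExportDescription"]["ExportStatus"])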
Selected Answer: B
A solutions architect is designing an asynchronous application to process credit card data validation requests for a bank. The application must be
A. Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use AWS
Key Management Service (SSE-KMS) for encryption. Add the kms:Decrypt permission for the Lambda execution role.
B. Use AWS Lambda event source mapping. Use Amazon Simple Queue Service (Amazon SQS) FIFO queues as the event source. Use SQS
managed encryption keys (SSE-SQS) for encryption. Add the encryption key invocation permission for the Lambda function.
C. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) FIFO queues as the event source. Use AWS
KMS keys (SSE-KMS). Add the kms:Decrypt permission for the Lambda execution role.
D. Use the AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use
AWS KMS keys (SSE-KMS) for encryption. Add the encryption key invocation permission for the Lambda function.
Correct Answer: A
Selected Answer: A
I would still go with the standard queue because of the keyword "at least once"; FIFO processes "exactly once". That leaves us with A and D. I believe
the Lambda function only needs to decrypt, so I would choose A.
upvoted 10 times
Selected Answer: A
"Process each request at least once" = Standard queue, rules out B and C which use more expensive FIFO queue
Permissions are added to Lambda execution roles, not Lambda functions, thus D is out.
upvoted 9 times
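A hedged boto3 sketch of option A. The key alias, role name, and function name are placeholders; the point is that the standard queue uses SSE-KMS and the Lambda execution role (not the function) receives kms:Decrypt.

import json
import boto3

sqs = boto3.client("sqs")
iam = boto3.client("iam")
lambda_client = boto3.client("lambda")

# Standard queue ("at least once" delivery) encrypted with a KMS key.
queue = sqs.create_queue(
    QueueName="card-validation-requests",  # placeholder
    Attributes={"KmsMasterKeyId": "alias/card-validation-key"},  # placeholder key alias
)

# The Lambda *execution role* needs kms:Decrypt to read encrypted messages.
iam.put_role_policy(
    RoleName="card-validation-lambda-role",  # placeholder role
    PolicyName="AllowDecryptSqsMessages",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",  # placeholder
        }],
    }),
)

# Event source mapping so Lambda polls the queue.
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="card-validation-fn",  # placeholder
    BatchSize=10,
)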
Selected Answer: A
Use AWS Lambda event source mapping. Set Amazon Simple Queue Service (Amazon SQS) standard queues as the event source. Use AWS Key
Management Service (SSE-KMS) for encryption. Add the kms:Decrypt permission for the Lambda execution role.
upvoted 1 times
Selected Answer: B
With the SSE-SQS encryption type, you do not need to create, manage, or pay for SQS-managed encryption keys.
upvoted 1 times
Selected Answer: A
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/standard-queues.html
upvoted 4 times
Selected Answer: B
Using SQS FIFO queues ensures each message is processed at least once in order. SSE-SQS provides encryption that is handled entirely by SQS
without needing decrypt permissions.
Using KMS keys (Options C and D) requires providing the Lambda role with decrypt permissions, adding complexity.
SQS FIFO queues with SSE-SQS encryption provide orderly, secure, server-side message processing that Lambda can consume without needing to
manage decryption. This is the most efficient and cost-effective approach.
upvoted 8 times
Selected Answer: B
Considering this is a credit card validation process, there needs to be a strict "process exactly once" policy, which SQS FIFO offers. Also, SQS
already supports server-side encryption with keys managed in AWS Key Management Service (SSE-KMS) or with SQS-owned encryption keys (SSE-SQS).
Both encryption options greatly reduce the operational burden and complexity involved in protecting data.
Additionally, with the SSE-SQS encryption type, you do not need to create, manage, or pay for SQS-managed encryption keys.
Therefore option B stands out for me.
upvoted 1 times
https://aws.amazon.com/sqs/pricing/#:~:text=SQS%20requests%20priced%3F
upvoted 2 times
https://docs.aws.amazon.com/zh_tw/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-least-privilege-policy.html
upvoted 1 times
Selected Answer: A
Selected Answer: B
Solution B is the most cost-effective solution to meet the requirements of the application.
Amazon Simple Queue Service (SQS) FIFO queues are a good choice for this application because they guarantee that messages are processed in
the order in which they are received. This is important for credit card data validation because it ensures that fraudulent transactions are not
processed before legitimate transactions.
SQS managed encryption keys (SSE-SQS) are a good choice for encrypting the messages in the SQS queue because they are free to use. AWS Key
Management Service (KMS) keys (SSE-KMS) are also a good choice for encrypting the messages, but they do incur a cost.
upvoted 2 times
should be A. Key word - at least once and cost effective suggests SQS standard
upvoted 2 times
Besides, it is about "credit card data validation", NOT payments. Nothing happens if they check twice whether your credit card is valid.
upvoted 1 times
Selected Answer: A
I guess A
upvoted 1 times
A company has multiple AWS accounts for development work. Some staff consistently use oversized Amazon EC2 instances, which causes the
company to exceed the yearly budget for the development accounts. The company wants to centrally restrict the creation of AWS resources in
these accounts.
Which solution will meet these requirements with the LEAST development effort?
A. Develop AWS Systems Manager templates that use an approved EC2 creation process. Use the approved Systems Manager templates to
B. Use AWS Organizations to organize the accounts into organizational units (OUs). Define and attach a service control policy (SCP) to control
C. Configure an Amazon EventBridge rule that invokes an AWS Lambda function when an EC2 instance is created. Stop disallowed EC2
instance types.
D. Set up AWS Service Catalog products for the staff to create the allowed EC2 instance types. Ensure that staff can deploy EC2 instances
Correct Answer: B
Selected Answer: B
Anytime you see multiple AWS accounts that need to be consolidated, it is AWS Organizations. And anytime we need to restrict anything across an
organization, it is SCPs.
upvoted 5 times
Selected Answer: B
B. Multiple AWS accounts, consolidated under one AWS Organization, with a top-down service control policy (SCP) applied to all member accounts to restrict EC2 instance types.
upvoted 2 times
Selected Answer: B
Use AWS Organizations to organize the accounts into organizational units (OUs). Define and attach a service control policy (SCP) to control the
usage of EC2 instance types.
upvoted 2 times
I have a question regarding this answer: what do they mean by "development effort"?
If they mean the work it takes to implement the solution (using "develop" as "implement"), option B achieves the constraint with little administrative
overhead (there is less to do to configure this option).
If by "development effort" they mean less effort for the development team, then when the development team tries to deploy instances and gets errors
because they are not allowed, this generates overhead. In that case the best option is D.
What do you think?
upvoted 1 times
Selected Answer: B
Use AWS Organizations to organize the accounts into organizational units (OUs). Define and attach a service control policy (SCP) to control the
usage of EC2 instance types
upvoted 2 times
I would choose B
The other options would require some level of programming or custom resource creation:
A. Developing Systems Manager templates requires development effort
C. Configuring EventBridge rules and Lambda functions requires development effort
D. Creating Service Catalog products requires development effort to define the allowed EC2 configurations.
Option B - Using Organizations service control policies - requires no custom development. It involves:
Organizing accounts into OUs
Creating an SCP that defines allowed/disallowed EC2 instance types
Attaching the SCP to the appropriate OUs
This is a native AWS service with a simple UI for defining and managing policies. No coding or resource creation is needed.
So option B, using Organizations service control policies, will meet the requirements with the least development effort.
upvoted 3 times
Selected Answer: B
AWS Organizations: AWS Organizations is a service that helps you centrally manage multiple AWS accounts. It enables you to group accounts into
organizational units (OUs) and apply policies across those accounts.
Service Control Policies (SCPs): SCPs in AWS Organizations allow you to define fine-grained permissions and restrictions at the account or OU level.
By attaching an SCP to the development accounts, you can control the creation and usage of EC2 instance types.
Least Development Effort: Option B requires minimal development effort as it leverages the built-in features of AWS Organizations and SCPs. You
can define the SCP to restrict the use of oversized EC2 instance types and apply it to the appropriate OUs or accounts.
upvoted 4 times
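A rough boto3 sketch of option B. The allowed instance types and the OU ID are illustrative only; the SCP denies ec2:RunInstances for any instance type outside the allowed list.

import json
import boto3

org = boto3.client("organizations")

# SCP: deny launching any EC2 instance type that is not in the allowed list.
# The allowed types below are examples only.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOversizedInstances",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotLike": {"ec2:InstanceType": ["t3.micro", "t3.small", "t3.medium"]}
        },
    }],
}

policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Restrict EC2 instance types in development accounts",
    Name="dev-ec2-instance-type-restriction",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach the SCP to the development OU (placeholder OU ID).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",
)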
A company wants to use artificial intelligence (AI) to determine the quality of its customer service calls. The company currently manages calls in
four different languages, including English. The company will offer new languages in the future. The company does not have the resources to
The company needs to create written sentiment analysis reports from the customer service call recordings. The customer service call recording
D. Use Amazon Transcribe to convert the audio recordings in any language into text.
Amazon Transcribe will convert the audio recordings into text, Amazon Translate will translate the text into English, and Amazon Comprehend will
perform sentiment analysis on the translated text to generate sentiment analysis reports.
upvoted 5 times
D. Use Amazon Transcribe to convert the audio recordings in any language into text.
E. Use Amazon Translate to translate text in any language to English.
F. Use Amazon Comprehend to create the sentiment analysis reports.
upvoted 2 times
Amazon Transcribe to convert speech to text. Amazon Translate to translate text to english. Amazon Comprehend to perform sentiment analysis on
translated text.
upvoted 1 times
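A simplified boto3 sketch of the Transcribe -> Translate -> Comprehend chain described above. The bucket names, job name, and placeholder transcript are assumptions; in practice the transcript JSON would be downloaded from the transcription job's output location before translation.

import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1) Convert a call recording to text; IdentifyLanguage handles any spoken language.
transcribe.start_transcription_job(
    TranscriptionJobName="call-0001",                              # placeholder job name
    Media={"MediaFileUri": "s3://call-recordings/call-0001.mp3"},  # placeholder URI
    IdentifyLanguage=True,
    OutputBucketName="call-transcripts",                           # placeholder bucket
)

# ... wait for the job and download the transcript JSON from the output bucket ...
transcript_text = "ejemplo de transcripcion de la llamada"  # placeholder transcript

# 2) Translate the transcript to English (source language auto-detected).
translated = translate.translate_text(
    Text=transcript_text, SourceLanguageCode="auto", TargetLanguageCode="en"
)

# 3) Run sentiment analysis on the English text.
sentiment = comprehend.detect_sentiment(
    Text=translated["TranslatedText"], LanguageCode="en"
)
print(sentiment["Sentiment"], sentiment["SentimentScore"])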
A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI
to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.
The administrator is using an IAM role that has the following IAM policy attached:
C. The "Action" field does not grant the actions that are required to terminate the EC2 instance.
D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.
Correct Answer: D
Selected Answer: D
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:TerminateInstances",
"Resource": "*"
},
{
"Effect": "Deny",
"Action": "ec2:TerminateInstances",
"Condition": {
"NotIpAddress": {
"aws:SourceIp" : [
"192.0.2.0/24",
"203.0.113.0/24"
]
}
},
"Resource": "*"
}
]
}
upvoted 1 times
Selected Answer: D
If you want to read more about this, see how it works: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny
ip.html
Same policy as in this question with almost same use case.
D is correct answer.
upvoted 3 times
the command is coming from a source IP which is not in the allowed range.
upvoted 4 times
Selected Answer: D
" aws:SourceIP " indicates the IP address that is trying to perform the action.
upvoted 1 times
Selected Answer: D
d for sure
upvoted 2 times
Question #495 Topic 1
A company is conducting an internal audit. The company wants to ensure that the data in an Amazon S3 bucket that is associated with the
company’s AWS Lake Formation data lake does not contain sensitive customer or employee data. The company wants to discover personally
identifiable information (PII) or financial information, including passport numbers and credit card numbers.
A. Configure AWS Audit Manager on the account. Select the Payment Card Industry Data Security Standards (PCI DSS) for auditing.
B. Configure Amazon S3 Inventory on the S3 bucket Configure Amazon Athena to query the inventory.
C. Configure Amazon Macie to run a data discovery job that uses managed identifiers for the required data types.
Correct Answer: C
Selected Answer: C
Selected Answer: C
Configure Amazon Macie to run a data discovery job that uses managed identifiers for the required data types.
upvoted 2 times
Amazon Macie is a data security service that uses machine learning (ML) and pattern matching to discover and help protect your sensitive data.
upvoted 2 times
agree with C
upvoted 4 times
Selected Answer: C
Amazon Macie is a service that helps discover, classify, and protect sensitive data stored in AWS. It uses machine learning algorithms and managed
identifiers to detect various types of sensitive information, including personally identifiable information (PII) and financial information. By
configuring Amazon Macie to run a data discovery job with the appropriate managed identifiers for the required data types (such as passport
numbers and credit card numbers), the company can identify and classify any sensitive data present in the S3 bucket.
upvoted 4 times
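A hedged sketch of option C with boto3. The account ID, bucket name, and job name are placeholders; a one-time discovery job uses Macie's managed data identifiers by default, which cover credit card and passport number detection.

import uuid
import boto3

macie = boto3.client("macie2")

# One-time sensitive data discovery job over the data lake bucket.
# Managed data identifiers (PII and financial data detectors) apply by default.
macie.create_classification_job(
    clientToken=str(uuid.uuid4()),
    jobType="ONE_TIME",
    name="data-lake-pii-scan",                        # placeholder job name
    s3JobDefinition={
        "bucketDefinitions": [{
            "accountId": "111122223333",              # placeholder account ID
            "buckets": ["company-data-lake-bucket"],  # placeholder bucket
        }]
    },
)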
Question #496 Topic 1
A company uses on-premises servers to host its applications. The company is running out of storage capacity. The applications use both block
storage and NFS storage. The company needs a high-performing solution that supports local caching without re-architecting its existing
applications.
Which combination of actions should a solutions architect take to meet these requirements? (Choose two.)
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
E. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
Correct Answer: BD
Selected Answer: BD
By combining the deployment of an AWS Storage Gateway file gateway and an AWS Storage Gateway volume gateway, the company can address
both its block storage and NFS storage needs, while leveraging local caching capabilities for improved performance.
upvoted 6 times
Selected Answer: BD
A: Not possible
C: Snowball edge is snowball with computing. It's not a NAS!
E: Technically yes but requires VPN or Direct Connect so re-architecture
B & D both use Storage Gateway which can be used as NFS and Block storage
https://aws.amazon.com/storagegateway/
upvoted 3 times
Selected Answer: BD
Selected Answer: BD
A company has a service that reads and writes large amounts of data from an Amazon S3 bucket in the same AWS Region. The service is
deployed on Amazon EC2 instances within the private subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the
public subnet. However, the company wants a solution that will reduce the data output costs.
A. Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet to use the elastic network
B. Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet to use the elastic network
C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway endpoint as the route for all S3
traffic.
D. Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as the destination for all S3
traffic.
Correct Answer: C
Selected Answer: C
A VPC gateway endpoint allows you to privately access Amazon S3 from within your VPC without using a NAT gateway or NAT instance. By
provisioning a VPC gateway endpoint for S3, the service in the private subnet can directly communicate with S3 without incurring data transfer
costs for traffic going through a NAT gateway.
upvoted 9 times
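For reference, creating the gateway endpoint is a single API call; the VPC and route table IDs below are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3; traffic from the private subnet stays on the AWS
# network and avoids NAT gateway data processing charges.
ec2.create_vpc_endpoint(
    VpcId="vpc-0abc1234567890def",               # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],     # private subnet route table (placeholder)
)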
Selected Answer: C
As a rule of thumb, EC2<->S3 in your workload should always try to use a VPC gateway unless there is an explicit restriction (account etc.) which
disallows it.
upvoted 3 times
Selected Answer: C
Using a VPC endpoint for S3 allows the EC2 instances to access S3 directly over the Amazon network without traversing the internet. This
significantly reduces data output charges.
upvoted 2 times
Selected Answer: C
use VPC gateway endpoint to route traffic internally and save on costs.
upvoted 1 times
private subnet needs to communicate with S3 --> VPC endpoint right away
upvoted 2 times
Question #498 Topic 1
A company uses Amazon S3 to store high-resolution pictures in an S3 bucket. To minimize application changes, the company stores the pictures
as the latest version of an S3 object. The company needs to retain only the two most recent versions of the pictures.
The company wants to reduce costs. The company has identified the S3 bucket as a large expense.
Which solution will reduce the S3 costs with the LEAST operational overhead?
A. Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.
B. Use an AWS Lambda function to check for older versions and delete all but the two most recent versions.
C. Use S3 Batch Operations to delete noncurrent object versions and retain only the two most recent versions.
D. Deactivate versioning on the S3 bucket and retain the two most recent versions.
Correct Answer: A
Selected Answer: A
S3 Lifecycle policies allow you to define rules that automatically transition or expire objects based on their age or other criteria. By configuring an
S3 Lifecycle policy to delete expired object versions and retain only the two most recent versions, you can effectively manage the storage costs
while maintaining the desired retention policy. This solution is highly automated and requires minimal operational overhead as the lifecycle
management is handled by S3 itself.
upvoted 5 times
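A minimal boto3 sketch of option A, assuming a placeholder bucket name: the rule keeps the current version plus one noncurrent version (two most recent in total) and expires anything older.

import boto3

s3 = boto3.client("s3")

# Keep the current version plus one noncurrent version; older noncurrent
# versions expire once they have been noncurrent for more than 1 day.
s3.put_bucket_lifecycle_configuration(
    Bucket="high-res-pictures-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "retain-two-most-recent-versions",
            "Status": "Enabled",
            "Filter": {},  # apply to all objects in the bucket
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 1,
                "NewerNoncurrentVersions": 1,
            },
        }]
    },
)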
Selected Answer: A
Selected Answer: A
Use S3 Lifecycle to delete expired object versions and retain the two most recent versions.
upvoted 2 times
A --> "you can also provide a maximum number of noncurrent versions to retain."
https://docs.aws.amazon.com/AmazonS3/latest/userguide/intro-lifecycle-rules.html
upvoted 4 times
Selected Answer: A
A is correct.
upvoted 2 times
Selected Answer: A
A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The company's average connection utilization is less than
10%. A solutions architect must recommend a solution that will reduce the cost without compromising security.
A. Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS account.
B. Set up a new 200 Mbps Direct Connect connection in the AWS Management Console.
C. Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with another AWS account.
D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing AWS account.
Correct Answer: D
Selected Answer: D
Selected Answer: D
No, you cannot directly adjust the speed of an existing Direct Connect connection through the AWS Management Console.
To adjust the speed of an existing Direct Connect connection, you typically need to contact your Direct Connect service provider. They can assist
you in modifying the speed of your connection based on your requirements. Depending on the provider, this process may involve submitting a
request or contacting their support team to initiate the necessary changes. Keep in mind that adjusting the speed of your Direct Connect
connection may also involve contractual and billing considerations.
upvoted 3 times
Selected Answer: D
If you already have an existing AWS Direct Connect connection configured at 1 Gbps, and you wish to reduce the connection bandwidth to 200
Mbps to minimize costs, you should indeed contact your AWS Direct Connect Partner and request to lower the connection speed to 200 Mbps.
upvoted 3 times
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect.html
upvoted 4 times
By opting for a lower capacity 200 Mbps connection instead of the 1 Gbps connection, the company can significantly reduce costs. This solution
ensures a dedicated and secure connection while aligning with the company's low utilization, resulting in cost savings.
upvoted 3 times
For Dedicated Connections, 1 Gbps, 10 Gbps, and 100 Gbps ports are available. For Hosted Connections, connection speeds of 50 Mbps, 100 Mbps,
200 Mbps, 300 Mbps, 400 Mbps, 500 Mbps, 1 Gbps, 2 Gbps, 5 Gbps, and 10 Gbps may be ordered from approved AWS Direct Connect Partners. See
AWS Direct Connect Partners for more information.
upvoted 4 times
Selected Answer: D
A hosted connection is a lower-cost option that is offered by AWS Direct Connect Partners
upvoted 4 times
A company has multiple Windows file servers on premises. The company wants to migrate and consolidate its files into an Amazon FSx for
Windows File Server file system. File permissions must be preserved to ensure that access rights do not change.
A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file system.
B. Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule AWS DataSync tasks to transfer the data to the
C. Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the
D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS DataSync agents on the device. Schedule
DataSync tasks to transfer the data to the FSx for Windows File Server file system.
E. Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises network. Copy data to the device by using
the AWS CLI. Ship the device back to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for
Correct Answer: AD
Selected Answer: AD
A This option involves deploying DataSync agents on your on-premises file servers and using DataSync to transfer the data directly to the FSx for
Windows File Server. DataSync ensures that file permissions are preserved during the migration process.
D
This option involves using an AWS Snowcone device, a portable data transfer device. You would connect the Snowcone device to your on-premises
network, launch DataSync agents on the device, and schedule DataSync tasks to transfer the data to FSx for Windows File Server. DataSync handles
the migration process while preserving file permissions.
upvoted 7 times
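A hedged boto3 sketch of the DataSync path (options A and D). Hostnames, ARNs, and credentials are placeholders (real credentials belong in a secrets store); the source is the on-premises SMB share via a DataSync agent and the destination is the FSx for Windows File Server file system, with SMB security descriptors (permissions) copied as part of the transfer.

import boto3

datasync = boto3.client("datasync")

# Source: the on-premises Windows file share, accessed through a DataSync agent.
src = datasync.create_location_smb(
    ServerHostname="fileserver01.corp.example.com",  # placeholder
    Subdirectory="/shares/projects",                 # placeholder
    User="migration-user",                           # placeholder
    Password="EXAMPLE-PASSWORD",                     # placeholder; use a secrets store in practice
    AgentArns=["arn:aws:datasync:us-west-2:111122223333:agent/agent-0example"],  # placeholder
)

# Destination: the FSx for Windows File Server file system.
dst = datasync.create_location_fsx_windows(
    FsxFilesystemArn="arn:aws:fsx:us-west-2:111122223333:file-system/fs-0example",  # placeholder
    SecurityGroupArns=["arn:aws:ec2:us-west-2:111122223333:security-group/sg-0example"],  # placeholder
    User="Admin",                                    # placeholder
    Password="EXAMPLE-PASSWORD",                     # placeholder
)

# Task that copies the data; NTFS permissions are carried over for SMB-to-FSx transfers.
datasync.create_task(
    SourceLocationArn=src["LocationArn"],
    DestinationLocationArn=dst["LocationArn"],
    Name="migrate-file-shares-to-fsx",
)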
Selected Answer: AD
B, C and E would copy the files to S3 first where permissions would be lost
upvoted 6 times
Selected Answer: BD
Selected Answer: AD
the key is file permissions are preserved during the migration process. only datasync supports that
upvoted 4 times
Option B would require copying the data to Amazon S3 before transferring it to Amazon FSx for Windows File Server.
Option C would require the company to remove the drives from each file server and ship them to AWS.
upvoted 2 times
A company wants to ingest customer payment data into the company's data lake in Amazon S3. The company receives payment data every minute
on average. The company wants to analyze the payment data in real time. Then the company wants to ingest the data into the data lake.
Which solution will meet these requirements with the MOST operational efficiency?
A. Use Amazon Kinesis Data Streams to ingest data. Use AWS Lambda to analyze the data in real time.
B. Use AWS Glue to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
C. Use Amazon Kinesis Data Firehose to ingest data. Use Amazon Kinesis Data Analytics to analyze the data in real time.
D. Use Amazon API Gateway to ingest data. Use AWS Lambda to analyze the data in real time.
Correct Answer: C
Kinesis Data Firehose is near real time (min. 60 sec). - The question is focusing on real time processing/analysis + efficiency -> Kinesis Data Stream
is real time ingestion.
https://www.amazonaws.cn/en/kinesis/data-firehose/#:~:text=Near%20real%2Dtime,is%20sent%20to%20the%20service.
upvoted 11 times
Selected Answer: C
By leveraging the combination of Amazon Kinesis Data Firehose and Amazon Kinesis Data Analytics, you can efficiently ingest and analyze the
payment data in real time without the need for manual processing or additional infrastructure management. This solution provides a streamlined
and scalable approach to handle continuous data ingestion and analysis requirements.
upvoted 10 times
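For reference, ingesting a payment record into a Firehose delivery stream is a single API call. The stream name and payload below are placeholders, and the delivery stream is assumed to already exist with an S3 destination (and a Kinesis Data Analytics application attached for the real-time analysis).

import json
import boto3

firehose = boto3.client("firehose")

# Placeholder payment event; the delivery stream buffers and delivers records
# to the data lake bucket automatically.
record = {"payment_id": "p-123", "amount": 42.50, "currency": "USD"}

firehose.put_record(
    DeliveryStreamName="payments-ingest",  # placeholder stream name
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)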
Selected Answer: C
Selected Answer: C
Data is stored on S3 so real-time data analytics can be done with Kinesis Data Analytics which rules out Lambda solutions (A and D) as they are
more operationally complex.
B is not useful; Glue is more of an ETL service.
Firehose is really for delivering data, but given that the company is already receiving the data somehow, Firehose can deliver it to S3 with
minimal latency. I have to admit this was confusing. I would have used Kinesis Data Streams with Data Analytics and stored the results on S3, but the
combination offered here is confusing!
upvoted 2 times
Selected Answer: A
I think this is A. The purpose of Firehose is to ingest and deliver data to a data store, not to an analytics service. And in fact you can use Lambda for
real-time analysis, so I find A more aligned.
upvoted 2 times
Selected Answer: C
Kinesis Data Streams focuses on ingesting and storing data streams, while Kinesis Data Firehose focuses on delivering data streams to select
destinations. As the motive of the question is to do analytics, the answer should be C.
upvoted 2 times
Selected Answer: C
Quote “Connect with 30+ fully integrated AWS services and streaming destinations such as Amazon Simple Storage Service (S3)” at
https://aws.amazon.com/kinesis/data-firehose/ . Amazon Kinesis Data Analytics https://aws.amazon.com/kinesis/data-analytics/
upvoted 1 times
Use Kinesis Firehose to capture and deliver the data to Kinesis Analytics to perform analytics.
upvoted 1 times
A company runs a website that uses a content management system (CMS) on Amazon EC2. The CMS runs on a single EC2 instance and uses an
Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an Amazon Elastic Block Store (Amazon EBS) volume
Which combination of actions should a solutions architect take to improve the performance and resilience of the website? (Choose two.)
A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance
B. Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the other EC2 instances.
C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on every EC2 instance.
D. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application
Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an
E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application
Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an
Correct Answer: CE
Selected Answer: CE
By combining the use of Amazon EFS for shared file storage and Amazon CloudFront for content delivery, you can achieve improved performance
and resilience for the website.
upvoted 13 times
Selected Answer: CE
First of all, you should understand that a website using a CMS is dynamic, not static, so A is out. B is more complicated than C, so C. And between
Global Accelerator and CloudFront, CloudFront suits better since there is no legacy-protocol traffic (UDP, etc.) to handle, hence E.
upvoted 3 times
Selected Answer: CE
Not A because you can't mount an S3 bucket on an EC2 instance. You could use a file gateway and share an S3 bucket via NFS and mount that on
EC2, but that is not mentioned here and would also not make sense.
upvoted 3 times
You can mount EFS file systems to multiple Amazon EC2 instances remotely and securely without having to log in to the instances by using the
AWS Systems Manager Run Command.
upvoted 2 times
https://bluexp.netapp.com/blog/ebs-efs-amazons3-best-cloud-storage-system
upvoted 2 times
Selected Answer: CE
Not A - S3 could not be mounted (until a few months ago), and the exam does not test updates from the last 6 months.
upvoted 3 times
Selected Answer: AE
You have summarized the reasons why options A and E are the best choices very well.
Migrating static website assets like images to Amazon S3 enables high scalability, durability and shared access across instances. This improves
performance.
Using Auto Scaling with load balancing provides elasticity and resilience. Adding a CloudFront distribution further boosts performance through
caching and content delivery.
upvoted 2 times
Selected Answer: AE
Both options AE and CE would work, but I choose AE, because, on my opinion, S3 is best suited for performance and resilience.
upvoted 3 times
Selected Answer: CE
EFS, unlike EBS, can be mounted across multiple EC2 instances and hence C over A.
upvoted 1 times
Selected Answer: AE
Technically both options AE and CE would work. But S3 is best suited for unstructured data, and the key benefit of mounting S3 on EC2 is that it
provides a cost-effective alternative, using object storage for applications dealing with large files instead of expensive file or block storage.
At the same time it provides more performant, scalable and highly available storage for these applications.
Even though there is no mention of 'cost efficient' in this question, in the real world cost is the no.1 factor.
In the exam I believe both options would be a pass.
https://aws.amazon.com/blogs/storage/mounting-amazon-s3-to-an-amazon-ec2-instance-using-a-private-connection-to-s3-file-gateway/
upvoted 4 times
Selected Answer: CE
Option C provides moving the website images onto an Amazon EFS file system that is mounted on every EC2 instance. Amazon EFS provides a
scalable and fully managed file storage solution that can be accessed concurrently from multiple EC2 instances. This ensures that the website
images can be accessed efficiently and consistently by all instances, improving performance
In Option E The Auto Scaling group maintains a minimum of two instances, ensuring resilience by automatically replacing any unhealthy instances.
Additionally, configuring an Amazon CloudFront distribution for the website further improves performance by caching content at edge locations
closer to the end-users, reducing latency and improving content delivery.
Hence combining these actions, the website's performance is improved through efficient image storage and content delivery
upvoted 2 times
A company runs an infrastructure monitoring service. The company is building a new feature that will enable the service to monitor data in
customer AWS accounts. The new feature will call AWS APIs in customer accounts to describe Amazon EC2 instances and read Amazon
CloudWatch metrics.
What should the company do to obtain access to customer accounts in the MOST secure way?
A. Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch permissions and a trust policy to the
company’s account.
B. Create a serverless API that implements a token vending machine to provide temporary AWS credentials for a role with read-only EC2 and
CloudWatch permissions.
C. Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch permissions. Encrypt and store
D. Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with read-only EC2 and CloudWatch
permissions. Encrypt and store the Amazon Cognito user and password in a secrets management system.
Correct Answer: A
Selected Answer: A
By having customers create an IAM role with the necessary permissions in their own accounts, the company can use AWS Identity and Access
Management (IAM) to establish cross-account access. The trust policy allows the company's AWS account to assume the customer's IAM role
temporarily, granting access to the specified resources (EC2 instances and CloudWatch metrics) within the customer's account. This approach
follows the principle of least privilege, as the company only requests the necessary permissions and does not require long-term access keys or use
credentials from the customers.
upvoted 14 times
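A sketch of option A from the customer's side with boto3. Account IDs, role name, and the external ID are placeholders, and the external ID condition is a common hardening step rather than something the question requires. The monitoring company would then call sts:AssumeRole against this role to obtain temporary credentials.

import json
import boto3

iam = boto3.client("iam")

MONITORING_ACCOUNT_ID = "999988887777"  # placeholder: the monitoring company's account

# Trust policy: only the monitoring company's account may assume this role,
# and only when it supplies the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MONITORING_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

iam.create_role(
    RoleName="MonitoringServiceReadOnly",  # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Read-only permissions via AWS managed policies for EC2 describe and CloudWatch reads.
iam.attach_role_policy(RoleName="MonitoringServiceReadOnly",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess")
iam.attach_role_policy(RoleName="MonitoringServiceReadOnly",
                       PolicyArn="arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess")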
Selected Answer: A
Selected Answer: A
Selected Answer: A
Not B (would be about access to the company's account, not the customers' accounts)
Not C (storing credentials in a custom system is a big nono)
Not D (Cognito has nothing to do here and "user and password" is terrible)
upvoted 2 times
Selected Answer: D
The company's infrastructure monitoring service needs to call AWS APIs in the MOST secure way. So you have to focus on restricting access to the
APIs, and that is where Cognito comes into play.
upvoted 2 times
Selected Answer: A
Having customers create a cross-account IAM role with the appropriate permissions, and configuring the trust policy to allow the monitoring
service principal account access, implements secure delegation and least privilege access.
upvoted 1 times
Question #504 Topic 1
A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The company's networking team has its
A. Set up VPC peering connections between each VPC. Update each associated subnet’s route table
B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the internet
C. Create an AWS Transit Gateway in the networking team’s AWS account. Configure static routes from each VPC.
D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team’s AWS account to connect to each VPC.
Correct Answer: C
Selected Answer: C
The main difference between AWS Transit Gateway and VPC peering is that AWS Transit Gateway is designed to connect multiple VPCs together in
a hub-and-spoke model, while VPC peering is designed to connect two VPCs together in a peer-to-peer model.
As we have several VPCs here, the answer should be C.
upvoted 16 times
Selected Answer: C
AWS Transit Gateway is a highly scalable and centralized hub for connecting multiple VPCs, on-premises networks, and remote networks. It
simplifies network connectivity by providing a single entry point and reducing the number of connections required. In this scenario, deploying an
AWS Transit Gateway in the networking team's AWS account allows for efficient management and control over the network connectivity across
multiple VPCs.
upvoted 6 times
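A minimal boto3 sketch of the hub-and-spoke setup in option C. All IDs and CIDR ranges are placeholders; the transit gateway would be created once in the networking team's account and shared with the other accounts through AWS Resource Access Manager (not shown).

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Central hub created in the networking team's account.
tgw = ec2.create_transit_gateway(Description="Shared hub for us-east-1 VPCs")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Each spoke VPC attaches to the transit gateway...
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0abc1234567890def",           # placeholder VPC ID
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet ID
)

# ...and adds a static route toward the other VPCs via the transit gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",    # placeholder route table ID
    DestinationCidrBlock="10.0.0.0/8",       # placeholder aggregate CIDR
    TransitGatewayId=tgw_id,
)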
Selected Answer: C
A: This option suggests hundreds of peering connections for EACH VPC. Nope!
B: A NAT gateway is for network address translation, not VPC interconnectivity, so this is wrong
C: Transit GW + static routes will connect all VPCs https://aws.amazon.com/transit-gateway/
D: A VPN gateway is for connecting on-premises networks to a VPC. There is no on-premises network here, so this is wrong
upvoted 1 times
Selected Answer: C
Connect, Monitor and Manage Multiple VPCs in one place = AWS Transit Gateway
upvoted 2 times
Selected Answer: C
C is the most operationally efficient solution for connecting a large number of VPCs across accounts.
Using AWS Transit Gateway allows all the VPCs to connect to a central hub without needing to create a mesh of VPC peering connections between
each VPC pair.
This significantly reduces the operational overhead of managing the network topology as new VPCs are added or changed.
The networking team can centrally manage the Transit Gateway routing and share it across accounts using Resource Access Manager.
upvoted 2 times
Selected Answer: C
I voted for c
upvoted 2 times
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-
Demand billing. If a job fails on one instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and 06:00 AM local
Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to scale out based on CPU
usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out based on CPU usage.
Correct Answer: C
Selected Answer: C
Purchasing a 1-year Savings Plan (option A) or a 1-year Reserved Instance (option B) may provide cost savings, but they are more suitable for long-
running, steady-state workloads. Since your batch jobs run for a specific period each day, using Spot Instances with the ability to scale out based
on CPU usage is a more cost-effective choice.
upvoted 12 times
Selected Answer: C
Using Spot Instances allows EC2 capacity to be purchased at significant discounts compared to On-Demand prices. The auto scaling group can
scale out to add Spot Instances when needed for the batch jobs.
If Spot Instances become unavailable, regular On-Demand Instances will be launched instead to maintain capacity. The potential for interruptions is
acceptable since failed jobs can be re-run.
upvoted 6 times
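A hedged boto3 sketch of option C, assuming an existing launch template (the template ID, subnet, and group name are placeholders): the Auto Scaling group launches Spot capacity and a target tracking policy scales out on CPU.

import boto3

autoscaling = boto3.client("autoscaling")

# Auto Scaling group that launches Spot capacity from an existing launch template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="nightly-batch-asg",      # placeholder name
    MinSize=0,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder subnet
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0abc1234567890def",  # placeholder
                "Version": "$Latest",
            }
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,  # 100% Spot above base capacity
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)

# Target tracking policy: scale out when average CPU exceeds 70%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="nightly-batch-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 70.0,
    },
)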
Selected Answer: C
Selected Answer: C
You don't really need any scaling, as the job runs on another EC2 instance if it fails on the first one. A, B, and D are all more expensive than C because
Spot Instances are cheaper than Reserved Instances.
upvoted 2 times
Since your batch jobs run for a specific period each day, using Spot Instances with the ability to scale out based on CPU usage is a more cost-
effective choice.
upvoted 1 times
c for me
upvoted 1 times
Question #506 Topic 1
A social media company is building a feature for its website. The feature will give users the ability to upload photos. The company expects
significant increases in demand during large events and must ensure that the website can handle the upload traffic from users.
A. Upload files from the user's browser to the application servers. Transfer the files to an Amazon S3 bucket.
B. Provision an AWS Storage Gateway file gateway. Upload files directly from the user's browser to the file gateway.
C. Generate Amazon S3 presigned URLs in the application. Upload files directly from the user's browser into an S3 bucket.
D. Provision an Amazon Elastic File System (Amazon EFS) file system. Upload files directly from the user's browser to the file system.
Correct Answer: C
Selected Answer: C
This approach allows users to upload files directly to S3 without passing through the application servers, reducing the load on the application and
improving scalability. It leverages the client-side capabilities to handle the file uploads and offloads the processing to S3.
upvoted 15 times
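For reference, generating an upload URL is a one-liner with boto3; the bucket name, object key, and expiry are placeholders.

import boto3

s3 = boto3.client("s3")

# The web application generates a short-lived URL; the browser then PUTs the
# photo straight to S3, bypassing the application servers entirely.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "photo-uploads-bucket", "Key": "user-123/event-photo.jpg"},  # placeholders
    ExpiresIn=300,  # URL valid for 5 minutes
)
print(upload_url)  # hand this to the browser, which uploads with an HTTP PUT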
Selected Answer: C
https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html
"You can also use presigned URLs to allow someone to upload a specific object to your Amazon S3 bucket. This allows an upload without requiring
another party to have AWS security credentials or permissions. "
upvoted 1 times
Selected Answer: A
S3 presigned url is used for sharing objects from an s3 bucket and not for uploading to an s3 bucket
upvoted 2 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html
upvoted 3 times
Selected Answer: C
Generating S3 presigned URLs allows users to upload directly to S3 instead of application servers. This removes the application servers as a
bottleneck for upload traffic.
S3 can scale to handle very high volumes of uploads with no limits on storage or throughput. Using presigned URLs leverages this scalability.
upvoted 4 times
You may use presigned URLs to allow someone to upload an object to your Amazon S3 bucket. Using a presigned URL will allow an upload without
requiring another party to have AWS security credentials or permissions.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
upvoted 1 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html
upvoted 2 times
User wants to upload picture -> server generates presigned URL and sends it to the app -> app uploads file
upvoted 2 times
Selected Answer: C
The most scalable option, because it allows users to upload files directly to Amazon S3.
upvoted 3 times
Question #507 Topic 1
A company has a web application for travel ticketing. The application is based on a database that runs in a single data center in North America.
The company wants to expand the application to serve a global user base. The company needs to deploy the application to multiple AWS Regions.
Average latency must be less than 1 second on updates to the reservation database.
The company wants to have separate deployments of its web platform across multiple Regions. However, the company must maintain a single
A. Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use the correct Regional endpoint in
B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in each Region. Use the correct Regional
C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each Region. Use the correct Regional
D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the database to each Region. Use the correct
Regional endpoint in each Regional deployment to access the database. Use AWS Lambda functions to process event streams in each Region
Correct Answer: A
Selected Answer: A
Using DynamoDB's global tables feature, you can achieve a globally consistent reservation database with low latency on updates, making it suitable
for serving a global user base. The automatic replication provided by DynamoDB eliminates the need for manual synchronization between Regions.
upvoted 17 times
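For the DynamoDB route (option A), adding a replica Region to an existing table turns it into a global table (version 2019.11.21); the table name and Regions below are placeholders.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in another Region; DynamoDB replicates writes between replicas
# automatically, so each Regional deployment can use its local endpoint.
dynamodb.update_table(
    TableName="Reservations",  # placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "ap-southeast-2"}}],
)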
Selected Answer: B
The question asks "Average latency must be less than 1 second on updates to the reservation database."
A is incorrect:
" Changes to a DynamoDB global tables are replicated asynchronously, with typical latency of between 0.5 - 2.5 seconds between AWS Regions in
the same geographic area."
B is the answer:
"All Aurora Replicas return the same data for query results with minimal replica lag. This lag is usually much less than 100 milliseconds after the
primary instance has written an update."
upvoted 1 times
https://community.aws/content/2drxEND7MtTOb2bWs2J0NlCGewP/ddb-globaltables-lag?lang=en
upvoted 1 times
Selected Answer: A
How can you update your database in the different regions with read replicas? You need to be able to read and write to the database from the
different regions.
upvoted 2 times
In my opinion it is A. The reason is that Aurora supports only up to 5 read replicas in different Regions. We don't have that limitation with
DynamoDB global tables, hence I vote for A.
upvoted 1 times
Selected Answer: B
C is out because RDS has higher replication delay, only Aurora can guarantee "less than one second". So we'd have "a single primary reservation
database that is globally consistent" in one region, and we'd have read replicas with "less than 1 second on updates" latency in other regions.
upvoted 4 times
A DynamoDB global table acts as a single table. It does not consist of primary and standby databases. It is one single global table which is
automatically kept in sync. Users can write to any of the regional endpoints and the write will be automatically propagated across regions. To have a
single primary database that is consistent does not align with DynamoDB global tables.
Option B is even worse compared to A since read replicas do not provide failover capability or fast updates from the primary database.
The answer closest to the requirement is Option A, even though it is a misfit.
upvoted 1 times
The question mentions that the average latency on updates to the regional reservation databases should be less than 1 second. Read replicas provide
asynchronous replication, so the update times will be higher. Hence we can easily scrap all the options containing read replicas from the
options. Moreover, a globally consistent database with millisecond latencies screams DynamoDB global tables.
I think the real difference is that DynamoDB is by default only eventually consistent, but the database has to be globally consistent. So it's B.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadConsistency.html
upvoted 4 times
Selected Answer: B
Selected Answer: A
Amazon DynamoDB global tables is a fully managed, serverless, multi-Region, and multi-active database. Global tables provide you 99.999%
availability, increased application resiliency, and improved business continuity. As global tables replicate your Amazon DynamoDB tables
automatically across your choice of AWS Regions, you can achieve fast, local read and write performance.
upvoted 1 times
Selected Answer: B
Amazon Aurora provides global databases that replicate your data with low latency to multiple regions. By using Aurora Read Replicas in each
Region, the company can achieve low-latency access to the data while maintaining global consistency. The use of regional endpoints ensures that
each deployment accesses the appropriate local replica, reducing latency. This solution allows the company to meet the requirement of serving a
global user base while keeping average latency less than 1 second.
upvoted 1 times
Selected Answer: B
Aurora Global DB provides native multi-master replication and automatic failover for high availability across regions.
Read replicas in each region ensure low read latency by promoting a local replica to handle reads.
A single Aurora primary region handles all writes to maintain data consistency.
Data replication and sync is managed automatically by Aurora Global DB.
Regional endpoints minimize cross-region latency.
Automatic failover promotes a replica to be the new primary if the current primary region goes down.
upvoted 1 times
Selected Answer: B
"the company must maintain a single primary reservation database that is globally consistent." --> Relational database, because it only allow writes
from one regional endpoint
DynamoDB global table allow BOTH reads and writes on all regions (“last writer wins”), so it is not single point of entry. You can set up IAM identity
based policy to restrict write access for global tables that are not in NA but it is not mentioned.
upvoted 1 times
Selected Answer: B
Global reads with local latency – If you have offices around the world, you can use an Aurora global database to keep your main sources of
information updated in the primary AWS Region. Offices in your other Regions can access the information in their own Region, with local latency.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html
D: although D is also using Aurora Global Database, there is no need for a Lambda function to sync data.
upvoted 1 times
In real life, I would use Aurora Global Database, because 1. it achieves less than 1 second of latency, and 2. a ticketing system is a very typical
traditional relational workload.
In the exam, though, I would vote for A, because option B isn't using a global database, which means you have to provide the endpoint of the primary
Region to a remote Region for updates; even if the typical round-trip latency is around 400 ms, you need a lot of professional network setup
to guarantee it, which option B doesn't mention.
upvoted 3 times
Question #508 Topic 1
A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2 instances that run in the us-west-1 Region. The company
In the event of a natural disaster in the us-west-1 Region, the company wants to recover workloads quickly in the us-west-2 Region. The company
wants no more than 24 hours of data loss on the EC2 instances. The company also wants to automate any backups of the EC2 instances.
Which solutions will meet these requirements with the LEAST administrative effort? (Choose two.)
A. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run
B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run
C. Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for the EC2 instances based on tag values.
Create an AWS Lambda function to run as a scheduled job to copy the backup data to us-west-2.
D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Define
the destination for the copy as us-west-2. Specify the backup schedule to run twice daily.
E. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Specify
Correct Answer: BD
Selected Answer: BD
Option B suggests using an EC2-backed Amazon Machine Image (AMI) lifecycle policy to automate the backup process. By configuring the policy
to run twice daily and specifying the copy to the us-west-2 Region, the company can ensure regular backups are created and copied to the
alternate region.
Option D proposes using AWS Backup, which provides a centralized backup management solution. By creating a backup vault and backup plan
based on tag values, the company can automate the backup process for the EC2 instances. The backup schedule can be set to run twice daily, and
the destination for the copy can be defined as the us-west-2 Region.
upvoted 9 times
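A hedged boto3 sketch of option D. Vault names, the role ARN, and the tag key/value are placeholders; the plan runs twice daily and copies each recovery point to a vault in us-west-2.

import boto3

backup = boto3.client("backup", region_name="us-west-1")

# Backup plan: run twice daily and copy each recovery point to a vault in us-west-2.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "ec2-dr-plan",
        "Rules": [{
            "RuleName": "twice-daily",
            "TargetBackupVaultName": "primary-vault",      # placeholder vault in us-west-1
            "ScheduleExpression": "cron(0 0,12 * * ? *)",  # every 12 hours
            "CopyActions": [{
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",  # placeholder
            }],
        }],
    }
)

# Select the EC2 instances to back up based on a tag.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-ec2-instances",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",  # placeholder
        "ListOfTags": [{"ConditionType": "STRINGEQUALS",
                        "ConditionKey": "backup", "ConditionValue": "true"}],
    },
)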
Selected Answer: BD
BD is the only choice. Although D seems to cover for B also, happy to be corrected.
upvoted 5 times
B and D are the options that meet the requirements with the least administrative effort.
B uses EC2 image lifecycle policies to automatically create AMIs of the instances twice daily and copy them to the us-west-2 Region. This automates
regional backups.
D leverages AWS Backup to define a backup plan that runs twice daily and copies backups to us-west-2. AWS Backup automates EC2 instance
backups.
Together, these options provide automated, regional EC2 backup capabilities with minimal administrative overhead.
upvoted 1 times
Selected Answer: BD
Selected Answer: BD
Selected Answer: BD
solutions are both automated and require no manual intervention to create or copy backups
upvoted 4 times
Question #509 Topic 1
A company operates a two-tier application for image processing. The application uses two Availability Zones, each with one public subnet and one
private subnet. An Application Load Balancer (ALB) for the web tier uses the public subnets. Amazon EC2 instances for the application tier use
Users report that the application is running more slowly than expected. A security audit of the web server log files shows that the application is
receiving millions of illegitimate requests from a small number of IP addresses. A solutions architect needs to resolve the immediate performance
A. Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources.
B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
C. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources.
D. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
Correct Answer: B
Selected Answer: B
Selected Answer: B
In this scenario, the security audit reveals that the application is receiving millions of illegitimate requests from a small number of IP addresses. To
address this issue, it is recommended to modify the network ACL (Access Control List) for the web tier subnets.
By adding an inbound deny rule specifically targeting the IP addresses that are consuming resources, the network ACL can block the illegitimate
traffic at the subnet level before it reaches the web servers. This will help alleviate the excessive load on the web tier and improve the application's
performance.
upvoted 8 times
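For reference, adding an inbound deny rule to the web-tier network ACL is a single call; the NACL ID, rule number, and source CIDR are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Deny rule for one offending source. NACL rules are evaluated in order of rule
# number, so the deny must come before the subnet's broad allow rules.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0abc1234567890def",  # placeholder web-tier NACL ID
    RuleNumber=90,                         # lower number = evaluated first
    Protocol="-1",                         # all protocols
    RuleAction="deny",
    Egress=False,                          # inbound rule
    CidrBlock="203.0.113.25/32",           # placeholder attacker IP
)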
Selected Answer: B
A: Wrong, as an SG cannot deny. By default everything is denied in a security group, and you add allow rules.
CD: App tier is not under attack so these are irrelevant options
B: Correct as NACL is exactly for this access control list to define rules for CIDR or IP addresses
upvoted 2 times
Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
upvoted 2 times
A is wrong
Security groups act at the network interface level, not the subnet level, and they support Allow rules only.
upvoted 2 times
Selected Answer: A
Since the bad requests are targeting the web tier, adding ACL deny rules for those IP addresses on the web subnets will block the traffic before it
reaches the instances.
Security group changes (Options A and C) would not be effective since the requests are not even reaching those resources.
Modifying the application tier ACL (Option D) would not stop the bad traffic from hitting the web tier.
upvoted 2 times
Selected Answer: B
Selected Answer: A
A global marketing company has applications that run in the ap-southeast-2 Region and the eu-west-1 Region. Applications that run in a VPC in eu-
west-1 need to communicate securely with databases that run in a VPC in ap-southeast-2.
A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC. Create an inbound rule in the eu-west-1
application security group that allows traffic from the database server IP addresses in the ap-southeast-2 security group.
B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an
inbound rule in the ap-southeast-2 database security group that references the security group ID of the application servers in eu-west-1.
C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an
inbound rule in the ap-southeast-2 database security group that allows traffic from the eu-west-1 application server IP addresses.
D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-southeast-2 VPC. After the transit gateways are
properly peered and routing is configured, create an inbound rule in the database security group that references the security group ID of the application servers in eu-west-1.
Correct Answer: C
Selected Answer: C
Answer: C -->"You cannot reference the security group of a peer VPC that's in a different Region. Instead, use the CIDR block of the peer VPC."
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
upvoted 37 times
Selected Answer: C
"You cannot reference the security group of a peer VPC that's in a different Region. Instead, use the CIDR block of the peer VPC."
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
upvoted 10 times
Selected Answer: C
After establishing the VPC peering connection, the subnet route tables need to be updated in both VPCs to route traffic to the other VPC's CIDR
blocks through the peering connection.
upvoted 2 times
Selected Answer: C
VPC Peering Connection: This allows communication between instances in different VPCs as if they are on the same network. It's a straightforward
approach to connect the two VPCs.
Subnet Route Tables: After establishing the VPC peering connection, the subnet route tables need to be updated in both VPCs to route traffic to
the other VPC's CIDR blocks through the peering connection.
Inbound Rule in Database Security Group: By creating an inbound rule in the ap-southeast-2 database security group that allows traffic from the
eu-west-1 application server IP addresses, you ensure that only the specified application servers from the eu-west-1 VPC can access the database
servers in the ap-southeast-2 VPC.
upvoted 3 times
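To make option C concrete, here is a hedged boto3 sketch of the three steps: request and accept the inter-Region peering connection, add a route for the peer CIDR, and allow database traffic from the application tier by CIDR (a security group in another Region cannot be referenced by ID). All IDs, CIDRs, and the PostgreSQL port are assumptions.

```python
import boto3

# Hypothetical IDs and CIDRs for illustration.
APP_REGION, DB_REGION = "eu-west-1", "ap-southeast-2"
APP_VPC_ID, DB_VPC_ID = "vpc-app1111111111", "vpc-db22222222222"
APP_CIDR = "10.1.0.0/16"
DB_SG_ID = "sg-db3333333333333"
DB_ROUTE_TABLE_ID = "rtb-db444444444444"

app_ec2 = boto3.client("ec2", region_name=APP_REGION)
db_ec2 = boto3.client("ec2", region_name=DB_REGION)

# 1) Request the inter-Region peering connection from the application VPC.
pcx_id = app_ec2.create_vpc_peering_connection(
    VpcId=APP_VPC_ID, PeerVpcId=DB_VPC_ID, PeerRegion=DB_REGION
)["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# 2) Accept it in the database Region once it is visible there.
db_ec2.get_waiter("vpc_peering_connection_exists").wait(
    VpcPeeringConnectionIds=[pcx_id]
)
db_ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 3) Route the application CIDR through the peering connection
#    (a matching route is also needed in the eu-west-1 route tables).
db_ec2.create_route(
    RouteTableId=DB_ROUTE_TABLE_ID,
    DestinationCidrBlock=APP_CIDR,
    VpcPeeringConnectionId=pcx_id,
)

# 4) Allow PostgreSQL traffic from the application tier by CIDR; the SG ID
#    of a peer VPC in a different Region cannot be referenced.
db_ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": APP_CIDR, "Description": "eu-west-1 app tier"}],
    }],
)
```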
B) Configure VPC peering between the ap-southeast-2 and eu-west-1 VPCs. Update routes. Allow traffic in the ap-southeast-2 database SG from the eu-west-1 application server SG.
This option establishes the correct network connectivity for the applications in eu-west-1 to reach the databases in ap-southeast-2.
Selected Answer: C
In the exam, both options B and C would pass. In the real world, both options would work.
upvoted 3 times
https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html#:~:text=You%20cannot-,reference,-
the%20security%20group
upvoted 3 times
Therefore, still C, because we cannot reference the SG ID of a peer VPC in a different Region; we should use the CIDR block.
upvoted 1 times
B is wrong because it is in a different Region, so referencing the security group ID will not work. A is wrong because you need to update the route
table. The answer should be C.
upvoted 1 times
I think the answer is C because the security groups are in different VPCs. When the question wants to allow traffic from the app VPC to the database VPC, I
think that over the peering connection you will be able to add security group rules using the private IP addresses of the app servers. I don't think the
database VPC will recognize the security group ID of another VPC.
upvoted 1 times
Selected Answer: B
Option B suggests configuring a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. By establishing this peering
connection, the VPCs can communicate with each other over their private IP addresses.
Additionally, updating the subnet route tables is necessary to ensure that the traffic destined for the remote VPC is correctly routed through the
VPC peering connection.
To secure the communication, an inbound rule is created in the ap-southeast-2 database security group. This rule references the security group ID
of the application servers in the eu-west-1 VPC, allowing traffic only from those instances. This approach ensures that only the authorized
application servers can access the databases in the ap-southeast-2 VPC.
upvoted 4 times
Question #511 Topic 1
A company is developing software that uses a PostgreSQL database schema. The company needs to configure multiple development
environments and databases for the company's developers. On average, each development environment is used for half of the 8-hour workday.
Which solution will meet these requirements MOST cost-effectively?
A. Configure each development environment with its own Amazon Aurora PostgreSQL database
B. Configure each development environment with its own Amazon RDS for PostgreSQL Single-AZ DB instances
C. Configure each development environment with its own Amazon Aurora On-Demand PostgreSQL-Compatible database
D. Configure each development environment with its own Amazon S3 bucket by using Amazon S3 Object Select
Correct Answer: C
Selected Answer: C
Option C suggests using Amazon Aurora On-Demand PostgreSQL-Compatible databases for each development environment. This option provides
the benefits of Amazon Aurora, which is a high-performance and scalable database engine, while allowing you to pay for usage on an on-demand
basis. Amazon Aurora On-Demand instances are typically more cost-effective for individual development environments compared to the
provisioned capacity options.
upvoted 11 times
Selected Answer: C
Guys, when you use the pricing calculator the cost between options B and C is really close. I doubt anyone wants to test your knowledge of exact
pricing in your Region. I think that "On-Demand" being explicitly specified in option C and not being specified in option B is the main difference
the exam wants to test here. In that case I'd assume that option B means a constantly running instance and not "On-Demand", which would make
the choice pretty obvious. Again, I don't think the AWS exam will test you on knowing that a Single-AZ instance is cheaper by 0.005 cents than Aurora :D
upvoted 7 times
Selected Answer: B
1 instance(s) x 0.245 USD hourly x (4 / 24 hours in a day) x 730 hours in a month = 29.8083 USD ---> Amazon RDS PostgreSQL instances cost
(monthly)
1 instance(s) x 0.26 USD hourly x (4 / 24 hours in a day) x 730 hours in a month = 31.6333 USD ---> Amazon Aurora PostgreSQL-Compatible DB
instances cost (monthly)
upvoted 2 times
C is correct because B is cheaper on paper, but they don't mention stopping the DB when it is not in use.
upvoted 2 times
Selected Answer: C
We have environments that are used on average 4 hours per workday = 20 hours per week. So with option C (Aurora on-demand aka serverless)
we pay for 20 hours per week. With option B (RDS) we pay for 168 hours per week (the answer does not mention anything about automating
shutdown etc.).
So even if Aurora Serverless is slightly more expensive than RDS, C is cheaper because we pay only 20 (not 168) hours per week.
upvoted 2 times
Selected Answer: B
(B)
upvoted 1 times
AWS Services Calculator is showing B cheaper by less than a dollar for the same settings for both. I used "db.r6g.large" for RDS (Single-AZ) and
Aurora and put 4 hours/day.
upvoted 7 times
Selected Answer: B
Selected Answer: B
Aurora instances will cost you ~20% more than RDS MySQL, given the same running hours.
Also, Aurora is HA.
upvoted 1 times
Selected Answer: C
Aurora allows you to pay for the hours used. 4 hour every day, you only need 1/6 cost of 24 hours per day. You can check the Aurora pricing
calculator.
upvoted 3 times
RDS Single-AZ instances only run the DB instance when in use, minimizing costs for dev environments not used full-time
RDS charges by the hour for DB instance hours used, versus Aurora clusters that have hourly uptime charges
PostgreSQL is natively supported by RDS so no compatibility issues
S3 Object Select (Option D) does not provide full database functionality
Aurora (Options A and C) has higher minimum costs than RDS even when not fully utilized
upvoted 2 times
Selected Answer: C
Taking into consideration that the environments will only run 4 hours every day and the need to save on costs, Amazon Aurora would be
suitable because it supports an auto scaling configuration where the database automatically starts up, shuts down, and scales capacity up or down
based on your application's needs. So for the rest of the day, when an environment is not in use, the database shuts down automatically when there is no
activity.
Option C would be best, as this matches the name of the service in the AWS console.
upvoted 2 times
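For readers who want to see what "Aurora on-demand / serverless" looks like in practice, here is a hedged boto3 sketch that provisions one dev environment as an Aurora PostgreSQL cluster with Serverless v2 scaling, so capacity (and cost) drops toward the minimum while the environment is idle. The identifiers, credentials, and ACU bounds are placeholders, not values from the question.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

CLUSTER_ID = "dev-env-1-cluster"  # hypothetical identifier

# Aurora Serverless v2: capacity scales between the configured ACU bounds,
# so a mostly idle dev database stays near the minimum.
rds.create_db_cluster(
    DBClusterIdentifier=CLUSTER_ID,
    Engine="aurora-postgresql",
    MasterUsername="devadmin",
    MasterUserPassword="example-password-change-me",   # placeholder only
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 4},
)

# The cluster still needs an instance; "db.serverless" makes it Serverless v2.
rds.create_db_instance(
    DBInstanceIdentifier=f"{CLUSTER_ID}-writer",
    DBClusterIdentifier=CLUSTER_ID,
    DBInstanceClass="db.serverless",
    Engine="aurora-postgresql",
)
```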
A company uses AWS Organizations with resources tagged by account. The company also uses AWS Backup to back up its AWS infrastructure.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Config to identify all untagged resources. Tag the identified resources programmatically. Use tags in the backup plan.
B. Use AWS Config to identify all resources that are not running. Add those resources to the backup vault.
C. Require all AWS account owners to review their resources to identify the resources that need to be backed up.
Correct Answer: A
Selected Answer: A
Use AWS Config to deploy the tag rule and remediate resources that are not compliant.
upvoted 3 times
AWS Config continuously evaluates resource configurations and can identify untagged resources
Resources can be programmatically tagged via the AWS SDK based on Config data
Backup plans can use tag criteria to automatically back up newly tagged resources
No manual review or resource discovery needed
upvoted 2 times
Vote A
upvoted 2 times
A is valid for me.
upvoted 3 times
Selected Answer: A
This solution allows you to leverage AWS Config to identify any untagged resources within your AWS Organizations accounts. Once identified, you
can programmatically apply the necessary tags to indicate the backup requirements for each resource. By using tags in the backup plan
configuration, you can ensure that only the tagged resources are included in the backup process, reducing operational overhead and ensuring all
necessary resources are backed up.
upvoted 4 times
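As a rough sketch of the "identify untagged resources and tag them programmatically" part of option A, the snippet below runs an AWS Config advanced query and then applies the backup tag through the Resource Groups Tagging API. The query expression, tag key/value, and the focus on EC2 instances are assumptions and would need tuning for a real account.

```python
import json

import boto3

config = boto3.client("config")
tagging = boto3.client("resourcegroupstaggingapi")

# Assumed AWS Config advanced query: EC2 instances missing the backup tag.
# The exact expression is illustrative only and may need adjustment.
QUERY = (
    "SELECT arn WHERE resourceType = 'AWS::EC2::Instance' "
    "AND tags.key != 'backup-plan'"
)

untagged_arns, token = [], ""
while True:
    kwargs = {"Expression": QUERY}
    if token:
        kwargs["NextToken"] = token
    page = config.select_resource_config(**kwargs)
    # Each result row is returned as a JSON string, e.g. {"arn": "arn:aws:ec2:..."}.
    untagged_arns += [json.loads(row)["arn"] for row in page["Results"]]
    token = page.get("NextToken")
    if not token:
        break

# Tag the resources so a tag-based AWS Backup plan picks them up automatically.
for i in range(0, len(untagged_arns), 20):  # TagResources accepts up to 20 ARNs per call
    tagging.tag_resources(
        ResourceARNList=untagged_arns[i : i + 20],
        Tags={"backup-plan": "daily"},   # assumed tag used by the backup plan
    )
```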
Question #513 Topic 1
A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a
solution that automatically resizes the images so that the images can be displayed on multiple device types. The application experiences
unpredictable traffic patterns throughout the day. The company is seeking a highly available solution that maximizes scalability.
A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to resize the images and store the images in an Amazon
S3 bucket.
B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images and store the images in an Amazon S3 bucket.
C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance. Configure a process that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket.
D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon ECS) cluster that creates a resize
job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the
resize jobs.
Correct Answer: A
Selected Answer: A
By using Amazon S3 and AWS Lambda together, you can create a serverless architecture that provides highly scalable and available image resizing
capabilities.
How can an end user upload an image to an S3 bucket with static hosting? I believe it should be a dynamic website (Answer D).
upvoted 2 times
S3 static website provides high availability and auto scaling to handle unpredictable traffic
Lambda functions invoked from the S3 site can resize images on the fly
Storing images in S3 buckets provides durability, scalability and high throughput
Serverless approach with S3 and Lambda maximizes scalability and availability
upvoted 1 times
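A minimal sketch of the Lambda side of option A is shown below: a handler triggered by S3 ObjectCreated events that resizes the upload with Pillow and writes the variants to a separate output bucket (writing back to the source bucket would re-trigger the function). The bucket name, target widths, and output format are assumptions, and Pillow has to be packaged with the function or in a layer.

```python
import io
import os
from urllib.parse import unquote_plus

import boto3
from PIL import Image  # Pillow must be bundled with the function or in a layer

s3 = boto3.client("s3")

# Assumed configuration: output bucket and target widths per device class.
OUTPUT_BUCKET = os.environ.get("OUTPUT_BUCKET", "resized-images-example")
TARGET_WIDTHS = {"mobile": 320, "tablet": 768, "desktop": 1280}


def handler(event, context):
    """Invoked by S3 ObjectCreated events on the upload bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original))

        for label, width in TARGET_WIDTHS.items():
            # Scale to the target width while preserving the aspect ratio.
            height = max(1, int(image.height * width / image.width))
            buffer = io.BytesIO()
            image.resize((width, height)).save(buffer, format=image.format or "PNG")
            buffer.seek(0)

            s3.put_object(Bucket=OUTPUT_BUCKET, Key=f"{label}/{key}", Body=buffer)
```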
Selected Answer: A
A company is running a microservices application on Amazon EC2 instances. The company wants to migrate the application to an Amazon Elastic
Kubernetes Service (Amazon EKS) cluster for scalability. The company must configure the Amazon EKS control plane with endpoint private access
set to true and endpoint public access set to false to maintain security compliance. The company must also put the data plane in private subnets.
However, the company has received error notifications because the node cannot join the cluster.
A. Grant the required permission in AWS Identity and Access Management (IAM) to the AmazonEKSNodeRole IAM role.
B. Create interface VPC endpoints to allow nodes to access the control plane.
C. Recreate nodes in the public subnet. Restrict security groups for EC2 nodes.
Correct Answer: B
Selected Answer: A
Also, EKS does not require VPC endpoints. This is not the right use case for EKS
upvoted 19 times
"Before you can launch nodes and register them into a cluster, you must create an IAM role for those nodes to use when they are launched."
upvoted 4 times
Selected Answer: B
By creating interface VPC endpoints, you can enable the necessary communication between the Amazon EKS control plane and the nodes in
private subnets. This solution ensures that the control plane maintains endpoint private access (set to true) and endpoint public access (set to false)
for security compliance.
upvoted 18 times
Selected Answer: A
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
upvoted 1 times
When Amazon EKS nodes cannot join the cluster, especially when the control plane is set to private access only, the issue typically revolves around
networking and connectivity. When the EKS control plane is configured with private access only, the nodes must communicate with the control
plane over private IP addresses. Creating VPC endpoints (specifically, com.amazonaws.<region>.eks) allows traffic between the EKS nodes and the
control plane to be routed privately within the VPC, which resolves the connectivity issue.
upvoted 2 times
I think it is B.
upvoted 1 times
Selected Answer: B
The error they have mentioned is at the network level. They are not saying authorization failed; rather, the node is unable to connect to the cluster, i.e. a
connectivity issue. So the answer must be B.
upvoted 1 times
https://docs.aws.amazon.com/eks/latest/userguide/private-clusters.html
"Any self-managed nodes must be deployed to subnets that have the VPC interface endpoints that you require. If you create a managed node
group, the VPC interface endpoint security group must allow the CIDR for the subnets, or you must add the created node security group to the
VPC interface endpoint security group."
upvoted 1 times
Selected Answer: B
B is good to go
upvoted 2 times
Selected Answer: A
Before you can launch nodes and register them into an EKS cluster, you must create an IAM role for those nodes to use when they are launched.
upvoted 2 times
Selected Answer: B
In Amazon EKS, nodes need to communicate with the EKS control plane. When the Amazon EKS control plane endpoint access is set to private, you
need to create interface VPC endpoints in the VPC where your nodes are running. This allows the nodes to access the control plane privately
without needing public internet access.
upvoted 2 times
Selected Answer: A
https://repost.aws/knowledge-center/eks-worker-nodes-cluster
https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html
upvoted 3 times
Selected Answer: B
Since the EKS control plane has public access disabled and is in private subnets, the EKS nodes in the private subnets need interface VPC endpoints
to reach the control plane API.
Creating these interface endpoints allows the EKS nodes to communicate with the control plane privately within the VPC to join the cluster.
upvoted 3 times
VPC Endpoints: When the control plane is set to private access, you need to set up VPC endpoints for the Amazon EKS service so that the nodes
in your private subnets can communicate with the EKS control plane without going through the public internet. These are known as interface
VPC endpoints.
upvoted 2 times
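To illustrate option B, here is a hedged boto3 sketch that creates the interface endpoints a fully private EKS data plane typically needs (EKS, EC2, STS, ECR, CloudWatch Logs) plus an S3 gateway endpoint for image layers. The VPC, subnet, and security group IDs are placeholders, and the endpoint security group is assumed to allow HTTPS (443) from the node subnets.

```python
import boto3

REGION = "us-east-1"                       # assumed region
ec2 = boto3.client("ec2", region_name=REGION)

# Hypothetical IDs for the cluster VPC, its private subnets, and an endpoint SG.
VPC_ID = "vpc-0123456789abcdef0"
PRIVATE_SUBNET_IDS = ["subnet-aaa11111", "subnet-bbb22222"]
ENDPOINT_SG_ID = "sg-0123456789abcdef0"

# Interface endpoints commonly required for a private EKS data plane.
INTERFACE_SERVICES = ["eks", "ec2", "sts", "ecr.api", "ecr.dkr", "logs"]

for service in INTERFACE_SERVICES:
    ec2.create_vpc_endpoint(
        VpcId=VPC_ID,
        VpcEndpointType="Interface",
        ServiceName=f"com.amazonaws.{REGION}.{service}",
        SubnetIds=PRIVATE_SUBNET_IDS,
        SecurityGroupIds=[ENDPOINT_SG_ID],
        PrivateDnsEnabled=True,            # nodes keep using the normal service DNS names
    )

# ECR stores image layers in S3, so a gateway endpoint for S3 is also needed.
route_tables = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]
)["RouteTables"]
ec2.create_vpc_endpoint(
    VpcId=VPC_ID,
    VpcEndpointType="Gateway",
    ServiceName=f"com.amazonaws.{REGION}.s3",
    RouteTableIds=[rt["RouteTableId"] for rt in route_tables],
)
```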
A company is migrating an on-premises application to AWS. The company wants to use Amazon Redshift as a solution.
Which use cases are suitable for Amazon Redshift in this scenario? (Choose three.)
A. Supporting data APIs to access data with traditional, containerized, and event-driven applications
C. Building analytics workloads during specified hours and when the application is not active
E. Scaling globally to support petabytes of data and tens of millions of requests per minute
F. Creating a secondary replica of the cluster by using the AWS Management Console
(B) is correct. You have the following options for protecting data at rest in Amazon Redshift: use server-side encryption or use client-side
encryption.
https://docs.aws.amazon.com/redshift/latest/mgmt/security-encryption.html
upvoted 1 times
Redshift is OLAP (online analytical processing), so D is wrong: "when the application is not active"
upvoted 1 times
C and E are easy.
Between A and B, I chose A because Redshift supports the Data API, and client-side encryption is not Redshift-specific.
upvoted 3 times
A: source https://aws.amazon.com/blogs/big-data/using-the-amazon-redshift-data-api-to-interact-with-amazon-redshift-clusters/
B: source: https://docs.aws.amazon.com/redshift/latest/mgmt/security-encryption.html
C: Not sure; you can configure scheduled queries, but the remark "and when the application is not active" is not relevant.
D: source https://docs.aws.amazon.com/redshift/latest/dg/c_challenges_achieving_high_performance_queries.html
E: Scaling globally is not supported; Redshift is only a regional service.
F: Only a read replica is supported, so not a secondary replica of the cluster.
upvoted 2 times
A: https://aws.amazon.com/de/blogs/big-data/get-started-with-the-amazon-redshift-data-api/
B: https://docs.aws.amazon.com/redshift/latest/mgmt/security-encryption.html
D: https://docs.aws.amazon.com/redshift/latest/dg/c_challenges_achieving_high_performance_queries.html#result-caching
Not C: Redshift is a Data Warehouse; you can use that for analytics, but it is not directly related to an "application"
Not E: "Petabytes of data" yes, but "tens of millions of requests per minute" is not a typical feature of Redshift
Not F: Replicas are not a Redshift feature
upvoted 1 times
Technically both options A and B apply, this is from the links below:
A. You can access your Amazon Redshift database using the built-in Amazon Redshift Data API.
https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html#:~:text=in%20Amazon%20Redshift-,Data%20API,-.%20Using%20this%20API
B. You can encrypt data client-side and upload the encrypted data to Amazon Redshift. In this case, you manage the encryption process, the
encryption keys, and related tools.
https://docs.aws.amazon.com/redshift/latest/mgmt/security-encryption.html#:~:text=Use-,client%2Dside,-
encryption%20%E2%80%93%20You%20can
upvoted 2 times
Amazon Redshift provides a Data API that you can use to painlessly access data from Amazon Redshift with all types of traditional, cloud-native,
and containerized, serverless web services-based and event-driven applications.
Amazon Redshift supports up to 500 concurrent queries per cluster, which may be expanded by adding more nodes to the cluster.
upvoted 3 times
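For option A, the Redshift Data API lets an application run SQL without managing JDBC/ODBC connections. Below is a hedged boto3 sketch; the cluster identifier, database, user, and query are placeholders, and the API is asynchronous, so the result is polled.

```python
import time

import boto3

data_api = boto3.client("redshift-data")

# Hypothetical cluster, database, user, and SQL for illustration.
statement_id = data_api.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="analytics_user",
    Sql="SELECT product_id, SUM(quantity) FROM sales GROUP BY product_id LIMIT 10;",
)["Id"]

# The Data API is asynchronous: poll until the statement finishes.
while True:
    status = data_api.describe_statement(Id=statement_id)["Status"]
    if status in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if status == "FINISHED":
    result = data_api.get_statement_result(Id=statement_id)
    for record in result["Records"]:
        # Each field is a typed dict such as {"longValue": 42} or {"stringValue": "x"}.
        print([list(field.values())[0] for field in record])
```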
To reduce query runtime and improve system performance, Amazon Redshift caches the results of certain types of queries in memory on the
leader node. When a user submits a query, Amazon Redshift checks the results cache for a valid, cached copy of the query results. If a match is
found in the result cache, Amazon Redshift uses the cached results and doesn't run the query. Result caching is transparent to the user.
upvoted 1 times
The key use cases for Amazon Redshift that fit this scenario are:
B) Redshift supports both client-side and server-side encryption to protect sensitive data.
C) Redshift is well suited for running batch analytics workloads during off-peak times without affecting OLTP systems.
E) Redshift can scale to massive datasets and concurrent users to support large analytics workloads.
upvoted 2 times
Why E, lol? It's a data warehouse! It has no need to support millions of requests; that is not mentioned anywhere
(https://aws.amazon.com/redshift/features)
In fact, the Redshift editor supports a max of 500 connections and a workgroup supports a max of 2000 connections at once; see its quotas page
Redshift has a cache layer, D is correct
upvoted 3 times
https://docs.aws.amazon.com/redshift/latest/mgmt/security-encryption.html
upvoted 1 times
Quote: "The Data API enables you to seamlessly access data from Redshift Serverless with all types of traditional, cloud-native, and containerized
serverless web service-based applications and event-driven applications." at https://aws.amazon.com/blogs/big-data/use-the-amazon-redshift-
data-api-to-interact-with-amazon-redshift-serverless/ (28/4/2023). Choose A; B and C are the other correct answers.
upvoted 2 times
https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html
upvoted 2 times
B. Supporting client-side and server-side encryption: Amazon Redshift supports both client-side and server-side encryption for improved data
security.
C. Building analytics workloads during specified hours and when the application is not active: Amazon Redshift is optimized for running complex
analytic queries against very large datasets, making it a good choice for this use case.
E. Scaling globally to support petabytes of data and tens of millions of requests per minute: Amazon Redshift is designed to handle petabytes of
data, and to deliver fast query and I/O performance for virtually any size dataset.
upvoted 4 times
A company provides an API interface to customers so the customers can retrieve their financial information. The company expects a larger volume of requests during peak usage times of the day.
The company requires the API to respond consistently with low latency to ensure customer satisfaction. The company needs to provide a compute host for the API.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use an Application Load Balancer and Amazon Elastic Container Service (Amazon ECS).
B. Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.
C. Use an Application Load Balancer and an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
D. Use Amazon API Gateway and AWS Lambda functions with reserved concurrency.
Correct Answer: B
Selected Answer: B
In the context of the given scenario, where the company wants low latency and consistent performance for their API during peak usage times, it
would be more suitable to use provisioned concurrency. By allocating a specific number of concurrent executions, the company can ensure that
there are enough function instances available to handle the expected load and minimize the impact of cold starts. This will result in lower latency
and improved performance for the API.
upvoted 11 times
Selected Answer: B
Selected Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/lambda-concurrency.html#reserved-and-provisioned
Consistency decreases if you exceed your provisioned instances. Let's say you have 1000 (default) provisioned instances and the load is 1500. The
new 500 will have to wait until the first 1000 concurrent calls finish. This is solved by increasing the provisioned concurrency to 1500.
upvoted 2 times
Selected Answer: A
So I have my doubts here. The question also states: "The company needs to provide a compute host for the API." IMHO this implies some sort of
physical host which has to be provided by the customer. Translating this further to AWS, this would mean an EC2 instance, and then I would go for ECS instead of EKS.
Please share your opinion.
upvoted 6 times
API Gateway handles the API requests and integration with Lambda
Lambda automatically scales compute without managing servers
Provisioned concurrency ensures consistent low latency by keeping functions initialized
No need to manage containers or orchestration platforms as with ECS/EKS
upvoted 2 times
Selected Answer: B
The company requires the API to respond consistently with low latency to ensure customer satisfaction, especially during high peak periods; there is
no mention of cost efficiency. Hence provisioned concurrency is the best option.
Provisioned concurrency is the number of pre-initialized execution environments you want to allocate to your function. These execution
environments are prepared to respond immediately to incoming function requests. Configuring provisioned concurrency incurs charges to your
AWS account.
https://docs.aws.amazon.com/lambda/latest/dg/provisioned-concurrency.html#:~:text=for%20a%20function.-,Provisioned%20concurrency,-
%E2%80%93%20Provisioned%20concurrency%20is
upvoted 1 times
Selected Answer: B
AWS Lambda provides a highly scalable and distributed infrastructure that automatically manages the underlying compute resources. It
automatically scales your API based on the incoming request load, allowing it to respond consistently with low latency, even during peak times.
AWS Lambda takes care of infrastructure provisioning, scaling, and resource management, allowing you to focus on writing the code for your API
logic.
upvoted 3 times
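As a small illustration of option B, the sketch below configures provisioned concurrency on a Lambda alias with boto3. The function name, alias, and concurrency count are assumptions; provisioned concurrency applies to a published version or alias, never to $LATEST, and the alias is assumed to already exist.

```python
import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "financial-info-api"   # hypothetical function name
ALIAS = "live"                         # assumed existing alias (otherwise use create_alias)

# Publish the current code as an immutable version and point the alias at it.
version = lambda_client.publish_version(FunctionName=FUNCTION_NAME)["Version"]
lambda_client.update_alias(
    FunctionName=FUNCTION_NAME, Name=ALIAS, FunctionVersion=version
)

# Keep 50 execution environments initialized so peak-time requests skip cold starts.
lambda_client.put_provisioned_concurrency_config(
    FunctionName=FUNCTION_NAME,
    Qualifier=ALIAS,
    ProvisionedConcurrentExecutions=50,   # assumed value, sized to peak traffic
)
```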
Question #517 Topic 1
A company wants to send all AWS Systems Manager Session Manager logs to an Amazon S3 bucket for archival purposes.
Which solution will meet this requirement with the MOST operational efficiency?
A. Enable S3 logging in the Systems Manager console. Choose an S3 bucket to send the session data to.
B. Install the Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Export the logs to an S3 bucket from the group for archival
purposes.
C. Create a Systems Manager document to upload all server logs to a central S3 bucket. Use Amazon EventBridge to run the Systems Manager
D. Install an Amazon CloudWatch agent. Push all logs to a CloudWatch log group. Create a CloudWatch logs subscription that pushes any
incoming log events to an Amazon Kinesis Data Firehose delivery stream. Set Amazon S3 as the destination.
Correct Answer: A
Selected Answer: A
You can send logs to Amazon S3 from AWS Systems Manager Session Manager. Here are the steps to do so:
Enable S3 Logging: Open the AWS Systems Manager console. In the navigation pane, choose Session Manager. Choose the Preferences tab, and
then choose Edit. Select the check box next to Enable under S3 logging.
Create an S3 Bucket: To store the Session Manager logs, create an S3 bucket to hold the audit logs from the Session Manager interactive shell
usage.
Configure IAM Role: AWS Systems Manager Agent (SSM Agent) uses the same AWS Identity and Access Management (IAM) role to activate itself
and upload logs to Amazon S3. You can use either an IAM instance profile that’s attached to an Amazon Elastic Compute Cloud (Amazon EC2)
instance or the IAM role that’s configured for the Default Host Management Configuration.
upvoted 6 times
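Session Manager stores these preferences in a Session-type SSM document named SSM-SessionManagerRunShell. As a hedged sketch (the bucket name, prefix, and encryption flag are assumptions), the same S3 logging setting shown in the console can be applied programmatically by updating that document; if the preferences document has never been created, use create_document with DocumentType="Session" instead.

```python
import json

import boto3

ssm = boto3.client("ssm")

# Session Manager keeps its regional preferences in this Session-type document.
PREFS_DOC = "SSM-SessionManagerRunShell"

# Assumed bucket and options; mirrors ticking "Enable" under S3 logging
# on the Preferences tab of the Session Manager console.
preferences = {
    "schemaVersion": "1.0",
    "description": "Session Manager regional preferences",
    "sessionType": "Standard_Stream",
    "inputs": {
        "s3BucketName": "session-manager-archive-example",
        "s3KeyPrefix": "session-logs/",
        "s3EncryptionEnabled": True,
    },
}

ssm.update_document(
    Name=PREFS_DOC,
    Content=json.dumps(preferences),
    DocumentVersion="$LATEST",
)
```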
A. You can choose to store session log data in a specified Amazon Simple Storage Service (Amazon S3) bucket for debugging and troubleshooting
purposes.
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html#session-manager-logging-s3
upvoted 1 times
Selected Answer: A
You can choose to store session log data in a specified Amazon Simple Storage Service (Amazon S3) bucket for debugging and troubleshooting
purposes.
upvoted 1 times
You can configure log archiving to S3 on the Session Manager -> Preferences tab. Another option is CloudWatch Logs.
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html#session-manager-logging-s3
upvoted 1 times
Selected Answer: A
- Simplicity - Enabling S3 logging requires just a simple configuration in the Systems Manager console to specify the destination S3 bucket. No other services need to be configured.
- Direct integration - Systems Manager has native support to send session logs to S3 through this feature. No need for intermediary services.
- Automated flow - Once S3 logging is enabled, the session logs automatically flow to the S3 bucket without manual intervention.
- Easy management - The S3 bucket can be managed independently for log storage and archival purposes without impacting Systems Manager.
- Cost-effectiveness - No charges for intermediate CloudWatch or Kinesis services. Just basic S3 storage costs.
- Minimal overhead - No ongoing management of complex pipeline of services. Direct logs to S3 minimizes overhead.
upvoted 2 times
Selected Answer: A
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html
upvoted 1 times
Answer A. https://aws-labs.net/winlab5-manageinfra/sessmgrlog.html
upvoted 1 times
Selected Answer: A
B could be an option, by installing a logging package on all managed systems/EC2s etc. https://docs.aws.amazon.com/systems-manager/latest/userguide/distributor-working-with-packages-deploy.html
Selected Answer: A
It should be "A".
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-logging.html
upvoted 1 times
Selected Answer: B
BBBBBBBBB
upvoted 1 times
The option 'A' says "Enable S3 logging in the Systems Manager console." This means that you will enable the logs FOR S3 events, and that is not
what the question asks. My vote is for Option B, based on this article: https://docs.aws.amazon.com/AmazonS3/latest/userguide/logging-with-S3.html
upvoted 1 times
option A does not involve CloudWatch, while option D does. Therefore, in terms of operational overhead, option A would generally have less
complexity and operational overhead compared to option D.
Option A simply enables S3 logging in the Systems Manager console, allowing you to directly send session logs to an S3 bucket. This approach is
straightforward and requires minimal configuration.
On the other hand, option D involves installing and configuring the Amazon CloudWatch agent, creating a CloudWatch log group, setting up a
CloudWatch Logs subscription, and configuring an Amazon Kinesis Data Firehose delivery stream to store logs in an S3 bucket. This requires
additional setup and management compared to option A.
So, if minimizing operational overhead is a priority, option A would be a simpler and more straightforward choice.
upvoted 4 times
Question #518 Topic 1
An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A solutions architect wants to increase the database's disk space without downtime.
Which solution meets these requirements with the LEAST amount of effort?
D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance
Correct Answer: A
Selected Answer: A
Enabling storage autoscaling allows RDS to automatically adjust the storage capacity based on the application's needs. When the storage usage
exceeds a predefined threshold, RDS will automatically increase the allocated storage without requiring manual intervention or causing downtime.
This ensures that the RDS database has sufficient disk space to handle the increasing storage requirements.
upvoted 11 times
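For reference, enabling RDS storage autoscaling on an existing instance amounts to setting a storage ceiling; a minimal boto3 sketch is shown below. The instance identifier and the maximum size are assumptions.

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage enables storage autoscaling up to that ceiling;
# the identifier and value here are placeholders.
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql-db",
    MaxAllocatedStorage=1000,   # GiB ceiling that RDS may scale up to automatically
    ApplyImmediately=True,
)
```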
Selected Answer: A
Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for PostgreSQL, Amazon RDS for SQL Server and Amazon RDS for Oracle support
RDS Storage Auto Scaling. RDS Storage Auto Scaling automatically scales storage capacity in response to growing database workloads, with zero
downtime.
upvoted 2 times
Selected Answer: A
RDS Storage Auto Scaling continuously monitors actual storage consumption, and scales capacity up automatically when actual utilization
approaches provisioned storage capacity. Auto Scaling works with new and existing database instances. You can enable Auto Scaling with just a few
clicks in the AWS Management Console. There is no additional cost for RDS Storage Auto Scaling. You pay only for the RDS resources needed to
run your applications.
https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-
scaling/#:~:text=of%20the%20rest.-,RDS%20Storage%20Auto%20Scaling,-continuously%20monitors%20actual
upvoted 2 times
Selected Answer: A
Quote "Amazon RDS now supports Storage Auto Scaling" and "... with zero downtime." (Jun 20th 2019) at https://aws.amazon.com/about-
aws/whats-new/2019/06/rds-storage-auto-scaling/
upvoted 2 times
See “Amazon RDS now supports Storage Auto Scaling. Posted On: Jun 20, 2019. Starting today, Amazon RDS for MariaDB, Amazon RDS for MySQL
Amazon RDS for PostgreSQL, Amazon RDS for SQL Server and Amazon RDS for Oracle support RDS Storage Auto Scaling. RDS Storage Auto
Scaling automatically scales storage capacity in response to growing database workloads, with zero downtime.” at https://aws.amazon.com/about-
aws/whats-new/2019/06/rds-storage-auto-scaling/
upvoted 2 times
Selected Answer: A
A consulting company provides professional services to customers worldwide. The company provides solutions and tools for customers to
expedite gathering and analyzing data on AWS. The company needs to centrally manage and deploy a common set of solutions and tools for all of its customers.
Correct Answer: B
Selected Answer: B
AWS Service Catalog allows you to create and manage catalogs of IT services that can be deployed within your organization. With Service Catalog,
you can define a standardized set of products (solutions and tools in this case) that customers can self-service provision. By creating Service
Catalog products, you can control and enforce the deployment of approved and validated solutions and tools.
upvoted 9 times
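A rough boto3 sketch of the Service Catalog approach is shown below: create a central portfolio, add one product backed by a CloudFormation template, and share the portfolio with a customer account. The names, template URL, and account ID are all placeholders.

```python
import boto3

sc = boto3.client("servicecatalog")

# 1) A central portfolio for the common tooling (names are placeholders).
portfolio_id = sc.create_portfolio(
    DisplayName="Data Tooling",
    ProviderName="Consulting IT",
)["PortfolioDetail"]["Id"]

# 2) A product backed by a CloudFormation template (hypothetical S3 URL).
product = sc.create_product(
    Name="data-ingestion-pipeline",
    Owner="Consulting IT",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Type": "CLOUD_FORMATION_TEMPLATE",
        "Info": {"LoadTemplateFromURL": "https://example-bucket.s3.amazonaws.com/pipeline.yaml"},
    },
)
product_id = product["ProductViewDetail"]["ProductViewSummary"]["ProductId"]

# 3) Put the product in the portfolio and share it with a customer account.
sc.associate_product_with_portfolio(ProductId=product_id, PortfolioId=portfolio_id)
sc.create_portfolio_share(PortfolioId=portfolio_id, AccountId="111122223333")
```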
Selected Answer: B
Centralized management - Products can be maintained in a single catalog for easy discovery and governance.
Self-service access - Customers can deploy the solutions on their own without manual intervention.
Standardization - Products provide pre-defined templates for consistent deployment.
Access control - Granular permissions can be applied to restrict product visibility and access.
Reporting - Service Catalog provides detailed analytics on product usage and deployments.
upvoted 4 times
Selected Answer: B
AWS Service Catalog lets you centrally manage your cloud resources to achieve governance at scale of your infrastructure as code (IaC) templates,
written in CloudFormation or Terraform. With AWS Service Catalog, you can meet your compliance requirements while making sure your customers
can quickly deploy the cloud resources they need.
https://aws.amazon.com/servicecatalog/#:~:text=How%20it%20works-,AWS%20Service%20Catalog,-lets%20you%20centrally
upvoted 1 times
Selected Answer: B
https://docs.aws.amazon.com/servicecatalog/latest/adminguide/introduction.html
upvoted 2 times
Question #520 Topic 1
A company is designing a new web application that will run on Amazon EC2 Instances. The application will use Amazon DynamoDB for backend
data storage. The application traffic will be unpredictable. The company expects that the application read and write throughput to the database
will be moderate to high. The company needs to scale in response to application traffic.
Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set DynamoDB auto scaling to a
B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table
D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard Infrequent Access (DynamoDB Standard-IA) table class.
Correct Answer: B
B for me. Provisioned is for when we know how much traffic will come, but it's unpredictable here, so we have to go for on-demand.
upvoted 11 times
Selected Answer: B
Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class.
This option allows DynamoDB to automatically adjust to varying traffic patterns, which is ideal for unpredictable workloads. The Standard table
class is suitable for applications with moderate to high read and write throughput, and on-demand mode ensures that you are billed based on the
actual usage, providing cost efficiency for variable traffic patterns.
upvoted 1 times
Selected Answer: B
On demand
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
"With on-demand capacity mode, DynamoDB charges you for the data reads and writes your application performs on your tables. You do not need
to specify how much read and write throughput you expect your application to perform because DynamoDB instantly accommodates your
workloads as they ramp up or down."
upvoted 2 times
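To show how little there is to manage with option B, here is a minimal boto3 sketch that creates a Standard-class table in on-demand mode. The table and key names are placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand capacity is selected purely through BillingMode; the table and
# key names are hypothetical.
dynamodb.create_table(
    TableName="app-data",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",   # no provisioned RCU/WCU to manage or scale
    TableClass="STANDARD",           # Standard table class, as in option B
)
```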
Selected Answer: B
Leaning towards B, it's hard to predict the capacity for A, and autoscaling doesn't respond fast
upvoted 2 times
Selected Answer: A
It's A.
Remember that the company expects that the application read and write throughput to the database will be moderate to high.
Selected Answer: D
Unpredictable = on-demand
upvoted 2 times
Selected Answer: B
With On-Demand mode, you only pay for what you use instead of over-provisioning capacity. This avoids idle capacity costs.
DynamoDB Standard provides the fastest performance needed for moderate-high traffic apps vs Standard-IA which is for less frequent access.
Auto scaling with provisioned capacity can also work but requires more administrative effort to tune the scaling thresholds.
upvoted 1 times
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html
upvoted 1 times
Technically both options A and B will work. But this statement 'traffic will be unpredictable' rules out option A, because 'provisioned mode' was
made for scenarios where traffic is predictable.
So I will stick with B, because 'on-demand mode' is made for unpredictable traffic and instantly accommodates workloads as they ramp up or
down.
upvoted 2 times
Selected Answer: A
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html
upvoted 4 times
Selected Answer: C
Not B for sure, "The company needs to scale in response to application traffic."
Between A and C, I would choose C, because it's a new application and the traffic will be moderate to high. So by choosing C, it's both cost-effective and scalable.
upvoted 1 times
"With provisioned capacity mode, you specify the number of reads and writes per second that you expect your application to require, and you are
billed based on that. Furthermore if you can forecast your capacity requirements you can also reserve a portion of DynamoDB provisioned capacity
and optimize your costs even further.
With provisioned capacity you can also use auto scaling to automatically adjust your table’s capacity based on the specified utilization rate to
ensure application performance, and also to potentially reduce costs. To configure auto scaling in DynamoDB, set the minimum and maximum
levels of read and write capacity in addition to the target utilization percentage."
https://docs.aws.amazon.com/wellarchitected/latest/serverless-applications-lens/capacity.html
upvoted 3 times
Question #521 Topic 1
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an
organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team's own AWS
account.
The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all of the teams' DynamoDB tables.
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret
from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access
key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust
policy to trust a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access
to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the
DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the
application to use the correct certificate to authenticate and read the DynamoDB table.
Correct Answer: C
Selected Answer: C
IAM Roles: IAM roles provide a secure way to grant permissions to entities within AWS. By creating an IAM role in each business account named
BU_ROLE with the necessary permissions to access the DynamoDB table, the access can be controlled at the IAM role level.
Cross-Account Access: By configuring a trust policy in the BU_ROLE that trusts a specific role in the inventory application account (APP_ROLE), you
establish a trusted relationship between the two accounts.
Least Privilege: By creating a specific IAM role (BU_ROLE) in each business account and granting it access only to the required DynamoDB table,
you can ensure that each team's table is accessed with the least privilege principle.
Security Token Service (STS): The use of STS AssumeRole API operation in the inventory application account allows the application to assume the
cross-account role (BU_ROLE) in each business account.
upvoted 28 times
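A minimal sketch of the application-side role chaining described above is shown below, assuming the trust policies and permissions already exist. The account IDs, role name, and table name are placeholders, and Scan is used only for brevity.

```python
import boto3

sts = boto3.client("sts")

# Hypothetical account IDs, role name, and table name.
BUSINESS_ACCOUNTS = ["111111111111", "222222222222"]
BU_ROLE_NAME = "BU_ROLE"
TABLE_NAME = "product-inventory"

all_items = []
for account_id in BUSINESS_ACCOUNTS:
    # The application runs as APP_ROLE, which is allowed to call AssumeRole;
    # each BU_ROLE trusts APP_ROLE in its trust policy.
    creds = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{BU_ROLE_NAME}",
        RoleSessionName="inventory-report",
    )["Credentials"]

    dynamodb = boto3.resource(
        "dynamodb",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # Scan keeps the sketch short; a real report might use Query or Export to S3.
    all_items += dynamodb.Table(TABLE_NAME).scan()["Items"]

print(f"Collected {len(all_items)} items across {len(BUSINESS_ACCOUNTS)} accounts")
```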
Using cross-account IAM roles and role chaining allows the inventory application to securely access resources in other accounts. The roles provide
temporary credentials and can be permissions controlled.
upvoted 2 times
Selected Answer: C
A company runs container applications by using Amazon Elastic Kubernetes Service (Amazon EKS). The company's workload is not consistent
throughout the day. The company wants Amazon EKS to scale in and out according to the workload.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Choose two.)
B. Use the Kubernetes Metrics Server to activate horizontal pod autoscaling.
C. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster.
Correct Answer: BC
Selected Answer: BC
Using the Kubernetes Metrics Server (B) enables horizontal pod autoscaling to dynamically scale pods based on CPU/memory usage. This allows
scaling at the application tier level.
The Kubernetes Cluster Autoscaler (C) automatically adjusts the number of nodes in the EKS cluster in response to pod resource requirements and
events. This allows scaling at the infrastructure level.
upvoted 6 times
Selected Answer: BC
By combining the Kubernetes Cluster Autoscaler (option C) to manage the number of nodes in the cluster and enabling horizontal pod autoscaling
(option B) with the Kubernetes Metrics Server, you can achieve automatic scaling of your EKS cluster and container applications based on workload
demand. This approach minimizes operational overhead as it leverages built-in Kubernetes functionality and automation mechanisms.
upvoted 4 times
Selected Answer: BC
B and C are right.
upvoted 1 times
Question #523 Topic 1
A company runs a microservice-based serverless web application. The application must be able to retrieve data from multiple Amazon DynamoDB
tables A solutions architect needs to give the application the ability to retrieve the data with no impact on the baseline performance of the
application.
Which solution will meet these requirements in the MOST operationally efficient way?
Correct Answer: D
just passed yesterday 30-05-23, around 75% of the exam came from here, some with light changes.
upvoted 30 times
Selected Answer: A
Is there anyone who has recently passed the exam who can tell me approximately how many of the original questions are in the actual exam?
upvoted 1 times
Selected Answer: D
https://docs.amazonaws.cn/en_us/athena/latest/ug/connect-to-a-data-source.html
upvoted 5 times
I'll go with D, as A, B, and C look like too much work or are irrelevant. Although I'm not sure how AFQ actually achieves the read without impacting performance.
upvoted 2 times
Selected Answer: D
Not A - Pipe Resolvers require coding, would not consider that 'operationally efficient'
Not B - CloudFront caches web content at the edge, not DynamoDB query results for apps
Not C - Neither API Gateway or Lambda have anything to do with DynamoDB performance
D - Can do exactly that
upvoted 7 times
Selected Answer: A
For an operationally efficient solution that minimizes impact on baseline performance in a microservice-based serverless web application retrieving
data from multiple DynamoDB tables, Amazon CloudFront with Lambda@Edge functions (Option B) is often the most suitable choice
upvoted 1 times
A is correct.
https://aws.amazon.com/blogs/mobile/appsync-pipeline-resolvers-2/
upvoted 1 times
Selected Answer: A
https://aws.amazon.com/pm/appsync/?trk=66d9071f-eec2-471d-9fc0-c374dbda114d&sc_channel=ps&ef_id=CjwKCAjww7KmBhAyEiwA5-
PUSi9OTSRu78WOh7NuprwbbfjyhVXWI4tBlPquEqRlXGn-
HLFh5qOqfRoCOmMQAvD_BwE:G:s&s_kwcid=AL!4422!3!646025317347!e!!g!!aws%20appsync!19610918335!148058250160
upvoted 1 times
Selected Answer: D
I like D) the most. D. Amazon Athena Federated Query with a DynamoDB connector.
I don't like A) since this is not a GraphQL query.
I don't like B), since querying multiple tables in DynamoDB from Lambda may not be efficient.
upvoted 1 times
Question #524 Topic 1
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail enabled.
Which solution will meet these requirements with the LEAST effort?
A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Correct Answer: C
Selected Answer: C
https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html
When troubleshooting you will want to query specific things in the log and Athena provides query language for that.
QuickSight is a data analytics and visualization tool. You can use it to aggregate data and maybe make a dashboard for the number of errors by type, etc.,
but that doesn't help you troubleshoot anything.
C is correct
upvoted 2 times
Selected Answer: C
"Search CloudTrail logs with Amazon QuickSight", that doesn't work. QuickSight can visualize Athena query results, so "search CloudTrail logs with
Amazon Athena, then create a dashboard with Amazon QuickSight" would make sense. But QuickSight without Athena won't work.
upvoted 3 times
Selected Answer: D
The question asks specifically to "analyze and troubleshoot". While Athena is easy to get the data, you then just have a list of logs. Not very useful
to troubleshoot...
upvoted 1 times
Selected Answer: C
Athena allows you to run SQL queries on data in Amazon S3, including CloudTrail logs. It is the easiest way to query the logs and identify specific
errors without needing to write any custom code or scripts.
With Athena, you can write simple SQL queries to filter the CloudTrail logs for the "AccessDenied" and "UnauthorizedOperation" error codes. This
will return the relevant log entries that you can then analyze.
upvoted 4 times
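As a hedged sketch of option C, the snippet below runs an Athena query over a CloudTrail table and prints the matching error events. It assumes a table named cloudtrail_logs already exists in the default database (for example, created with the DDL from the Athena CloudTrail documentation); the table name, output location, and exact error codes vary by account and service.

```python
import time

import boto3

athena = boto3.client("athena")

# Assumed table/database names and result location.
QUERY = """
SELECT useridentity.arn, eventname, errorcode, errormessage, eventtime
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'Client.UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 50
"""

execution_id = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query completes, then fetch the rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```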
Selected Answer: C
C for me. Using Athena with CloudTrail logs is a powerful way to enhance your analysis of AWS service activity. For example, you can use queries to
identify trends and further isolate activity by attributes, such as source IP address or user.
https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#:~:text=CloudTrail%20Lake%20documentation.-,Using%20Athena,-
with%20CloudTrail%20logs
upvoted 1 times
Selected Answer: C
Amazon Athena is an interactive query service provided by AWS that enables you to analyze data. It is a bit more suitable here, integrated with
CloudTrail, as it permits verifying WHO accessed the service.
upvoted 1 times
I struggled between C and D for a long time and asked ChatGPT. ChatGPT says D is better, since Athena requires more expertise in SQL.
upvoted 1 times
Selected Answer: D
Amazon QuickSight supports logging the following actions as events in CloudTrail log files:
- Whether the request was made with root or AWS Identity and Access Management user credentials
- Whether the request was made with temporary security credentials for an IAM role or federated user
- Whether the request was made by another AWS service
https://docs.aws.amazon.com/quicksight/latest/user/logging-using-cloudtrail.html
upvoted 1 times
Selected Answer: C
"Using Athena with CloudTrail logs is a powerful way to enhance your analysis of AWS service activity."
https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html
upvoted 1 times
Selected Answer: D
It specifies analyzing, not querying logs, which is why option D is the best one, as it provides dashboards to analyze the logs.
upvoted 3 times
Question #525 Topic 1
A company wants to add its existing AWS usage cost to its operation cost dashboard. A solutions architect needs to recommend a solution that
will give the company access to its usage cost programmatically. The company must be able to access cost data for the current year and forecast costs for the next 12 months.
Which solution will meet these requirements with the LEAST operational overhead?
A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.
B. Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
C. Configure AWS Budgets actions to send usage cost data to the company through FTP.
D. Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.
Correct Answer: A
Selected Answer: A
Keywords: 12 months, API support
https://docs.aws.amazon.com/cost-management/latest/userguide/ce-what-is.html
upvoted 5 times
Selected Answer: A
Access usage cost-related data by using the AWS Cost Explorer API with pagination
upvoted 2 times
Selected Answer: A
Answer is: A
It says dashboard = Cost Explorer, therefore C & D are eliminated.
It also says programmatically, which means no manual intervention, therefore an API.
upvoted 4 times
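A short boto3 sketch of option A is shown below: pull the current year's monthly costs with NextPageToken pagination, then request a forecast. The date windows are illustrative only.

```python
import boto3

ce = boto3.client("ce")   # Cost Explorer API

# Current-year actuals, paginated with NextPageToken (dates are illustrative).
results, token = [], None
while True:
    kwargs = {
        "TimePeriod": {"Start": "2024-01-01", "End": "2024-12-31"},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
    }
    if token:
        kwargs["NextPageToken"] = token
    page = ce.get_cost_and_usage(**kwargs)
    results += page["ResultsByTime"]
    token = page.get("NextPageToken")
    if not token:
        break

for month in results:
    print(month["TimePeriod"]["Start"], month["Total"]["UnblendedCost"]["Amount"])

# Forecast for the coming months (the forecast window is an assumption).
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2025-01-01", "End": "2025-06-30"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print("Forecast total:", forecast["Total"]["Amount"])
```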
Selected Answer: A
A solutions architect is reviewing the resilience of an application. The solutions architect notices that a database administrator recently failed
over the application's Amazon Aurora PostgreSQL database writer instance as part of a scaling exercise. The failover resulted in 3 minutes of downtime.
Which solution will reduce the downtime for scaling exercises with the LEAST operational overhead?
A. Create more Aurora PostgreSQL read replicas in the cluster to handle the load during failover.
B. Set up a secondary Aurora PostgreSQL cluster in the same AWS Region. During failover, update the application to use the secondary
C. Create an Amazon ElastiCache for Memcached cluster to handle the load during failover.
D. Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.
Correct Answer: D
Selected Answer: D
"RDS Proxy reduces client recovery time after failover by up to 79% for Amazon Aurora MySQL "
https://aws.amazon.com/de/blogs/database/improving-application-availability-with-amazon-rds-proxy/
upvoted 2 times
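For completeness, here is a hedged sketch of provisioning an RDS Proxy in front of the Aurora cluster with boto3; the application then connects to the proxy endpoint instead of the cluster writer endpoint. The names, ARNs, subnets, and cluster identifier are placeholders, and in practice you would wait for the proxy to become available before registering targets.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical names, ARNs, subnets, and cluster identifier.
proxy = rds.create_db_proxy(
    DBProxyName="aurora-pg-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-aaa11111", "subnet-bbb22222"],
    RequireTLS=True,
)

# Point the proxy at the Aurora cluster (default target group).
rds.register_db_proxy_targets(
    DBProxyName="aurora-pg-proxy",
    DBClusterIdentifiers=["app-aurora-pg-cluster"],
)
print("Proxy endpoint:", proxy["DBProxy"]["Endpoint"])
```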
Selected Answer: D
D. Set up an Amazon RDS proxy for the database. Update the application to use the proxy endpoint.
upvoted 2 times
Selected Answer: C
Availability is the main requirement here. Even if RDS proxy is used, it will still find the writer instance unavailable during the scaling exercise.
Best option is to create an Amazon ElastiCache for Memcached cluster to handle the load during the scaling operation.
upvoted 1 times
A company has a regional subscription-based streaming service that runs in a single AWS Region. The architecture consists of web servers and
application servers on Amazon EC2 instances. The EC2 instances are in Auto Scaling groups behind Elastic Load Balancers. The architecture
includes an Amazon Aurora global database cluster that extends across multiple Availability Zones.
The company wants to expand globally and to ensure that its application has minimal downtime.
A. Extend the Auto Scaling groups for the web tier and the application tier to deploy instances in Availability Zones in a second Region. Use an
Aurora global database to deploy the database in the primary Region and the second Region. Use Amazon Route 53 health checks with a
B. Deploy the web tier and the application tier to a second Region. Add an Aurora PostgreSQL cross-Region Aurora Replica in the second
Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
C. Deploy the web tier and the application tier to a second Region. Create an Aurora PostgreSQL database in the second Region. Use AWS
Database Migration Service (AWS DMS) to replicate the primary database to the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
D. Deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the database in the
primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region. Promote the secondary to primary as needed.
Correct Answer: D
Selected Answer: D
Auto Scaling groups can span Availability Zones, but not AWS regions.
Hence the best option is to deploy the web tier and the application tier to a second Region. Use an Amazon Aurora global database to deploy the
database in the primary Region and the second Region. Use Amazon Route 53 health checks with a failover routing policy to the second Region.
Promote the secondary to primary as needed.
upvoted 19 times
Selected Answer: D
EC2 Auto Scaling groups are regional constructs. They can span Availability Zones, but not AWS regions
upvoted 2 times
Selected Answer: D
Using an Aurora global database that spans both the primary and secondary regions provides automatic replication and failover capabilities for the
database tier.
Deploying the web and application tiers to a second region provides fault tolerance for those components.
Using Route53 health checks and failover routing will route traffic to the secondary region if the primary region becomes unavailable.
This provides fault tolerance across all tiers of the architecture while minimizing downtime. Promoting the secondary database to primary ensures
the second region can continue operating if needed.
A is close, but doesn't provide an automatic database failover capability.
B and C provide database replication, but not automatic failover.
So D is the most comprehensive and fault tolerant architecture.
upvoted 3 times
Selected Answer: D
Answer D
upvoted 1 times
Selected Answer: D
B is correct!
upvoted 1 times
A is the only remaining answer that keeps using the ELB; the web, application, and database tiers are all taken care of by replicating into the second Region, and Route 53 then handles failover across the Regions.
upvoted 1 times
A data analytics company wants to migrate its batch processing system to AWS. The company receives thousands of small data files periodically
during the day through FTP. An on-premises batch job processes the data files overnight. However, the batch job takes hours to finish running.
The company wants the AWS solution to process incoming data files as soon as possible with minimal changes to the FTP clients that send the
files. The solution must delete the incoming data files after the files have been processed successfully. Processing for each file needs to take 3-8
minutes.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use an Amazon EC2 instance that runs an FTP server to store incoming files as objects in Amazon S3 Glacier Flexible Retrieval. Configure a
job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job to process the objects nightly from S3 Glacier Flexible Retrieval.
Delete the objects after the job has processed the objects.
B. Use an Amazon EC2 instance that runs an FTP server to store incoming files on an Amazon Elastic Block Store (Amazon EBS) volume.
Configure a job queue in AWS Batch. Use Amazon EventBridge rules to invoke the job to process the files nightly from the EBS volume. Delete
C. Use AWS Transfer Family to create an FTP server to store incoming files on an Amazon Elastic Block Store (Amazon EBS) volume.
Configure a job queue in AWS Batch. Use an Amazon S3 event notification when each file arrives to invoke the job in AWS Batch. Delete the
D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard. Create an AWS Lambda function to
process the files and to delete the files after they are processed. Use an S3 event notification to invoke the Lambda function when the files
arrive.
Correct Answer: D
Selected Answer: D
Obviously we choose AWS Transfer Family over hosting the FTP server ourselves on an EC2 instance. And "process incoming data files as soon as
possible" -> trigger Lambda when files arrive. Lambda functions can run up to 15 minutes, it takes "3-8 minutes" per file -> works.
AWS Batch just schedules jobs, but these still need to run somewhere (Lambda, Fargate, EC2).
upvoted 3 times
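For illustration, a minimal boto3 sketch of the kind of Lambda handler that option D implies, assuming the S3 event notification invokes it directly; the bucket contents and the process() helper are hypothetical placeholders:

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Invoked by the S3 event notification when a new file lands in the bucket.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download and process the file (placeholder for the real 3-8 minute job).
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        process(body)

        # Delete the object only after processing succeeds.
        s3.delete_object(Bucket=bucket, Key=key)

def process(data: bytes) -> None:
    # Hypothetical processing logic for a single data file.
    pass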
Selected Answer: D
Use AWS Transfer Family for the FTP server to receive files directly into S3. This avoids managing FTP servers.
Process each file as soon as it arrives using Lambda triggered by S3 events. Lambda provides fast processing time per file.
Lambda can also delete files after processing succeeds.
Options A, B, C involve more operational overhead of managing FTP servers and batch jobs. Processing latency would be higher waiting for batch
windows.
Storing files in Glacier (Option A) adds latency for retrieving files.
upvoted 1 times
Selected Answer: D
Processing for each file needs to take 3-8 minutes clearly indicates Lambda functions.
upvoted 1 times
Selected Answer: D
Process incoming data files with minimal changes to the FTP clients that send the files = AWS Transfer Family.
Process incoming data files as soon as possible = S3 event notification.
Processing for each file needs to take 3-8 minutes = AWS Lambda function.
Delete file after processing = AWS Lambda function.
upvoted 3 times
Most likely D.
upvoted 1 times
Selected Answer: D
You cannot setup AWS Transfer Family to save files into EBS.
upvoted 3 times
Selected Answer: D
D. Because:
1. Files are processed immediately when they arrive in S3, instead of waiting to process several files in one batch.
2. Processing takes 3-8 minutes, which fits within Lambda's limits.
C is wrong because AWS Batch is meant for running large-scale jobs over large amounts of data at one time.
upvoted 1 times
D. Use AWS Transfer Family to create an FTP server to store incoming files in Amazon S3 Standard. Create an AWS Lambda function to process the
files and delete them after processing. Use an S3 event notification to invoke the Lambda function when the files arrive.
upvoted 1 times
It should be D as lambda is more operationally viable solution given the fact each processing takes 3-8 minutes that lambda can handle
upvoted 1 times
Selected Answer: C
"The company wants the AWS solution to process incoming data files as soon as possible with minimal changes to the FTP clients that send the files."
upvoted 3 times
Question #529 Topic 1
A company is migrating its workloads to AWS. The company has transactional and sensitive data in its databases. The company wants to use
AWS Cloud solutions to increase security and reduce operational overhead for the databases.
A. Migrate the databases to Amazon EC2. Use an AWS Key Management Service (AWS KMS) AWS managed key for encryption.
C. Migrate the data to Amazon S3. Use Amazon Macie for data security and protection.
D. Migrate the database to Amazon RDS. Use Amazon CloudWatch Logs for data security and protection.
Correct Answer: B
B is the answer
Why not C - Option C suggests migrating the data to Amazon S3 and using Amazon Macie for data security and protection. While Amazon Macie
provides advanced security features for data in S3, it may not be directly applicable or optimized for databases, especially for transactional and
sensitive data. Amazon RDS provides a more suitable environment for managing databases.
upvoted 10 times
Selected Answer: B
Reduce Ops = Migrate the databases to Amazon RDS. Configure encryption at rest.
upvoted 3 times
Selected Answer: B
B for sure.
First the correct is Amazon RDS, then encryption at rest makes the database secure.
upvoted 3 times
Selected Answer: B
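As a hedged illustration of what option B boils down to, a boto3 sketch that creates the RDS instance with encryption at rest enabled; the identifier, instance class, and KMS key ARN are placeholders:

import boto3

rds = boto3.client("rds")

# Encryption at rest is chosen at creation time; it cannot simply be switched on later in place.
rds.create_db_instance(
    DBInstanceIdentifier="customer-db",            # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,                 # let RDS manage the master password in Secrets Manager
    MultiAZ=True,
    StorageEncrypted=True,                         # encryption at rest
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/example-key-id",  # placeholder KMS key
)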
A company has an online gaming application that has TCP and UDP multiplayer gaming capabilities. The company uses Amazon Route 53 to point
the application traffic to multiple Network Load Balancers (NLBs) in different AWS Regions. The company needs to improve application
performance and decrease latency for the online game in preparation for user growth.
A. Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control max-age parameter.
B. Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-based routing.
C. Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use the correct listener ports.
D. Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method caching for the different stages.
Correct Answer: C
Selected Answer: C
The application uses TCP and UDP for multiplayer gaming, so Network Load Balancers (NLBs) are appropriate.
AWS Global Accelerator can be added in front of the NLBs to improve performance and reduce latency by intelligently routing traffic across AWS
Regions and Availability Zones.
Global Accelerator provides static anycast IP addresses that act as a fixed entry point to application endpoints in the optimal AWS location. This
improves availability and reduces latency.
The Global Accelerator endpoint can be configured with the correct NLB listener ports for TCP and UDP.
upvoted 5 times
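For reference, a rough boto3 sketch of option C; the NLB ARN, Region, and port range are assumptions for the example, and Global Accelerator needs one listener per protocol:

import boto3

# Global Accelerator is a global service; its API endpoint lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)
acc_arn = accelerator["Accelerator"]["AcceleratorArn"]

# One listener per protocol, matching the NLB listener ports (example ports).
for protocol in ("TCP", "UDP"):
    listener = ga.create_listener(
        AcceleratorArn=acc_arn,
        Protocol=protocol,
        PortRanges=[{"FromPort": 7000, "ToPort": 7100}],
    )
    # Attach the Regional NLB as an endpoint (hypothetical ARN).
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[
            {
                "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game-nlb/abc123",
                "Weight": 128,
            }
        ],
    )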
Selected Answer: C
only b and c handle TCP/UDP, and C comes with accelerator to enhance performance
upvoted 1 times
Selected Answer: C
UDP and TCP point to AWS Global Accelerator, since it works at the transport layer.
Combining it with the NLBs is a perfect fit.
upvoted 2 times
A company needs to integrate with a third-party data feed. The data feed sends a webhook to notify an external service when new data is ready for
consumption. A developer wrote an AWS Lambda function to retrieve data when the company receives a webhook callback. The developer must
make the Lambda function available for the third party to call.
Which solution will meet these requirements with the MOST operational efficiency?
A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third party for the webhook.
B. Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL to the third party for the webhook.
C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the Lambda function. Provide the public hostname
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the Lambda function. Provide the public hostname of
Correct Answer: A
Selected Answer: A
A function URL is a dedicated HTTP(S) endpoint for your Lambda function. When you create a function URL, Lambda automatically generates a
unique URL endpoint for you.
upvoted 8 times
Selected Answer: A
Selected Answer: A
AWS Lambda can provide a URL to call using Function URLs. This is a relatively new feature in AWS Lambda that allows you to create HTTPS
endpoints for your Lambda functions, making it easy to invoke the function directly over the web.
Key Features of Lambda Function URLs:
Direct Access: Provides a simple and direct way to call a Lambda function via an HTTP(S) request.
Easy Configuration: You can create a function URL for a Lambda function using the AWS Management Console, AWS CLI, or AWS SDKs.
Managed Service: AWS manages the infrastructure for you, handling scaling, patching, and maintenance.
Security: You can configure authentication and authorization using AWS IAM or AWS Lambda function URL settings.
upvoted 1 times
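As a sketch of option A, assuming boto3 and a placeholder function name, creating the function URL and allowing the third party to invoke it without AWS credentials could look like this:

import boto3

lambda_client = boto3.client("lambda")

# Create a function URL for the webhook handler (function name is a placeholder).
response = lambda_client.create_function_url_config(
    FunctionName="webhook-handler",
    AuthType="NONE",  # the third party calls it without AWS credentials
)
print("Give this URL to the third party:", response["FunctionUrl"])

# Public (AuthType NONE) function URLs also need a resource-based policy
# that allows unauthenticated invocation.
lambda_client.add_permission(
    FunctionName="webhook-handler",
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)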
Selected Answer: A
Apart from being the simplest and most operationally efficient, I think A is the only option that will work!
B, C, and D cannot even be implemented in the real world, imho. Happy to be corrected.
upvoted 2 times
Selected Answer: A
The key points:
Selected Answer: A
Selected Answer: A
https://docs.aws.amazon.com/lambda/latest/dg/lambda-urls.html
upvoted 1 times
Selected Answer: A
It's A
upvoted 1 times
A company has a workload in an AWS Region. Customers connect to and access the workload by using an Amazon API Gateway REST API. The
company uses Amazon Route 53 as its DNS provider. The company wants to provide individual and secure URLs for all customers.
Which combination of steps will meet these requirements with the MOST operational efficiency? (Choose three.)
A. Register the required domain in a registrar. Create a wildcard custom domain name in a Route 53 hosted zone and record in the zone that
B. Request a wildcard certificate that matches the domains in AWS Certificate Manager (ACM) in a different Region.
C. Create hosted zones for each customer as required in Route 53. Create zone records that point to the API Gateway endpoint.
D. Request a wildcard certificate that matches the custom domain name in AWS Certificate Manager (ACM) in the same Region.
F. Create a custom domain name in API Gateway for the REST API. Import the certificate from AWS Certificate Manager (ACM).
Step A involves registering the required domain in a registrar and creating a wildcard custom domain name in a Route 53 hosted zone. This allows
you to map individual and secure URLs for all customers to your API Gateway endpoints.
Step D is to request a wildcard certificate from AWS Certificate Manager (ACM) that matches the custom domain name you created in Step A. This
wildcard certificate will cover all subdomains and ensure secure HTTPS communication.
Step F is to create a custom domain name in API Gateway for your REST API. This allows you to associate the custom domain name with your API
Gateway endpoints and import the certificate from ACM for secure communication.
upvoted 7 times
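For illustration, a boto3 sketch of the API Gateway side of steps D and F; the wildcard domain, certificate ARN, REST API ID, and stage name are placeholders:

import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Custom wildcard domain for the regional REST API, backed by the ACM wildcard certificate.
domain = apigw.create_domain_name(
    domainName="*.api.example.com",
    regionalCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/example",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
)

# Map the REST API stage onto the custom domain.
apigw.create_base_path_mapping(
    domainName="*.api.example.com",
    restApiId="abc123restapi",
    stage="prod",
)

# domain["regionalDomainName"] and domain["regionalHostedZoneId"] are what the
# Route 53 wildcard alias record (step A) should point at.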
Using a wildcard domain and certificate avoids managing individual domains/certs per customer. This is more efficient.
The domain, hosted zone, and certificate should all be in the same region as the API Gateway REST API for simplicity.
Creating multiple API endpoints per customer (Option E) adds complexity and is not required.
Option B and C add unnecessary complexity by separating domains, certificates, and hosted zones.
upvoted 6 times
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/AboutHZWorkingWith.html
upvoted 3 times
It's ADF
upvoted 2 times
A company stores data in Amazon S3. According to regulations, the data must not contain personally identifiable information (PII). The company
recently discovered that S3 buckets have some objects that contain PII. The company needs to automatically detect PII in S3 buckets and to notify
A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type from Macie findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
B. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an
Amazon Simple Notification Service (Amazon SNS) notification to the security team.
C. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData:S3Object/Personal event type from Macie findings and to
send an Amazon Simple Queue Service (Amazon SQS) notification to the security team.
D. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an
Amazon Simple Queue Service (Amazon SQS) notification to the security team.
Correct Answer: A
Selected Answer: A
Amazon SQS is typically used for decoupling and managing messages between distributed application components; it's not typically used for sending notifications directly to humans. In my opinion, C isn't a best practice.
upvoted 1 times
Selected Answer: C
There are different types of sensitive data: https://docs.aws.amazon.com/macie/latest/user/findings-types.html. If the question focuses only on PII, then C is the answer. However, in reality you would use A, because you would want to catch all sensitive data (bank cards, credentials, etc.), not only PII.
upvoted 3 times
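For illustration, a boto3 sketch of the Macie-to-SNS wiring that option A describes; the topic ARN is a placeholder, the source and detail-type values are the Macie finding event fields as I understand them, and the SNS topic's access policy must separately allow EventBridge to publish:

import json
import boto3

events = boto3.client("events")

# Match Macie findings whose type starts with "SensitiveData" (covers
# SensitiveData:S3Object/Personal and the other sensitive-data finding types).
pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},
}

events.put_rule(Name="macie-pii-findings", EventPattern=json.dumps(pattern))

# Send matching findings to the security team's SNS topic (placeholder ARN).
events.put_targets(
    Rule="macie-pii-findings",
    Targets=[{"Id": "security-sns", "Arn": "arn:aws:sns:us-east-1:123456789012:security-team"}],
)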
Selected Answer: A
Selected Answer: C
There are different types of sensitive data, and here we are only referring to PII, hence SensitiveData:S3Object/Personal. To use SNS, the security team must subscribe; SQS delivers the information as designed.
upvoted 1 times
SensitiveData:S3Object/Personal
upvoted 1 times
Selected Answer: A
A company wants to build a logging solution for its multiple AWS accounts. The company currently stores the logs from all accounts in a
centralized account. The company has created an Amazon S3 bucket in the centralized account to store the VPC flow logs and AWS CloudTrail
logs. All logs must be highly available for 30 days for frequent analysis, retained for an additional 60 days for backup purposes, and deleted 90
A. Transition objects to the S3 Standard storage class 30 days after creation. Write an expiration action that directs Amazon S3 to delete
B. Transition objects to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class 30 days after creation. Move all objects to the S3
Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
C. Transition objects to the S3 Glacier Flexible Retrieval storage class 30 days after creation. Write an expiration action that directs Amazon
D. Transition objects to the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class 30 days after creation. Move all objects to the S3
Glacier Flexible Retrieval storage class after 90 days. Write an expiration action that directs Amazon S3 to delete objects after 90 days.
Correct Answer: C
Selected Answer: C
Also, it says deletion after 90 days, so all answers specifying a transition after 90 days make no sense.
upvoted 16 times
Selected Answer: A
The Glacier min storage duration is 90 days. All the options using Glacier are wrong. Only A is feasible.
upvoted 9 times
Even with the early deletion fee, it appears to me that answer 'A' would still be cheaper.
upvoted 2 times
Selected Answer: C
C: Lowest cost
upvoted 1 times
A: Standard storage is default so this is wrong.
B: Looks wrong because it moves object to S3GFR after 90 days when they could just be deleted so extra cost
D: Same problem as B
upvoted 1 times
Selected Answer: C
Not A: Objects are created in S3 Standard, so it doesn't make sense to 'transition' them there "30 days after creation".
Not B or D: No need to "move all objects to the S3 Glacier Flexible Retrieval storage class after 90 days" because we want to delete, not archive, them. Even if we deleted them right after moving, we would pay the 90-day minimum storage duration. Plus, those options use "Infrequent Access" classes, but we have no access at all after the first 30 days.
upvoted 2 times
C is most cost-effective
upvoted 2 times
Things to note: 30 days of frequent access, and deletion 90 days after creation, so you only need to do two things, not three. Objects must stay in S3 Standard for 30 days before a lifecycle rule can transition them to certain other storage classes, so C is the answer.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
upvoted 3 times
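As a hedged sketch of what the option C lifecycle rule could look like in boto3 (bucket name and rule ID are placeholders; "GLACIER" is the storage-class value for Glacier Flexible Retrieval):

import boto3

s3 = boto3.client("s3")

# Keep logs in S3 Standard for 30 days, transition to Glacier Flexible Retrieval,
# then expire (delete) them 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="central-logging-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "logs-30d-glacier-90d-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 90},
            }
        ]
    },
)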
I think it is B.
For the first 30 days, the logs need to be highly available for frequent analysis. The S3 Standard storage class is the most expensive storage class, but it also provides the highest availability.
After 30 days, the logs still need to be retained for backup purposes, but they do not need to be accessed frequently. The S3 Standard-IA storage class is a good option for this, as it is less expensive than the S3 Standard storage class.
After 90 days, the logs can be moved to the S3 Glacier Flexible Retrieval storage class. This is the most cost-effective storage class for long-term archiving.
The expiration action will ensure that the objects are deleted after 90 days, even if they are not accessed.
upvoted 2 times
Selected Answer: C
C most likely.
upvoted 1 times
Question says "All logs must be highly available for 30 days for frequent analysis" I think the answer is A. Glacier is not made for frequent access.
upvoted 2 times
I take that back. Moderator, please delete my comment.
upvoted 4 times
Selected Answer: B
I think B
upvoted 1 times
Question #535 Topic 1
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon
EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI)
driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the alias/aws/ebs alias. Enable default Amazon Elastic Block Store
Correct Answer: B
Selected Answer: B
B is the correct solution to meet the requirement of encrypting secrets in the etcd store for an Amazon EKS cluster.
Selected Answer: B
EKS supports using AWS KMS keys to provide envelope encryption of Kubernetes secrets stored in EKS. Envelope encryption adds an additional, customer-managed layer of encryption for application secrets or user data that is stored within a Kubernetes cluster.
https://eksctl.io/usage/kms-encryption/
upvoted 5 times
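A minimal sketch of option B for an existing cluster, assuming boto3 and placeholder cluster name and key ARN (the same encryptionConfig block can alternatively be passed when the cluster is created):

import boto3

eks = boto3.client("eks")

# Enable envelope encryption of Kubernetes secrets with a customer-managed KMS key.
eks.associate_encryption_config(
    clusterName="workloads-cluster",  # placeholder cluster name
    encryptionConfig=[
        {
            "resources": ["secrets"],
            "provider": {"keyArn": "arn:aws:kms:us-east-1:123456789012:key/example-key-id"},
        }
    ],
)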
Selected Answer: A
Why not a
upvoted 1 times
Selected Answer: B
Selected Answer: B
A company wants to provide data scientists with near real-time read-only access to the company's production Amazon RDS for PostgreSQL
database. The database is currently configured as a Single-AZ database. The data scientists use complex queries that will not affect the
A. Scale the existing production database in a maintenance window to provide enough power for the data scientists.
B. Change the setup from a Single-AZ to a Multi-AZ instance deployment with a larger secondary standby instance. Provide the data scientists
C. Change the setup from a Single-AZ to a Multi-AZ instance deployment. Provide two additional read replicas for the data scientists.
D. Change the setup from a Single-AZ to a Multi-AZ cluster deployment with two readable standby instances. Provide read endpoints to the
data scientists.
Correct Answer: D
Selected Answer: D
It's either C or D. To be honest, I find the newest questions to be ridiculously hard (roughly 500+). I agree with @alexandercamachop that Multi Az
in Instance mode is cheaper than Cluster. However, with Cluster we have reader endpoint available to use out-of-box, so there is no need to
provide read-replicas, which also has its own costs. The ridiculous part is that I'm pretty sure even the AWS support would have troubles to answer
which configuration is MOST cost-effective.
upvoted 12 times
Selected Answer: C
Option D: Multi-AZ cluster deployment with two readable standby instances would be more costly and is not necessary if read replicas are
sufficient for the data scientists' needs.
Thus, Option C is the most cost-effective and operationally efficient solution to meet the company's requirements.
upvoted 2 times
Selected Answer: D
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html
upvoted 1 times
Selected Answer: D
https://aws.amazon.com/about-aws/whats-new/2023/01/amazon-rds-multi-az-readable-standbys-rds-postgresql-inbound-replication/
upvoted 2 times
https://aws.amazon.com/blogs/database/choose-the-right-amazon-rds-deployment-option-single-az-instance-multi-az-instance-or-multi-az-
database-cluster/
C would mean you are paying for 4 instances (primary, backup, and 2 read instances). D would be 3 (primary, and 2 backup). Difficult to be sure,
pricing calculator doesn't even include clusters yet.
upvoted 1 times
Selected Answer: D
Option D is the most cost-effective solution that meets the requirements for this scenario.
Data scientists need read-only access to near real-time production data without affecting performance.
High availability is required.
Cost should be minimized.
upvoted 1 times
https://aws.amazon.com/blogs/database/choose-the-right-amazon-rds-deployment-option-single-az-instance-multi-az-instance-or-multi-az-
database-cluster/
Only the Multi-AZ DB cluster has a reader endpoint; the secondary in a Multi-AZ instance deployment cannot be accessed for reads.
upvoted 1 times
Selected Answer: D
Support for D:
Amazon RDS now offers Multi-AZ deployments with readable standby instances (also called Multi-AZ DB cluster deployments) in preview. You
should consider using Multi-AZ DB cluster deployments with two readable DB instances if you need additional read capacity in your Amazon RDS
Multi-AZ deployment and if your application workload has strict transaction latency requirements such as single-digit milliseconds transactions.
https://aws.amazon.com/blogs/database/readable-standby-instances-in-amazon-rds-multi-az-deployments-a-new-high-availability-option/
upvoted 1 times
Unlike Multi-AZ instance deployment, where the secondary instance can't be accessed for read or writes, Multi-AZ DB cluster deployment consists
of primary instance running in one AZ serving read-write traffic and two other standby running in two different AZs serving read traffic.
upvoted 3 times
Selected Answer: D
D. using Multi-AZ DB cluster deployments with two readable DB instances if you need additional read capacity in your Amazon RDS Multi-AZ
deployment and if your application workload has strict transaction latency requirements such as single-digit milliseconds transactions.
https://aws.amazon.com/blogs/database/readable-standby-instances-in-amazon-rds-multi-az-deployments-a-new-high-availability-option/
while with read replicas, Amazon RDS uses the asynchronous replication method for the DB engine to update the read replica whenever there is a change to the primary DB instance. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
upvoted 1 times
Selected Answer: B
Why not B? Shouldn't it have fewer instances than both C and D?
upvoted 2 times
Complex queries on single db will affect performance of db
upvoted 1 times
Selected Answer: D
Forgot to vote
upvoted 2 times
Single-AZ and Multi-AZ deployments: Pricing is billed per DB instance-hour consumed from the time a DB instance is launched until it is stopped
or deleted.
https://aws.amazon.com/rds/postgresql/pricing/?pg=pr&loc=3
In the case of a cluster, you will pay less.
upvoted 2 times
Selected Answer: D
Multi-AZ instance: the standby instance doesn’t serve any read or write traffic.
Multi-AZ DB cluster: consists of primary instance running in one AZ serving read-write traffic and two other standby running in two different AZs
serving read traffic.
https://aws.amazon.com/blogs/database/choose-the-right-amazon-rds-deployment-option-single-az-instance-multi-az-instance-or-multi-az-
database-cluster/
upvoted 3 times
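To make the endpoint difference concrete, a small boto3 sketch (the cluster identifier is a placeholder) showing where the data scientists' read-only endpoint comes from in a Multi-AZ DB cluster:

import boto3

rds = boto3.client("rds")

# A Multi-AZ DB cluster exposes a separate reader endpoint out of the box.
cluster = rds.describe_db_clusters(DBClusterIdentifier="prod-postgres-cluster")["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # read-write traffic from the application
reader_endpoint = cluster["ReaderEndpoint"]  # hand this to the data scientists

print("App connects to:", writer_endpoint)
print("Read-only reporting queries go to:", reader_endpoint)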
Question #537 Topic 1
A company runs a three-tier web application in the AWS Cloud that operates across three Availability Zones. The application architecture has an
Application Load Balancer, an Amazon EC2 web server that hosts user session states, and a MySQL database that runs on an EC2 instance. The
company expects sudden increases in application traffic. The company wants to be able to scale to meet future application capacity demands and
A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Redis with
high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
B. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Memcached
with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability
Zones.
C. Migrate the MySQL database to Amazon DynamoDB Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in
DynamoDB. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
D. Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use Amazon ElastiCache for Redis with high
availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
Correct Answer: A
Selected Answer: A
Memcached is best suited for caching data, while Redis is better for storing data that needs to be persisted. If you need to store data that needs to
be accessed frequently, such as user profiles, session data, and application settings, then Redis is the better choice
upvoted 16 times
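For illustration, a minimal sketch of storing session state in ElastiCache for Redis with the redis-py client; the endpoint hostname and key naming are assumptions:

import json
import redis

# Connect to the ElastiCache for Redis primary endpoint (placeholder hostname).
r = redis.Redis(host="sessions.example.cache.amazonaws.com", port=6379, ssl=True)

def save_session(session_id: str, data: dict, ttl_seconds: int = 3600) -> None:
    # Store session state with an expiry so abandoned sessions clean themselves up.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str) -> dict | None:
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

save_session("abc123", {"user_id": 42, "cart": ["sku-1", "sku-2"]})
print(load_session("abc123"))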
Selected Answer: A
Replication: Redis supports creating multiple replicas for read scalability and high availability.https://aws.amazon.com/elasticache/redis-vs-
memcached/
upvoted 1 times
A because of "Amazon EC2 web server that hosts user session states".
Between A and B, A is suitable because of the session state requirement, and ElastiCache for Redis is more highly available than the Memcached option in B.
upvoted 1 times
Selected Answer: A
B: From what I know, Memcached provides better performance and simplicity but lower availability than Redis.
C: MySQL is a relational database; DynamoDB is NoSQL.
D: Single AZ.
upvoted 1 times
Selected Answer: A
ElastiCache for Redis supports HA, ElastiCache for Memcached does not:
https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/SelectEngine.html
C could in theory work, but session data is typically stored in ElastiCache, not in DynamoDB.
D is not HA.
upvoted 2 times
Selected Answer: B
Redis is a widely adopted in-memory data store for use as a database, cache, message broker, queue, session store, and leaderboard.
https://aws.amazon.com/elasticache/redis/
upvoted 4 times
RDS Multi-AZ provides high availability for MySQL by synchronously replicating data across AZs. Automatic failover handles AZ outages.
ElastiCache for Redis is better suited for session data caching than Memcached. Redis offers more advanced data structures and flexibility.
Auto scaling across 3 AZs provides high availability for the web tier
upvoted 1 times
The difference between Redis and Memcached is that Memcached supports multithreaded processing to handle increases in application traffic.
https://aws.amazon.com/elasticache/redis-vs-memcached/
upvoted 2 times
https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/SelectEngine.html
upvoted 1 times
Selected Answer: B
This requirement wins for me: "be able to scale to meet future application capacity demands".
Memcached implements a multi-threaded architecture, so it can make use of multiple processing cores. This means that you can handle more operations by scaling up compute capacity.
https://aws.amazon.com/elasticache/redis-vs-memcached/
upvoted 1 times
Selected Answer: B
B is correct!
upvoted 3 times
A global video streaming company uses Amazon CloudFront as a content distribution network (CDN). The company wants to roll out content in a
phased manner across multiple countries. The company needs to ensure that viewers who are outside the countries to which the company rolls
A. Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom error message.
B. Set up a new URL tor restricted content. Authorize access by using a signed URL and cookies. Set up a custom error message.
C. Encrypt the data for the content that the company distributes. Set up a custom error message.
D. Create a new URL for restricted content. Set up a time-restricted access policy for signed URLs.
Correct Answer: A
Selected Answer: A
BCD are impractical for geo restrictions as you cannot restrict URL by region and you cannot encrypt by geo region (country etc)
upvoted 4 times
Selected Answer: A
The CloudFront geographic restrictions feature lets you control distribution of your content at the country level for all files that you're distributing
with a given web distribution.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
upvoted 4 times
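A rough boto3 sketch of option A, with a placeholder distribution ID and an assumed allow list of launch countries; the custom error page itself would be configured separately via CustomErrorResponses:

import boto3

cloudfront = boto3.client("cloudfront")

# Fetch the current distribution config (placeholder distribution ID).
resp = cloudfront.get_distribution_config(Id="E1EXAMPLE")
config = resp["DistributionConfig"]

# Allow list: only viewers in the countries already launched can access the content.
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",
        "Quantity": 2,
        "Items": ["US", "CA"],
    }
}

cloudfront.update_distribution(Id="E1EXAMPLE", IfMatch=resp["ETag"], DistributionConfig=config)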
Selected Answer: A
Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom error message
upvoted 2 times
Selected Answer: A
Add geographic restrictions to the content in CloudFront by using an allow list. Set up a custom error message.
upvoted 1 times
Selected Answer: A
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
upvoted 4 times
A company wants to use the AWS Cloud to improve its on-premises disaster recovery (DR) configuration. The company's core production business
application uses Microsoft SQL Server Standard, which runs on a virtual machine (VM). The application has a recovery point objective (RPO) of 30
seconds or fewer and a recovery time objective (RTO) of 60 minutes. The DR solution needs to minimize costs wherever possible.
A. Configure a multi-site active/active setup between the on-premises server and AWS by using Microsoft SQL Server Enterprise with Always
On availability groups.
B. Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS Database Migration Service (AWS DMS) to use
C. Use AWS Elastic Disaster Recovery configured to replicate disk changes to AWS as a pilot light.
D. Use third-party backup software to capture backups every night. Store a secondary set of backups in Amazon S3.
Correct Answer: B
Selected Answer: B
Selected Answer: C
Selected Answer: B
https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_planning_for_recovery_disaster_recovery.html
upvoted 1 times
AWS DRS(AWS Elastic Disaster Recovery) enables RPOs of seconds and RTOs of minutes.
upvoted 1 times
https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-options-in-the-cloud.html#warm-standby
upvoted 2 times
Selected Answer: C
A: Not possible
B: With RDS it means your failover will launch a different database engine. This is wrong in general
D: No comments
C: It is a disk based replication so it will be similar DB server and this is the product managed by AWS for the DR of on-prem setups.
https://aws.amazon.com/blogs/modernizing-with-aws/how-to-set-up-disaster-recovery-for-sql-server-always-on-availability-groups-using-aws-
elastic-disaster-recovery/
upvoted 1 times
From <https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_planning_for_recovery_disaster_recovery.html>
upvoted 3 times
Selected Answer: B
With the pilot light approach, you replicate your data from one environment to another and provision a copy of your core workload infrastructure,
not the fully functional copy of your production environment in a recovery environment.
upvoted 1 times
Selected Answer: B
The company wants to improve... so needs something guaranteed to be better than 60 mins RTO
upvoted 1 times
Selected Answer: B
Configure a warm standby Amazon RDS for SQL Server database on AWS. Configure AWS Database Migration Service (AWS DMS) to use change
data capture (CDC).
upvoted 2 times
Selected Answer: C
AWS DRS enables RPOs of seconds and RTOs of minutes. Pilot light is also cheaper than warm standby.
https://aws.amazon.com/disaster-recovery/
upvoted 3 times
Selected Answer: C
https://aws.amazon.com/ko/blogs/architecture/disaster-recovery-dr-architecture-on-aws-part-iii-pilot-light-and-warm-standby/
Selected Answer: B
https://stepstocloud.com/change-data-capture/?expand_article=1
upvoted 1 times
Answer C. RPO is in seconds and RTO 5-20 min; pilot light costs less than warm standby (and of course less than active-active).
https://docs.aws.amazon.com/drs/latest/userguide/failback-overview.html#recovery-objectives
upvoted 1 times
Question #540 Topic 1
A company has an on-premises server that uses an Oracle database to process and store customer information. The company wants to use an
AWS database service to achieve higher availability and to improve application performance. The company also wants to offload reporting from its
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS Regions. Point the reporting
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same zone as the primary DB
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader
D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database. Direct the reporting functions to the
reader instances.
Correct Answer: D
Selected Answer: D
Its D
Multi-AZ DB clusters aren't available with the following engines:
RDS for MariaDB
RDS for Oracle
RDS for SQL Server
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RDS_Fea_Regions_DB-eng.Feature.MultiAZDBClusters.html
upvoted 33 times
Selected Answer: C
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader
instance in the cluster deployment.
A and B are discarded.
The answer is between C and D.
D says to use Amazon RDS to build an Amazon Aurora database, which makes no sense.
C is the correct one: high availability in a Multi-AZ deployment.
Also, point the reporting to the reader replica.
upvoted 12 times
Selected Answer: D
Multi-AZ DB cluster deployments are not available for the RDS for MariaDB, RDS for Oracle, or RDS for SQL Server engines.
Selected Answer: D
Selected Answer: D
Not A - Creating multiple instances and keeping them in sync in DMS is surely not "operationally efficient"
Not B - "replica in the same zone" -> does not provide "higher availability"
Not C - "Multi-AZ cluster" does not support Oracle engine
Thus D. Question does not mention that the app would use Oracle-specific features; we're also not asked to minimize application changes. Ideal
solution from AWS point of view is to move from Oracle to Aurora.
upvoted 1 times
It should be C. Oracle DB is supported in RDS Multi-AZ with one standby for HA: https://aws.amazon.com/rds/features/multi-az/. Additionally, a reader instance/replica could be added to the RDS Multi-AZ with one standby setup to offload the read requests. Aurora only supports MySQL- and PostgreSQL-compatible databases, so D is out.
upvoted 2 times
Selected Answer: D
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RDS_Fea_Regions_DB-eng.Feature.MultiAZDBClusters.html
upvoted 1 times
It is C. Aurora database doesn't support Oracle.
upvoted 1 times
Selected Answer: D
D is my choice.
Multi-AZ DB cluster does not support Oracle DB.
upvoted 2 times
Selected Answer: C
Using RDS Multi-AZ provides high availability and failover capabilities for the primary Oracle database.
The reader instance in the Multi-AZ cluster can be used for offloading reporting workloads from the primary instance. This improves performance.
RDS Multi-AZ has automatic failover between AZs. DMS and Aurora migrations (A, D) would incur more effort and downtime.
Single-AZ with a read replica (B) does not provide the AZ failover capability that Multi-AZ does.
upvoted 1 times
Selected Answer: D
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
upvoted 3 times
Question #541 Topic 1
A company wants to build a web application on AWS. Client access requests to the website are not predictable and can be idle for a long time.
Only customers who have paid a subscription fee can have the ability to sign in and use the web application.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create an Amazon API Gateway endpoint to accept
B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load Balancer to retrieve user information from
Amazon RDS. Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated Amazon CloudFront configuration.
F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the frontend web content.
Option B (Amazon ECS) is not the best option since the website "can be idle for a long time", so Lambda (Option A) is a more cost-effective choice.
Option D is incorrect because user pools are for authentication (identity verification) while identity pools are for authorization (access control).
Option F is wrong because S3 static website hosting only serves static files like HTML, CSS, and client-side JS; it does not support server-side scripting such as PHP.
upvoted 8 times
I will go for A C E
upvoted 1 times
A: App may be idle for long time so Lambda is perfect (charge per invocation)
C: Cognito user pool for user auth
E: Amplify is low code web dev tool
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
S3 doesn't support server-side scripting
upvoted 1 times
E) Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated CloudFront configuration.
F) Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the frontend web content.
upvoted 1 times
Amazon S3 does not support server-side scripting such as PHP, JSP, or ASP.NET.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
Traffic can be idle for a long time = AWS Lambda
upvoted 1 times
Use the exclusion method: no container is needed (nothing has to run all the time), so remove B. PHP cannot run on static Amazon S3 hosting, so remove F.
Use the selection method: traffic is idle for long periods, so choose AWS Lambda (A). "Amazon Cognito is an identity platform for web and mobile apps" (https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html), so choose C. Create an identity pool: https://docs.aws.amazon.com/cognito/latest/developerguide/tutorial-create-identity-pool.html. AWS Amplify (https://aws.amazon.com/amplify/) lets you build a full-stack web app in hours.
upvoted 5 times
https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/php_s3_code_examples.html
upvoted 1 times
Answer is ACE
upvoted 1 times
Lambda =serverless
User Pool = For user authentication
Amplify = hosting web/mobile apps
upvoted 2 times
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
upvoted 3 times
I don't think S3 can handle anything dynamic such as PHP. So I go for ACE
upvoted 1 times
ACF no doubt. Check the difference between user pools and identity pools.
upvoted 2 times
Question #542 Topic 1
A media company uses an Amazon CloudFront distribution to deliver content over the internet. The company wants only premium customers to
have access to the media streams and file content. The company stores all content in an Amazon S3 bucket. The company also delivers content
on demand to customers for a specific purpose, such as movie rentals or music downloads.
C. Use origin access control (OAC) to limit the access of non-premium customers.
Correct Answer: B
Selected Answer: B
CloudFront Signed URL with Custom Policy are exactly for this.
A: Nope, cookies don't help as they don't restrict URL
C: Wrong. OAC for non-premium customers, how is that even possible without any details here?
D: Field encryption, while good idea, does not help restricting the content by customer
upvoted 2 times
Selected Answer: B
Selected Answer: B
Use CloudFront signed URLs or signed cookies to restrict access to documents, business data, media streams, or content that is intended for
selected users, for example, users who have paid a fee.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html#:~:text=CloudFront%20signed%20URLs
upvoted 2 times
See https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html#private-content-how-signed-urls-
work
upvoted 1 times
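For illustration, a Python sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner; the key pair ID, private key file, distribution domain, and object path are placeholders:

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Load the private key that matches the public key registered with CloudFront.
with open("private_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

def rsa_signer(message: bytes) -> bytes:
    # CloudFront signed URLs use SHA-1 with RSA for the signature.
    return private_key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("KEYPAIRID123", rsa_signer)

# Short-lived URL for a single rental or download (canned policy).
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/movies/rental.mp4",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=24),
)
print(signed_url)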
Selected Answer: B
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
Notice that A is not correct because it should be CloudFront signed URL, not S3.
upvoted 2 times
Signed URLs
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
upvoted 2 times
A company runs Amazon EC2 instances in multiple AWS accounts that are individually billed. The company recently purchased a Savings Plan.
Because of changes in the company's business requirements, the company has decommissioned a large number of EC2 instances. The company
wants to use its Savings Plan discounts on its other AWS accounts.
A. From the AWS Account Management Console of the management account, turn on discount sharing from the billing preferences section.
B. From the AWS Account Management Console of the account that purchased the existing Savings Plan, turn on discount sharing from the
C. From the AWS Organizations management account, use AWS Resource Access Manager (AWS RAM) to share the Savings Plan with other
accounts.
D. Create an organization in AWS Organizations in a new payer account. Invite the other AWS accounts to join the organization from the
management account.
E. Create an organization in AWS Organizations in the existing AWS account with the existing EC2 instances and Savings Plan. Invite the other
Correct Answer: AD
Selected Answer: AD
For me, E makes no sense as the discount is with a new payer and cannot be transferred to an existing account unless customer service is involved.
upvoted 1 times
Organization should be created by a new account that is reserved for management. Thus D, followed by A (discount sharing must be enabled in
the management account).
upvoted 3 times
Selected Answer: AD
Not E - it mentions using an account with existing EC2s as the management account, which goes against the best practice for a management
account
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html
upvoted 3 times
Selected Answer: AE
AE is best
upvoted 1 times
A: "turn on discount sharing" is OK. In this case, one account has the Savings Plan discount for many EC2 instances and wants to share it with the other accounts. With E, create the organization, then share.
upvoted 1 times
Selected Answer: AE
I vote AE.
upvoted 1 times
Selected Answer: AE
AE are correct !
upvoted 1 times
It's not good practice for the payer account to run any workload, so it must be D.
Because we need Organizations for sharing, we then need to turn on discount sharing from our payer account (all member accounts then start sharing discounts).
upvoted 1 times
Selected Answer: AE
@alexandercamachop it is AE. I believe its just typo. RAM is not needed anyhow.
upvoted 4 times
Selected Answer: CE
A retail company uses a regional Amazon API Gateway API for its public REST APIs. The API Gateway endpoint is a custom domain name that
points to an Amazon Route 53 alias record. A solutions architect needs to create a solution that has minimal effects on customers and minimal
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an appropriate percentage of traffic to the
canary stage. After API verification, promote the canary stage to the production stage.
B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file format. Use the import-to-update operation in
merge mode into the API in API Gateway. Deploy the new version of the API to the production stage.
C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file format. Use the import-to-update operation in
overwrite mode into the API in API Gateway. Deploy the new version of the API to the production stage.
D. Create a new API Gateway endpoint with new versions of the API definitions. Create a custom domain name for the new API Gateway API.
Point the Route 53 alias record to the new API Gateway API custom domain name.
Correct Answer: A
What is the total number of questions this package has as of 14 July 2023: is it 544 or 551?
upvoted 8 times
Selected Answer: A
https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
upvoted 3 times
In a canary release deployment, total API traffic is separated at random into a production release and a canary release with a pre-configured ratio.
Typically, the canary release receives a small percentage of API traffic and the production release takes up the rest. The updated API features are
only visible to API traffic through the canary. You can adjust the canary traffic percentage to optimize test coverage or performance.
https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
upvoted 6 times
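As a hedged boto3 sketch of option A, with a placeholder REST API ID and an arbitrary 10% canary traffic share:

import boto3

apigw = boto3.client("apigateway")

# Deploy the new API version to the existing prod stage as a canary that
# receives 10% of traffic (placeholder REST API ID).
apigw.create_deployment(
    restApiId="abc123restapi",
    stageName="prod",
    canarySettings={"percentTraffic": 10.0, "useStageCache": False},
    description="Canary release of the new API version",
)

# After verifying the canary, promote it from the console or with update_stage
# patch operations that point the stage at the canary's deployment and clear
# the canary settings (details omitted here).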
Selected Answer: A
Using a canary release deployment allows incremental rollout of the new API version to a percentage of traffic. This minimizes the impact on customers and potential data loss during the release.
upvoted 2 times
Selected Answer: A
Minimal effects on customers and minimal data loss = Canary deployment
upvoted 4 times
Key word "canary release". See this term in See: https://www.jetbrains.com/teamcity/ci-cd-guide/concepts/canary-release/ and/or
https://martinfowler.com/bliki/CanaryRelease.html
upvoted 1 times
Selected Answer: A
Canary release is a software development strategy in which a "new version of an API" (as well as other software) is deployed for testing purposes.
upvoted 3 times
Selected Answer: A
It's A
upvoted 1 times
Selected Answer: A
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an appropriate percentage of traffic to the canary
stage. After API verification, promote the canary stage to the production stage.
A company wants to direct its users to a backup static error page if the company's primary website is unavailable. The primary website's DNS
records are hosted in Amazon Route 53. The domain is pointing to an Application Load Balancer (ALB). The company needs a solution that
A. Update the Route 53 records to use a latency routing policy. Add a static error page that is hosted in an Amazon S3 bucket to the records so
B. Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in an Amazon S3 bucket when
C. Set up a Route 53 active-active configuration with the ALB and an Amazon EC2 instance that hosts a static error page as endpoints.
Configure Route 53 to send requests to the instance only if the health checks fail for the ALB.
D. Update the Route 53 records to use a multivalue answer routing policy. Create a health check. Direct traffic to the website if the health
check passes. Direct traffic to a static error page that is hosted in Amazon S3 if the health check does not pass.
Correct Answer: B
Selected Answer: B
Set up a Route 53 active-passive failover configuration. Direct traffic to a static error page that is hosted in an Amazon S3 bucket when Route 53
health checks determine that the ALB endpoint is unhealthy.
upvoted 5 times
Selected Answer: B
B is correct
upvoted 4 times
Selected Answer: D
Setting up a Route 53 active-passive failover configuration with the ALB as the primary endpoint and an Amazon S3 static website as the passive
endpoint meets the requirements with minimal overhead.
Route 53 health checks can monitor the ALB health. If the ALB becomes unhealthy, traffic will automatically failover to the S3 static website. This
provides automatic failover with minimal configuration changes
upvoted 2 times
Selected Answer: B
B seems correct
upvoted 3 times
https://repost.aws/knowledge-center/fail-over-s3-r53
upvoted 3 times
A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to
simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the
A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
Correct Answer: D
Selected Answer: D
Tape... lol
The company must preserve its existing investment, so it wants to keep using its existing backup applications. This means EFS won't work, and NFS may not be compatible. VTL is the only option that is compatible with an application workflow that backs up to tapes.
Selected Answer: D
Use Tape Gateway to replace physical tapes on premises with virtual tapes on AWS—reducing your data storage costs without changing your tape-
based backup workflows. Tape Gateway supports all leading backup applications and caches virtual tapes on premises for low-latency data access.
It compresses your tape data, encrypts it, and stores it in a virtual tape library in Amazon Simple Storage Service (Amazon S3). From there, you can
transfer it to either Amazon S3 Glacier Flexible Retrieval or Amazon S3 Glacier Deep Archive to help minimize your long-term storage costs.
https://aws.amazon.com/storagegateway/vtl/#:~:text=Use-,Tape%20Gateway,-to%20replace%20physical
upvoted 4 times
Selected Answer: D
Tape Gateway enables you to replace using physical tapes on premises with virtual tapes in AWS without changing existing backup workflows. Tape
Gateway supports all leading backup applications and caches virtual tapes on premises for low-latency data access. Tape Gateway encrypts data
between the gateway and AWS for secure data transfer, and compresses data and transitions virtual tapes between Amazon S3 and Amazon S3
Glacier Flexible Retrieval, or Amazon S3 Glacier Deep Archive, to minimize storage costs.
upvoted 2 times
Selected Answer: D
https://aws.amazon.com/storagegateway/vtl/?nc1=h_ls
upvoted 1 times
Selected Answer: D
Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
upvoted 1 times
D is correct
https://aws.amazon.com/storagegateway/vtl/?nc1=h_ls
upvoted 1 times
Question #547 Topic 1
A company has data collection sensors at different locations. The data collection sensors stream a high volume of data to the company. The
company wants to design a platform on AWS to ingest and process high-volume streaming data. The solution must be scalable and support data
collection in near real time. The company must store the data in Amazon S3 for future reporting.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.
Correct Answer: A
Selected Answer: A
Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon S3, Amazon
Redshift, Amazon Elasticsearch Service, and Splunk. It requires minimal setup and maintenance, automatically scales to match the throughput of
your data, and offers near real-time data delivery with minimal operational overhead.
upvoted 2 times
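A minimal sketch of how a sensor gateway (or a small relay in front of the sensors) could push records into the Firehose delivery stream that delivers to S3; the stream name and payload format are assumptions:

import boto3

firehose = boto3.client("firehose")

def send_reading(payload: bytes) -> None:
    # Firehose buffers records and delivers them to the configured S3 bucket in near real time.
    firehose.put_record(
        DeliveryStreamName="sensor-stream",   # placeholder delivery stream
        Record={"Data": payload + b"\n"},     # newline-delimit records for easier downstream parsing
    )

send_reading(b'{"sensor_id": "s-42", "temp_c": 21.7}')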
Selected Answer: A
Amazon Kinesis Data Firehose: Capture, transform, and load data streams into AWS data stores (S3) in near real-time.
https://aws.amazon.com/pm/kinesis/
upvoted 2 times
A for sure
upvoted 2 times
Selected Answer: A
Correct Answer: A
upvoted 2 times
Selected Answer: D
Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3
upvoted 2 times
Selected Answer: A
A company has separate AWS accounts for its finance, data analytics, and development departments. Because of costs and security concerns,
the company wants to control which services each AWS account can use.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Systems Manager templates to control which AWS services each department can use.
B. Create organization units (OUs) for each department in AWS Organizations. Attach service control policies (SCPs) to the OUs.
C. Use AWS CloudFormation to automatically provision only the AWS services that each department can use.
D. Set up a list of products in AWS Service Catalog in the AWS accounts to manage and control the usage of specific AWS services.
Correct Answer: B
Selected Answer: B
Selected Answer: B
Create organization units (OUs) for each department in AWS Organizations. Attach service control policies (SCPs) to the OUs
upvoted 1 times
Correct Answer: B
upvoted 1 times
Create organization units (OUs) for each department in AWS Organizations. Attach service control policies (SCPs) to the OUs.
upvoted 1 times
Selected Answer: B
Selected Answer: D
My rationale: the scenario is "A company has separate AWS accounts"; it does not mention use of Organizations or any need for centralized
management of these accounts.
Then, setting up a list of products in AWS Service Catalog in the AWS accounts (in each AWS account) is the best way to manage and control the
usage of specific AWS services.
upvoted 1 times
Service Catalog alone does not restrict anything. You'd need to create a service in Service Catalog for everything you're allowing to use, then
grant permissions on those services, and you'd need to remove other permissions from everyone. All of which is not mentioned in D. Just
"setting up a list of products in AWS Service Catalog in the AWS accounts" will not restrict anyone from doing what he could do before.
upvoted 2 times
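To make the SCP approach concrete, here is a hedged boto3 sketch that creates a deny-by-default service control policy and attaches it to a department OU. The allowed-service list, policy name, and OU ID are illustrative placeholders only.

    import boto3, json

    org = boto3.client("organizations")

    # Deny everything except the services this department is allowed to use (example allow-list).
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "NotAction": ["ec2:*", "s3:*", "cloudwatch:*"],   # placeholder allowed services
            "Resource": "*",
        }],
    }

    policy = org.create_policy(
        Content=json.dumps(scp),
        Description="Restrict department to approved services",
        Name="dev-dept-allowed-services",                      # hypothetical policy name
        Type="SERVICE_CONTROL_POLICY",
    )
    org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"],
                      TargetId="ou-xxxx-xxxxxxxx")             # the department's OU ID

Because the SCP is attached at the OU level, it applies to every account moved into that OU with no per-account setup, which is where the low operational overhead comes from.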
Selected Answer: B
BBBBBBBBB
upvoted 1 times
A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the
public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL
database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect
must devise a strategy that maximizes security without increasing operational overhead.
A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet
gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.
Correct Answer: B
Selected Answer: B
A: Probably an old question, which is why this option appears, but a NAT instance adds operational overhead
C: Not secure, as routing the private subnets to an internet gateway exposes them directly
D: A virtual private gateway is for VPN/Direct Connect connectivity, not internet access
B: A NAT gateway is a managed solution and is secure by configuration
upvoted 2 times
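A minimal boto3 sketch of option B, assuming placeholder subnet and route table IDs: allocate an Elastic IP, create the NAT gateway in a public subnet, and point the private subnet's default route at it.

    import boto3

    ec2 = boto3.client("ec2")

    # Allocate an Elastic IP and create the NAT gateway in a public subnet.
    eip = ec2.allocate_address(Domain="vpc")
    natgw = ec2.create_nat_gateway(SubnetId="subnet-public-1",          # placeholder public subnet
                                   AllocationId=eip["AllocationId"])

    # Send all internet-bound traffic from the private subnet through the NAT gateway.
    # (In practice, wait for the NAT gateway to become available first.)
    ec2.create_route(RouteTableId="rtb-private-1",                      # placeholder private route table
                     DestinationCidrBlock="0.0.0.0/0",
                     NatGatewayId=natgw["NatGateway"]["NatGatewayId"])

The private instances keep no public IPs and accept no inbound connections from the internet; only outbound requests to the third-party provider flow through the NAT gateway.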
Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway
upvoted 2 times
Correct Answer: B
upvoted 3 times
Selected Answer: B
Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
upvoted 2 times
Selected Answer: B
A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment variables. A solutions architect needs to
ensure that the required permissions are in place to decrypt and use the environment variables.
Which steps must the solutions architect take to implement the correct permissions? (Choose two.)
D. Allow the Lambda execution role in the AWS KMS key policy.
E. Allow the Lambda resource policy in the AWS KMS key policy.
Correct Answer: BD
Selected Answer: BD
To decrypt environment variables encrypted with AWS KMS, Lambda needs to be granted permissions to call KMS APIs. This is done in two places:
The Lambda execution role needs kms:Decrypt and kms:GenerateDataKey permissions added. The execution role governs what AWS services the
function code can access.
The KMS key policy needs to allow the Lambda execution role to have kms:Decrypt and kms:GenerateDataKey permissions for that specific key.
This allows the execution role to use that particular key.
upvoted 6 times
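A hedged sketch of the two permission pieces described above; the account ID, role name, and key ARN are placeholders. The first dict is a statement you would merge into the KMS key policy, and put_role_policy adds the matching identity-side permission to the Lambda execution role.

    import boto3, json

    # Key-policy statement that lets the Lambda execution role use this key.
    key_policy_statement = {
        "Sid": "AllowLambdaExecutionRole",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/my-lambda-role"},  # placeholder role ARN
        "Action": ["kms:Decrypt"],
        "Resource": "*",
    }

    # Matching permission added to the execution role itself.
    iam = boto3.client("iam")
    iam.put_role_policy(
        RoleName="my-lambda-role",                     # placeholder role name
        PolicyName="allow-kms-decrypt",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{"Effect": "Allow", "Action": "kms:Decrypt",
                           "Resource": "arn:aws:kms:us-east-1:123456789012:key/KEY-ID"}],  # placeholder key
        }),
    )

Both sides have to agree: the key policy names the role as an allowed principal, and the role carries the kms:Decrypt permission, which is exactly why the answer is two separate steps.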
Selected Answer: BD
Allow the Lambda execution role in the AWS KMS key policy then add AWS KMS permissions in the role.
upvoted 2 times
Correct Answer: BD
upvoted 2 times
Selected Answer: BD
BD BD BD BD
upvoted 1 times
Selected Answer: BD
Its B and D
upvoted 1 times
A company has a financial application that produces reports. The reports average 50 KB in size and are stored in Amazon S3. The reports are
frequently accessed during the first week after production and must be stored for several years. The reports must be retrievable within 6 hours.
A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days.
B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days.
C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) and
S3 Glacier.
D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
Correct Answer: A
Answer is A
Amazon S3 Glacier:
Expedited Retrieval: Provides access to data within 1-5 minutes.
Standard Retrieval: Provides access to data within 3-5 hours.
Bulk Retrieval: Provides access to data within 5-12 hours.
Amazon S3 Glacier Deep Archive:
Standard Retrieval: Provides access to data within 12 hours.
Bulk Retrieval: Provides access to data within 48 hours.
upvoted 23 times
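For reference, the lifecycle rule in option A can be applied with boto3 roughly as follows; the bucket name is a placeholder.

    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="financial-reports-bucket",            # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [{
                "ID": "reports-to-glacier",
                "Filter": {"Prefix": ""},             # apply to all report objects
                "Status": "Enabled",
                "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
            }],
        },
    )

Objects stay in S3 Standard for the first week of frequent access, then move to S3 Glacier Flexible Retrieval, whose standard retrievals (3-5 hours) stay within the 6-hour requirement.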
Selected Answer: C
Selected Answer: A
Selected Answer: C
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
upvoted 1 times
Selected Answer: A
C is incorrect.
Unsupported lifecycle transitions: Amazon S3 does not support, among others, transitioning from the S3 Intelligent-Tiering storage class to the S3
Standard-IA storage class.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-considerations.html
upvoted 3 times
Selected Answer: A
BC are lifecycle with tiering and infrequent access which are not required here.
D is deep archive and can take hours to retrieve so it is not suitable
A is cheapest workable option
upvoted 3 times
Selected Answer: A
Any option with S3 Intelligent-Tiering is out; it is only required when the access patterns are unknown.
From the question the access patterns are well known, enough to keep the frequently accessed reports in S3 Standard and transition them to S3
Glacier after 7 days.
upvoted 3 times
its A for me
upvoted 2 times
Option A
Amazon S3 Glacier Standard Retrieval: Provides access to data within 3-5 hours.
upvoted 3 times
Selected Answer: A
Selected Answer: A
Selected Answer: C
Check Oayoade's comment: the files have to stay in S3 for 30 days before they can transition, young padawans
upvoted 2 times
Selected Answer: C
Correct Answer: C
upvoted 1 times
Question #552 Topic 1
A company needs to optimize the cost of its Amazon EC2 instances. The company also needs to change the type and family of its EC2 instances
D. Purchase an All Upfront EC2 Instance Savings Plan for a 1-year term.
Correct Answer: B
Selected Answer: B
The company needs flexibility to change EC2 instance types and families every 2-3 months. This rules out Reserved Instances which lock you into
an instance type and family for 1-3 years.
A Compute Savings Plan allows switching instance types and families freely within the term as needed. No Upfront is more flexible than All Upfront
A 1-year term balances commitment and flexibility better than a 3-year term given the company's changing needs.
With No Upfront, the company only pays for usage monthly without an upfront payment. This optimizes cost.
upvoted 9 times
Selected Answer: B
"EC2 Instance Savings Plans give you the flexibility to change your usage between instances WITHIN a family in that region. "
https://aws.amazon.com/savingsplans/compute-pricing/
upvoted 5 times
Selected Answer: B
" needs to change the type and family of its EC2 instances". that means B I think.
upvoted 2 times
A solutions architect needs to review a company's Amazon S3 buckets to discover personally identifiable information (PII). The company stores
Which solution will meet these requirements with the LEAST operational overhead?
A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.
B. Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that is in Amazon S3.
Correct Answer: A
Selected Answer: A
Amazon Macie is designed specifically for discovering and classifying sensitive data like PII in S3. This makes it the optimal service to use.
Macie can be enabled directly in the required Regions rather than enabling it across all Regions which is unnecessary. This minimizes overhead.
Macie can be set up to automatically scan the specified S3 buckets on a schedule. No need to create separate jobs.
Security Hub is for security monitoring across AWS accounts, not specific for PII discovery. More overhead than needed.
Inspector and GuardDuty are not built for PII discovery in S3 buckets. They provide broader security capabilities.
upvoted 5 times
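A rough boto3 sketch of option A, with placeholder account ID, bucket, and job names: Macie is enabled in the Region, then a one-time classification job scans the bucket for sensitive data such as PII.

    import boto3

    macie = boto3.client("macie2", region_name="us-east-1")
    macie.enable_macie()                              # once per account and Region

    macie.create_classification_job(
        jobType="ONE_TIME",
        name="pii-scan",                              # hypothetical job name
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": "123456789012",         # placeholder account
                 "buckets": ["customer-data-bucket"]} # placeholder bucket
            ]
        },
    )

Findings (credit card numbers, names, addresses, and other PII detections) then appear in the Macie console with no custom scanning code to maintain.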
Selected Answer: A
PII = Macie
Security Hub: organization-wide security posture and findings aggregation, not PII discovery
Inspector: infrastructure vulnerability management
GuardDuty: threat detection
upvoted 3 times
Selected Answer: A
Selected Answer: A
A company's SAP application has a backend SQL Server database in an on-premises environment. The company wants to migrate its on-premises
application and database server to AWS. The company needs an instance type that meets the high demands of its SAP database. On-premises
performance data shows that both the SAP application and the database have high memory utilization.
A. Use the compute optimized instance family for the application. Use the memory optimized instance family for the database.
B. Use the storage optimized instance family for both the application and the database.
C. Use the memory optimized instance family for both the application and the database.
D. Use the high performance computing (HPC) optimized instance family for the application. Use the memory optimized instance family for
the database.
Correct Answer: C
Selected Answer: C
Since both the app and database have high memory needs, the memory optimized family like R5 instances meet those requirements well.
Using the same instance family simplifies management and operations, rather than mixing instance types.
Compute optimized instances may not provide enough memory for the SAP app's needs.
Storage optimized is overkill for the database's compute and memory needs.
HPC is overprovisioned for the SAP app.
upvoted 14 times
Selected Answer: C
Use the memory optimized instance family for both the application and the database
upvoted 2 times
I think it's C
upvoted 2 times
Question #555 Topic 1
A company runs an application in a VPC with public and private subnets. The VPC extends across multiple Availability Zones. The application runs
on Amazon EC2 instances in private subnets. The application uses an Amazon Simple Queue Service (Amazon SQS) queue.
A solutions architect needs to design a secure solution to establish a connection between the EC2 instances and the SQS queue.
A. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the private subnets. Add to the endpoint a security
group that has an inbound access rule that allows traffic from the EC2 instances that are in the private subnets.
B. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets. Attach to the interface endpoint a
VPC endpoint policy that allows access from the EC2 instances that are in the private subnets.
C. Implement an interface VPC endpoint for Amazon SQS. Configure the endpoint to use the public subnets. Attach an Amazon SQS access
policy to the interface VPC endpoint that allows requests from only a specified VPC endpoint.
D. Implement a gateway endpoint for Amazon SQS. Add a NAT gateway to the private subnets. Attach an IAM role to the EC2 instances that
Correct Answer: A
Selected Answer: A
An interface VPC endpoint is a private way to connect to AWS services without having to expose your VPC to the public internet. This is the most
secure way to connect to Amazon SQS from the private subnets.
Configuring the endpoint to use the private subnets ensures that the traffic between the EC2 instances and the SQS queue is only within the VPC.
This helps to protect the traffic from being intercepted by a malicious actor.
Adding a security group to the endpoint that has an inbound access rule that allows traffic from the EC2 instances that are in the private subnets
further restricts the traffic to only the authorized sources. This helps to prevent unauthorized access to the SQS queue.
upvoted 8 times
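Option A could be provisioned roughly like this with boto3; the VPC, subnet, and security group IDs are placeholders, and the security group would allow inbound HTTPS (443) from the EC2 instances in the private subnets.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0abc1234",                                # placeholder VPC
        ServiceName="com.amazonaws.us-east-1.sqs",
        SubnetIds=["subnet-private-a", "subnet-private-b"],  # the private subnets
        SecurityGroupIds=["sg-endpoint"],                    # allows 443 from the EC2 instances
        PrivateDnsEnabled=True,                              # SQS SDK calls resolve to the endpoint
    )

With private DNS enabled, the application keeps using the normal SQS endpoint name and the traffic never leaves the VPC.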
A is correct.
Selected Answer: A
Answer is A
upvoted 1 times
Selected Answer: A
I think it's A
upvoted 1 times
Question #556 Topic 1
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web application consists of a web tier
and an application tier that stores and retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on Amazon EC2
instances, and the database tier is not publicly accessible. The application EC2 instances need to access the DynamoDB tables without exposing
A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing an instance profile.
B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role to the EC2 instance profile,
C. Use the parameter section in the AWS CloudFormation template to have the user input access and secret keys from an already-created IAM
user that has the required permissions to read and write from the DynamoDB tables.
D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write from the DynamoDB tables.
Use the GetAtt function to retrieve the access and secret keys, and pass them to the application instances through the user data.
Correct Answer: B
Selected Answer: B
Best practice is to use an IAM role for database access. The app needs both read and write access to the DB, and only B grants both.
upvoted 2 times
Selected Answer: B
Application "stores and retrieves" data in DynamoDB while A grants only access "to read".
upvoted 2 times
B is correct. A is totally wrong because it only grants permission to "read the DynamoDB tables", so what about writing to the database?
upvoted 3 times
Why "No read and write" ? The question clearly states that application tier STORE and RETRIEVE the data from DynamoDB. Which means write and
read... I think answer should be B
upvoted 2 times
Selected Answer: B
https://www.examtopics.com/discussions/amazon/view/80755-exam-aws-certified-solutions-architect-associate-saa-c02/
upvoted 1 times
Selected Answer: B
My rationale: Option A is wrong because the scenario says "stores and retrieves user data in Amazon DynamoDB tables", STORES and RETRIEVES; if you
set a role to READ only, you can't write to the DynamoDB database
upvoted 1 times
AAAAAAAAA
upvoted 1 times
Selected Answer: A
A is correct
upvoted 1 times
A solutions architect manages an analytics application. The application stores large amounts of semistructured data in an Amazon S3 bucket.
The solutions architect wants to use parallel data processing to process the data more quickly. The solutions architect also wants to use
A. Use Amazon Athena to process the S3 data. Use AWS Glue with the Amazon Redshift data to enrich the S3 data.
B. Use Amazon EMR to process the S3 data. Use Amazon EMR with the Amazon Redshift data to enrich the S3 data.
C. Use Amazon EMR to process the S3 data. Use Amazon Kinesis Data Streams to move the S3 data into Amazon Redshift so that the data
can be enriched.
D. Use AWS Glue to process the S3 data. Use AWS Lake Formation with the Amazon Redshift data to enrich the S3 data.
Correct Answer: B
Selected Answer: B
Use Amazon EMR to process the semi-structured data in Amazon S3. EMR provides a managed Hadoop framework optimized for processing large
datasets in S3.
EMR supports parallel data processing across multiple nodes to speed up the processing.
EMR can integrate directly with Amazon Redshift using the EMR-Redshift integration. This allows querying the Redshift data from EMR and joining
it with the S3 data.
This enables enriching the semi-structured S3 data with the information stored in Redshift
upvoted 15 times
By combining AWS Glue and Amazon Redshift, you can process the semistructured data in parallel using Glue ETL jobs and then store the
processed and enriched data in a structured format in Amazon Redshift. This approach allows you to perform complex analytics efficiently and at
scale.
upvoted 8 times
Selected Answer: B
D: not relevant; the data is semistructured and Glue is oriented to batch ETL rather than streaming
A: not correct; Athena is for querying data, not processing it
B & C look OK, but C is out: Kinesis Data Streams is redundant because EMR has already processed the data that feeds Redshift for parallel processing
Selected Answer: B
Selected Answer: B
A has a pitfall, "use Amazon Athena to PROCESS the data". With Athena you can query, not process, data.
C is wrong because Kinesis has no place here.
D is wrong because it does not process the Redshift data, and Glue does ETL, not analyze
Thus it's B. EMR can use semi-structured data from S3 and structured data from Redshift and is ideal for "parallel data processing" of "large
amounts" of data.
upvoted 4 times
Selected Answer: A
Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using
standard SQL.
upvoted 1 times
Glue uses an Apache Spark (PySpark) cluster for parallel processing, so EMR and Glue are both possible options. Glue is serverless, and PySpark does
in-memory parallel processing.
upvoted 1 times
https://aws.amazon.com/emr/features/hadoop/
upvoted 1 times
Selected Answer: A
Answer is A
upvoted 1 times
Selected Answer: B
From this documentation it looks like EMR can interface with S3 (through EMRFS).
https://aws.amazon.com/emr/
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-file-systems.html
upvoted 1 times
For those answering A: AWS Glue can directly query S3, but it can't use Athena as a source of data. The question says the Redshift data should be used
to "enrich" the S3 data, which means the Redshift data needs to be "added" to the S3 data. A doesn't allow that.
upvoted 1 times
Selected Answer: B
Choose option B.
Option A is not correct. Amazon Athena is suitable for querying data directly from S3 using SQL and allows parallel processing of S3 data.
AWS Glue can be used for data preparation and enrichment but might not directly integrate with Amazon Redshift for enrichment.
upvoted 1 times
Selected Answer: A
Selected Answer: A
https://aws.amazon.com/emr/features/hadoop/?nc1=h_ls
upvoted 1 times
Selected Answer: A
athena for s3
upvoted 1 times
Question #558 Topic 1
A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic
between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month.
A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC
communication.
B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC
communication.
C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC
communication.
D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect
Correct Answer: C
Selected Answer: C
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4
addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC
peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different Regions (also known as an inter-
Region VPC peering connection).
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html#:~:text=A-,VPC%20peering,-connection%20is%20a
upvoted 2 times
Selected Answer: C
Selected Answer: C
VPC peering provides private connectivity between VPCs without using public IP space.
VPC peering itself has no hourly charge; within the same Region, data transfer between peered VPCs is free in the same Availability Zone and billed at
low intra-Region rates across Availability Zones.
500 GB per month of inter-VPC data transfer is therefore very inexpensive over peering.
Transit Gateway (Option A) incurs hourly charges plus data transfer fees. More costly than peering.
Site-to-Site VPN (Option B) incurs hourly charges and data transfer fees. More expensive than peering.
Direct Connect (Option D) has high hourly charges and would be overkill for this use case.
upvoted 4 times
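A minimal boto3 sketch of option C with placeholder VPC, route table, and CIDR values: create and accept the peering connection, then add a route in each VPC's route table.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")

    peering = ec2.create_vpc_peering_connection(VpcId="vpc-aaaa1111",      # requester VPC (placeholder)
                                                PeerVpcId="vpc-bbbb2222")  # accepter VPC, same account/Region
    pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Each VPC needs a route to the other VPC's CIDR through the peering connection.
    ec2.create_route(RouteTableId="rtb-vpc-a", DestinationCidrBlock="10.1.0.0/16",   # placeholder values
                     VpcPeeringConnectionId=pcx_id)
    ec2.create_route(RouteTableId="rtb-vpc-b", DestinationCidrBlock="10.0.0.0/16",
                     VpcPeeringConnectionId=pcx_id)

There is nothing else to operate afterwards: no gateway hours, no tunnels, no appliances, which is why peering wins on cost for this traffic volume.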
Selected Answer: C
VPC peering is the most cost-effective way to connect two VPCs within the same region and AWS account. There are no additional charges for VPC
peering beyond standard data transfer rates.
Transit Gateway and VPN add additional hourly and data processing charges that are not necessary for simple VPC peering.
Direct Connect provides dedicated network connectivity, but is overkill for the relatively low inter-VPC data transfer needs described here. It has
high fixed costs plus data transfer rates.
For occasional inter-VPC communication of moderate data volumes within the same region and account, VPC peering is the most cost-effective
solution. It provides simple private connectivity without transfer charges or network appliances.
upvoted 4 times
A company hosts multiple applications on AWS for different product lines. The applications use different compute resources, including Amazon
EC2 instances and Application Load Balancers. The applications run in different AWS accounts under the same organization in AWS Organizations
across multiple AWS Regions. Teams for each product line have tagged each compute resource in the individual accounts.
The company wants more details about the cost for each product line from the consolidated billing feature in Organizations.
Correct Answer: BE
Selected Answer: BE
User-defined tags were created by each product team to identify resources. Selecting the relevant tag in the Billing console will group costs.
The tag must be activated from the Organizations management account to consolidate billing across all accounts.
AWS generated tags are predefined by AWS and won't align to product lines.
Resource Groups (Option C) helps manage resources but not billing.
Activating the tag from each account (Option D) is not needed since Organizations centralizes billing.
upvoted 8 times
Selected Answer: BE
Your user-defined cost allocation tags represent the tag key, which you activate in the Billing console.
upvoted 1 times
BE BE BE BE
upvoted 2 times
Selected Answer: BE
"Only a management account in an organization and single accounts that aren't members of an organization have access to the cost allocation
tags manager in the Billing and Cost Management console."
https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/custom-tags.html
upvoted 3 times
Question #560 Topic 1
A company's solutions architect is designing an AWS multi-account solution that uses AWS Organizations. The solutions architect has organized
The solutions architect needs a solution that will identify any changes to the OU hierarchy. The solution also needs to notify the company's
Which solution will meet these requirements with the LEAST operational overhead?
A. Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify the changes to the OU hierarchy.
B. Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to identify the changes to the OU hierarchy.
C. Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail organization trail to identify the changes to the OU
hierarchy.
D. Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection operation on a stack to identify the
Correct Answer: A
Selected Answer: A
https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html
https://docs.aws.amazon.com/controltower/latest/userguide/prevention-and-notification.html
upvoted 7 times
Selected Answer: C
Utilize AWS Service Catalog to provision AWS accounts within AWS Organizations. This ensures standardized account creation and management.
Enable AWS CloudTrail Organization Trail:
Set up an AWS CloudTrail organization trail that records all API calls across all accounts in the organization.
This trail will capture changes to the OU hierarchy, including any modifications to organizational units.
upvoted 1 times
AWS Config helps you maintain a detailed inventory of your resources and their configurations, track changes over time, and ensure compliance
with your organization's policies and industry regulations.
upvoted 2 times
Selected Answer: A
AWS Control Tower provides passive and active methods of drift monitoring protection for preventive controls.
upvoted 1 times
A company's website handles millions of requests each day, and the number of requests continues to increase. A solutions architect needs to
improve the response time of the web application. The solutions architect determines that the application needs to decrease latency when
Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX.
B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read requests through Redis.
C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all read requests through
Memcached.
D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all
Correct Answer: A
Selected Answer: A
A, because B, C and D all involve ElastiCache, which requires heavy code changes and therefore more operational overhead
upvoted 8 times
Selected Answer: A
decrease latency when retrieving product details from the Amazon DynamoDB = Amazon DynamoDB Accelerator (DAX)
upvoted 4 times
DAX provides a DynamoDB-compatible caching layer to reduce read latency. It is purpose-built for accelerating DynamoDB workloads.
Using DAX requires minimal application changes - only read requests are routed through it.
DAX handles caching logic automatically without needing complex integration code.
ElastiCache Redis/Memcached (Options B/C) require more integration work to sync DynamoDB data.
Using Lambda and Streams to populate ElastiCache (Option D) is a complex event-driven approach requiring ongoing maintenance.
DAX plugs in seamlessly to accelerate DynamoDB with very little operational overhead
upvoted 2 times
DynamoDB = DAX
upvoted 2 times
A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not travel across the internet.
Which combination of steps should the solutions architect take to meet this requirement? (Choose two.)
D. Create an elastic network interface for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the endpoint's security group to provide access.
Correct Answer: AB
Selected Answer: AB
https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-ddb.html
upvoted 10 times
Selected Answer: BE
A gateway endpoint for DynamoDB enables private connectivity between DynamoDB and the VPC. This allows EC2 instances to access DynamoDB
APIs without traversing the internet.
A security group entry is needed to allow the EC2 instances access to the DynamoDB endpoint over the VPC.
An interface endpoint is used for services like S3 and Systems Manager, not DynamoDB.
Route table entries route traffic within a VPC but do not affect external connectivity.
Elastic network interfaces are not needed for gateway endpoints.
upvoted 9 times
Selected Answer: AB
Creating the gateway endpoint and editing the route table is enough; there is no security group involved
upvoted 1 times
C & D are both not relevant. D looks ok but DynamoDB doesn't go with security group, it only allows route table for VPC endpoint. Link here:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
upvoted 1 times
Selected Answer: AB
DynamoDB can only be connected via a gateway endpoint (just like S3).
A route table entry connects the VPC to the endpoint.
So do B, then A
Selected Answer: AB
Gateway Endpoint does not have an ENI, thus it has no security group. Instances have security groups and those must allow access to DynamoDB.
upvoted 5 times
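For illustration, a gateway endpoint for DynamoDB is created with the route tables passed in directly, so the DynamoDB prefix-list route is added for you; the IDs below are placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234",                               # placeholder VPC
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-private-a", "rtb-private-b"],   # routes to the DynamoDB prefix list are added here
    )

Because the route is keyed to the DynamoDB prefix list, all DynamoDB API calls from those subnets stay on the AWS network instead of traversing the internet.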
Selected Answer: BE
A. Create a route table entry for the endpoint: This is not necessary, as the gateway endpoint itself automatically creates the required route table
entries.
upvoted 2 times
Selected Answer: AB
Create a gateway endpoint for DynamoDB then create a route table entry for the endpoint
upvoted 2 times
Selected Answer: BE
Selected Answer: AB
https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html#vpc-endpoints-routing
Traffic from your VPC to Amazon S3 or DynamoDB is routed to the gateway endpoint. Each subnet route table must have a route that sends traffic
destined for the service to the gateway endpoint using the prefix list for the service.
upvoted 1 times
Selected Answer: AB
You can access Amazon DynamoDB from your VPC using gateway VPC endpoints. After you create the gateway endpoint, you can add it as a target
in your route table for traffic destined from your VPC to DynamoDB.
upvoted 2 times
Selected Answer: AB
A company runs its applications on both Amazon Elastic Kubernetes Service (Amazon EKS) clusters and on-premises Kubernetes clusters. The
company wants to view all clusters and workloads from a central location.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon CloudWatch Container Insights to collect and group the cluster information.
B. Use Amazon EKS Connector to register and connect all Kubernetes clusters.
C. Use AWS Systems Manager to collect and view the cluster information.
D. Use Amazon EKS Anywhere as the primary cluster to view the other clusters with native Kubernetes commands.
Correct Answer: B
You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and visualize it in the Amazon EKS console
After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this
feature to view connected clusters in Amazon EKS console, but you can't manage them. The Amazon EKS Connector requires an agent that is an
open source project on Github. For additional technical content, including frequently asked questions and troubleshooting, see Troubleshooting
issues in Amazon EKS Connector
The Amazon EKS Connector can connect the following types of Kubernetes clusters to Amazon EKS.
Selected Answer: B
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
"You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and visualize it in the Amazon EKS
console. "
B is the right product for this.
upvoted 3 times
Selected Answer: B
Selected Answer: B
View all clusters and workloads (incl on-prem) from a central location = Amazon EKS Connector
Create and operate Kubernetes clusters on your own infrastructure = Amazon EKS Anywhere
https://aws.amazon.com/eks/eks-anywhere/
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
upvoted 1 times
Definitely B.
"You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and visualize it in the Amazon EKS
console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. "
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
upvoted 2 times
Selected Answer: B
EKS Connector allows registering external Kubernetes clusters (on-premises and otherwise) with Amazon EKS
This provides a unified view and management of all clusters within the EKS console.
EKS Connector handles keeping resources in sync across connected clusters.
This centralized approach minimizes operational overhead compared to using separate tools.
CloudWatch Container Insights (Option A) only provides metrics and logs, not cluster management.
Systems Manager (Option C) is more general purpose and does not natively integrate with EKS.
EKS Anywhere (Option D) would not provide a single pane of glass for external clusters.
upvoted 3 times
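Registering an external cluster with the EKS Connector looks roughly like the boto3 sketch below. The cluster name and connector agent role ARN are placeholders, and the response contains activation details used when installing the connector agent in the on-premises cluster.

    import boto3

    eks = boto3.client("eks")
    response = eks.register_cluster(
        name="on-prem-cluster-1",                                          # hypothetical cluster name
        connectorConfig={
            "roleArn": "arn:aws:iam::123456789012:role/eks-connector-agent",  # placeholder agent role
            "provider": "OTHER",                                           # a generic conformant Kubernetes cluster
        },
    )
    # response["cluster"]["connectorConfig"] carries the activation material for the agent manifest.

Once the agent is running in the on-premises cluster, that cluster appears alongside the native EKS clusters in the EKS console, giving the central view the question asks for.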
Selected Answer: B
You can use Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and visualize it in the Amazon EKS console
After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this
feature to view connected clusters in Amazon EKS console, but you can't manage them
upvoted 1 times
Selected Answer: D
"The Amazon EKS Connector can connect the following types of Kubernetes clusters to Amazon EKS.
https://aws.amazon.com/de/eks/eks-anywhere/
upvoted 1 times
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
upvoted 4 times
https://docs.aws.amazon.com/eks/latest/userguide/eks-connector.html
upvoted 2 times
Question #564 Topic 1
A company is building an ecommerce application and needs to store sensitive customer information. The company needs to give customers the
ability to complete purchase transactions on the website. The company also needs to ensure that sensitive customer data is protected, even from
database administrators.
A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS encryption to encrypt the data. Use an IAM instance
B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS KMS) client-side encryption to encrypt the data.
C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side encryption to encrypt the data. Use S3
D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application servers. Use Windows file permissions to
restrict access.
Correct Answer: B
Selected Answer: B
RDS MySQL provides a fully managed database service well suited for an ecommerce application.
AWS KMS client-side encryption allows encrypting sensitive data before it hits the database. The data remains encrypted at rest.
This protects sensitive customer data from database admins and privileged users.
EBS encryption (Option A) protects data at rest but not in use. IAM roles don't prevent admin access.
S3 (Option C) encrypts data at rest on the server side. Bucket policies don't restrict admin access.
FSx file permissions (Option D) don't prevent admin access to unencrypted data.
upvoted 8 times
Selected Answer: B
A, C and D would allow the administrator of the storage to access the data. Besides, it is data about "purchase transactions" which is usually stored
in a transactional database (such as RDS for MySQL), not in a file or object storage.
upvoted 6 times
B
I want to go with B as the question is about protecting data from database administrators. Client-side key encryption is possible in application code,
and AWS KMS can be used for the encryption. Encrypted data in the DB is of no use to a DB admin.
upvoted 1 times
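As a simplified illustration of client-side encryption with KMS (option B), the application can encrypt the sensitive field before it is ever written to MySQL; real workloads would more likely use envelope encryption via the AWS Encryption SDK, and the key alias below is a placeholder.

    import boto3

    kms = boto3.client("kms")

    # Encrypt the sensitive value in the application layer, before it reaches RDS.
    ciphertext = kms.encrypt(
        KeyId="alias/customer-data",                 # hypothetical KMS key alias
        Plaintext=b"4111 1111 1111 1111",
    )["CiphertextBlob"]
    # Store `ciphertext` in the MySQL column; a DBA only ever sees the encrypted blob.

    # Only principals allowed by the key policy can turn it back into plaintext.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]

Because decryption requires kms:Decrypt on the key, a database administrator with full SQL access but no KMS permission still cannot read the data.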
Selected Answer: B
A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The
migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale
A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.
Correct Answer: C
Selected Answer: C
DMS provides an easy migration path from MySQL to Aurora while minimizing downtime.
Aurora is a MySQL-compatible relational database service that will maintain compatibility with the company's applications.
Aurora Auto Scaling allows the database to automatically scale up and down based on demand to handle increased workloads.
RDS MySQL (Option A) does not scale as well as the Aurora architecture.
Redshift (Option B) is for analytics, not transactional data, and may not be compatible.
DynamoDB (Option D) is a NoSQL datastore and lacks MySQL compatibility.
upvoted 8 times
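Aurora Auto Scaling for read replicas is configured through Application Auto Scaling; a hedged sketch with a placeholder cluster name is shown below.

    import boto3

    aas = boto3.client("application-autoscaling")

    aas.register_scalable_target(
        ServiceNamespace="rds",
        ResourceId="cluster:my-aurora-cluster",          # placeholder Aurora cluster name
        ScalableDimension="rds:cluster:ReadReplicaCount",
        MinCapacity=1,
        MaxCapacity=8,
    )
    aas.put_scaling_policy(
        PolicyName="aurora-cpu-target",
        ServiceNamespace="rds",
        ResourceId="cluster:my-aurora-cluster",
        ScalableDimension="rds:cluster:ReadReplicaCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,                          # keep average reader CPU around 60%
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "RDSReaderAverageCPUUtilization"},
        },
    )

Aurora then adds or removes reader instances on its own as load changes, while storage grows automatically, which is the scaling behavior option C relies on.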
Selected Answer: C
A is wrong as you cannot use native MySQL tools for migration. Happy to be corrected though!
B Redshift is not compatible with MySQL
D is DynamoDB
C Aurora MySQL is compatible and supports auto scaling
upvoted 2 times
on-premises MySQL database, transactional data, maintain compatibility, scale automatically = Amazon Aurora
migrating the database to the AWS Cloud = AWS Database Migration Service
upvoted 1 times
Selected Answer: C
Selected Answer: C
A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a
hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage.
A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.
C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the
EC2 instances.
D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS
Correct Answer: B
Correct: B.
Amazon S3 is an object storage platform that uses a simple API for storing and accessing data. Applications that do not require a file system
structure and are designed to work with object storage can use Amazon S3 as a massively scalable, durable, low-cost object storage solution. This
application needs a hierarchical directory structure with concurrent reads and writes, so Amazon EFS is the right fit.
Selected Answer: B
https://aws.amazon.com/efs/when-to-choose-efs/
upvoted 1 times
Selected Answer: B
Selected Answer: B
hierarchical directory structure, read and write rapidly and concurrently to shared storage = Amazon Elastic File System
upvoted 1 times
Selected Answer: B
Amazon EFS simultaneously supports on-premises servers using a traditional file permissions model, file locking, and hierarchical directory
structure through the NFS v4 protocol.
upvoted 1 times
Selected Answer: B
Going with b
upvoted 1 times
C and D involve using Amazon EBS volumes, which are block storage. While they can be attached to EC2 instances, they might not provide the
same level of shared concurrent access as Amazon EFS. Additionally, synchronizing EBS volumes across different EC2 instances (as in option D) can
be complex and error-prone.
Therefore, for a scenario where multiple EC2 instances need to rapidly and concurrently access shared storage with a hierarchical directory
structure, Amazon EFS is the best solution.
upvoted 2 times
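A rough boto3 sketch of option B with placeholder subnet and security group IDs: create the EFS file system, then one mount target per Availability Zone so instances in both AZs can mount the same NFS share.

    import boto3

    efs = boto3.client("efs")
    fs = efs.create_file_system(PerformanceMode="generalPurpose",
                                Encrypted=True,
                                CreationToken="shared-app-fs")           # idempotency token

    # One mount target per AZ (wait for the file system to become available before this in real code).
    for subnet in ["subnet-az-a", "subnet-az-b"]:                        # placeholder subnets
        efs.create_mount_target(FileSystemId=fs["FileSystemId"],
                                SubnetId=subnet,
                                SecurityGroups=["sg-nfs-2049"])          # allows NFS (2049) from the instances

Each instance then mounts the file system over NFS and sees the same hierarchical directory tree, with concurrent reads and writes handled by EFS.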
S3 has a flat structure. EBS Multi-Attach only works within the same Availability Zone.
upvoted 1 times
Selected Answer: B
Because Amazon EBS Multi-Attach enables you to attach a single Provisioned IOPS SSD (io1 or io2) volume to multiple instances that are in the
same Availability Zone. The infra contains 2 AZ's.
upvoted 1 times
https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
upvoted 1 times
I think that C is the best option because io2 can share storage with Multi-Attach.
upvoted 1 times
A solutions architect is designing a workload that will store hourly energy consumption by business tenants in a building. The sensors will feed a
database through HTTP requests that will add up usage for each tenant. The solutions architect must use managed services when possible. The
workload will receive more features in the future as the solutions architect adds independent components.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in an
B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the
C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in a
D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the
sensors. Use an Amazon Elastic File System (Amazon EFS) shared file system to store the processed data.
Correct Answer: A
Selected Answer: A
° API Gateway removes the need to manage servers to receive the HTTP requests from sensors
° Lambda functions provide a serverless compute layer to process data as needed
° DynamoDB is a fully managed NoSQL database that scales automatically
° This serverless architecture has minimal operational overhead to manage
° Options B, C, and D all require managing EC2 instances which increases ops workload
° Option C also adds SQL Server admin tasks and licensing costs
° Option D uses EFS file storage which requires capacity planning and management
upvoted 5 times
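To illustrate option A, a Lambda handler behind API Gateway might accumulate each tenant's hourly usage in DynamoDB roughly like this; the table name and item attributes are assumptions, not part of the question.

    import json
    from decimal import Decimal
    import boto3

    table = boto3.resource("dynamodb").Table("EnergyUsage")    # hypothetical table name

    def handler(event, context):
        # API Gateway proxy integration passes the sensor's HTTP body as a string.
        reading = json.loads(event["body"])
        # Add this hour's reading to the tenant's running total (atomic ADD).
        table.update_item(
            Key={"tenant_id": reading["tenant_id"]},
            UpdateExpression="ADD usage_kwh :u",
            ExpressionAttributeValues={":u": Decimal(str(reading["kwh"]))},
        )
        return {"statusCode": 200, "body": json.dumps({"status": "ok"})}

New features later become additional independent Lambda functions behind new API routes, which fits the "independent components" requirement without touching this one.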
Selected Answer: A
Workload runs every hour, must use managed services, more features in the future, LEAST operational overhead = AWS Lambda functions.
HTTP requests, must use managed services, more features in the future, LEAST operational overhead = API Gateway.
Must use managed services, more features in the future, LEAST operational overhead =Amazon DynamoDB.
upvoted 3 times
"The workload will receive more features in the future ..." -> DynamoDB
upvoted 3 times
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All
The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application
Which combination of storage and caching should the solutions architect use?
C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront
Correct Answer: A
Selected Answer: A
Selected Answer: A
Selected Answer: A
CF allows caching
upvoted 1 times
S3 provides highly durable and scalable object storage capable of handling petabytes of data cost-effectively.
CloudFront can be used to cache S3 content at the edge, minimizing latency for users and speeding up access to the engineering drawings.
The global CloudFront edge network is ideal for caching large amounts of static media like drawings.
EBS provides block storage but lacks the scale and durability of S3 for large media files.
Glacier is cheaper archival storage but has higher latency unsuited for frequent access.
Storage Gateway and ElastiCache may play a role but do not align as well to the main requirements.
upvoted 4 times
An Amazon EventBridge rule targets a third-party API. The third-party API has not received any incoming traffic. A solutions architect needs to
determine whether the rule conditions are being met and if the rule's target is being invoked.
B. Review events in the Amazon Simple Queue Service (Amazon SQS) dead-letter queue.
Correct Answer: A
Selected Answer: A
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-monitoring.html
upvoted 7 times
Selected Answer: A
"EventBridge sends metrics to Amazon CloudWatch every minute for everything from the number of matched events to the number of times a
target is invoked by a rule."
from https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-monitoring.html
B: SQS, irrelevant
C: 'Check for events', this wording is confusing but could mean something in wrong context. I would have chosen C if A wasn't an option
D: CloudTrail is for AWS resource monitoring so irrelevant
upvoted 6 times
Selected Answer: A
A per https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-monitoring.html
Not B because SQS is not even involved here
Not C because EventBridge sends only metrics, not detailed logs, to CloudWatch
Not D, many fall for CloudTrail supposedly recording "API calls", but this is about calls for the EventBridge API to AWS, not calls to 3rd party APIs by
EventBridge.
upvoted 4 times
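Checking the AWS/Events metrics for the rule can be done with a query like the sketch below; the rule name is a placeholder, and a Sum of 0 for TriggeredRules or Invocations over the period would show the rule is not matching events or not invoking its target.

    import boto3
    from datetime import datetime, timedelta

    cw = boto3.client("cloudwatch")
    stats = cw.get_metric_statistics(
        Namespace="AWS/Events",
        MetricName="Invocations",                       # also useful: TriggeredRules, FailedInvocations
        Dimensions=[{"Name": "RuleName", "Value": "third-party-api-rule"}],  # hypothetical rule name
        StartTime=datetime.utcnow() - timedelta(hours=24),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Sum"],
    )
    print(stats["Datapoints"])

Comparing TriggeredRules against Invocations and FailedInvocations separates "the pattern never matches" from "the target invocation is failing".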
Selected Answer: C
Option A, "Check for metrics in Amazon CloudWatch in the namespace for AWS/Events," primarily provides aggregated metrics related to
EventBridge, but it may not give detailed information about individual events or their specific content. Metrics in CloudWatch can give you an
overview of how many events are being processed, but for detailed inspection of events and their conditions, checking CloudWatch Logs (option C) is more appropriate.
CloudWatch Logs allow you to see the actual event data and details, providing a more granular view that is useful for troubleshooting and
understanding the specifics of why a third-party API is not receiving incoming traffic.
upvoted 1 times
CloudWatch is a monitoring service for AWS resources and applications. CloudTrail is a web service that records API activity in your AWS account.
CloudWatch monitors applications and infrastructure performance in the AWS environment. CloudTrail monitors actions in the AWS environment.
upvoted 1 times
Selected Answer: C
Selected Answer: A
should be A
upvoted 1 times
Amazon CloudWatch Logs is a service that collects and stores logs from Amazon Web Services (AWS) resources. These logs can be used to
troubleshoot problems, monitor performance, and audit activity.
The other options are incorrect:
Option A: CloudWatch metrics are used to track the performance of AWS resources. They are not used to store events.
Option B: Amazon SQS dead-letter queues are used to store messages that cannot be delivered to their intended recipients. They are not used to
store events.
Option D: AWS CloudTrail is a service that records AWS API calls. It can be used to track the activity of EventBridge rules, but it does not store the
events themselves.
upvoted 2 times
EventBridge sends metrics to Amazon CloudWatch every minute for everything from the number of matched events to the number of times a
target is invoked by a rule.
https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-monitoring.html
upvoted 1 times
Selected Answer: D
The answer is D:
"CloudTrail captures API calls made by or on behalf of your AWS account from the EventBridge console and to EventBridge API operations."
(https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-logging-monitoring.html)
upvoted 2 times
Selected Answer: D
AWS CloudTrail provides visibility into EventBridge operations by logging API calls made by EventBridge.
Checking the CloudTrail trails will show the PutEvents API calls made when EventBridge rules match an event pattern.
CloudTrail will also log the Invoke API call when the rule target is triggered.
CloudWatch metrics and logs contain runtime performance data but not info on rule evaluation and targeting.
SQS dead letter queues collect failed event deliveries but won't provide insights on successful invocations.
CloudTrail is purpose-built to log operational events and API activity so it can confirm if the EventBridge rule is being evaluated and triggering the
target as expected.
upvoted 2 times
Option A is the most appropriate solution because Amazon EventBridge publishes metrics to Amazon CloudWatch. You can find relevant metrics in
the "AWS/Events" namespace, which allows you to monitor the number of events matched by the rule and the number of invocations to the rule's
target.
upvoted 4 times
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CloudWatch-Events-Monitoring-CloudWatch-Metrics.html
upvoted 1 times
Question #570 Topic 1
A company has a large workload that runs every Friday evening. The workload runs on Amazon EC2 instances that are in two Availability Zones in
the us-east-1 Region. Normally, the company must run no more than two instances at all times. However, the company wants to scale up to six
Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer: B
B is correct.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scheduled-scaling.html
upvoted 8 times
Selected Answer: B
A - too much operational overhead: you have to manually provision the instances after you receive the reminder from EventBridge
B - right answer, as the scheduled action scales up the EC2 instances and has them ready before the large workload starts
C - too much operational overhead from scaling manually
D - dynamic scaling only scales up some time after it detects the heavy workload traffic, which is not ideal
upvoted 2 times
runs every Friday evening = an Auto Scaling group that has a scheduled action
upvoted 2 times
Auto Scaling scheduled actions allow defining specific dates/times to scale out or in. This can be used to scale to 6 instances every Friday evening
automatically.
Scheduled scaling removes the need for manual intervention to scale up/down for the workload.
EventBridge reminders and manual scaling require human involvement each week adding overhead.
Automatic scaling responds to demand and may not align perfectly to scale out every Friday without additional tuning.
Scheduled Auto Scaling actions provide the automation needed to scale for the weekly workload without ongoing operational overhead.
upvoted 3 times
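A scheduled action for option B might look like the boto3 sketch below; the group name and the exact cron times are assumptions, scaling out before the Friday evening workload and back in afterwards.

    import boto3

    asg = boto3.client("autoscaling")

    # Scale out to six instances every Friday at 18:00 UTC ...
    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="weekly-batch-asg",          # hypothetical group name
        ScheduledActionName="friday-scale-out",
        Recurrence="0 18 * * 5",                          # cron: Fridays 18:00 UTC (assumed start time)
        MinSize=2, MaxSize=6, DesiredCapacity=6,
    )
    # ... and back down to two instances on Saturday morning.
    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="weekly-batch-asg",
        ScheduledActionName="saturday-scale-in",
        Recurrence="0 6 * * 6",                           # assumed end time
        MinSize=2, MaxSize=6, DesiredCapacity=2,
    )

Once defined, the schedule repeats every week with no human intervention, which is the "least operational overhead" argument above.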
Selected Answer: B
B seems to be correct
upvoted 1 times
Selected Answer: B
Since we know the workload runs every Friday, we can schedule the group to scale to six instances
upvoted 2 times
A company is creating a REST API. The company has strict requirements for the use of TLS. The company requires TLSv1.3 on the API endpoints.
The company also requires a specific public third-party certificate authority (CA) to sign the TLS certificate.
A. Use a local machine to create a certificate that is signed by the third-party CA. Import the certificate into AWS Certificate Manager (ACM).
Create an HTTP API in Amazon API Gateway with a custom domain. Configure the custom domain to use the certificate.
B. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create an HTTP API in Amazon API Gateway with
C. Use AWS Certificate Manager (ACM) to create a certificate that is signed by the third-party CA. Import the certificate into AWS Certificate
Manager (ACM). Create an AWS Lambda function with a Lambda function URL. Configure the Lambda function URL to use the certificate.
D. Create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. Create an AWS Lambda function with a Lambda
function URL. Configure the Lambda function URL to use the certificate.
Correct Answer: A
Selected Answer: A
I don't understand why so many people vote B. In ACM, you can either request a certificate from the Amazon CA or import an existing certificate.
There is no option in ACM that allows you to request a certificate that can be signed by a third-party CA.
upvoted 17 times
https://docs.aws.amazon.com/acm/latest/userguide/gs.html
upvoted 4 times
Selected Answer: B
AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy SSL/TLS certificates for use with AWS services and
your internal resources. By creating a certificate in ACM that is signed by the third-party CA, the company can meet its requirement for a specific
public third-party CA to sign the TLS certificate.
upvoted 8 times
Selected Answer: A
A. Use a local machine to create a certificate that is signed by the third-party CA. Import the certificate into AWS Certificate Manager (ACM). Create
an HTTP API in Amazon API Gateway with a custom domain. Configure the custom domain to use the certificate.
Reason:
Custom Certificate: Allows you to use a certificate signed by the third-party CA.
TLSv1.3 Support: API Gateway supports TLSv1.3 for custom domains.
Configuration: You can import the third-party CA certificate into ACM and configure API Gateway to use this certificate with a custom domain.
This approach meets all the specified requirements by allowing the use of a third-party CA-signed certificate and ensuring the API endpoints use
TLSv1.3.
upvoted 1 times
A is the logical answer.
BCD are either misworded here or intentionally confusing. Regardless, you cannot create a cert in ACM that is signed by 3rd party CA. You can only
import these certs to ACM.
upvoted 3 times
Selected Answer: A
ACM can import, but not create, 3rd party certificates. Leaves only A.
upvoted 1 times
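Importing the third-party-signed certificate into ACM is a single call; the file names below are placeholders for the certificate, private key, and CA chain obtained outside AWS.

    import boto3

    acm = boto3.client("acm")
    with open("cert.pem", "rb") as c, open("key.pem", "rb") as k, open("chain.pem", "rb") as ch:
        result = acm.import_certificate(Certificate=c.read(),
                                        PrivateKey=k.read(),
                                        CertificateChain=ch.read())
    # result["CertificateArn"] is then referenced by the API Gateway custom domain configuration.

Note that ACM does not renew imported certificates automatically, so the renewed third-party certificate has to be re-imported before expiry.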
Selected Answer: A
You already have a publicly trusted certificate issued by a third party and you just need to import it into ACM, not create a new one. So the correct
answer is A, which is the only option that imports the certificate into ACM, while B, C and D create a new one.
upvoted 1 times
Yes. If you want to use a third-party certificate with Amazon CloudFront, Elastic Load Balancing, or Amazon API Gateway, you may import it into
ACM using the AWS Management Console, AWS CLI, or ACM APIs. ACM does not manage the renewal process for imported certificates. You can
use the AWS Management Console to monitor the expiration dates of an imported certificates and import a new third-party certificate to replace
an expiring one.
upvoted 1 times
Selected Answer: A
It's 22/Nov/2023 and from the console you can't create a certificate in AWS Certificate Manager (ACM) that is signed by the third-party CA. But you
could obtain it externally then import it into ACM.
upvoted 1 times
In ACM you can't create a cert signed by another CA. Dude, try it by yourself. There is no such option!
upvoted 1 times
Selected Answer: B
Use ACM to create a certificate signed by the third-party CA. ACM integrates with external CAs.
Create an API Gateway HTTP API with a custom domain name.
Configure the custom domain to use the ACM certificate. API Gateway supports configuring custom domains with ACM certificates.
This allows serving the API over TLS using the required third-party certificate and TLS 1.3 support.
upvoted 2 times
You can provide certificates for your integrated AWS services either by issuing them directly with ACM or by importing third-party certificates into
the ACM management system.
upvoted 1 times
A company runs an application on AWS. The application receives inconsistent amounts of usage. The application uses AWS Direct Connect to
connect to an on-premises MySQL-compatible database. The on-premises database consistently uses a minimum of 2 GiB of memory.
The company wants to migrate the on-premises database to a managed AWS service. The company wants to use auto scaling capabilities to
Which solution will meet these requirements with the LEAST administrative overhead?
A. Provision an Amazon DynamoDB database with default read and write capacity settings.
B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity unit (ACU).
Correct Answer: C
Selected Answer: C
Aurora Serverless v2 provides auto-scaling so the database can handle inconsistent workloads and spikes automatically without admin
intervention.
It can scale down to its configured minimum capacity when demand is low to minimize costs.
The minimum 1 ACU capacity is sufficient to replace the on-prem 2 GiB database based on the info given.
Serverless capabilities reduce admin overhead for capacity management.
DynamoDB lacks MySQL compatibility and requires more hands-on management.
RDS and provisioned Aurora require manually resizing instances to scale, increasing admin overhead.
upvoted 9 times
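Provisioning an Aurora Serverless v2 cluster with a 1 ACU floor looks roughly like this in boto3; the identifiers and engine version are placeholders (check the currently supported Serverless v2-capable versions), and credentials would normally come from Secrets Manager.

    import boto3

    rds = boto3.client("rds")
    rds.create_db_cluster(
        DBClusterIdentifier="app-serverless-cluster",      # hypothetical identifier
        Engine="aurora-mysql",
        EngineVersion="8.0.mysql_aurora.3.04.0",           # assumed Serverless v2-capable version
        MasterUsername="admin",
        MasterUserPassword="change-me-placeholder",        # placeholder; use Secrets Manager in practice
        ServerlessV2ScalingConfiguration={"MinCapacity": 1, "MaxCapacity": 16},
    )
    # Instances in a Serverless v2 cluster use the special "db.serverless" instance class.
    rds.create_db_instance(
        DBInstanceIdentifier="app-serverless-writer",
        DBClusterIdentifier="app-serverless-cluster",
        DBInstanceClass="db.serverless",
        Engine="aurora-mysql",
    )

The cluster then scales capacity in ACU increments between the configured minimum and maximum as the inconsistent workload comes and goes, with no instance resizing to administer.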
Selected Answer: C
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity unit (ACU).
Suitability: Amazon Aurora Serverless v2 is a good option for applications with variable workloads because it automatically adjusts capacity based
on demand. It can handle MySQL-compatible databases and supports auto-scaling. You can set the minimum and maximum capacity based on
your needs, making it highly suitable for handling unexpected workload increases with minimal administrative overhead.
upvoted 1 times
Selected Answer: C
Selected Answer: C
Instead of provisioning and managing database servers, you specify Aurora capacity units (ACUs). Each ACU is a combination of approximately 2
gigabytes (GB) of memory, corresponding CPU, and networking. Database storage automatically scales from 10 gibibytes (GiB) to 128 tebibytes
(TiB), the same as storage in a standard Aurora DB cluster.
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v1.how-it-works.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.html
upvoted 1 times
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html#aurora-serverless-v2.how-it-
works.capacity
upvoted 2 times
Question #573 Topic 1
A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce startup latency for Lambda
functions that run on Java 11. The company does not have strict latency requirements for the applications. The company wants to reduce cold
Correct Answer: D
Selected Answer: D
SnapStart keeps a snapshot of the initialized function ready to restore, greatly reducing cold starts.
SnapStart suits applications without aggressive latency requirements because it improves startup time at no extra cost.
It restores new execution environments automatically as traffic spikes, reducing latency outliers when scaling up.
SnapStart is a native Lambda feature with no additional charge, keeping costs low.
Provisioned concurrency incurs charges for the always-on reserved capacity, so it is more costly than SnapStart.
Increasing timeout and memory does not directly improve startup performance the way SnapStart does.
upvoted 13 times
Selected Answer: D
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
"Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by up to 10x at no extra cost, typically with no
changes to your function code."
upvoted 4 times
Selected Answer: D
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html#:~:text=RSS-,Lambda%20SnapStart,-for%20Java%20can
upvoted 1 times
Selected Answer: D
Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by up to 10x at no extra cost, typically with no
changes to your function code.
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
upvoted 2 times
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
upvoted 1 times
Selected Answer: D
D is correct
Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by up to 10x at no extra cost, typically with no
changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends
initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.
With SnapStart, Lambda initializes your function when you publish a function version. Lambda takes a Firecracker microVM snapshot of the
memory and disk state of the initialized execution environment, encrypts the snapshot, and caches it for low-latency access. When you invoke the
function version for the first time, and as the invocations scale up, Lambda resumes new execution environments from the cached snapshot instead
of initializing them from scratch, improving startup latency.
upvoted 1 times
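To make the "publish a version" detail concrete, here is a small boto3 sketch of turning SnapStart on. The function name is a hypothetical placeholder; the two calls shown (update_function_configuration with a SnapStart setting, then publish_version) are the standard way SnapStart is enabled, since the snapshot is taken when a version is published.

import boto3

lam = boto3.client("lambda")

# Enable SnapStart; it applies to published versions, not $LATEST.
lam.update_function_configuration(
    FunctionName="orders-java11",                  # hypothetical function name
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Publishing a version triggers Lambda to snapshot the initialized environment.
version = lam.publish_version(FunctionName="orders-java11")
print(version["Version"])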
Both Lambda SnapStart and provisioned concurrency can reduce cold starts and outlier latencies when a function scales up. SnapStart helps you
improve startup performance by up to 10x at no extra cost. Provisioned concurrency keeps functions initialized and ready to respond in double-
digit milliseconds. Configuring provisioned concurrency incurs charges to your AWS account. Use provisioned concurrency if your application has
strict cold start latency requirements. You can't use both SnapStart and provisioned concurrency on the same function version.
upvoted 4 times
D is the answer
Lambda SnapStart for Java can improve startup performance for latency-sensitive applications by up to 10x at no extra cost, typically with no
changes to your function code. The largest contributor to startup latency (often referred to as cold start time) is the time that Lambda spends
initializing the function, which includes loading the function's code, starting the runtime, and initializing the function code.
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
upvoted 2 times
https://docs.aws.amazon.com/lambda/latest/dg/snapstart.html
upvoted 3 times
A financial services company launched a new application that uses an Amazon RDS for MySQL database. The company uses the application to
track stock market trends. The company needs to operate the application for only 2 hours at the end of each week. The company needs to
A. Migrate the existing RDS for MySQL database to an Aurora Serverless v2 MySQL database cluster.
B. Migrate the existing RDS for MySQL database to an Aurora MySQL database cluster.
C. Migrate the existing RDS for MySQL database to an Amazon EC2 instance that runs MySQL. Purchase an instance reservation for the EC2
instance.
D. Migrate the existing RDS for MySQL database to an Amazon Elastic Container Service (Amazon ECS) cluster that uses MySQL container
Correct Answer: A
Selected Answer: A
Aurora Serverless v2 scales compute capacity automatically based on actual usage, down to its configured minimum when the database is idle. This
minimizes costs for intermittent usage.
Since it only runs for 2 hours per week, the application is ideal for a serverless architecture like Aurora Serverless.
Aurora Serverless v2 charges per second for the capacity actually consumed, rather than for a fixed instance size around the clock.
Aurora Serverless provides higher availability than self-managed MySQL on EC2 or ECS.
Using reserved EC2 instances or ECS still incurs charges when the database is not in use, versus the fine-grained scaling of serverless.
Standard Aurora clusters run provisioned instances continuously, unlike the auto-scaling serverless architecture.
upvoted 10 times
Selected Answer: A
B is wrong because an Aurora MySQL cluster will just keep running for the rest of the week and will be costly.
C and D carry too much infrastructure to manage, so they are also costly.
upvoted 1 times
Selected Answer: A
2 hours per week = Serverless = A. Recommended for "infrequent, intermittent, or unpredictable workloads"
upvoted 4 times
Answer is A.
Here are the key distinctions:
Amazon Aurora: provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication,
and integrations with other AWS services.
Amazon Aurora Serverless: is an on-demand, auto-scaling configuration for Aurora where the database automatically starts up, shuts down, and
scales capacity up or down based on your application's needs.
Selected Answer: A
Option is A
upvoted 2 times
Selected Answer: A
Selected Answer: A
"Amazon Aurora Serverless v2 is suitable for the most demanding, highly variable workloads. For example, your database usage might be heavy fo
a short period of time, followed by long periods of light activity or no activity at all. "
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html
upvoted 2 times
Selected Answer: B
B seems to be the correct answer: if we have a predictable workload, a provisioned Aurora database seems to be the most cost-effective, whereas for an
unpredictable workload Aurora Serverless seems more cost-effective because the database will scale up and down.
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application Load Balancer in an AWS Region.
The application needs to store data in a PostgreSQL database engine. The company wants the data in the database to be highly available. The
Which solution will meet these requirements with the MOST operational efficiency?
Correct Answer: C
Selected Answer: C
RDS Multi-AZ DB cluster deployments provide high availability, automatic failover, and increased read capacity.
A multi-AZ cluster automatically handles replicating data across AZs in a single region.
This maintains operational efficiency as it is natively managed by RDS without needing external replication.
DynamoDB global tables involve complex provisioning and requires app changes.
RDS read replicas require manual setup and management of replication.
RDS Multi-AZ clustering is purpose-built by AWS for HA PostgreSQL deployments and balancing read workloads.
upvoted 8 times
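For anyone curious what a Multi-AZ DB cluster deployment looks like at the API level, below is a rough boto3 sketch. The identifiers, engine version, storage figures, and credentials are illustrative assumptions; the cluster-level instance class, storage type, IOPS, and allocated storage are the parameters that distinguish a Multi-AZ DB cluster (one writer plus two readable standbys) from a single-instance deployment.

import boto3

rds = boto3.client("rds")

# Multi-AZ DB cluster: writer plus two readable standbys spread across three AZs.
rds.create_db_cluster(
    DBClusterIdentifier="app-postgres",
    Engine="postgres",
    EngineVersion="15.4",                      # illustrative
    DBClusterInstanceClass="db.m6gd.large",    # illustrative
    StorageType="io1",
    Iops=3000,
    AllocatedStorage=200,
    MasterUsername="appadmin",
    MasterUserPassword="change-me",
)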
Selected Answer: C
"A Multi-AZ DB cluster deployment is a semisynchronous, high availability deployment mode of Amazon RDS with two readable replica DB
instances."
upvoted 1 times
Selected Answer: C
Multi-AZ addresses both HA and increased read capacity, with synchronous data replication between the main DB and the standbys. A read replica is not enough
because it only adds read capacity without enabling HA; besides, its data replication is asynchronous.
upvoted 1 times
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/multi-az-db-clusters-concepts.html
"A Multi-AZ DB cluster deployment is a semisynchronous, high availability deployment mode of Amazon RDS with two readable standby DB
instances"
A: DynamoDB is not Postgres
B: Although HA is achieved, it does not increase the read capacity as much as C without additional operational complexity
D: Cross-Region is not a requirement and won't solve the same-Region HA or read issues
upvoted 1 times
Selected Answer: C
Multi-AZ DB cluster deployments provides two readable DB instances if you need additional read capacity
upvoted 1 times
Selected Answer: C
C is correct
upvoted 1 times
Multi-AZ DB clusters provide high availability, increased capacity for read workloads, and lower write latency when compared to Multi-AZ DB
instance deployments.
upvoted 1 times
Selected Answer: C
upvoted 1 times
Selected Answer: C
DB cluster deployment can scale read workloads by adding read replicas. This provides increased capacity for read workloads without impacting
the write workload.
upvoted 4 times
Question #576 Topic 1
A company is building a RESTful serverless web application on AWS by using Amazon API Gateway and AWS Lambda. The users of this web
application will be geographically distributed, and the company wants to reduce the latency of API requests to these users.
Which type of endpoint should a solutions architect use to meet these requirements?
A. Private endpoint
B. Regional endpoint
D. Edge-optimized endpoint
Correct Answer: D
Selected Answer: D
Selected Answer: D
Selected Answer: D
An edge-optimized API endpoint typically routes requests to the nearest CloudFront Point of Presence (POP), which could help in cases where your
clients are geographically distributed. This is the default endpoint type for API Gateway REST APIs.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html#:~:text=API%20endpoint%20typically-,routes,-requests%20to%20the
upvoted 2 times
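To show how small the configuration difference is, here is a minimal boto3 sketch that creates a REST API with an edge-optimized endpoint explicitly. The API name is a hypothetical placeholder; the endpointConfiguration types value is what selects EDGE versus REGIONAL or PRIVATE.

import boto3

apigw = boto3.client("apigateway")

# Edge-optimized is the default for REST APIs, but it can also be set explicitly.
api = apigw.create_rest_api(
    name="global-api",                                 # hypothetical API name
    endpointConfiguration={"types": ["EDGE"]},
)
print(api["id"])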
Selected Answer: D
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html
upvoted 2 times
Selected Answer: D
An edge-optimized API endpoint typically routes requests to the nearest CloudFront Point of Presence (POP), which could help in cases where your
clients are geographically distributed. This is the default endpoint type for API Gateway REST APIs.
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-endpoint-types.html
upvoted 4 times
Edge-optimized endpoint
upvoted 2 times
A company uses an Amazon CloudFront distribution to serve content pages for its website. The company needs to ensure that clients use a TLS
certificate when accessing the company's website. The company wants to automate the creation and renewal of the TLS certificates.
Which solution will meet these requirements with the MOST operational efficiency?
C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
D. Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.
Correct Answer: C
C is correct.
"ACM provides managed renewal for your Amazon-issued SSL/TLS certificates. This means that ACM will either renew your certificates
automatically (if you are using DNS validation), or it will send you email notices when expiration is approaching. These services are provided for
both public and private ACM certificates."
https://docs.aws.amazon.com/acm/latest/userguide/managed-renewal.html
upvoted 9 times
Selected Answer: C
AWS Certificate Manager (ACM) provides free public TLS/SSL certificates and handles certificate renewals automatically.
Using DNS validation with ACM is operationally efficient since it automatically makes changes to Route 53 rather than requiring manual validation
steps.
ACM integrates natively with CloudFront distributions for delivering HTTPS content.
CloudFront security policies and origin access controls do not issue TLS certificates.
Email validation requires manual steps to approve the domain validation emails for each renewal.
upvoted 5 times
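A brief sketch of the request-and-validate flow with boto3 follows. The domain name and region are illustrative placeholders; request_certificate with DNS validation and describe_certificate to read the CNAME validation record are the documented ACM calls. Once that CNAME exists in Route 53, issuance and all future renewals happen without manual steps.

import boto3

acm = boto3.client("acm", region_name="us-east-1")  # CloudFront requires certs in us-east-1

cert = acm.request_certificate(
    DomainName="www.example.com",                   # placeholder domain
    ValidationMethod="DNS",
)

# Read back the CNAME record to publish in DNS; ACM may take a moment to generate it.
details = acm.describe_certificate(CertificateArn=cert["CertificateArn"])
for option in details["Certificate"]["DomainValidationOptions"]:
    print(option.get("ResourceRecord"))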
Selected Answer: C
For me, C is the only realistic option as I don't think you can do AB without a lot of complexity. D just makes no sense.
upvoted 1 times
Selected Answer: C
Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain
upvoted 3 times
Selected Answer: C
C seems to be correct
upvoted 3 times
Selected Answer: C
Selected Answer: C
C seems to be correct
upvoted 1 times
A company deployed a serverless application that uses Amazon DynamoDB as a database layer. The application has experienced a large increase
in users. The company wants to improve database response time from milliseconds to microseconds and to cache requests to the database.
Which solution will meet these requirements with the LEAST operational overhead?
Correct Answer: A
Selected Answer: A
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon DynamoDB that delivers up to a 10 times
performance improvement—from milliseconds to microseconds—even at millions of requests per second.
https://aws.amazon.com/dynamodb/dax/#:~:text=Amazon%20DynamoDB%20Accelerator%20(DAX)%20is,millions%20of%20requests%20per%20second.
upvoted 10 times
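As a rough sketch of how little there is to manage, the boto3 call below provisions a DAX cluster in front of DynamoDB. The cluster name, node type, role ARN, and subnet group are hypothetical placeholders. After the cluster is up, the application points its reads at the DAX cluster endpoint through the DAX client SDK instead of the plain DynamoDB client; no table changes are required.

import boto3

dax = boto3.client("dax")

# Managed in-memory cache for DynamoDB (names and sizes are illustrative).
dax.create_cluster(
    ClusterName="app-cache",
    NodeType="dax.r5.large",
    ReplicationFactor=3,                      # one primary plus two read replicas
    IamRoleArn="arn:aws:iam::123456789012:role/DaxToDynamoDB",   # placeholder role
    SubnetGroupName="app-private-subnets",    # placeholder subnet group
)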
Amazon ElastiCache for Redis would help with "caching requests", but not " improve database response" itself.
upvoted 1 times
Selected Answer: A
improve DynamoDB response time from milliseconds to microseconds and to cache requests to the database = DynamoDB Accelerator (DAX)
upvoted 2 times
Selected Answer: A
A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on weekdays during business hours.
The company wants to optimize costs and reduce operational overhead based on this usage.
A. Use the Instance Scheduler on AWS to configure start and stop schedules.
B. Turn off automatic backups. Create weekly manual snapshots of the database.
C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.
Correct Answer: A
Selected Answer: A
The Instance Scheduler on AWS solution automates the starting and stopping of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon
Relational Database Service (Amazon RDS) instances.
This solution helps reduce operational costs by stopping resources that are not in use and starting them when they are needed. The cost savings
can be significant if you leave all of your instances running at full utilization continuously.
https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/
upvoted 6 times
Selected Answer: A
Selected Answer: A
A. Use the Instance Scheduler on AWS to configure start and stop schedules
upvoted 3 times
But you need some mechanism to stop it on weekends and at night to save cost.
upvoted 1 times
Selected Answer: B
https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/?nc1=h_ls
upvoted 1 times
Selected Answer: A
https://aws.amazon.com/solutions/implementations/instance-scheduler-on-aws/
upvoted 1 times
A company uses locally attached storage to run a latency-sensitive application on premises. The company is using a lift and shift method to move
the application to the AWS Cloud. The company does not want to change the application architecture.
A. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for Lustre file system to run the application.
B. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS) GP2 volume to run the application.
C. Configure an Auto Scaling group with an Amazon EC2 instance. Use an Amazon FSx for OpenZFS file system to run the application.
D. Host the application on an Amazon EC2 instance. Use an Amazon Elastic Block Store (Amazon EBS) GP3 volume to run the application.
Correct Answer: D
Selected Answer: D
Selected Answer: D
gp3 offers SSD-performance at a 20% lower cost per GB than gp2 volumes.
upvoted 2 times
The case for GP3 over GP2, FSx for Lustre, and FSx for OpenZFS is clear and convincing:
Migrate your Amazon EBS volumes from gp2 to gp3 and save up to 20% on costs.
upvoted 2 times
Selected Answer: D
My rationale:
Options A and C are based on an Auto Scaling group and do not make sense for this scenario.
So Amazon EBS is the solution, and the question is GP2 or GP3.
The requirement is the most COST-effective solution, so I choose GP3.
upvoted 3 times
Question #581 Topic 1
A company runs a stateful production application on Amazon EC2 instances. The application requires at least two EC2 instances to always be
running.
A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The solutions architect creates an Auto
Which set of additional steps should the solutions architect take to meet these requirements?
A. Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone and one On-Demand
B. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two On-Demand
C. Set the Auto Scaling group's minimum capacity to two. Deploy four Spot Instances in one Availability Zone.
D. Set the Auto Scaling group's minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two Spot Instances in
Correct Answer: B
Selected Answer: B
By setting the Auto Scaling group's minimum capacity to four, the architect ensures that there are always at least two running instances. Deploying
two On-Demand Instances in each of two Availability Zones ensures that the application is highly available and fault-tolerant. If one Availability
Zone becomes unavailable, the application can still run in the other Availability Zone.
upvoted 18 times
Selected Answer: A
My rationale is: highly available = 2 AZs, and 2 EC2 instances always running means 1 EC2 instance in each AZ. If an entire AZ fails, the Auto Scaling
group deploys the minimum number of instances (2) in the remaining AZ.
upvoted 12 times
The application requires at least two EC2 instances to always be running = 2 minimum capacity… a minimum capacity of 4 EC2 instances will work, but it is a waste of
resources that does not follow the Well-Architected Framework.
upvoted 2 times
Option A:
Set the Auto Scaling group's minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone and one On-Demand Instance in
a second Availability Zone.
This configuration ensures that you have two instances running across two different AZs, which provides high availability. However, it does not take
advantage of additional capacity to handle failures or spikes in demand. If either AZ becomes unavailable, you will have one running instance, but
this does not meet the requirement of having at least two running instances at all times.
upvoted 1 times
This configuration provides high availability with four instances distributed across two AZs. The minimum capacity of four ensures that even if one
instance fails, there are still two instances in each AZ to handle the load. This option is highly available and fault-tolerant but may be more than
required if only two instances are needed to be always running.
upvoted 1 times
Selected Answer: B
So indeed the ASG can launch a new EC2 instance in another AZ if one AZ fails, but that fails to meet the need of always having 2
instances running before the replacement instance is ready in the working AZ. That is why we deploy 2 instances per AZ.
upvoted 1 times
Selected Answer: B
If it would not mention the "stateful" application, and if it would only have to be "highly available" but NOT "fault-tolerant", A would be fine.
upvoted 4 times
Selected Answer: B
From https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-best-practices.html: Spot Instances are not suitable for workloads that are
inflexible, stateful, fault-intolerant, or tightly coupled between instance nodes. So C and D don't fit.
Selected Answer: B
The main requirement here is a 'highly available and fault-tolerant architecture for the application', this covered by option B.
The application requires at least two EC2 instances to always be running, the main words here being 'at least', which means more than two is OK.
upvoted 1 times
B - Need 2 in each AZ, and you can't use Spot Instances as they could be reclaimed.
upvoted 1 times
Selected Answer: B
Selected Answer: A
If a complete AZ fails, Auto Scaling will launch a second EC2 instance in the running AZ. If that short gap counts against "always" (and it does), then the answer is
B, but I would take my chances and select A in the exam xD because the application is highly available and fault-tolerant.
upvoted 1 times
- Minimum of 4 ensures at least 2 instances are always running in each AZ, meeting the HA requirement.
- On-Demand instances provide consistent performance and availability, unlike Spot.
- Spreading across 2 AZs adds fault tolerance, protecting from AZ failure.
upvoted 2 times
While Spot Instances can be used to reduce costs, they might not provide the same level of availability and guaranteed uptime that On-Demand
Instances offer. So I will go with B and not D.
upvoted 1 times
Selected Answer: B
Highly available - 2 AZ and then 2 EC2 instances always running. 2 in each AZ.
upvoted 2 times
An ecommerce company uses Amazon Route 53 as its DNS provider. The company hosts its website on premises and in the AWS Cloud. The
company's on-premises data center is near the us-west-1 Region. The company uses the eu-central-1 Region to host the website. The company
A. Set up a geolocation routing policy. Send the traffic that is near us-west-1 to the on-premises data center. Send the traffic that is near eu-
central-1 to eu-central-1.
B. Set up a simple routing policy that routes all traffic that is near eu-central-1 to eu-central-1 and routes all traffic that is near the on-premises
D. Set up a weighted routing policy. Split the traffic evenly between eu-central-1 and the on-premises data center.
Correct Answer: A
Selected Answer: A
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-geo.html
B can be done but definition of "near" is ambiguous
C wrong region
D wrong solution as splitting evenly does not reduce latency for on-prem server users
upvoted 1 times
Not C. The client does not have a deployment in the AWS us-west-1 Region; the client has an on-premises DC near us-west-1.
Not D. If two people visit the site together near eu-central-1, one of them may be sent to us-west-1 because of the evenly split weighted
policy.
A and B are both valid. Latency = how soon the user reaches the data center and receives a response from it, round trip. So in short, geolocation, or
sending the user to the nearest DC, will improve latency.
upvoted 1 times
Geolocation routing policy allows you to route traffic based on the location of your users.
upvoted 3 times
Selected Answer: C
Explanation:
A latency routing policy directs traffic based on the lowest network latency to the specified AWS endpoint. Since the on-premises data center is
near the us-west-1 Region, associating the policy with us-west-1 ensures that users near that region will be directed to the on-premises data
center.
This allows for optimal routing, minimizing the load time for users based on their geographical proximity to the respective hosting locations (us-west-1 and eu-central-1).
Options A, B, and D do not explicitly consider latency or are not optimal for minimizing load time:
Option A (geolocation routing policy) would direct traffic based on the geographic location of the user but may not necessarily optimize for the
lowest latency.
upvoted 2 times
Selected Answer: A
Selected Answer: A
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy-geo.html
upvoted 1 times
Selected Answer: A
Geolocation routing allows you to route users to the closest endpoint based on their geographic location. This will provide the lowest latency.
Routing us-west-1 traffic to the on-premises data center minimizes latency for those users since it is also located near there.
Routing eu-central-1 traffic to the eu-central-1 AWS region minimizes latency for users nearby.
This achieves routing users to the closest endpoint on a geographic basis to optimize for low latency.
upvoted 4 times
A company has 5 PB of archived data on physical tapes. The company needs to preserve the data on the tapes for another 10 years for
compliance purposes. The company wants to migrate to AWS in the next 6 months. The data center that stores the tapes has a 1 Gbps uplink
internet connectivity.
A. Read the data from the tapes on premises. Stage the data in a local NFS storage. Use AWS DataSync to migrate the data to Amazon S3
B. Use an on-premises backup application to read the data from the tapes and to write directly to Amazon S3 Glacier Deep Archive.
C. Order multiple AWS Snowball devices that have Tape Gateway. Copy the physical tapes to virtual tapes in Snowball. Ship the Snowball
devices to AWS. Create a lifecycle policy to move the tapes to Amazon S3 Glacier Deep Archive.
D. Configure an on-premises Tape Gateway. Create virtual tapes in the AWS Cloud. Use backup software to copy the physical tape to the virtual
tape.
Correct Answer: C
If you have made it to the end of the exam dump, you will definitely pass your exams in Jesus name. After over a year of Procrastination, I am
finally ready to write my AWS Solutions Architect Exam. Thank you Exam Topics
upvoted 23 times
Selected Answer: C
Ready for the exam tomorrow. Wish you guys all the best. BTW, a Snowball device comes in handy when you need to move a huge amount of data
but can't afford the limited bandwidth.
upvoted 10 times
Oh, to think now we have to study 904 questions instead of just 583 lol
upvoted 1 times
5 PB over a 1 Gbps connection will take approximately 15 months, so anything that transfers over the network is invalid. A, B, and D are not practical.
C: Just order Snowball.
upvoted 3 times
Selected Answer: C
Though we'll need more than 60 Snowball devices, C is the only option that works. The internet uplink could transport less than 2 PB in 6 months
(otherwise, say with a 10 Gb uplink, D would work).
upvoted 4 times
Snowball it is. C
upvoted 1 times
Selected Answer: C
Migrate petabyte-scale data stored on physical tapes to AWS using AWS Snowball
https://aws.amazon.com/snowball/#:~:text=Migrate-,petabyte%2Dscale,-data%20stored%20on
upvoted 1 times
Selected Answer: C
5 PB data is too huge for using 1Gbps uplink. With this uplink, it takes more than 1 year to migrate this data.
upvoted 1 times
If you are looking for a cost-effective, durable, long-term, offsite alternative for data archiving, deploy a Tape Gateway. With its virtual tape library
(VTL) interface, you can use your existing tape-based backup software infrastructure to store data on virtual tape cartridges that you create -
https://docs.aws.amazon.com/storagegateway/latest/tgw/WhatIsStorageGateway.html
upvoted 1 times
Selected Answer: A
The most cost-effective solution to meet the requirements is to read the data from the tapes on premises. Stage the data in a local NFS storage.
Use AWS DataSync to migrate the data to Amazon S3 Glacier Flexible Retrieval.
This solution is the most cost-effective because it uses the least amount of bandwidth. AWS DataSync is a service that transfers data between on-
premises storage and Amazon S3. It uses a variety of techniques to optimize the transfer speed and reduce c
upvoted 1 times
Selected Answer: C
Selected Answer: C
Option C is likely the most cost-effective solution given the large data size and limited internet bandwidth. The physical data transfer and
integration with the existing tape infrastructure provides efficiency benefits that can optimize the cost.
upvoted 2 times
Went through this dump twice now. Exam is in about an hour. Will update with results.
upvoted 2 times
A company is deploying an application that processes large quantities of data in parallel. The company plans to use Amazon EC2 instances for
the workload. The network architecture must be configurable to prevent groups of nodes from sharing the same underlying hardware.
Correct Answer: A
Selected Answer: A
A spread placement group is a group of instances that are each placed on distinct hardware.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 11 times
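A short boto3 sketch of that configuration is below. The group name, AMI, instance type, and counts are illustrative placeholders; the spread strategy on the placement group plus the Placement parameter on run_instances is what keeps each instance on distinct underlying hardware.

import boto3

ec2 = boto3.client("ec2")

# A spread placement group places each instance on distinct underlying hardware.
ec2.create_placement_group(GroupName="parallel-nodes", Strategy="spread")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # placeholder AMI
    InstanceType="c5.4xlarge",                 # illustrative size
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "parallel-nodes"},
)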
Selected Answer: C
Configuring the EC2 instances with dedicated tenancy ensures that each instance will run on isolated, single-tenant hardware. This meets the
requirement to prevent groups of nodes from sharing underlying hardware.
A spread placement group only provides isolation at the Availability Zone level. Instances could still share hardware within an AZ.
upvoted 5 times
Selected Answer: A
Spread Placement Group: This placement group strategy ensures that EC2 instances are distributed across distinct hardware to reduce the risk of
correlated failures. Instances in a spread placement group are placed on different underlying hardware, which aligns with the requirement to
prevent groups of nodes from sharing the same underlying hardware. This is a good fit for the scenario where you need to ensure high availability
and fault tolerance.
upvoted 1 times
Selected Answer: A
Spread – Strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
upvoted 1 times
Dedicated tenancy cannot ensure that your own instances avoid sharing the same hardware. So A
upvoted 2 times
Selected Answer: A
Let's assume that you have two groups of instances, group A and group B, and two pieces of physical hardware, X and Y. With a spread placement
group, you can have group A on hardware X and group B on hardware Y, but this will not prevent hardware X from hosting other instances of
other customers, because your only requirement is to separate group A from group B. On the other hand, dedicated tenancy means that AWS
will dedicate the physical hardware only to you. So, the correct answer is A.
upvoted 2 times
Selected Answer: A
Spread placement group allows you to isolate your instances on hardware level.
Dedicated tenancy allows you to be sure that you are the only customer on the hardware.
Def is A: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 1 times
Keywords 'prevent groups of nodes from sharing the same underlying hardware'.
Spread Placement Group strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
upvoted 1 times
Selected Answer: A
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Each instance is placed on a distinct rack (up to seven instances per Availability Zone per group), and each rack has its own network and power source.
upvoted 1 times
Dedicated instances:
Dedicated Instances are EC2 instances that run on hardware that's dedicated to a single customer. Dedicated Instances that belong to different
AWS accounts are physically isolated at a hardware level, even if those accounts are linked to a single payer account. However, Dedicated Instances
might share hardware with other instances from the same AWS account that are not Dedicated Instances.
Which is not the desired option.
Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
That's why A.
upvoted 2 times
Selected Answer: C
C is clear.
upvoted 1 times
Spread Placement Group strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
upvoted 1 times
Option A is the correct answer. It suggests running the EC2 instances in a spread placement group. This solution is cost-effective and requires
minimal development effort.
upvoted 2 times
A solutions architect is designing a disaster recovery (DR) strategy to provide Amazon EC2 capacity in a failover AWS Region. Business
requirements state that the DR strategy must meet capacity in the failover Region.
Correct Answer: D
Selected Answer: D
Capacity Reservation: Capacity Reservations ensure that you have reserved capacity in a specific region for your instances, regardless of whether
you are using On-Demand or Reserved Instances. This is ideal for DR scenarios because it guarantees that the required EC2 capacity will be
available when needed.
upvoted 2 times
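To illustrate the difference from a billing discount, here is a minimal boto3 sketch that reserves failover capacity. The region, instance type, AZ, and count are illustrative placeholders; create_capacity_reservation is the call that sets capacity aside so the instances are guaranteed to launch during a failover.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")   # the failover Region (illustrative)

# Reserve capacity so the DR instances are guaranteed to be launchable when needed.
ec2.create_capacity_reservation(
    InstanceType="m5.xlarge",                 # match the production instance type
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-west-2a",
    InstanceCount=4,
    EndDateType="unlimited",                  # keep until explicitly cancelled
)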
Selected Answer: D
"Business requirements state that the DR strategy must meet capacity in the failover Region"
so only D meets these requirements
A. No reservation of capacity
B. Saving plans don't guarantee capacity
C. Can be possible but it's like an active instance so doesn't really make sense
upvoted 2 times
Selected Answer: D
A Capacity Reservation allows you to reserve a specific amount of EC2 instance capacity in a given region without purchasing specific instances.
This reserved capacity is dedicated to your account and can be utilized for launching instances when needed. Capacity Reservations offer flexibility
allowing you to launch different instance types and sizes within the reserved capacity.
Regional Reserved Instances involve paying an upfront fee to reserve a certain number of specific EC2 instances in a particular region. These
reserved instances are of a predefined type and size, providing a more traditional reservation model. Regional Reserved Instances are specific to a
designated region and ensure that the reserved instances of a particular specification are available when needed.
upvoted 3 times
Selected Answer: D
Capacity Reservations mitigate against the risk of being unable to get On-Demand capacity in case there are capacity constraints. If you have strict
capacity requirements, and are running business-critical workloads that require a certain level of long or short-term capacity assurance, create a
Capacity Reservation to ensure that you always have access to Amazon EC2 capacity when you need it, for as long as you need it.
upvoted 1 times
Selected Answer: D
Capacity Reservations enable you to reserve capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This gives you
the flexibility to selectively add capacity reservations and still get the Regional RI discounts for that usage. By creating Capacity Reservations, you
ensure that you always have access to Amazon EC2 capacity when you need it, for as long as you need it.
upvoted 2 times
Selected Answer: D
Capacity Reservations allocate EC2 capacity in a specific AWS Region for you to launch instances.
The capacity is reserved and available to be utilized when needed, meeting the requirement to provide EC2 capacity in the failover region.
Other options do not reserve capacity. On-Demand provides flexible capacity but does not reserve capacity upfront. Savings Plans and Reserved
Instances provide discounts but do not reserve capacity.
Capacity Reservations allow defining instance attributes like instance type, platform, Availability Zone so the reserved capacity matches the
production environment.
upvoted 3 times
Selected Answer: D
Selected Answer: C
The Reserved Instance discount applies to instance usage within the instance family, regardless of size.
upvoted 1 times
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-reserved-instances.html
upvoted 1 times
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html
upvoted 1 times
Question #586 Topic 1
A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates to the five businesses that the
company owns. The company's research and development (R&D) business is separating from the company and will need its own organization. A
solutions architect creates a separate new management account for this purpose.
What should the solutions architect do next in the new management account?
A. Have the R&D AWS account be part of both organizations during the transition.
B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization.
C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account to the new R&D AWS account.
D. Have the R&D AWS account join the new organization. Make the new management account a member of the prior organization.
Correct Answer: B
Selected Answer: B
An account can only join another org when it leaves the first org.
A is wrong as it's not possible
C that's a new account so not really a migration
D The R&D department is separating from the company so you don't want the OU to join via nesting
upvoted 8 times
Selected Answer: B
Selected Answer: B
Option B: Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization is the
appropriate approach. This option ensures that the R&D AWS account transitions smoothly from the old organization to the new one. The steps
involved are:
Remove the R&D AWS account from the existing organization: This is done from the existing organization’s management account.
Invite the R&D AWS account to join the new organization: Once the R&D account is no longer part of the previous organization, it can be invited to
and accepted into the new organization.
upvoted 1 times
https://aws.amazon.com/blogs/mt/migrating-accounts-between-aws-organizations-with-consolidated-billing-to-all-features/
upvoted 3 times
Selected Answer: B
https://repost.aws/knowledge-center/organizations-move-accounts
Remove the member account from the old organization.
Send an invite to the member account from the new organization.
Accept the invite to the new organization from the member account.
upvoted 2 times
Selected Answer: C
Selected Answer: B
Selected Answer: B
https://repost.aws/knowledge-center/organizations-move-accounts
upvoted 4 times
Creating a brand new AWS account in the new organization (Option C) allows for a clean separation and migration of only the necessary resources
from the old account to the new.
upvoted 2 times
Selected Answer: C
When separating a business unit from an AWS Organizations structure, best practice is to:
Create a new AWS account dedicated for the business unit in the new organization
Migrate resources from the old account to the new account
Remove the old account from the original organization
This allows a clean break between the organizations and avoids any linking between them after separation.
upvoted 1 times
Selected Answer: B
account can leave current organization and then join new organization.
upvoted 3 times
Question #587 Topic 1
A company is designing a solution to capture customer activity in different web applications to process analytics and make predictions. Customer
activity in the web applications is unpredictable and can increase suddenly. The company requires a solution that integrates with other web
applications. The solution must include an authorization step for security purposes.
A. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores
the information that the company receives in an Amazon Elastic File System (Amazon EFS) file system. Authorization is resolved at the GWLB.
B. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis data stream that stores the information that the company
C. Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information that the company
receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.
D. Configure a Gateway Load Balancer (GWLB) in front of an Amazon Elastic Container Service (Amazon ECS) container instance that stores
the information that the company receives on an Amazon Elastic File System (Amazon EFS) file system. Use an AWS Lambda function to
resolve authorization.
Correct Answer: C
Selected Answer: C
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
upvoted 7 times
Selected Answer: C
Option C: Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information that the company
receives in an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.
Handles Unpredictable Traffic: Amazon Kinesis Data Firehose can handle variable amounts of streaming data and automatically scales to
accommodate sudden increases in traffic.
Integration with Web Applications: Amazon API Gateway provides a RESTful API endpoint for integrating with web applications.
Authorization: An API Gateway Lambda authorizer provides the necessary authorization step to secure API access.
Data Storage: Amazon Kinesis Data Firehose can deliver data directly to an Amazon S3 bucket for storage, making it suitable for long-term
analytics and predictions.
upvoted 3 times
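To make the authorization step concrete, here is a hypothetical TOKEN-style Lambda authorizer sketch. The principal ID and the token check are stand-in assumptions (a real authorizer would validate a JWT or call an OAuth provider); the shape of the response, a principalId plus an IAM policy allowing or denying execute-api:Invoke on the method ARN, is what API Gateway expects back.

# Hypothetical Lambda authorizer handler for the API Gateway endpoint.
def lambda_handler(event, context):
    token = event.get("authorizationToken", "")
    method_arn = event["methodArn"]

    # Placeholder check; replace with real token validation (OAuth, JWT, etc.).
    effect = "Allow" if token == "expected-shared-secret" else "Deny"

    return {
        "principalId": "analytics-client",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,
                    "Resource": method_arn,
                }
            ],
        },
    }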
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html
upvoted 1 times
Selected Answer: B
Selected Answer: C
Configure an Amazon API Gateway endpoint in front of an Amazon Kinesis Data Firehose that stores the information that the company receives in
an Amazon S3 bucket. Use an API Gateway Lambda authorizer to resolve authorization.
upvoted 3 times
Selected Answer: C
https://docs.aws.amazon.com/lambda/latest/dg/services-kinesisfirehose.html
upvoted 2 times
API Gateway checks whether a Lambda authorizer is configured for the method. If it is, API Gateway calls the Lambda function. The Lambda function authenticates the caller by means
such as the following: calling out to an OAuth provider to get an OAuth access token.
An ecommerce company wants a disaster recovery solution for its Amazon RDS DB instances that run Microsoft SQL Server Enterprise Edition.
The company's current recovery point objective (RPO) and recovery time objective (RTO) are 24 hours.
A. Create a cross-Region read replica and promote the read replica to the primary instance.
B. Use AWS Database Migration Service (AWS DMS) to create RDS cross-Region replication.
C. Use cross-Region replication every 24 hours to copy native backups to an Amazon S3 bucket.
Correct Answer: D
Explanation: This option involves copying RDS automatic snapshots to another Region. It is a straightforward way to ensure that snapshots are
available in the event of a disaster. Since RDS snapshots are typically incremental and copied periodically, this solution matches the 24-hour RPO
requirement effectively and is cost-effective compared to maintaining constant cross-Region replication.
upvoted 1 times
Cross-Region data transfer is billable, so think of the smallest amount of data to transfer every 24 hours.
upvoted 4 times
Amazon RDS creates and saves automated backups of your DB instance or Multi-AZ DB cluster during the backup window of your DB instance.
RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases. RDS saves the
automated backups of your DB instance according to the backup retention period that you specify. If necessary, you can recover your DB instance
to any point in time during the backup retention period.
upvoted 2 times
Selected Answer: D
upvoted 3 times
Selected Answer: D
This is the most cost-effective solution because it does not require any additional AWS services. Amazon RDS automatically creates snapshots of
your DB instances during the daily backup window. You can copy these snapshots to another Region every 24 hours to meet your RPO and RTO requirements.
The other solutions are more expensive because they require additional AWS services or continuous replication. For example, AWS DMS replication adds
cost on top of RDS.
upvoted 3 times
Selected Answer: D
A company runs a web application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer that has sticky
sessions enabled. The web server currently hosts the user session state. The company wants to ensure high availability and avoid user session
A. Use an Amazon ElastiCache for Memcached instance to store the session data. Update the application to use ElastiCache for Memcached
B. Use Amazon ElastiCache for Redis to store the session state. Update the application to use ElastiCache for Redis to store the session
state.
C. Use an AWS Storage Gateway cached volume to store session data. Update the application to use AWS Storage Gateway cached volume to
D. Use Amazon RDS to store the session state. Update the application to use Amazon RDS to store the session state.
Correct Answer: B
Selected Answer: B
ElastiCache Redis provides in-memory caching that can deliver microsecond latency for session data.
Redis supports replication and multi-AZ which can provide high availability for the cache.
The application can be updated to store session data in ElastiCache Redis rather than locally on the web servers.
If a web server fails, the user can be routed via the load balancer to another web server which can retrieve their session data from the highly
available ElastiCache Redis cluster.
upvoted 8 times
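For a sense of the setup, the boto3 sketch below creates a Multi-AZ Redis replication group for the session store. The group ID, description, and node type are illustrative placeholders; AutomaticFailoverEnabled and MultiAZEnabled are the settings that give the high availability the question asks for.

import boto3

elasticache = boto3.client("elasticache")

# One primary plus one replica across AZs, with automatic failover for HA.
elasticache.create_replication_group(
    ReplicationGroupId="session-store",
    ReplicationGroupDescription="Web session state",
    Engine="redis",
    CacheNodeType="cache.t3.small",            # illustrative size
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)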
Selected Answer: B
Option B: Use Amazon ElastiCache for Redis to store the session state. Update the application to use ElastiCache for Redis to store the session
state.
Explanation: Amazon ElastiCache for Redis is suitable for session state storage because Redis provides both in-memory data storage and
persistence options. Redis supports features like replication, persistence, and high availability (through Redis Sentinel or clusters). This ensures that
session state is preserved and available even if individual web servers fail.
upvoted 1 times
Selected Answer: B
Memcached does not support replication, so it is not HA.
upvoted 3 times
B is correct
upvoted 2 times
Selected Answer: D
Selected Answer: B
Selected Answer: B
B is the correct answer. It suggests using Amazon ElastiCache for Redis to store the session state. Update the application to use ElastiCache for
Redis to store the session state. This solution is cost-effective and requires minimal development effort.
upvoted 3 times
Selected Answer: B
A company migrated a MySQL database from the company's on-premises data center to an Amazon RDS for MySQL DB instance. The company
sized the RDS DB instance to meet the company's average daily workload. Once a month, the database performs slowly when the company runs
queries for a report. The company wants to have the ability to run reports and maintain the performance of the daily workloads.
A. Create a read replica of the database. Direct the queries to the read replica.
B. Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the new database.
C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket.
Correct Answer: A
Selected Answer: A
Selected Answer: A
Create a read replica of the database. Direct the queries to the read replica.
upvoted 3 times
Selected Answer: A
This is the most cost-effective solution because it does not require any additional AWS services. A read replica is a copy of a database that is
synchronized with the primary database. You can direct the queries for the report to the read replica, which will not affect the performance of the
daily workloads
upvoted 3 times
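The read replica itself is a single API call; a minimal boto3 sketch follows. The identifiers and instance class are illustrative placeholders, and the reporting tool would then point its connection string at the replica endpoint rather than the primary.

import boto3

rds = boto3.client("rds")

# Read replica dedicated to the monthly reporting queries.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-mysql-reporting",   # placeholder replica name
    SourceDBInstanceIdentifier="app-mysql",       # placeholder primary instance
    DBInstanceClass="db.r5.large",                # illustrative size
)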
Selected Answer: A
Clearly the right choice: with a read replica, all the queries needed for the report run on the replica, leaving the primary at its best performance for
writes.
upvoted 2 times
Question #591 Topic 1
A company runs a container application by using Amazon Elastic Kubernetes Service (Amazon EKS). The application includes microservices that
manage customers and place orders. The company needs to route incoming requests to the appropriate microservices.
A. Use the AWS Load Balancer Controller to provision a Network Load Balancer.
B. Use the AWS Load Balancer Controller to provision an Application Load Balancer.
Correct Answer: B
Selected Answer: B
Selected Answer: B
Option B: Use the AWS Load Balancer Controller to provision an Application Load Balancer (ALB).
Explanation: The AWS Load Balancer Controller can provision ALBs, which operate at the application layer (Layer 7). ALBs support advanced routing
capabilities such as routing based on HTTP paths or hostnames. This makes ALBs well-suited for routing requests to different microservices based
on URL paths or domains. This approach integrates well with Kubernetes and is a common pattern for microservices architectures.
upvoted 1 times
Selected Answer: B
Not D because
- even with an API gateway you'd need an ALB or ELB (so B+D would work, but D alone does not)
- you would use AWS API Gateway Controller (not "Amazon API Gateway") to create the API Gateway
upvoted 4 times
https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/
upvoted 3 times
Selected Answer: B
ALB is cost-effectively
upvoted 3 times
If you do not need any specific functionality of API Gateway, you should choose the ALB because it will be cheaper.
upvoted 3 times
Selected Answer: B
Routing requests to the appr. microserv. can easily be done with ALB and ingress. The ingress handles routing rules to the micro.serv. With answer
D you wil still need ALB or NLB as can be seen in the pics of https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-
amazon-eks/ or https://aws.amazon.com/blogs/containers/microservices-development-using-aws-controllers-for-kubernetes-ack-and-amazon-
eks-blueprints/ so that is not the most cost-effectively.
upvoted 2 times
Both ALB and API gateway can be used to route traffic to the microservices, but the question seeks the most 'cost effective' option.
You are charged for each hour or partial hour that an Application Load Balancer is running, and the number of Load Balancer Capacity Units (LCU)
used per hour.
With Amazon API Gateway, you only pay when your APIs are in use.
https://aws.amazon.com/blogs/containers/microservices-development-using-aws-controllers-for-kubernetes-ack-and-amazon-eks-blueprints/
https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/
upvoted 1 times
Selected Answer: B
AWS Load Balancer Controller: The AWS Load Balancer Controller is a Kubernetes controller that makes it easy to set up an Application Load
Balancer (ALB) or Network Load Balancer (NLB) for your Amazon EKS clusters. It simplifies the process of managing load balancers for applications
running on EKS.
Application Load Balancer (ALB): ALB is a Layer 7 load balancer that is capable of routing requests based on content, such as URL paths or
hostnames. This makes it suitable for routing requests to different microservices based on specific criteria.
Cost-Effectiveness: ALB is typically more cost-effective than an NLB, and it provides additional features at the application layer, which may be useful
for routing requests to microservices based on specific conditions.
Option D: Amazon API Gateway is designed for creating, publishing, and managing APIs. While it can integrate with Amazon EKS, it may be more
feature-rich and complex than needed for simple routing to microservices within an EKS cluster.
upvoted 3 times
Selected Answer: D
https://aws.amazon.com/blogs/containers/integrate-amazon-api-gateway-with-amazon-eks/
upvoted 1 times
Routing to microservices in Kubernetes -> Ingresses -> Ingress Controller -> AWS Load Balancer Controller:
https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/
upvoted 3 times
Selected Answer: D
Selected Answer: D
API Gateway is a fully managed service that makes it easy for you to create, publish, maintain, monitor, and secure APIs at any scale. API Gateway
provides an entry point to your microservices.
upvoted 1 times
Selected Answer: D
https://aws.amazon.com/blogs/containers/microservices-development-using-aws-controllers-for-kubernetes-ack-and-amazon-eks-blueprints/
upvoted 1 times
Question #592 Topic 1
A company uses AWS and sells access to copyrighted images. The company’s global customer base needs to be able to access these images
quickly. The company must deny access to users from specific countries. The company wants to minimize costs as much as possible.
A. Use Amazon S3 to store the images. Turn on multi-factor authentication (MFA) and public bucket access. Provide customers with a link to
the S3 bucket.
B. Use Amazon S3 to store the images. Create an IAM user for each customer. Add the users to a group that has permission to access the S3
bucket.
C. Use Amazon EC2 instances that are behind Application Load Balancers (ALBs) to store the images. Deploy the instances only in the
countries the company services. Provide customers with links to the ALBs for their specific country's instances.
D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with geographic restrictions. Provide a signed URL
Correct Answer: D
Selected Answer: D
Selected Answer: D
D. Use Amazon S3 to store the images. Use Amazon CloudFront to distribute the images with geographic restrictions. Provide a signed URL for
each customer to access the data in CloudFront.
upvoted 3 times
Selected Answer: D
answer is D
upvoted 1 times
Selected Answer: D
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
upvoted 3 times
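To show the per-customer access piece, here is a sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner, using the cryptography library to sign. The key ID, private key file, distribution domain, and object path are placeholder assumptions; the geographic restriction itself is configured on the distribution separately.

import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Private key matching the public key registered with the CloudFront key group.
    with open("cloudfront-private-key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)   # placeholder public key ID

expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1)
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/images/photo.jpg",   # placeholder URL
    date_less_than=expires,
)
print(url)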
Selected Answer: D
A solutions architect is designing a highly available Amazon ElastiCache for Redis based solution. The solutions architect needs to ensure that
failures do not result in performance degradation or loss of data locally and within an AWS Region. The solution needs to provide high availability
A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes.
B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on.
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.
Correct Answer: A
Selected Answer: A
It seems like "Multi-AZ Redis replication group" (A) and "Multi-AZ Redis cluster" (C) are different wordings for the same configuration. However, "to
minimize the impact of a node failure, we recommend that your implementation use multiple nodes in each shard" - and that is mentioned only in
A.
upvoted 4 times
Selected Answer: A
High availability at the node level = multiple nodes per shard, and Multi-AZ = Availability Zone / Region level
upvoted 4 times
My answer A.
upvoted 2 times
Multi-AZ is only supported on Redis clusters that have more than one node in each shard (node groups).
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html#:~:text=node%20in%20each-,shard.,-Topics
upvoted 4 times
C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group.
In summary, option C, using a Multi-AZ Redis cluster with more than one read replica, is designed to provide both node-level and AWS Region-
level high availability, making it the most suitable choice for the given requirements.
upvoted 2 times
the replication structure is contained within a shard (called node group in the API/CLI) which is contained within a Redis cluster
A shard (in the API and CLI, a node group) is a hierarchical arrangement of nodes, each wrapped in a cluster. Shards support replication. Within a
shard, one node functions as the read/write primary node. All the other nodes in a shard function as read-only replicas of the primary node.
upvoted 1 times
It's C for me
upvoted 1 times
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html
upvoted 3 times
Multi-AZ replication groups provide automatic failover between AZs if there is an issue with the primary AZ. This provides high availability at the
region level
upvoted 2 times
Enabling ElastiCache Multi-AZ with automatic failover on your Redis cluster (in the API and CLI, replication group) improves your fault tolerance.
This is true particularly in cases where your cluster's read/write primary cluster becomes unreachable or fails for any reason. Multi-AZ with
automatic failover is only supported on Redis clusters that support replication
upvoted 1 times
I would go with A, Using AOF can't protect you from all failure scenarios.
For example, if a node fails due to a hardware fault in an underlying physical server, ElastiCache will provision a new node on a different server. In
this case, the AOF is not available and can't be used to recover the data.
upvoted 1 times
Selected Answer: A
Hate to say this, but I read the two docs linked below, and I still think the answer is A. Turning on AOF helps in data persistence after failure, but it
does nothing for availability unless you use Multi-AZ replica groups.
upvoted 2 times
Question #594 Topic 1
A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its application. During the migration testing phase, a
technical team observes that the application takes a long time to launch and load memory to become fully productive.
Which solution will reduce the launch time of the application during the next testing phase?
A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2 On-Demand Instances available during the next testing phase.
B. Launch EC2 Spot Instances to support the application and to scale the application so it is available during the next testing phase.
C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling warm pools during the next testing phase.
D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances during the next testing phase.
Correct Answer: C
Selected Answer: C
Using EC2 hibernation and Auto Scaling warm pools will help address this:
Hibernation saves the in-memory state of the EC2 instance to persistent storage and shuts the instance down. When the instance is started again,
the in-memory state is restored, which launches much faster than launching a new instance.
Warm pools pre-initialize EC2 instances and keep them ready to fulfill requests, reducing launch time. The hibernated instances can be added to a
warm pool.
When auto scaling scales out during the next testing phase, it will be able to launch instances from the warm pool rapidly since they are already
initialized
upvoted 7 times
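A minimal sketch of those two pieces with boto3, assuming a placeholder AMI, instance type, and Auto Scaling group name (hibernation also requires an encrypted root EBS volume sized for the instance's RAM):

# Sketch: enable hibernation at launch, then keep hibernated instances in a warm pool for the ASG.
# AMI ID, instance type, group name, and pool size are hypothetical.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Hibernation can only be enabled when the instance is launched.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)

# Pre-initialized, hibernated instances that the Auto Scaling group can resume quickly on scale-out.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",
    PoolState="Hibernated",
    MinSize=2,
)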
Selected Answer: C
https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html
upvoted 2 times
Selected Answer: C
If an instance or application takes a long time to bootstrap and build a memory footprint in order to become fully productive, you can use
hibernation to pre-warm the instance. To pre-warm the instance, you:
Launch it with hibernation enabled.
Bring it to a desired state.
Hibernate it so that it's ready to be resumed to the desired state whenever needed.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html#:~:text=you%20can%20use-,hibernation,-to%20pre%2Dwarm
upvoted 3 times
Selected Answer: C
With Amazon EC2 hibernation enabled, you can maintain your EC2 instances in a "pre-warmed" state so these can get to a productive state faster.
upvoted 2 times
Selected Answer: C
Just use the hibernation option so the instance does not have to rebuild its full in-memory state at launch
upvoted 1 times
Question #595 Topic 1
A company's applications run on Amazon EC2 instances in Auto Scaling groups. The company notices that its applications experience sudden
traffic increases on random days of the week. The company wants to maintain application performance during sudden traffic increases.
A. Use manual scaling to change the size of the Auto Scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use scheduled scaling to change the size of the Auto Scaling group.
Correct Answer: C
Selected Answer: C
Dynamic Scaling – This is yet another type of Auto Scaling in which the number of EC2 instances is changed automatically depending on the
signals received. Dynamic Scaling is a good choice when there is a high volume of unpredictable traffic.
https://www.developer.com/web-services/aws-auto-scaling-types-best-practices/#:~:text=Dynamic%20Scaling%20%E2%80%93%20This%20is%20yet,high%20volume%20of%20unpredictable%20traffic.
upvoted 5 times
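For illustration (the group name and target value are placeholders), target tracking is a common form of dynamic scaling and could be configured with boto3 like this:

# Sketch: target tracking policy that adds or removes instances to keep average CPU near 50%.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",          # hypothetical group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)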
Selected Answer: C
random = dynamic
A: Manual is never a solution
B: Predictive is not possible as it's random
D: Cannot schedule random
upvoted 4 times
Selected Answer: C
Dynamic scaling
upvoted 1 times
https://aws.amazon.com/ec2/autoscaling/faqs/
upvoted 2 times
C is the best answer here. Dynamic scaling is the most cost-effective way to automatically scale the Auto Scaling group to maintain performance
during random traffic spikes.
upvoted 2 times
Question #596 Topic 1
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage
increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which
impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic.
B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
Correct Answer: A
Selected Answer: A
Answer is A.
Aurora Serverless v2 has auto scaling, is highly available, and is cheaper than the other options.
upvoted 7 times
Selected Answer: A
Not B - we can auto-scale the EC2 instance, but not "the [self-managed] PostgreSQL database ON the EC2 instance"
Not C - This does not mention scaling, so it would incur high cost and still it might not be able to keep up with the "unpredictable" spikes
Not D - Redshift is OLAP Data Warehouse
upvoted 2 times
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Aurora where the database automatically starts up, shuts down, and
scales capacity up or down based on your application's needs. This is the least costly option for unpredictable traffic.
upvoted 2 times
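As a sketch only (identifiers, the password placeholder, and the capacity range are assumptions, not from the question), an Aurora PostgreSQL cluster with Serverless v2 scaling could be provisioned like this:

# Sketch: Aurora PostgreSQL cluster with a Serverless v2 capacity range plus one serverless instance.
# Identifiers, credentials handling, and capacity bounds are hypothetical.
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-postgresql",
    MasterUsername="app_admin",
    MasterUserPassword="REPLACE_ME",  # store real credentials in Secrets Manager
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 32},
)

rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-aurora-writer",
    DBClusterIdentifier="ecommerce-aurora",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",  # Serverless v2 instance class
)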
Selected Answer: C
A is probably more expensive than C. Aurora is serverless and fast, but the move needs a database migration, and AWS DMS may not be free.
upvoted 1 times
Selected Answer: A
A, due to auto scaling
upvoted 2 times
Selected Answer: A
Question #597 Topic 1
A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The company’s employees report
issues with high latency when they begin using the application each day. The company wants to reduce latency.
B. Set up a scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the application each day.
C. Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at the beginning of each day.
Correct Answer: B
Selected Answer: B
Option B: Set up a scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the application each day.
Explanation: Provisioned concurrency ensures that a specified number of Lambda instances are initialized and ready to handle requests. By
scheduling this scaling, you can pre-warm Lambda functions before peak usage times, reducing cold start latency. This solution directly addresses
the latency issue caused by cold starts.
upvoted 2 times
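A rough sketch of the scheduled scaling with Application Auto Scaling (function name, alias, cron expression, and capacities are placeholders):

# Sketch: register the function alias as a scalable target, then warm it up before the workday.
import boto3

aas = boto3.client("application-autoscaling")
resource_id = "function:internal-app:live"  # hypothetical function name and alias

aas.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=0,
    MaxCapacity=100,
)

aas.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="warm-up-weekday-mornings",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 8 ? * MON-FRI *)",  # before employees start each day
    ScalableTargetAction={"MinCapacity": 50, "MaxCapacity": 50},
)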
Selected Answer: B
Provisioned concurrency pre-initializes execution environments for your functions. These execution environments are prepared to respond
immediately to incoming function requests at start of day.
upvoted 2 times
A is wrong
API Gateway throttling limit is for better throughput, not for latency
upvoted 1 times
Selected Answer: B
Set up a scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the application each day.
upvoted 4 times
Selected Answer: B
Provisioned Concurrency incurs additional costs, so it is cost-efficient to use it only when necessary. For example, early in the morning when activity
starts, or to handle recurring peak usage.
upvoted 3 times
Option B: setting up scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the application each day. This
solution is cost-effective and requires minimal development effort.
upvoted 1 times
Selected Answer: B
https://aws.amazon.com/blogs/compute/scheduling-aws-lambda-provisioned-concurrency-for-recurring-peak-usage/
upvoted 4 times
Question #598 Topic 1
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS Cloud to analyze the data. The
devices generate .csv files and support writing the data to an SMB file share. Company analysts must be able to use SQL commands to query the
data. The analysts will run queries periodically throughout the day.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose three.)
A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
SMB + use SQL commands to query the data = Amazon S3 File Gateway mode + Amazon Athena
upvoted 1 times
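For illustration only (database, table, and bucket names below are made up), once the Glue crawler has catalogued the .csv data, an analyst's query could be submitted through Athena like this:

# Sketch: run a SQL query with Athena against the Glue table built from the .csv files in S3.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) AS avg_reading FROM device_readings GROUP BY device_id",
    QueryExecutionContext={"Database": "research_data"},   # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])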
"Amazon S3 File Gateway provides a seamless way to connect to the cloud in order to store application data files and backup images as durable
objects in Amazon S3 cloud storage. Amazon S3 File Gateway offers SMB or NFS-based access to data in Amazon S3 with local caching"
=> SMB and NFS are supported by Amazon S3 File Gateway => ACF
upvoted 2 times
https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-csv-home.html
https://aws.amazon.com/blogs/aws/amazon-athena-interactive-sql-queries-for-data-in-amazon-s3/
https://aws.amazon.com/storagegateway/faqs/
upvoted 3 times
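Similarly, a sketch of the Glue crawler step (the crawler name, IAM role, database, and S3 path are assumptions, not from the question):

# Sketch: Glue crawler that builds a table from the .csv objects written through the S3 File Gateway.
import boto3

glue = boto3.client("glue")

glue.create_crawler(
    Name="device-csv-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical role
    DatabaseName="research_data",
    Targets={"S3Targets": [{"Path": "s3://example-device-data/csv/"}]},
)
glue.start_crawler(Name="device-csv-crawler")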
It should be ACF
upvoted 2 times
Question #599 Topic 1
A company wants to use Amazon Elastic Container Service (Amazon ECS) clusters and Amazon RDS DB instances to build and run a payment
processing application. The company will run the application in its on-premises data center for compliance purposes.
A solutions architect wants to use AWS Outposts as part of the solution. The solutions architect is working with the company's operational team
Which activities are the responsibility of the company's operational team? (Choose three.)
B. Managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts
D. Availability of the Outposts infrastructure including the power supplies, servers, and networking equipment within the Outposts racks
F. Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events
From https://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/aws-outposts-high-availability-design.html
With Outposts, you are responsible for providing resilient power and network connectivity to the Outpost racks to meet your availability
requirements for workloads running on Outposts. You are responsible for the physical security and access controls of the data center environment.
You must provide sufficient power, space, and cooling to keep the Outpost operational and network connections to connect the Outpost back to
the Region. Since Outpost capacity is finite and determined by the size and number of racks AWS installs at your site, you must decide how much
EC2, EBS, and S3 on Outposts capacity you need to run your initial workloads, accommodate future growth, and to provide extra capacity to
mitigate server failures and maintenance events.
upvoted 26 times
My exam is tomorrow. Thank you all for the answers and links.
upvoted 15 times
The activities that are the responsibility of the company's operational team when using Amazon Elastic Container Service (Amazon ECS) clusters
and Amazon RDS DB instances on AWS Outposts are:
Managing the virtualization hypervisor, storage systems, and the AWS services that run on Outposts.
Ensuring the availability of the Outposts infrastructure, including the power supplies, servers, and networking equipment within the Outposts racks
Providing extra capacity for Amazon ECS clusters to mitigate server failures and maintenance events.
upvoted 1 times
F: "If there is no additional capacity on the Outpost, the instance remains in the stopped state. The Outpost owner can try to free up used capacity
or request additional capacity for the Outpost so that the migration can complete."
Not D: "Equipment within the Outposts rack" is AWS' responsibility, you're not supposed to touch that
Not E: "When the AWS installation team arrives on site, they will replace the unhealthy hosts, switches, or rack elements"
upvoted 4 times
From https://aws.amazon.com/outposts/rack/faqs/ : Your site must support the basic power, networking and space requirements to host an
Outpost ===> A
From https://docs.aws.amazon.com/whitepapers/latest/applying-security-practices-to-network-workload-for-csps/the-shared-responsibility-model.html : In AWS Outposts, the customer takes the responsibility of securing the physical infrastructure to host the AWS Outposts equipment in
their own data centers. ===> C
upvoted 1 times
https://docs.aws.amazon.com/outposts/latest/userguide/outpost-maintenance.html
upvoted 1 times
You're missing F, you must order the Outposts rack with excess capacity
upvoted 1 times
E is wrong
If there is a need to perform physical maintenance, AWS will reach out to schedule a time to visit your site.
https://aws.amazon.com/outposts/rack/faqs/#:~:text=As%20AWS%20Outposts%20rack%20runs,the%20Outpost%20for%20compliance%20certification.
upvoted 1 times
https://d1.awsstatic.com/whitepapers/aws-outposts-high-availability-design-and-architecture-considerations.pdf
upvoted 1 times
https://docs.aws.amazon.com/whitepapers/latest/aws-outposts-high-availability-design/aws-outposts-high-availability-design.html
upvoted 1 times
I choose ACD
upvoted 1 times
A and C are obviously right. D is wrong because "within the Outpost racks". Between E and F, E is wrong because
(https://aws.amazon.com/outposts/rack/faqs/) says "If there is a need to perform physical maintenance, AWS will reach out to schedule a time to
visit your site. AWS may replace a given module as appropriate but will not perform any host or network switch servicing on customer premises."
So, choosing F.
upvoted 1 times
Question #600 Topic 1
A company is planning to migrate a TCP-based application into the company's VPC. The application is publicly accessible on a nonstandard TCP
port through a hardware appliance in the company's data center. This public endpoint can process up to 3 million requests per second with low
latency. The company requires the same level of performance for the new public endpoint in AWS.
A. Deploy a Network Load Balancer (NLB). Configure the NLB to be publicly accessible over the TCP port that the application requires.
B. Deploy an Application Load Balancer (ALB). Configure the ALB to be publicly accessible over the TCP port that the application requires.
C. Deploy an Amazon CloudFront distribution that listens on the TCP port that the application requires. Use an Application Load Balancer as
the origin.
D. Deploy an Amazon API Gateway API that is configured with the TCP port that the application requires. Configure AWS Lambda functions
Correct Answer: A
Selected Answer: A
The company requires the same level of performance for the new public endpoint in AWS.
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per
second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a
TCP connection to the selected target on the port specified in the listener configuration.
Link:
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html
upvoted 9 times
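As a sketch (subnet and VPC IDs, resource names, and the port number are placeholders), an internet-facing NLB with a TCP listener on a nonstandard port could be set up like this:

# Sketch: internet-facing Network Load Balancer with a TCP listener and target group on port 8443.
# Subnet/VPC IDs, names, and the port are hypothetical.
import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="tcp-app-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="tcp-app-targets",
    Protocol="TCP",
    Port=8443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="TCP",
    Port=8443,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)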
Selected Answer: A
TCP = NLB
upvoted 5 times
Selected Answer: A
NLBs handle millions of requests per second. NLBs can handle general TCP traffic.
upvoted 3 times