AWS Solutions Architect questions and answers
Use Amazon GuardDuty to monitor any malicious activity on data stored in Amazon S3. Use security
assessments provided by Amazon Inspector to check for vulnerabilities on Amazon EC2 instances
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your
AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of
metadata generated from your account and network activity found in AWS CloudTrail Events, Amazon
VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP
addresses, anomaly detection, and machine learning to identify threats more accurately.
via - https://aws.amazon.com/guardduty/
Amazon Inspector security assessments help you check for unintended network accessibility of your
Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are
offered to you as pre-defined rules packages mapped to common security best practices and
vulnerability definitions.
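As an illustration, here is a minimal boto3 sketch of pulling recent findings from both services. It assumes GuardDuty is already enabled in the Region, that Amazon Inspector scanning is active through the current-generation inspector2 API, and that the caller has the relevant read permissions; all of these are assumptions, not part of the original question.

```python
import boto3

guardduty = boto3.client("guardduty")
inspector = boto3.client("inspector2")

# GuardDuty: list findings for the account's detector (e.g. S3-related threat findings).
detector_id = guardduty.list_detectors()["DetectorIds"][0]
finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids[:20])
    for f in findings["Findings"]:
        print("GuardDuty:", f["Type"], f["Severity"])

# Inspector: list vulnerability findings scoped to EC2 instances.
ec2_findings = inspector.list_findings(
    filterCriteria={"resourceType": [{"comparison": "EQUALS", "value": "AWS_EC2_INSTANCE"}]}
)
for f in ec2_findings["findings"]:
    print("Inspector:", f["title"], f["severity"])
```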
Incorrect options:
Use Amazon GuardDuty to monitor any malicious activity on data stored in Amazon S3. Use security
assessments provided by Amazon GuardDuty to check for vulnerabilities on Amazon EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in Amazon S3. Use security
assessments provided by Amazon Inspector to check for vulnerabilities on Amazon EC2 instances
Use Amazon Inspector to monitor any malicious activity on data stored in Amazon S3. Use security
assessments provided by Amazon GuardDuty to check for vulnerabilities on Amazon EC2 instances
These three options contradict the explanation provided above, so they are incorrect.
References:
https://aws.amazon.com/guardduty/
https://aws.amazon.com/inspector/
Domain
An audit department generates and accesses the audit reports only twice in a financial year. The
department uses AWS Step Functions to orchestrate the report creating process that has failover and
retry scenarios built into the solution. The underlying data to create these audit reports is stored on
Amazon S3, runs into hundreds of terabytes, and should be available with millisecond latency.
As an AWS Certified Solutions Architect – Associate, which is the MOST cost-effective storage class that
you would recommend to be used for this use-case?
Amazon S3 Standard
Overall explanation
Correct option:
Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Since the data is accessed only twice in a financial year but needs rapid access when required, the most cost-effective storage class for this use-case is Amazon S3 Standard-IA. The S3 Standard-IA storage class is for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA matches the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and a per-GB retrieval fee. Amazon S3 Standard-IA is designed for 99.9% availability compared to the 99.99% availability of Amazon S3 Standard. However, the report creation process has failover and retry scenarios built into the workflow, so if the data is momentarily unavailable owing to the 99.9% availability of Amazon S3 Standard-IA, the job will be automatically retried until the data is successfully retrieved. Therefore this is the correct option.
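For reference, a minimal boto3 sketch of uploading a report directly into the S3 Standard-IA storage class; the bucket name, key, and local file path below are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Upload the audit report and store it in S3 Standard-IA from the start.
s3.upload_file(
    Filename="audit-report-fy2024.parquet",
    Bucket="audit-reports-bucket",           # hypothetical bucket name
    Key="fy2024/audit-report-fy2024.parquet",
    ExtraArgs={"StorageClass": "STANDARD_IA"},
)
```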
Incorrect options:
Amazon S3 Standard - Amazon S3 Standard offers high durability, availability, and performance object
storage for frequently accessed data. As described above, Amazon S3 Standard-IA storage is a better fit
than Amazon S3 Standard, hence using S3 standard is ruled out for the given use-case.
Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering) - For a small monthly object monitoring and
automation charge, Amazon S3 Intelligent-Tiering monitors access patterns and automatically moves
objects that have not been accessed to lower-cost access tiers. The Amazon S3 Intelligent-Tiering storage
class is designed to optimize costs by automatically moving data to the most cost-effective access tier,
without performance impact or operational overhead. S3 Standard-IA matches the high durability, high
throughput, and low latency of S3 Intelligent-Tiering, with a low per GB storage price and per GB
retrieval fee. Moreover, S3 Standard-IA has the same availability as Amazon S3 Intelligent-Tiering. Since the access pattern is already known (twice a financial year), the monitoring and automation charge of S3 Intelligent-Tiering adds no value, so it's more cost-efficient to use S3 Standard-IA instead of S3 Intelligent-Tiering.
Amazon S3 Glacier Deep Archive - Amazon S3 Glacier Deep Archive is a secure, durable, and low-cost
storage class for data archiving. Amazon S3 Glacier Deep Archive does not support millisecond latency,
so this option is ruled out.
For more details on the durability, availability, cost and access latency - please review this reference
link: https://aws.amazon.com/s3/storage-classes
Domain
Which of the following solutions would you recommend for the given use-case? (Select two)
Overall explanation
Correct options:
You can use Amazon Aurora replicas and Amazon CloudFront distribution to make the application more
resilient to spikes in request rates.
Amazon Aurora Replicas have two main purposes. You can issue queries to them to scale the read
operations for your application. You typically do so by connecting to the reader endpoint of the cluster.
That way, Aurora can spread the load for read-only connections across as many Aurora Replicas as you
have in the cluster. Amazon Aurora Replicas also help to increase availability. If the writer instance in a
cluster becomes unavailable, Aurora automatically promotes one of the reader instances to take its place
as the new writer. Up to 15 Aurora Replicas can be distributed across the Availability Zones (AZs) that a
DB cluster spans within an AWS Region.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos,
applications, and APIs to customers globally with low latency, high transfer speeds, all within a
developer-friendly environment. CloudFront points of presence (POPs, or edge locations) make sure that
popular content can be served quickly to your viewers. Amazon CloudFront also has regional edge
caches that bring more of your content closer to your viewers, even when the content is not popular
enough to stay at a POP, to help improve performance for that content.
Amazon CloudFront offers an origin failover feature to help support your data resiliency needs. Amazon
CloudFront is a global service that delivers your content through a worldwide network of data centers
called edge locations or points of presence (POPs). If your content is not already cached in an edge
location, Amazon CloudFront retrieves it from an origin that you've identified as the source for the
definitive version of the content.
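To make the read-scaling part concrete, here is a minimal boto3 sketch that looks up an Aurora cluster's writer and reader endpoints so the application can send read-only queries to the Aurora Replicas; the cluster identifier is hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Fetch the cluster description for a hypothetical Aurora cluster.
cluster = rds.describe_db_clusters(DBClusterIdentifier="my-aurora-cluster")["DBClusters"][0]

writer_endpoint = cluster["Endpoint"]        # use for INSERT/UPDATE traffic
reader_endpoint = cluster["ReaderEndpoint"]  # load-balances reads across Aurora Replicas

print("Send writes to:", writer_endpoint)
print("Send reads to: ", reader_endpoint)
```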
Incorrect options:
Use AWS Shield - AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that
safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline
mitigations that minimize application downtime and latency. There are two tiers of AWS Shield -
Standard and Advanced. AWS Shield cannot be used to improve application resiliency to handle spikes in
traffic.
Use AWS Global Accelerator - AWS Global Accelerator is a service that improves the availability and
performance of your applications with local or global users. It provides static IP addresses that act as a
fixed entry point to your application endpoints in a single or multiple AWS Regions, such as your
Application Load Balancers, Network Load Balancers or Amazon EC2 instances. AWS Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Since Amazon CloudFront is better suited for improving application resiliency to handle spikes in traffic, this option is ruled out.
Use AWS Direct Connect - AWS Direct Connect lets you establish a dedicated network connection
between your network and one of the AWS Direct Connect locations. Using industry-standard 802.1q
VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. AWS Direct Connect
does not involve the Internet; instead, it uses dedicated, private network connections between your
intranet and Amazon VPC. AWS Direct Connect cannot be used to improve application resiliency to
handle spikes in traffic.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/disaster-recovery-resiliency.html
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/global-accelerator/latest/dg/disaster-recovery-resiliency.html
Domain
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network
connection from your premises to AWS. AWS Direct Connect lets you establish a dedicated network
connection between your network and one of the AWS Direct Connect locations.
With AWS Direct Connect plus VPN, you can combine one or more AWS Direct Connect dedicated
network connections with the Amazon VPC VPN. This combination provides an IPsec-encrypted private
connection that also reduces network costs, increases bandwidth throughput, and provides a more
consistent network experience than internet-based VPN connections.
This solution combines the AWS managed benefits of the VPN solution with low latency, increased
bandwidth, more consistent benefits of the AWS Direct Connect solution, and an end-to-end, secure
IPsec connection. Therefore, AWS Direct Connect plus VPN is the correct solution for this use-case.
via - https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-
connect-vpn.html
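As a rough sketch of the VPN half of this pattern, the boto3 call below creates the Site-to-Site VPN connection that would ride over the Direct Connect public virtual interface; the customer gateway and virtual private gateway are assumed to already exist, and all identifiers are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an IPsec VPN connection terminating on a virtual private gateway
# attached to the VPC. The Direct Connect connection and public virtual
# interface are provisioned separately.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-0123456789abcdef0",
    VpnGatewayId="vgw-0123456789abcdef0",
    Options={"StaticRoutesOnly": True},
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```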
Incorrect options:
Use AWS site-to-site VPN to establish a connection between the data center and AWS Cloud - AWS
Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to your
Amazon Virtual Private Cloud (Amazon VPC). A VPC VPN Connection utilizes IPSec to establish encrypted
network connectivity between your intranet and Amazon VPC over the Internet. VPN Connections are a
good solution if you have an immediate need, have low to modest bandwidth requirements, and can
tolerate the inherent variability in Internet-based connectivity. However, a Site-to-Site VPN alone cannot provide a low-latency, high-throughput connection, therefore this option is ruled out.
Use AWS Transit Gateway to establish a connection between the data center and AWS Cloud - AWS
Transit Gateway is a network transit hub that you can use to interconnect your virtual private clouds
(VPC) and on-premises networks. AWS Transit Gateway by itself cannot establish a low latency and high
throughput connection between a data center and AWS Cloud. Hence this option is incorrect.
Use AWS Direct Connect to establish a connection between the data center and AWS Cloud - AWS
Direct Connect by itself cannot provide an encrypted connection between a data center and AWS Cloud,
so this option is ruled out.
References:
https://aws.amazon.com/directconnect/
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-
plus-vpn-network-to-amazon.html
Domain
As a solutions architect, what is your recommendation to enable this collaboration with the LEAST
amount of operational overhead?
The spreadsheet will have to be copied into Amazon EFS file systems of other AWS regions as Amazon
EFS is a regional service and it does not allow access from other AWS regions
The spreadsheet on the Amazon Elastic File System (Amazon EFS) can be accessed in other AWS
regions by using an inter-region VPC peering connection
The spreadsheet data will have to be moved into an Amazon RDS for MySQL database which can then
be accessed from any AWS region
The spreadsheet will have to be copied in Amazon S3 which can then be accessed from any AWS
region
Overall explanation
Correct option:
The spreadsheet on the Amazon Elastic File System (Amazon EFS) can be accessed in other AWS
regions by using an inter-region VPC peering connection
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file
system for use with AWS Cloud services and on-premises resources.
Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high
availability and durability. Amazon EC2 instances can access your file system across AZs, regions, and
VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.
You can connect to Amazon EFS file systems from EC2 instances in other AWS regions using an inter-
region VPC peering connection, and from on-premises servers using an AWS VPN connection. So this is
the correct option.
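A minimal boto3 sketch of setting up the inter-region VPC peering connection is shown below; the VPC IDs, account ID, and Regions are hypothetical. The request still has to be accepted on the peer side, routes and security groups updated, and the file system mounted from the remote clients.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection from the VPC that hosts the EFS mount targets
# to a VPC in another region.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaa1111bbbb22223",        # VPC containing the EFS mount targets
    PeerVpcId="vpc-0ccc3333dddd44445",    # VPC in the other AWS region
    PeerRegion="eu-west-1",
    PeerOwnerId="111122223333",
)
print(peering["VpcPeeringConnection"]["VpcPeeringConnectionId"])
```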
Incorrect options:
The spreadsheet will have to be copied in Amazon S3 which can then be accessed from any AWS
region
The spreadsheet data will have to be moved into an Amazon RDS for MySQL database which can then
be accessed from any AWS region
Copying the spreadsheet into Amazon S3 or an Amazon RDS for MySQL database is not the correct solution as it involves a lot of operational overhead. For Amazon RDS, one would need to write custom code to replicate the spreadsheet functionality running off of the database. Amazon S3 does not allow in-place edits of an object, and it is also not POSIX compliant, so one would need to develop a custom application to "simulate in-place edits" to support collaboration as per the use-case. So both these options are ruled out.
The spreadsheet will have to be copied into Amazon EFS file systems of other AWS regions as Amazon
EFS is a regional service and it does not allow access from other AWS regions - Creating copies of the
spreadsheet into Amazon EFS file systems of other AWS regions would mean no collaboration would be
possible between the teams. In this case, each team would work on "its own file" instead of a single file
accessed and updated by all teams. Hence this option is incorrect.
Reference:
https://aws.amazon.com/efs/
Domain
Given these constraints, which of the following solutions is the BEST fit to develop this car-as-a-sensor
service?
Correct answer
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS) standard queue, which is
polled by an AWS Lambda function in batches and the data is written into an auto-scaled Amazon
DynamoDB table for downstream processing
Ingest the sensor data in Amazon Kinesis Data Streams, which is polled by an application running on
an Amazon EC2 instance and the data is written into an auto-scaled Amazon DynamoDB table for
downstream processing
Ingest the sensor data in Amazon Kinesis Data Firehose, which directly writes the data into an auto-
scaled Amazon DynamoDB table for downstream processing
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS) standard queue, which is
polled by an application running on an Amazon EC2 instance and the data is written into an auto-
scaled Amazon DynamoDB table for downstream processing
Overall explanation
Correct option:
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS) standard queue, which is
polled by an AWS Lambda function in batches and the data is written into an auto-scaled Amazon
DynamoDB table for downstream processing
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute
time you consume. Amazon Simple Queue Service (SQS) is a fully managed message queuing service that
enables you to decouple and scale microservices, distributed systems, and serverless applications. SQS
offers two types of message queues. Standard queues offer maximum throughput, best-effort ordering,
and at-least-once delivery. SQS FIFO queues are designed to guarantee that messages are processed
exactly once, in the exact order that they are sent.
AWS manages all ongoing operations and underlying infrastructure needed to provide a highly available
and scalable message queuing service. With SQS, there is no upfront cost, no need to acquire, install,
and configure messaging software, and no time-consuming build-out and maintenance of supporting
infrastructure. SQS queues are dynamically created and scale automatically so you can build and grow
applications quickly and efficiently.
As there is no need to manually provision the capacity, this is the correct option.
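A minimal sketch of the AWS Lambda handler for this pattern is shown below. It assumes the function is configured with an SQS event source and that the DynamoDB table name and key schema (vehicle_id, timestamp) are hypothetical.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")  # hypothetical table name

def lambda_handler(event, context):
    # The SQS event source delivers a batch of messages under "Records".
    with table.batch_writer() as batch:
        for record in event["Records"]:
            reading = json.loads(record["body"])
            batch.put_item(Item={
                "vehicle_id": reading["vehicle_id"],  # partition key (assumed schema)
                "timestamp": reading["timestamp"],    # sort key (assumed schema)
                "payload": record["body"],
            })
```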
Incorrect options:
Ingest the sensor data in Amazon Kinesis Data Firehose, which directly writes the data into an auto-
scaled Amazon DynamoDB table for downstream processing - Amazon Kinesis Data Firehose is a fully
managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage
Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, Splunk, and any custom HTTP
endpoint or HTTP endpoints owned by supported third-party service providers, including Datadog,
Dynatrace, LogicMonitor, MongoDB, New Relic, and Sumo Logic.
Firehose cannot directly write into a DynamoDB table, so this option is incorrect.
Ingest the sensor data in an Amazon Simple Queue Service (Amazon SQS) standard queue, which is
polled by an application running on an Amazon EC2 instance and the data is written into an auto-
scaled Amazon DynamoDB table for downstream processing
Ingest the sensor data in Amazon Kinesis Data Streams, which is polled by an application running on
an Amazon EC2 instance and the data is written into an auto-scaled Amazon DynamoDB table for
downstream processing
Using an application on an Amazon EC2 instance is ruled out as the carmaker wants to use fully
serverless components. So both these options are incorrect.
References:
https://aws.amazon.com/sqs/
https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
https://aws.amazon.com/kinesis/data-streams/faqs/
Domain
Which of the following are the MOST cost-effective options to improve the file upload speed into
Amazon S3? (Select two)
Create multiple AWS Direct Connect connections between the AWS Cloud and branch offices in Europe
and Asia. Use the direct connect connections for faster file uploads into Amazon S3
Correct selection
Use multipart uploads for faster file uploads into the destination Amazon S3 bucket
Create multiple AWS Site-to-Site VPN connections between the AWS Cloud and branch offices in
Europe and Asia. Use these VPN connections for faster file uploads into Amazon S3
Correct selection
Use Amazon S3 Transfer Acceleration (Amazon S3TA) to enable faster file uploads into the destination
S3 bucket
Use AWS Global Accelerator for faster file uploads into the destination Amazon S3 bucket
Overall explanation
Correct options:
Use Amazon S3 Transfer Acceleration (Amazon S3TA) to enable faster file uploads into the destination
S3 bucket
Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances
between your client and an S3 bucket. Amazon S3TA takes advantage of Amazon CloudFront’s globally
distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an
optimized network path.
Use multipart uploads for faster file uploads into the destination Amazon S3 bucket
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion
of the object's data. You can upload these object parts independently and in any order. If transmission of
any part fails, you can retransmit that part without affecting other parts. After all parts of your object are
uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size
reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single
operation. Multipart upload provides improved throughput, therefore it facilitates faster file uploads.
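Both options can be combined in a single upload call, as in the minimal boto3 sketch below; the bucket name and file path are hypothetical, and Transfer Acceleration must already be enabled on the bucket.

```python
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Route requests through the S3 Transfer Acceleration endpoint.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Switch to multipart uploads above 100 MB and upload parts in parallel.
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
    max_concurrency=10,
)

s3.upload_file(
    Filename="weekly-report.zip",
    Bucket="analytics-uploads-bucket",   # hypothetical bucket name
    Key="europe/weekly-report.zip",
    Config=transfer_config,
)
```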
Incorrect options:
Create multiple AWS Direct Connect connections between the AWS Cloud and branch offices in Europe
and Asia. Use the direct connect connections for faster file uploads into Amazon S3 - AWS Direct
Connect is a cloud service solution that makes it easy to establish a dedicated network connection from
your premises to AWS. AWS Direct Connect lets you establish a dedicated network connection between
your network and one of the AWS Direct Connect locations. AWS Direct Connect takes significant time (several months) to provision and is overkill for the given use-case.
Create multiple AWS Site-to-Site VPN connections between the AWS Cloud and branch offices in
Europe and Asia. Use these VPN connections for faster file uploads into Amazon S3 - AWS Site-to-Site
VPN enables you to securely connect your on-premises network or branch office site to your Amazon
Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch office network
to the cloud with an AWS Site-to-Site VPN connection. A VPC VPN Connection utilizes IPSec to establish
encrypted network connectivity between your intranet and Amazon VPC over the Internet. VPN
Connections are a good solution if you have low to modest bandwidth requirements and can tolerate the
inherent variability in Internet-based connectivity. Site-to-site VPN will not help in accelerating the file
transfer speeds into S3 for the given use-case.
Use AWS Global Accelerator for faster file uploads into the destination Amazon S3 bucket - AWS Global
Accelerator is a service that improves the availability and performance of your applications with local or
global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in
a single or multiple AWS Regions, such as your Application Load Balancers, Network Load Balancers or
Amazon EC2 instances. AWS Global Accelerator will not help in accelerating the file transfer speeds into
S3 for the given use-case.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
Domain
A large financial institution operates an on-premises data center with hundreds of petabytes of data
managed on Microsoft’s Distributed File System (DFS). The CTO wants the organization to transition into
a hybrid cloud environment and run data-intensive analytics workloads that support DFS.
Which of the following AWS services can facilitate the migration of these workloads?
AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD)
Overall explanation
Correct option:
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that is accessible
over the industry-standard Server Message Block (SMB) protocol. It is built on Windows Server,
delivering a wide range of administrative features such as user quotas, end-user file restore, and
Microsoft Active Directory (AD) integration. Amazon FSx supports the use of Microsoft’s Distributed File
System (DFS) to organize shares into a single folder structure up to hundreds of PB in size. So this option
is correct.
via - https://aws.amazon.com/fsx/windows/
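A minimal boto3 sketch of provisioning an FSx for Windows File Server file system joined to a managed Active Directory is shown below; the subnet, security group, and directory IDs are hypothetical, and the DFS namespaces themselves would still be configured from the Windows DFS management tools afterwards.

```python
import boto3

fsx = boto3.client("fsx")

# Create an FSx for Windows file system; Active Directory integration is required.
response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,                      # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ThroughputCapacity": 32,              # MB/s
        "ActiveDirectoryId": "d-0123456789",   # AWS Managed Microsoft AD
        "DeploymentType": "SINGLE_AZ_2",
    },
)
print(response["FileSystem"]["FileSystemId"])
```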
Incorrect options:
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-
performance file system. It is used for workloads such as machine learning, high-performance computing
(HPC), video processing, and financial modeling. Amazon FSx enables you to use Lustre file systems for
any workload where storage speed matters. FSx for Lustre does not support Microsoft’s Distributed File
System (DFS), so this option is incorrect.
AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD)
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD,
enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS
Cloud. AWS Managed Microsoft AD is built on the actual Microsoft Active Directory and does not require
you to synchronize or replicate data from your existing Active Directory to the cloud. AWS Managed
Microsoft AD does not support Microsoft’s Distributed File System (DFS), so this option is incorrect.
Microsoft SQL Server on AWS offers you the flexibility to run Microsoft SQL Server database on AWS
Cloud. Microsoft SQL Server on AWS does not support Microsoft’s Distributed File System (DFS), so this
option is incorrect.
Reference:
https://aws.amazon.com/fsx/windows/
Domain
Overall explanation
Correct option:
An Amazon Machine Image (AMI) provides the information required to launch an instance. You must
specify an AMI when you launch an instance. When the new AMI is copied from Region A into Region B,
it automatically creates a snapshot in Region B because AMIs are based on the underlying snapshots.
Further, an instance is created from this AMI in Region B. Hence, we have 1 Amazon EC2 instance, 1 AMI
and 1 snapshot in Region B.
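The sequence can be sketched with boto3 as below: the copy_image call is made in the destination Region (Region B), which produces the new AMI and its backing snapshot there, and an instance is then launched from the copy. Region names and IDs are hypothetical.

```python
import boto3

# Work in the destination region (Region B).
ec2_region_b = boto3.client("ec2", region_name="eu-west-1")

# Copy the AMI from Region A; this creates a new snapshot in Region B.
copy = ec2_region_b.copy_image(
    Name="web-server-ami-copy",
    SourceImageId="ami-0123456789abcdef0",
    SourceRegion="us-east-1",
)
new_ami_id = copy["ImageId"]

# Wait for the copied AMI to become available, then launch an instance from it.
ec2_region_b.get_waiter("image_available").wait(ImageIds=[new_ami_id])
ec2_region_b.run_instances(ImageId=new_ami_id, InstanceType="t3.micro", MinCount=1, MaxCount=1)
```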
Incorrect options:
As mentioned earlier in the explanation, when the new AMI is copied from Region A into Region B, it also
creates a snapshot in Region B because AMIs are based on the underlying snapshots. In addition, an
instance is created from this AMI in Region B. So, we have 1 Amazon EC2 instance, 1 AMI and 1 snapshot
in Region B. Hence all three options are incorrect.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
Domain
Which of the following options would allow the company to enforce these streaming restrictions? (Select
two)
Use Amazon Route 53 based weighted routing policy to restrict distribution of content to only the
locations in which you have distribution rights
Use georestriction to prevent users in specific geographic locations from accessing content that you're
distributing through an Amazon CloudFront web distribution
Use Amazon Route 53 based geolocation routing policy to restrict distribution of content to only the
locations in which you have distribution rights
Use Amazon Route 53 based latency-based routing policy to restrict distribution of content to only the
locations in which you have distribution rights
Use Amazon Route 53 based failover routing policy to restrict distribution of content to only the
locations in which you have distribution rights
Overall explanation
Correct options:
Use Amazon Route 53 based geolocation routing policy to restrict distribution of content to only the
locations in which you have distribution rights
Geolocation routing lets you choose the resources that serve your traffic based on the geographic
location of your users, meaning the location that DNS queries originate from. For example, you might
want all queries from Europe to be routed to an ELB load balancer in the Frankfurt region. You can also
use geolocation routing to restrict the distribution of content to only the locations in which you have
distribution rights.
Use georestriction to prevent users in specific geographic locations from accessing content that you're
distributing through an Amazon CloudFront web distribution
You can use georestriction, also known as geo-blocking, to prevent users in specific geographic locations
from accessing content that you're distributing through an Amazon CloudFront web distribution. When a
user requests your content, Amazon CloudFront typically serves the requested content regardless of
where the user is located. If you need to prevent users in specific countries from accessing your content,
you can use the CloudFront geo restriction feature to do one of the following: Allow your users to access
your content only if they're in one of the countries on a whitelist of approved countries. Prevent your
users from accessing your content if they're in one of the countries on a blacklist of banned countries. So
this option is also correct.
via - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
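For the Route 53 half, a minimal boto3 sketch of a geolocation record is shown below; the hosted zone ID, domain name, and load balancer target are hypothetical. Records like this one (plus a default location record) let you answer DNS queries only for the geographies where you hold distribution rights, while the CloudFront geo restriction is configured separately on the distribution.

```python
import boto3

route53 = boto3.client("route53")

# Route European viewers to a Europe-hosted endpoint via a geolocation record.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "media.example.com",
                "Type": "CNAME",
                "SetIdentifier": "europe-viewers",
                "GeoLocation": {"ContinentCode": "EU"},
                "TTL": 300,
                "ResourceRecords": [{"Value": "eu-alb-123.eu-central-1.elb.amazonaws.com"}],
            },
        }]
    },
)
```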
Incorrect options:
Use Amazon Route 53 based latency-based routing policy to restrict distribution of content to only the
locations in which you have distribution rights - Use latency-based routing when you have resources in
multiple AWS Regions and you want to route traffic to the region that provides the lowest latency. To use
latency-based routing, you create latency records for your resources in multiple AWS Regions. When
Amazon Route 53 receives a DNS query for your domain or subdomain (example.com or
acme.example.com), it determines which AWS Regions you've created latency records for, determines
which region gives the user the lowest latency, and then selects a latency record for that region. Route
53 responds with the value from the selected record, such as the IP address for a web server.
Use Amazon Route 53 based weighted routing policy to restrict distribution of content to only the
locations in which you have distribution rights - Weighted routing lets you associate multiple resources
with a single domain name (example.com) or subdomain name (acme.example.com) and choose how
much traffic is routed to each resource. This can be useful for a variety of purposes, including load
balancing and testing new versions of the software.
Use Amazon Route 53 based failover routing policy to restrict distribution of content to only the
locations in which you have distribution rights - Failover routing lets you route traffic to a resource
when the resource is healthy or to a different resource when the first resource is unhealthy. The primary
and secondary records can route traffic to anything from an Amazon S3 bucket that is configured as a
website to a complex tree of records.
Weighted routing or failover routing or latency routing cannot be used to restrict the distribution of
content to only the locations in which you have distribution rights. So all three options above are
incorrect.
References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-geo
Domain
Amazon Gateway Endpoints, Amazon Simple Queue Service (Amazon SQS) and Amazon Kinesis
Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS) and
AWS Lambda
Elastic Load Balancer, Amazon Simple Queue Service (Amazon SQS), AWS Lambda
Correct answer
Amazon API Gateway, Amazon Simple Queue Service (Amazon SQS) and Amazon Kinesis
Overall explanation
Correct option:
Throttling is the process of limiting the number of requests an authorized program can submit to a given
operation in a given amount of time.
Amazon API Gateway, Amazon Simple Queue Service (Amazon SQS) and Amazon Kinesis
To prevent your API from being overwhelmed by too many requests, Amazon API Gateway throttles
requests to your API using the token bucket algorithm, where a token counts for a request. Specifically,
API Gateway sets a limit on a steady-state rate and a burst of request submissions against all APIs in your
account. In the token bucket algorithm, the burst is the maximum bucket size.
Amazon Simple Queue Service (Amazon SQS) - Amazon Simple Queue Service (SQS) is a fully managed
message queuing service that enables you to decouple and scale microservices, distributed systems, and
serverless applications. Amazon SQS offers buffer capabilities to smooth out temporary volume spikes
without losing messages or increasing latency.
Amazon Kinesis - Amazon Kinesis is a fully managed, scalable service that can ingest, buffer, and process
streaming data in real-time.
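As an illustration of the API Gateway throttling piece, the boto3 sketch below attaches rate and burst limits to a usage plan; the REST API ID and stage name are hypothetical.

```python
import boto3

apigw = boto3.client("apigateway")

# Create a usage plan that throttles clients to a steady-state rate with a burst allowance.
plan = apigw.create_usage_plan(
    name="standard-clients",
    throttle={"rateLimit": 100.0, "burstLimit": 200},  # requests/sec and burst size
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
)
print(plan["id"])
```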
Incorrect options:
Amazon Simple Queue Service (Amazon SQS), Amazon Simple Notification Service (Amazon SNS) and
AWS Lambda - Amazon SQS has the ability to buffer its messages. Amazon Simple Notification Service
(SNS) cannot buffer messages and is generally used with SQS to provide the buffering facility. When
requests come in faster than your Lambda function can scale, or when your function is at maximum
concurrency, additional requests fail as Lambda throttles those requests with a 429 (Too Many Requests) status code. So, this combination of services is incorrect.
Amazon Gateway Endpoints, Amazon Simple Queue Service (Amazon SQS) and Amazon Kinesis - A
Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic
destined to a supported AWS service. This cannot help in throttling or buffering of requests. Amazon SQS
and Kinesis can buffer incoming data. Since Gateway Endpoint is an incorrect service for throttling or
buffering, this option is incorrect.
Elastic Load Balancer, Amazon Simple Queue Service (Amazon SQS), AWS Lambda - Elastic Load
Balancer cannot throttle requests. Amazon SQS can be used to buffer messages. When requests come in
faster than your Lambda function can scale, or when your function is at maximum concurrency,
additional requests fail as Lambda throttles those requests with a 429 (Too Many Requests) status code. So, this
combination of services is incorrect.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-request-throttling.html
https://aws.amazon.com/sqs/features/
Domain
Which of the following content types skip the regional edge cache? (Select two)
User-generated videos
Correct selection
Dynamic content, as determined at request time (cache-behavior configured to forward all headers)
Overall explanation
Correct options:
Dynamic content, as determined at request time (cache-behavior configured to forward all headers)
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos,
applications, and APIs to customers globally with low latency, high transfer speeds, all within a
developer-friendly environment.
CloudFront points of presence (POPs, or edge locations) make sure that popular content can be served
quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer
to your viewers, even when the content is not popular enough to stay at a POP, to help improve
performance for that content.
Dynamic content, as determined at request time (cache-behavior configured to forward all headers),
does not flow through regional edge caches, but goes directly to the origin. So this option is correct.
Proxy methods PUT/POST/PATCH/OPTIONS/DELETE go directly to the origin from the POPs and do not
proxy through the regional edge caches. So this option is also correct.
Incorrect options:
User-generated videos
The following types of content flow through the regional edge caches - user-generated content such as videos, photos, or artwork; e-commerce assets such as product photos and videos; and static content such as style sheets and JavaScript files. Hence these options are not correct.
Reference:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/HowCloudFrontWorks.html
Domain
As a solutions architect, which of the following solutions would you recommend? (Select two)
Power the on-demand, live leaderboard using Amazon Neptune as it meets the in-memory, high
availability, low latency requirements
Power the on-demand, live leaderboard using Amazon RDS for Aurora as it meets the in-memory, high
availability, low latency requirements
Power the on-demand, live leaderboard using Amazon DynamoDB as it meets the in-memory, high
availability, low latency requirements
Correct selection
Power the on-demand, live leaderboard using Amazon DynamoDB with DynamoDB Accelerator (DAX)
as it meets the in-memory, high availability, low latency requirements
Correct selection
Power the on-demand, live leaderboard using Amazon ElastiCache for Redis as it meets the in-
memory, high availability, low latency requirements
Overall explanation
Correct options:
Power the on-demand, live leaderboard using Amazon ElastiCache for Redis as it meets the in-
memory, high availability, low latency requirements
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond
latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for
real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming
leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session
store. ElastiCache for Redis can be used to power the live leaderboard, so this option is correct.
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond
performance at any scale. It's a fully managed, multiregion, multimaster, durable database with built-in
security, backup and restore, and in-memory caching for internet-scale applications. DAX is a DynamoDB-
compatible caching service that enables you to benefit from fast in-memory performance for demanding
applications. So DynamoDB with DAX can be used to power the live leaderboard.
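To make the leaderboard idea concrete, here is a minimal redis-py sketch using a Redis sorted set; the endpoint below is a hypothetical ElastiCache for Redis primary endpoint.

```python
import redis

# Connect to a hypothetical ElastiCache for Redis primary endpoint.
r = redis.Redis(host="my-redis.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

# Record (or update) player scores in a sorted set.
r.zadd("leaderboard", {"player-1": 1540, "player-2": 1720, "player-3": 1610})

# Fetch the top 10 players, highest score first.
top10 = r.zrevrange("leaderboard", 0, 9, withscores=True)
for rank, (player, score) in enumerate(top10, start=1):
    print(rank, player.decode(), int(score))
```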
Incorrect options:
Power the on-demand, live leaderboard using Amazon Neptune as it meets the in-memory, high availability, low latency requirements - Amazon Neptune is a fast, reliable, fully-managed
graph database service that makes it easy to build and run applications that work with highly connected
datasets. Neptune is not an in-memory database, so this option is not correct.
Power the on-demand, live leaderboard using Amazon DynamoDB as it meets the in-memory, high
availability, low latency requirements - DynamoDB is not an in-memory database, so this option is not
correct.
Power the on-demand, live leaderboard using Amazon RDS for Aurora as it meets the in-memory, high availability, low latency requirements - Amazon Aurora is a MySQL and PostgreSQL-
compatible relational database built for the cloud, that combines the performance and availability of
traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.
Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to
128TB per database instance. Aurora is not an in-memory database, so this option is not correct.
References:
https://aws.amazon.com/elasticache/
https://aws.amazon.com/elasticache/redis/
https://aws.amazon.com/dynamodb/dax/
Domain
Which of the following features of Application Load Balancers can be used for this use-case?
Host-based Routing
Correct answer
Path-based Routing
Overall explanation
Correct option:
Path-based Routing
Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such
as Amazon EC2 instances, containers, IP addresses, and AWS Lambda functions.
If your application is composed of several individual services, an Application Load Balancer can route a
request to a service based on the content of the request. Here are the different types -
Host-based Routing:
You can route a client request based on the Host field of the HTTP header, allowing you to route to multiple domains from the same load balancer.
Path-based Routing:
You can route a client request based on the URL path of the HTTP header.
HTTP header-based routing:
You can route a client request based on the value of any standard or custom HTTP header.
HTTP method-based routing:
You can route a client request based on any standard or custom HTTP method.
Query string parameter-based routing:
You can route a client request based on the query string or query parameters.
Source IP address CIDR-based routing:
You can route a client request based on the source IP address CIDR from where the request originates.
You can use path conditions to define rules that route requests based on the URL in the request (also known as path-based routing). The path pattern is applied only to the path of the URL, not to its query parameters.
via - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-
listeners.html#path-conditions
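A minimal boto3 sketch of a path-based listener rule is shown below; the listener and target group ARNs are hypothetical. Requests whose URL path matches /api/* are forwarded to the API service's target group.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Add a path-based routing rule to an existing ALB listener.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api-svc/123",
    }],
)
```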
Incorrect options:
Host-based Routing
As mentioned earlier in the explanation, none of these other routing types routes requests based on the URL path of the HTTP header. Hence they are incorrect for the given use-case.
Reference:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html
Domain
As a Solutions Architect, which of the following would you suggest as the MOST efficient solution to
improve the application performance?
Enable Amazon DynamoDB Accelerator (DAX) for Amazon DynamoDB and ElastiCache Memcached for
Amazon S3
Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for Amazon S3
Correct answer
Enable Amazon DynamoDB Accelerator (DAX) for Amazon DynamoDB and Amazon CloudFront for
Amazon S3
Enable ElastiCache Redis for DynamoDB and Amazon CloudFront for Amazon S3
Overall explanation
Correct option:
Enable Amazon DynamoDB Accelerator (DAX) for Amazon DynamoDB and Amazon CloudFront for
Amazon S3
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for Amazon
DynamoDB that delivers up to a 10 times performance improvement—from milliseconds to
microseconds—even at millions of requests per second.
Amazon DynamoDB Accelerator (DAX) is tightly integrated with Amazon DynamoDB—you simply
provision a DAX cluster, use the DAX client SDK to point your existing Amazon DynamoDB API calls at the
DAX cluster, and let DAX handle the rest. Because DAX is API-compatible with Amazon DynamoDB, you
don't have to make any functional application code changes. DAX is used to natively cache Amazon
DynamoDB reads.
Amazon CloudFront is a content delivery network (CDN) service that delivers static and dynamic web
content, video streams, and APIs around the world, securely and at scale. By design, delivering data out
of Amazon CloudFront can be more cost-effective than delivering it from S3 directly to your users.
When a user requests content that you serve with CloudFront, their request is routed to a nearby Edge
Location. If CloudFront has a cached copy of the requested file, CloudFront delivers it to the user,
providing a fast (low-latency) response. If the file they’ve requested isn’t yet cached, CloudFront
retrieves it from your origin – for example, the Amazon S3 bucket where you’ve stored your content.
So, you can use Amazon CloudFront to improve application performance to serve static content from
Amazon S3.
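For context, the DynamoDB read path in the application looks like the boto3 sketch below; the table name and key are hypothetical. Because DAX is API-compatible with DynamoDB, the only change needed to use DAX is constructing the resource from the DAX client SDK (for example, the amazon-dax-client package) pointed at the DAX cluster endpoint; the get_item call itself stays the same.

```python
import boto3

# Plain DynamoDB read; with DAX, only the resource construction changes.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ProductCatalog")  # hypothetical table name

item = table.get_item(Key={"product_id": "p-1001"}).get("Item")
print(item)
```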
Incorrect options:
Enable ElastiCache Redis for DynamoDB and Amazon CloudFront for Amazon S3
Amazon ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond
latency to power internet-scale real-time applications. Amazon ElastiCache for Redis is a great choice for
real-time transactional and analytical processing use cases such as caching, chat/messaging, gaming
leaderboards, geospatial, machine learning, media streaming, queues, real-time analytics, and session
store.
via - https://aws.amazon.com/elasticache/redis/
Although you can integrate Redis with DynamoDB, it's much more involved than using DAX which is a
much better fit.
Enable Amazon DynamoDB Accelerator (DAX) for Amazon DynamoDB and ElastiCache Memcached for
Amazon S3
Enable ElastiCache Redis for DynamoDB and ElastiCache Memcached for Amazon S3
Amazon ElastiCache for Memcached is a Memcached-compatible in-memory key-value store service that
can be used as a cache or a data store. Amazon ElastiCache for Memcached is a great choice for
implementing an in-memory cache to decrease access latency, increase throughput, and ease the load
off your relational or NoSQL database.
Amazon ElastiCache Memcached cannot be used as a cache to serve static content from Amazon S3, so
both these options are incorrect.
References:
https://aws.amazon.com/dynamodb/dax/
https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-
match-made-in-the-cloud/
https://aws.amazon.com/elasticache/redis/
Domain
As an AWS Certified Solutions Architect – Associate, which of the following solutions would you suggest,
so that both the applications can consume the real-time status data concurrently?
Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon SES)
Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS)
Overall explanation
Correct option:
Amazon Kinesis Data Streams enables real-time processing of streaming big data. It provides ordering of
records, as well as the ability to read and/or replay records in the same order to multiple Amazon Kinesis
Applications. The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the
same record processor, making it easier to build multiple applications reading from the same Amazon
Kinesis data stream (for example, to perform counting, aggregation, and filtering).
AWS recommends Amazon Kinesis Data Streams for use cases with requirements that are similar to the
following:
1. Routing related records to the same record processor (as in streaming MapReduce). For
example, counting and aggregation are simpler when all records for a given key are routed to the
same record processor.
2. Ordering of records. For example, you want to transfer log data from the application host to the
processing/archival host while maintaining the order of log statements.
3. Ability for multiple applications to consume the same stream concurrently. For example, you
have one application that updates a real-time dashboard and another that archives data to
Amazon Redshift. You want both applications to consume data from the same stream
concurrently and independently.
4. Ability to consume records in the same order a few hours later. For example, you have a billing
application and an audit application that runs a few hours behind the billing application. Because
Amazon Kinesis Data Streams stores data for up to 365 days, you can run the audit application
up to 365 days behind the billing application.
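To make the shared-consumption model concrete, here is a minimal boto3 sketch of a producer and one consumer; the stream name and record fields are hypothetical, and each consuming application would keep its own shard iterator (or checkpoint via the KCL) so both can read the same stream concurrently.

```python
import json
import boto3

kinesis = boto3.client("kinesis")
stream_name = "vehicle-status-stream"  # hypothetical stream name

# Producer: key each record by vehicle ID so related records land on the same shard.
kinesis.put_record(
    StreamName=stream_name,
    Data=json.dumps({"vehicle_id": "v-42", "status": "IN_TRANSIT"}).encode(),
    PartitionKey="v-42",
)

# Consumer: read from the start of the first shard.
shard_id = kinesis.describe_stream(StreamName=stream_name)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=stream_name, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]
for rec in kinesis.get_records(ShardIterator=iterator)["Records"]:
    print(json.loads(rec["Data"]))
```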
Incorrect options:
Amazon Simple Notification Service (SNS) - Amazon Simple Notification Service (SNS) is a highly
available, durable, secure, fully managed pub/sub messaging service that enables you to decouple
microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-
throughput, push-based, many-to-many messaging. Amazon SNS is a notification service and cannot be
used for real-time processing of data.
Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service (SNS) - Amazon Simple
Queue Service (Amazon SQS) offers a reliable, highly scalable hosted queue for storing messages as they
travel between computers. Amazon SQS lets you easily move data between distributed application
components and helps you build applications in which messages are processed independently (with
message-level ack/fail semantics), such as automated workflows. Since multiple applications need to
consume the same data stream concurrently, Kinesis is a better choice when compared to the
combination of SQS with SNS.
Amazon Simple Queue Service (SQS) with Amazon Simple Email Service (Amazon SES) - As discussed
above, Amazon Kinesis is a better option for this use case in comparison to Amazon SQS. Also, Amazon
SES does not fit this use-case. Hence, this option is an incorrect answer.
Reference:
https://aws.amazon.com/kinesis/data-streams/faqs/
Domain
Migrate the website to Amazon S3. Use S3 cross-region replication (S3 CRR) between AWS Regions in
the US and Asia
Use Amazon CloudFront with a custom origin pointing to the DNS record of the website on Amazon
Route 53
Correct answer
Use Amazon CloudFront with a custom origin pointing to the on-premises servers
Overall explanation
Correct option:
Use Amazon CloudFront with a custom origin pointing to the on-premises servers
Amazon CloudFront is a web service that gives businesses and web application developers an easy and
cost-effective way to distribute content with low latency and high data transfer speeds. Amazon
CloudFront uses standard cache control headers you set on your files to identify static and dynamic
content. You can use different origins for different types of content on a single site – e.g. Amazon S3 for
static objects, Amazon EC2 for dynamic content, and custom origins for third-party content.
Amazon CloudFront:
via - https://aws.amazon.com/cloudfront/
An origin server stores the original, definitive version of your objects. If you're serving content over HTTP,
your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP
server can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on a server that you
manage; these servers are also known as custom origins.
via - https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
Amazon CloudFront employs a global network of edge locations and regional edge caches that cache
copies of your content close to your viewers. Amazon CloudFront ensures that end-user requests are
served by the closest edge location. As a result, viewer requests travel a short distance, improving
performance for your viewers. Therefore for the given use case, the users in Asia will enjoy a low latency
experience while using the website even though the on-premises servers continue to be in the US.
Incorrect options:
Use Amazon CloudFront with a custom origin pointing to the DNS record of the website on Amazon
Route 53 - This option has been added as a distractor. CloudFront cannot have a custom origin pointing
to the DNS record of the website on Route 53.
Migrate the website to Amazon S3. Use S3 cross-region replication (S3 CRR) between AWS Regions in
the US and Asia - The use case states that the company operates a dynamic website. You can use
Amazon S3 to host a static website. On a static website, individual web pages include static content. They
might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing,
including server-side scripts, such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side
scripting, but AWS has other resources for hosting dynamic websites. So this option is incorrect.
Leverage an Amazon Route 53 geoproximity routing policy pointing to on-premises servers - Since the on-premises servers continue to be in the US, even using a Route 53 geoproximity routing policy that
directs the users in Asia to the on-premises servers in the US would not reduce the latency for the users
in Asia. So this option is incorrect.
References:
https://aws.amazon.com/cloudfront/
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
Domain
Which of the following is the MOST resource efficient and cost-optimal way of addressing this issue?
Change the application architecture to create a new Amazon S3 bucket for each customer and then
upload each customer's files directly under the respective buckets
Change the application architecture to use Amazon Elastic File System (Amazon EFS) instead of
Amazon S3 for storing the customers' uploaded files
Change the application architecture to create a new Amazon S3 bucket for each day's data and then
upload the daily files directly under that day's bucket
Change the application architecture to create customer-specific custom prefixes within the single
Amazon S3 bucket and then upload the daily files into those prefixed locations
Overall explanation
Correct option:
Change the application architecture to create customer-specific custom prefixes within the single
Amazon S3 bucket and then upload the daily files into those prefixed locations
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading
scalability, data availability, security, and performance. Your applications can easily achieve thousands of
transactions per second in request performance when uploading and retrieving storage from Amazon S3.
Amazon S3 automatically scales to high request rates. For example, your application can achieve at least
3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per prefix in a bucket.
There are no limits to the number of prefixes in a bucket. You can increase your read or write performance by parallelizing requests across multiple prefixes. For example, if you create 10 prefixes in an Amazon S3 bucket to
parallelize reads, you could scale your read performance to 55,000 read requests per second. Please see
this example for more clarity on prefixes: if you have a file f1 stored in an S3 object path like
so s3://your_bucket_name/folder1/sub_folder_1/f1, then /folder1/sub_folder_1/ becomes the prefix
for file f1.
Some data lake applications on Amazon S3 scan millions or billions of objects for queries that run over
petabytes of data. These data lake applications achieve single-instance transfer rates that maximize the
network interface used for their Amazon EC2 instance, which can be up to 100 Gb/s on a single instance.
These applications then aggregate throughput across multiple instances to get multiple terabits per
second. Therefore creating customer-specific custom prefixes within the single bucket and then
uploading the daily files into those prefixed locations is the BEST solution for the given constraints.
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html
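A minimal boto3 sketch of the customer-specific prefix layout is shown below; the bucket name, customer ID, and file path are hypothetical.

```python
import boto3
import datetime

s3 = boto3.client("s3")
BUCKET = "customer-uploads-bucket"  # hypothetical single shared bucket

def upload_for_customer(customer_id: str, local_path: str, file_name: str) -> None:
    """Upload under a customer-specific prefix so request rates scale per prefix."""
    today = datetime.date.today().isoformat()
    key = f"{customer_id}/{today}/{file_name}"  # e.g. cust-1001/2024-05-01/data.csv
    s3.upload_file(Filename=local_path, Bucket=BUCKET, Key=key)

upload_for_customer("cust-1001", "/tmp/data.csv", "data.csv")
```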
Incorrect options:
Change the application architecture to create a new Amazon S3 bucket for each customer and then
upload each customer's files directly under the respective buckets - Creating a new Amazon S3 bucket
for each new customer is an inefficient way of handling resource availability (S3 buckets need to be
globally unique) as some customers may use the service sparingly but the bucket name is locked for
them forever. Moreover, this is really not required as we can use S3 prefixes to improve the
performance.
Change the application architecture to create a new Amazon S3 bucket for each day's data and then
upload the daily files directly under that day's bucket - Creating a new Amazon S3 bucket for each new
day's data is also an inefficient way of handling resource availability (S3 buckets need to be globally
unique) as some of the bucket names may not be available for daily data processing. Moreover, this is
really not required as we can use S3 prefixes to improve the performance.
Change the application architecture to use Amazon Elastic File System (Amazon EFS) instead of
Amazon S3 for storing the customers' uploaded files - Amazon EFS is a costlier storage option compared
to Amazon S3, so it is ruled out.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html
Domain
Which of the following AWS services represents the best solution for this use-case?
Amazon CloudFront
Amazon Route 53
Correct answer
AWS Global Accelerator
Overall explanation
Correct option:
AWS Global Accelerator utilizes the Amazon global network, allowing you to improve the performance of
your applications by lowering first-byte latency (the round trip time for a packet to go from a client to
your endpoint and back again) and jitter (the variation of latency), and increasing throughput (the amount of data transferred in a given time) as compared to the public internet.
AWS Global Accelerator improves performance for a wide range of applications over TCP or UDP by
proxying packets at the edge to applications running in one or more AWS Regions. Global Accelerator is a
good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP
use cases that specifically require static IP addresses or deterministic, fast regional failover.
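As a rough sketch, the boto3 calls below create an accelerator, a UDP listener, and an endpoint group fronting a regional load balancer; all names and ARNs are hypothetical, and the Global Accelerator API is called in us-west-2 because it is a global service.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

# Create the accelerator; it is assigned static anycast IP addresses.
accelerator = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)
acc_arn = accelerator["Accelerator"]["AcceleratorArn"]

# Accept UDP game traffic on port 4000.
listener = ga.create_listener(
    AcceleratorArn=acc_arn,
    Protocol="UDP",
    PortRanges=[{"FromPort": 4000, "ToPort": 4000}],
)

# Route the traffic to a regional load balancer endpoint.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/game-nlb/abc",
    }],
)
```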
Incorrect options:
Amazon CloudFront - Amazon CloudFront is a fast content delivery network (CDN) service that securely
delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds,
all within a developer-friendly environment.
AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network
and its edge locations around the world. CloudFront improves performance for both cacheable content
(such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery),
while Global Accelerator improves performance for a wide range of applications over TCP or UDP.
AWS Elastic Load Balancing (ELB) - Both of the services, ELB and Global Accelerator, solve the challenge
of routing user requests to healthy application endpoints. AWS Global Accelerator relies on ELB to
provide the traditional load balancing features such as support for internal and non-AWS endpoints, pre-
warming, and Layer 7 routing. However, while ELB provides load balancing within one Region, AWS
Global Accelerator provides traffic management across multiple Regions.
A regional ELB load balancer is an ideal target for AWS Global Accelerator. By using a regional ELB load
balancer, you can precisely distribute incoming application traffic across backends, such as Amazon EC2
instances or Amazon ECS tasks, within an AWS Region.
If you have workloads that cater to a global client base, AWS recommends that you use AWS Global
Accelerator. If you have workloads hosted in a single AWS Region and used by clients in and around the
same Region, you can use an Application Load Balancer or Network Load Balancer to manage your
resources.
Amazon Route 53 - Amazon Route 53 is a highly available and scalable cloud Domain Name System
(DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-
effective way to route end users to Internet applications by translating names like www.example.com
into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Route 53 is
ruled out as the company wants to continue using its own custom DNS service.
Reference:
https://aws.amazon.com/global-accelerator/faqs/
Domain
Which of the following AWS services is BEST suited to accelerate the aforementioned chip design
process?
AWS Glue
Correct answer
Amazon FSx for Lustre
Amazon EMR
Amazon FSx for Windows File Server
Overall explanation
Correct option:
Amazon FSx for Lustre makes it easy and cost-effective to launch and run the world’s most popular high-
performance file system. It is used for workloads such as machine learning, high-performance computing
(HPC), video processing, and financial modeling. The open-source Lustre file system is designed for
applications that require fast storage – where you want your storage to keep up with your compute. FSx
for Lustre integrates with Amazon S3, making it easy to process data sets with the Lustre file system.
When linked to an S3 bucket, an FSx for Lustre file system transparently presents S3 objects as files and
allows you to write changed data back to S3.
FSx for Lustre provides the ability to both process the 'hot data' in a parallel and distributed fashion as
well as easily store the 'cold data' on Amazon S3. Therefore this option is the BEST fit for the given
problem statement.
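As an illustrative sketch only (the subnet ID, bucket names, and capacity below are assumptions, not details from the question), an FSx for Lustre file system linked to an S3 bucket can be created with boto3:
**
import boto3

fsx = boto3.client("fsx")

# Linking the file system to S3 lazily presents the 'cold' objects as files
# and lets processed results be exported back to the bucket
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB
    SubnetIds=["subnet-0abc1234"],  # hypothetical subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://chip-design-data",          # hypothetical bucket
        "ExportPath": "s3://chip-design-data/results",  # hypothetical prefix
    },
)
**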
Incorrect options:
Amazon FSx for Windows File Server - Amazon FSx for Windows File Server provides fully managed,
highly reliable file storage that is accessible over the industry-standard Server Message Block (SMB)
protocol. It is built on Windows Server, delivering a wide range of administrative features such as user
quotas, end-user file restore, and Microsoft Active Directory (AD) integration. FSx for Windows does not
allow you to present S3 objects as files and does not allow you to write changed data back to S3.
Therefore you cannot reference the "cold data" with quick access for reads and updates at low cost.
Hence this option is not correct.
Amazon EMR - Amazon EMR is the industry-leading cloud big data platform for processing vast amounts
of data using open source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache
Hudi, and Presto. Amazon EMR uses Hadoop, an open-source framework, to distribute your data and
processing across a resizable cluster of Amazon EC2 instances. EMR does not offer the same storage and
processing speed as FSx for Lustre. So it is not the right fit for the given high-performance workflow
scenario.
AWS Glue - AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for
customers to prepare and load their data for analytics. AWS Glue job is meant to be used for batch ETL
data processing. AWS Glue does not offer the same storage and processing speed as FSx for Lustre. So it
is not the right fit for the given high-performance workflow scenario.
References:
https://aws.amazon.com/fsx/lustre/
https://aws.amazon.com/fsx/windows/faqs/
Domain
Create a new IAM role with the required permissions to access the resources in the production
environment. The users can then assume this IAM role while accessing the resources from the
production environment
Both IAM roles and IAM users can be used interchangeably for cross-account access
Create new IAM user credentials for the production environment and share these credentials with the
set of users from the development environment
It is not possible to access cross-account resources
Overall explanation
Correct option:
Create a new IAM role with the required permissions to access the resources in the production
environment. The users can then assume this IAM role while accessing the resources from the
production environment
IAM roles allow you to delegate access to users or services that normally don't have access to your
organization's AWS resources. IAM users or AWS services can assume a role to obtain temporary security
credentials that can be used to make AWS API calls. Consequently, you don't have to share long-term
credentials for access to a resource. Using IAM roles, it is possible to access cross-account resources.
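A minimal sketch of assuming such a role with boto3 (the role ARN and session name are hypothetical):
**
import boto3

sts = boto3.client("sts")

# Hypothetical role ARN in the production account
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ProductionAccessRole",
    RoleSessionName="dev-user-session",
)
creds = response["Credentials"]

# The temporary credentials are then used to call AWS APIs in the production account
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
**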
Incorrect options:
Create new IAM user credentials for the production environment and share these credentials with the
set of users from the development environment - There is no need to create new IAM user credentials
for the production environment, as you can use IAM roles to access cross-account resources.
It is not possible to access cross-account resources - You can use IAM roles to access cross-account
resources.
Both IAM roles and IAM users can be used interchangeably for cross-account access - IAM roles and
IAM users are separate IAM entities and should not be mixed. Only IAM roles can be used to access
cross-account resources.
Reference:
https://aws.amazon.com/iam/features/manage-roles/
Domain
Send an email to the business owner with details of the login username and password for the AWS
root user. This will help the business owner to troubleshoot any login issues in future
Correct selection
Enable Multi Factor Authentication (MFA) for the AWS account root user account
Create AWS account root user access keys and share those keys only with the business owner
Encrypt the access keys and save them on Amazon S3
Overall explanation
Correct options:
Enable Multi Factor Authentication (MFA) for the AWS account root user account
Here are some of the best practices while creating an AWS account root user:
1) Use a strong password to help protect account-level access to the AWS Management Console.
2) Never share your AWS account root user password or access keys with anyone.
3) If you do have an access key for your AWS account root user, delete it. If you must keep it, rotate (change) the access key regularly. You should not encrypt the access keys and save them on Amazon S3.
4) If you don't already have an access key for your AWS account root user, don't create one unless you absolutely need to.
5) Enable AWS multi-factor authentication (MFA) on your AWS account root user account.
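As a small illustrative check (an assumption, not part of the original explanation), you can verify whether root user MFA is enabled with boto3:
**
import boto3

iam = boto3.client("iam")

# AccountMFAEnabled is 1 when MFA is enabled on the root user
summary = iam.get_account_summary()["SummaryMap"]
if summary.get("AccountMFAEnabled") != 1:
    print("Root user MFA is NOT enabled - enable it from the AWS Management Console")
**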
Incorrect options:
Encrypt the access keys and save them on Amazon S3 - AWS recommends that if you don't already have
an access key for your AWS account root user, don't create one unless you absolutely need to. Even an
encrypted access key for the root user poses a significant security risk. Therefore, this option is incorrect.
Create AWS account root user access keys and share those keys only with the business owner - AWS
recommends that if you don't already have an access key for your AWS account root user, don't create
one unless you absolutely need to. Hence, this option is incorrect.
Send an email to the business owner with details of the login username and password for the AWS
root user. This will help the business owner to troubleshoot any login issues in future - AWS
recommends that you should never share your AWS account root user password or access keys with
anyone. Sending an email with AWS account root user credentials creates a security risk as it can be
misused by anyone reading the email. Hence, this option is incorrect.
Reference:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#create-iam-users
Domain
"Version": "2021-10-17",
"Statement": [
"Action": [
"s3:ListBucket",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::example-bucket"
],
"Effect": "Allow"
Which statement should a solutions architect add to the policy to address this issue?
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::example-bucket/*"
],
"Effect": "Allow"
}
{
  "Action": [
    "s3:*Object"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
"Action": [
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::example-bucket/*"
],
"Effect": "Allow"
"Action": [
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::example-bucket*"
],
"Effect": "Allow"
Overall explanation
Correct option:
**
{
  "Action": [
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
**
1. Effect: Specifies whether the statement will Allow or Deny an action (Allow is the effect defined
here).
2. Action: Describes a specific action or actions that will either be allowed or denied to run based
on the Effect entered. API actions are unique to each service (DeleteObject is the action defined
here).
This policy grants the group the required permission to delete objects in the Amazon S3 bucket.
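For illustration only (the object key below is hypothetical), this is the kind of call that the added statement authorizes for the group members:
**
import boto3

s3 = boto3.client("s3")

# Allowed by the DeleteObject statement scoped to arn:aws:s3:::example-bucket/*
s3.delete_object(Bucket="example-bucket", Key="reports/2024/obsolete.csv")
**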
Incorrect options:
**
{
  "Action": [
    "s3:*Object"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
**
**
{
  "Action": [
    "s3:*"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket/*"
  ],
  "Effect": "Allow"
}
** - These two policies are incorrect since they grant broader permissions than needed (all object-level actions and all Amazon S3 actions, respectively), which violates the principle of least privilege required by the given use case.
**
{
  "Action": [
    "s3:DeleteObject"
  ],
  "Resource": [
    "arn:aws:s3:::example-bucket*"
  ],
  "Effect": "Allow"
}
** - This option is incorrect, as the resource ARN is wrong. It should have a /* after the bucket name to target the objects in the bucket.
Reference:
https://aws.amazon.com/blogs/security/techniques-for-writing-least-privilege-iam-policies/
Domain
Which of the following are the MOST cost-effective options for completing the data transfer and
establishing connectivity? (Select two)
Correct selection
Setup AWS Site-to-Site VPN to establish on-going connectivity between the on-premises data center
and AWS Cloud
Setup AWS Direct Connect to establish connectivity between the on-premises data center and AWS
Cloud
Order 70 AWS Snowball Edge Storage Optimized devices to complete the one-time data transfer
Order 10 AWS Snowball Edge Storage Optimized devices to complete the one-time data transfer
Overall explanation
Correct options:
Order 10 AWS Snowball Edge Storage Optimized devices to complete the one-time data transfer
AWS Snowball Edge Storage Optimized is the optimal choice if you need to securely and quickly transfer
dozens of terabytes to petabytes of data to AWS. It provides up to 80 Terabytes of usable HDD storage,
40 vCPUs, 1 TB of SATA SSD storage, and up to 40 Gb/s of network connectivity to address large-scale
data transfer and pre-processing use cases.
As each Snowball Edge Storage Optimized device can handle 80 Terabytes of data, you can order 10 such
devices to take care of the data transfer for all applications.
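As a quick illustrative calculation (assuming, for the sake of the arithmetic, roughly 800 TB of on-premises data, which is consistent with the 10-device recommendation above):
**
import math

total_data_tb = 800          # assumed total data to migrate (illustrative)
usable_capacity_tb = 80      # per Snowball Edge Storage Optimized device

devices_needed = math.ceil(total_data_tb / usable_capacity_tb)
print(devices_needed)  # 10
**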
Exam Alert:
The original Snowball devices were transitioned out of service and Snowball Edge Storage Optimized are
now the primary devices used for data transfer. You may see the Snowball device on the exam, just
remember that the original Snowball device had 80 Terabytes of storage space.
Setup AWS Site-to-Site VPN to establish on-going connectivity between the on-premises data center
and AWS Cloud
AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch office site to
your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your data center or branch
office network to the cloud with an AWS Site-to-Site VPN connection. A VPC VPN Connection utilizes
IPSec to establish encrypted network connectivity between your intranet and Amazon VPC over the
Internet. VPN Connections can be configured in minutes and are a good solution if you have an
immediate need, have low to modest bandwidth requirements, and can tolerate the inherent variability
in Internet-based connectivity.
Therefore this option is the right fit for the given use-case as the connectivity can be easily established
within the given timeframe.
Incorrect options:
Order 1 AWS Snowmobile to complete the one-time data transfer - Each AWS Snowmobile has a total
capacity of up to 100 petabytes. To migrate large datasets of 10 petabytes or more in a single location,
you should use AWS Snowmobile. For datasets less than 10 petabytes or distributed in multiple
locations, you should use Snowball. So AWS Snowmobile is not the right fit for this use-case.
Setup AWS Direct Connect to establish connectivity between the on-premises data center and AWS
Cloud - AWS Direct Connect lets you establish a dedicated network connection between your network
and one of the AWS Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated
connection can be partitioned into multiple virtual interfaces. AWS Direct Connect does not involve the
Internet; instead, it uses dedicated, private network connections between your intranet and Amazon
VPC. Direct Connect involves significant monetary investment and takes at least a month to set up,
therefore it's not the correct fit for this use-case.
Order 70 AWS Snowball Edge Storage Optimized devices to complete the one-time data transfer - As
the data-transfer can be completed with just 10 AWS Snowball Edge Storage Optimized devices, there is
no need to order 70 devices.
References:
https://aws.amazon.com/snowball/faqs/
https://aws.amazon.com/vpn/
https://aws.amazon.com/snowmobile/faqs/
https://aws.amazon.com/directconnect/
Domain
As a solutions architect, which of the following AWS services would you recommend as a caching layer
for this use-case? (Select two)
Amazon ElastiCache
Correct selection
Amazon DynamoDB Accelerator (DAX)
Amazon Redshift
Overall explanation
Correct options:
Amazon DynamoDB Accelerator (DAX) is a fully managed, highly available, in-memory cache for
DynamoDB that delivers up to a 10x performance improvement – from milliseconds to microseconds –
even at millions of requests per second. DAX does all the heavy lifting required to add in-memory
acceleration to your DynamoDB tables, without requiring developers to manage cache invalidation, data
population, or cluster management. Therefore, this is a correct option.
DAX Overview:
via - https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.concepts.html
Amazon ElastiCache
Amazon ElastiCache for Memcached is an ideal front-end for data stores like Amazon RDS or Amazon
DynamoDB, providing a high-performance middle tier for applications with extremely high request rates
and/or low latency requirements. Therefore, this is also a correct option.
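As a hedged sketch of the cache-aside pattern in front of DynamoDB using ElastiCache for Memcached (the endpoint, table, and attribute names are hypothetical):
**
import boto3
from pymemcache.client.base import Client

cache = Client(("my-cluster.cfg.use1.cache.amazonaws.com", 11211))  # hypothetical endpoint
table = boto3.resource("dynamodb").Table("GameScores")              # hypothetical table

def get_score(player_id):
    # Cache-aside: try the in-memory cache first, fall back to DynamoDB on a miss
    cached = cache.get(player_id)
    if cached is not None:
        return cached.decode()

    item = table.get_item(Key={"PlayerId": player_id}).get("Item")
    if item:
        cache.set(player_id, str(item["Score"]), expire=300)  # 5-minute TTL
        return str(item["Score"])
    return None
**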
Incorrect options:
Amazon Relational Database Service (Amazon RDS) - Amazon Relational Database Service (Amazon
RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-
efficient and resizable capacity while automating time-consuming administration tasks such as hardware
provisioning, database setup, patching, and backups. Amazon RDS cannot be used as a caching layer for
Amazon DynamoDB.
Amazon OpenSearch Service - Amazon OpenSearch Service is a managed service that makes it easy for
you to perform interactive log analytics, real-time application monitoring, website search, and more.
OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch. It
cannot be used as a caching layer for Amazon DynamoDB.
References:
https://aws.amazon.com/dynamodb/dax/
https://aws.amazon.com/elasticache/faqs/
Domain
Which of the following options represent a valid configuration for setting up retention periods for objects
in Amazon S3 buckets? (Select two)
Correct selection
Different versions of a single object can have different retention modes and periods
The bucket default settings will override any explicit retention mode or period you request on an
object version
You cannot place a retention period on an object version through a bucket default setting
When you use bucket default settings, you specify a Retain Until Date for the object version
Correct selection
When you apply a retention period to an object version explicitly, you specify a Retain Until Date for
the object version
Overall explanation
Correct options:
When you apply a retention period to an object version explicitly, you specify a Retain Until Date for
the object version
You can place a retention period on an object version either explicitly or through a bucket default setting.
When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the
object version. Amazon S3 stores the Retain Until Date setting in the object version's metadata and
protects the object version until the retention period expires.
Different versions of a single object can have different retention modes and periods
Like all other Object Lock settings, retention periods apply to individual object versions. Different
versions of a single object can have different retention modes and periods.
For example, suppose that you have an object that is 15 days into a 30-day retention period, and you
PUT an object into Amazon S3 with the same name and a 60-day retention period. In this case, your PUT
succeeds, and Amazon S3 creates a new version of the object with a 60-day retention period. The older
version maintains its original retention period and becomes deletable in 15 days.
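A minimal sketch of applying an explicit retention period with boto3 (the bucket, key, mode, and date are hypothetical; the bucket must have S3 Object Lock enabled):
**
import boto3
from datetime import datetime, timezone

s3 = boto3.client("s3")

s3.put_object_retention(
    Bucket="audit-reports-bucket",   # hypothetical bucket
    Key="fy2024/report.pdf",         # hypothetical key
    Retention={
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2025, 6, 30, tzinfo=timezone.utc),
    },
)
**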
Incorrect options:
You cannot place a retention period on an object version through a bucket default setting - You can
place a retention period on an object version either explicitly or through a bucket default setting.
When you use bucket default settings, you specify a Retain Until Date for the object version - When
you use bucket default settings, you don't specify a Retain Until Date. Instead, you specify a duration, in
either days or years, for which every object version placed in the bucket should be protected.
The bucket default settings will override any explicit retention mode or period you request on an
object version - If your request to place an object version in a bucket contains an explicit retention mode
and period, those settings override any bucket default settings for that object version.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html
Domain
Which of the following would you identify as data sources supported by Amazon GuardDuty?
Elastic Load Balancing logs, Domain Name System (DNS) logs, AWS CloudTrail events
Correct answer
VPC Flow Logs, Domain Name System (DNS) logs, AWS CloudTrail events
VPC Flow Logs, Amazon API Gateway logs, Amazon S3 access logs
Amazon CloudFront logs, Amazon API Gateway logs, AWS CloudTrail events
Overall explanation
Correct option:
VPC Flow Logs, Domain Name System (DNS) logs, AWS CloudTrail events
Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and
unauthorized behavior to protect your AWS accounts, workloads, and data stored in Amazon S3. With
the cloud, the collection and aggregation of account and network activities is simplified, but it can be
time-consuming for security teams to continuously analyze event log data for potential threats. With
GuardDuty, you now have an intelligent and cost-effective option for continuous threat detection in AWS.
The service uses machine learning, anomaly detection, and integrated threat intelligence to identify and
prioritize potential threats.
Amazon GuardDuty analyzes tens of billions of events across multiple AWS data sources, such as AWS
CloudTrail events, Amazon VPC Flow Logs, and DNS logs.
With a few clicks in the AWS Management Console, GuardDuty can be enabled with no software or
hardware to deploy or maintain. By integrating with Amazon EventBridge Events, GuardDuty alerts are
actionable, easy to aggregate across multiple accounts, and straightforward to push into existing event
management and workflow systems.
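As a brief illustrative sketch (an assumption, not from the original explanation), enabling GuardDuty in an account and Region comes down to creating a detector:
**
import boto3

guardduty = boto3.client("guardduty")

# Enabling a detector starts analysis of AWS CloudTrail events,
# VPC Flow Logs, and DNS logs for this account and Region
detector = guardduty.create_detector(Enable=True)
print(detector["DetectorId"])
**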
Incorrect options:
VPC Flow Logs, Amazon API Gateway logs, Amazon S3 access logs
Elastic Load Balancing logs, Domain Name System (DNS) logs, AWS CloudTrail events
Amazon CloudFront logs, Amazon API Gateway logs, AWS CloudTrail events
These three options contradict the explanation provided above, so these options are incorrect.
Reference:
https://aws.amazon.com/guardduty/
Domain
Which of the following is the MOST cost-effective strategy for storing this intermediary query data?
Store the intermediary query results in Amazon S3 Glacier Instant Retrieval storage class
Correct answer
Store the intermediary query results in Amazon S3 Standard storage class
Store the intermediary query results in Amazon S3 One Zone-Infrequent Access storage class
Store the intermediary query results in Amazon S3 Standard-Infrequent Access storage class
Overall explanation
Correct option:
Amazon S3 Standard offers high durability, availability, and performance object storage for frequently
accessed data. Because it delivers low latency and high throughput, S3 Standard is appropriate for a wide
variety of use cases, including cloud applications, dynamic websites, content distribution, mobile and
gaming applications, and big data analytics. As there is no minimum storage duration charge and no
retrieval fee (remember that intermediary query results are heavily referenced by other parts of the
analytics pipeline), this is the MOST cost-effective storage class amongst the given options.
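As an illustrative sketch (the bucket name and prefix are hypothetical), the intermediary results can stay in S3 Standard and be cleaned up automatically with a one-day lifecycle expiration rule:
**
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="analytics-pipeline-bucket",   # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-intermediate-results",
                "Filter": {"Prefix": "intermediate/"},   # hypothetical prefix
                "Status": "Enabled",
                "Expiration": {"Days": 1},
            }
        ]
    },
)
**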
Incorrect options:
Store the intermediary query results in Amazon S3 Glacier Instant Retrieval storage class - Amazon S3
Glacier Instant Retrieval delivers the fastest access to archive storage, with the same throughput and
millisecond access as the S3 Standard and S3 Standard-IA storage classes. S3 Glacier Instant Retrieval is
ideal for archive data that needs immediate access, such as medical images, news media assets, or user-
generated content archives.
The minimum storage duration charge is 90 days, so this option is NOT cost-effective because
intermediary query results need to be kept only for 24 hours. Hence this option is not correct.
Store the intermediary query results in Amazon S3 Standard-Infrequent Access storage class - Amazon
S3 Standard-IA is for data that is accessed less frequently but requires rapid access when needed. S3
Standard-IA offers high durability, high throughput, and low latency of S3 Standard, with a low per GB
storage price and per GB retrieval fee. This combination of low cost and high performance makes S3
Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. The
minimum storage duration charge is 30 days, so this option is NOT cost-effective because intermediary
query results need to be kept only for 24 hours. Hence this option is not correct.
Store the intermediary query results in Amazon S3 One Zone-Infrequent Access storage class - Amazon
S3 One Zone-IA is for data that is accessed less frequently but requires rapid access when needed. Unlike
other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA
stores data in a single AZ and costs 20% less than S3 Standard-IA. The minimum storage duration charge
is 30 days, so this option is NOT cost-effective because intermediary query results need to be kept only
for 24 hours. Hence this option is not correct.
To summarize again, S3 Standard-IA and S3 One Zone-IA have a minimum storage duration charge of 30
days (so instead of 24 hours, you end up paying for 30 days). S3 Standard-IA and S3 One Zone-IA also
have retrieval charges (as the results are heavily referenced by other parts of the analytics pipeline, so
the retrieval costs would be pretty high). Therefore, these storage classes are not cost optimal for the
given use-case.
Reference:
https://aws.amazon.com/s3/storage-classes/
Domain
Given this scenario, which of the following is correct regarding the charges for this image transfer?
The junior scientist only needs to pay S3TA transfer charges for the image upload
The junior scientist only needs to pay Amazon S3 transfer charges for the image upload
The junior scientist needs to pay both S3 transfer charges and S3TA transfer charges for the image
upload
Correct answer
The junior scientist does not need to pay any transfer charges for the image upload
Overall explanation
Correct option:
The junior scientist does not need to pay any transfer charges for the image upload
There are no S3 data transfer charges when data is transferred in from the internet. Also with S3TA, you
pay only for transfers that are accelerated. Therefore the junior scientist does not need to pay any
transfer charges for the image upload because S3TA did not result in an accelerated transfer.
via - https://aws.amazon.com/s3/transfer-acceleration/
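A hedged sketch of using S3TA from boto3 (the bucket and file names are hypothetical): acceleration is first enabled on the bucket, and uploads are then routed through the accelerated endpoint:
**
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="satellite-images",                     # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads through this client use the S3 Transfer Acceleration endpoint;
# S3TA charges apply only when the accelerated path is actually faster
s3_accelerated = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accelerated.upload_file("image.tif", "satellite-images", "uploads/image.tif")
**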
Incorrect options:
The junior scientist only needs to pay S3TA transfer charges for the image upload - Since S3TA did not
result in an accelerated transfer, there are no S3TA transfer charges to be paid.
The junior scientist only needs to pay Amazon S3 transfer charges for the image upload - There are no
S3 data transfer charges when data is transferred in from the internet. So this option is incorrect.
The junior scientist needs to pay both S3 transfer charges and S3TA transfer charges for the image
upload - There are no Amazon S3 data transfer charges when data is transferred in from the internet.
Since S3TA did not result in an accelerated transfer, there are no S3TA transfer charges to be paid.
References:
https://aws.amazon.com/s3/transfer-acceleration/
https://aws.amazon.com/s3/pricing/
Domain
As a solutions architect, which of the following steps would you recommend to implement the solution?
Configure your Auto Scaling group by creating a simple tracking policy and setting the instance count
to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the
designated hour
Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour
on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to
happen before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour
on the last day of the month. Set the min count as well as the max count of instances to 10. This
causes the scale-out to happen before peak traffic kicks in at the designated hour
Configure your Auto Scaling group by creating a target tracking policy and setting the instance count to
10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the
designated hour
Overall explanation
Correct option:
Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour
on the last day of the month. Set the desired capacity of instances to 10. This causes the scale-out to
happen before peak traffic kicks in at the designated hour
Scheduled scaling allows you to set your own scaling schedule. For example, let's say that every week the
traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to
decrease on Friday. You can plan your scaling actions based on the predictable traffic patterns of your
web application. Scaling actions are performed automatically as a function of time and date.
A scheduled action sets the minimum, maximum, and desired sizes to what is specified by the scheduled
action at the time specified by the scheduled action. For the given use case, the correct solution is to set
the desired capacity to 10. When we want to specify a range of instances, then we must use min and
max values.
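A minimal sketch of such a scheduled action with boto3 (the group name and start time are hypothetical; a recurring schedule could be used instead of a one-time StartTime):
**
import boto3
from datetime import datetime, timezone

autoscaling = boto3.client("autoscaling")

# Scale out to exactly 10 instances shortly before the month-end peak window
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="reporting-asg",   # hypothetical group
    ScheduledActionName="month-end-peak",
    StartTime=datetime(2024, 6, 30, 17, 0, tzinfo=timezone.utc),
    DesiredCapacity=10,
)
**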
Incorrect options:
Configure your Auto Scaling group by creating a scheduled action that kicks-off at the designated hour
on the last day of the month. Set the min count as well as the max count of instances to 10. This
causes the scale-out to happen before peak traffic kicks in at the designated hour - As mentioned
earlier in the explanation, only when we want to specify a range of instances, then we must use min and
max values. As the given use-case requires exactly 10 instances to be available during the peak hour, so
we must set the desired capacity to 10. Hence this option is incorrect.
Configure your Auto Scaling group by creating a target tracking policy and setting the instance count to
10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the
designated hour
Configure your Auto Scaling group by creating a simple tracking policy and setting the instance count
to 10 at the designated hour. This causes the scale-out to happen before peak traffic kicks in at the
designated hour
Target tracking policy or simple tracking policy cannot be used to effect a scaling action at a certain
designated hour. Both these options have been added as distractors.
Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/schedule_time.html
Domain
Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic, subscribe an AWS
Lambda function to this Amazon SNS topic to process the updates and then store these processed
updates in a SQL database running on Amazon EC2 instance
Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue which uses a fleet of
Amazon EC2 instances (with Auto Scaling) to process these updates in the Amazon SQS queue and
then store these processed updates in an Amazon RDS MySQL database
Push score updates to Amazon Kinesis Data Streams which uses an AWS Lambda function to process
these updates and then store these processed updates in Amazon DynamoDB
Push score updates to Amazon Kinesis Data Streams which uses a fleet of Amazon EC2 instances (with
Auto Scaling) to process the updates in Amazon Kinesis Data Streams and then store these processed
updates in Amazon DynamoDB
Overall explanation
Correct option:
Push score updates to Amazon Kinesis Data Streams which uses an AWS Lambda function to process
these updates and then store these processed updates in Amazon DynamoDB
To help ingest real-time data or streaming data at large scales, you can use Amazon Kinesis Data Streams
(KDS). KDS can continuously capture gigabytes of data per second from hundreds of thousands of
sources. The data collected is available in milliseconds, enabling real-time analytics. KDS provides
ordering of records, as well as the ability to read and/or replay records in the same order to multiple
Amazon Kinesis Applications.
AWS Lambda integrates natively with Kinesis Data Streams. The polling, checkpointing, and error
handling complexities are abstracted when you use this native integration. The processed data can then
be configured to be saved in Amazon DynamoDB.
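As an illustrative sketch of the Lambda consumer (the table and payload field names are hypothetical), each Kinesis record arrives base64-encoded and is written to DynamoDB:
**
import base64
import json

import boto3

table = boto3.resource("dynamodb").Table("GameScores")  # hypothetical table

def lambda_handler(event, context):
    # Lambda polls the Kinesis data stream and delivers batches of records
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(
            Item={
                "PlayerId": payload["player_id"],     # hypothetical fields
                "Score": int(payload["score"]),
            }
        )
**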
Incorrect options:
Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue which uses a fleet of
Amazon EC2 instances (with Auto Scaling) to process these updates in the Amazon SQS queue and
then store these processed updates in an Amazon RDS MySQL database
Push score updates to Amazon Kinesis Data Streams which uses a fleet of Amazon EC2 instances (with
Auto Scaling) to process the updates in Amazon Kinesis Data Streams and then store these processed
updates in Amazon DynamoDB
Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic, subscribe an AWS
Lambda function to this Amazon SNS topic to process the updates and then store these processed
updates in a SQL database running on Amazon EC2 instance
These three options use Amazon EC2 instances as part of the solution architecture. The use-case seeks to
minimize the management overhead required to maintain the solution. However, Amazon EC2 instances
involve several maintenance activities such as managing the guest operating system and software
deployed to the guest operating system, including updates and security patches, etc. Hence these
options are incorrect.
Reference:
https://aws.amazon.com/blogs/big-data/best-practices-for-consuming-amazon-kinesis-data-streams-
using-aws-lambda/
Domain
Which of the following would you attribute as the underlying reason for the unexpectedly high costs for
AWS Shield Advanced service?
Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts
AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs
Correct answer
Consolidated billing has not been enabled. All the AWS accounts should fall under a single
consolidated billing for the monthly fee to be charged only once
AWS Shield Advanced is being used for custom servers, that are not part of AWS Cloud, thereby
resulting in increased costs
Overall explanation
Correct option:
Consolidated billing has not been enabled. All the AWS accounts should fall under a single
consolidated billing for the monthly fee to be charged only once
If your organization has multiple AWS accounts, then you can subscribe multiple AWS Accounts to AWS
Shield Advanced by individually enabling it on each account using the AWS Management Console or API.
You will pay the monthly fee once as long as the AWS accounts are all under a single consolidated billing,
and you own all the AWS accounts and resources in those accounts.
Incorrect options:
AWS Shield Advanced is being used for custom servers that are not part of AWS Cloud, thereby
resulting in increased costs - AWS Shield Advanced does offer protection for resources outside of AWS,
so this by itself should not cause an unexpected spike in billing costs.
AWS Shield Advanced also covers AWS Shield Standard plan, thereby resulting in increased costs - AWS
Shield Standard is automatically enabled for all AWS customers at no additional cost. AWS Shield
Advanced is an optional paid service.
Savings Plans has not been enabled for the AWS Shield Advanced service across all the AWS accounts -
This option has been added as a distractor. Savings Plans is a flexible pricing model that offers low prices
on Amazon EC2 instances, AWS Lambda, and AWS Fargate usage, in exchange for a commitment to a
consistent amount of usage (measured in $/hour) for a 1 or 3 year term. Savings Plans is not applicable
for the AWS Shield Advanced service.
References:
https://aws.amazon.com/shield/faqs/
https://aws.amazon.com/savingsplans/faq/
Domain
Which of the following is correct regarding the pricing for these two services?
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type are charged based
on Amazon EC2 instances and Amazon EBS Elastic Volumes used
Amazon ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. Amazon
ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized
application requests
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type are just charged
based on Elastic Container Service used per hour
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type are charged based
on vCPU and memory resources that the containerized application requests
Overall explanation
Correct option:
Amazon ECS with EC2 launch type is charged based on EC2 instances and EBS volumes used. Amazon
ECS with Fargate launch type is charged based on vCPU and memory resources that the containerized
application requests
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service. ECS
allows you to easily run, scale, and secure Docker container applications on AWS.
With the Fargate launch type, you pay for the amount of vCPU and memory resources that your
containerized application requests. vCPU and memory resources are calculated from the time your
container images are pulled until the Amazon ECS Task terminates, rounded up to the nearest second.
With the EC2 launch type, there is no additional charge for the launch type itself. You pay for the AWS
resources (e.g. Amazon EC2 instances or Amazon EBS volumes) you create to store and run your application.
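As a hedged sketch (the family name, image, and sizes are hypothetical), the vCPU and memory that Fargate bills for are the values requested in the task definition:
**
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-api",                        # hypothetical family
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",      # 0.25 vCPU - Fargate pricing is based on this request
    memory="512",   # 512 MiB
    containerDefinitions=[
        {
            "name": "web",
            "image": "public.ecr.aws/nginx/nginx:latest",   # hypothetical image
            "essential": True,
        }
    ],
)
**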
Incorrect options:
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type are charged based
on vCPU and memory resources that the containerized application requests
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type are charged based
on Amazon EC2 instances and Amazon EBS Elastic Volumes used
As mentioned above - with the Fargate launch type, you pay for the amount of vCPU and memory
resources. With EC2 launch type, you pay for AWS resources (e.g. EC2 instances or EBS volumes). Hence
both these options are incorrect.
Both Amazon ECS with EC2 launch type and Amazon ECS with Fargate launch type are just charged
based on Elastic Container Service used per hour - There is no separate hourly charge for the Amazon
ECS service itself, so this option is incorrect.
References:
https://aws.amazon.com/ecs/pricing/
Domain
Which of the following Amazon EC2 instance topologies should this application be deployed on?
The Amazon EC2 instances should be deployed in an Auto Scaling group so that application meets high
availability requirements
The Amazon EC2 instances should be deployed in a partition placement group so that distributed
workloads can be handled effectively
The Amazon EC2 instances should be deployed in a cluster placement group so that the underlying
workload can benefit from low network latency and high network throughput
The Amazon EC2 instances should be deployed in a spread placement group so that there are no
correlated failures
Overall explanation
Correct option:
The Amazon EC2 instances should be deployed in a cluster placement group so that the underlying
workload can benefit from low network latency and high network throughput
The key thing to understand in this question is that HPC workloads need to achieve low-latency network
performance necessary for tightly-coupled node-to-node communication that is typical of HPC
applications. Cluster placement groups pack instances close together inside an Availability Zone. These
are recommended for applications that benefit from low network latency, high network throughput, or
both. Therefore this option is the correct answer.
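A minimal sketch of launching instances into a cluster placement group with boto3 (the AMI ID, instance type, and counts are hypothetical):
**
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c5n.18xlarge",       # hypothetical instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-cluster"},
)
**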
Incorrect options:
The Amazon EC2 instances should be deployed in a partition placement group so that distributed
workloads can be handled effectively - A partition placement group spreads your instances across
logical partitions such that groups of instances in one partition do not share the underlying hardware
with groups of instances in different partitions. This strategy is typically used by large distributed and
replicated workloads, such as Hadoop, Cassandra, and Kafka. A partition placement group can have a
maximum of seven partitions per Availability Zone. Since a partition placement group can have partitions
in multiple Availability Zones in the same Region, instances will not necessarily have low-latency network
performance. Hence the partition placement group is not the right fit for HPC applications.
The Amazon EC2 instances should be deployed in a spread placement group so that there are no
correlated failures - A spread placement group is a group of instances that are each placed on distinct
racks, with each rack having its own network and power source. The instances are placed across distinct
underlying hardware to reduce correlated failures. You can have a maximum of seven running instances
per Availability Zone per group. Since a spread placement group can span multiple Availability Zones in
the same Region, instances will not necessarily have low-latency network performance. Hence a spread
placement group is not the right fit for HPC applications.
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
The Amazon EC2 instances should be deployed in an Auto Scaling group so that application meets high
availability requirements - An Auto Scaling group contains a collection of Amazon EC2 instances that are
treated as a logical grouping for the purposes of automatic scaling. You do not use Auto Scaling groups
per se to meet HPC requirements.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
Domain
Correct answer
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 4 messages per operation to process
the messages at the peak rate
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 2 messages per operation to process
the messages at the peak rate
Use Amazon SQS standard queue to process the messages
Use Amazon SQS FIFO (First-In-First-Out) queue to process the messages
Overall explanation
Correct option:
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 4 messages per operation to process
the messages at the peak rate
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to
decouple and scale microservices, distributed systems, and serverless applications. SQS offers two types
of message queues - Standard queues vs FIFO queues.
For FIFO queues, the order in which messages are sent and received is strictly preserved (i.e. First-In-
First-Out). On the other hand, the standard SQS queues offer best-effort ordering. This means that
occasionally, messages might be delivered in an order different from which they were sent.
By default, FIFO queues support up to 300 messages per second (300 send, receive, or delete operations
per second). When you batch 10 messages per operation (maximum), FIFO queues can support up to
3,000 messages per second. Therefore you need to batch 4 messages per operation so that the FIFO
queue can support up to 1,200 messages per second, which comfortably covers the peak rate.
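As an illustrative sketch (the queue URL and message bodies are hypothetical), batching 4 messages per SendMessageBatch call looks like this:
**
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/orders.fifo"  # hypothetical

# 4 messages per batched operation raises effective throughput
# from 300 to 1,200 messages per second for this FIFO queue
sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[
        {
            "Id": str(i),
            "MessageBody": f"message-{i}",
            "MessageGroupId": "orders",
            "MessageDeduplicationId": f"dedup-{i}",
        }
        for i in range(4)
    ],
)
**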
Incorrect options:
Use Amazon SQS standard queue to process the messages - As messages need to be processed in order,
therefore standard queues are ruled out.
Use Amazon SQS FIFO (First-In-First-Out) queue to process the messages - By default, FIFO queues
support up to 300 messages per second and this is not sufficient to meet the message processing
throughput per the given use-case. Hence this option is incorrect.
Use Amazon SQS FIFO (First-In-First-Out) queue in batch mode of 2 messages per operation to process
the messages at the peak rate - As mentioned earlier in the explanation, you need to use FIFO queues in
batch mode and process 4 messages per operation, so that the FIFO queue can support up to 1200
messages per second. With 2 messages per operation, you can only support up to 600 messages per
second.
References:
https://aws.amazon.com/sqs/
https://aws.amazon.com/sqs/features/
Domain
Amazon API Gateway creates RESTful APIs that enable stateful client-server communication and
Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which
enables stateful, full-duplex communication between client and server
Amazon API Gateway creates RESTful APIs that enable stateful client-server communication and
Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which
enables stateless, full-duplex communication between client and server
Correct answer
Amazon API Gateway creates RESTful APIs that enable stateless client-server communication and
Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which
enables stateful, full-duplex communication between client and server
Amazon API Gateway creates RESTful APIs that enable stateless client-server communication and
Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which
enables stateless, full-duplex communication between client and server
Overall explanation
Correct option:
Amazon API Gateway creates RESTful APIs that enable stateless client-server communication and
Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which
enables stateful, full-duplex communication between client and server
Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish,
maintain, monitor, and secure APIs at any scale. APIs act as the front door for applications to access data,
business logic, or functionality from your backend services. Using API Gateway, you can create RESTful
APIs and WebSocket APIs that enable real-time two-way communication applications.
RESTful APIs are HTTP-based and implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE.
WebSocket APIs adhere to the WebSocket protocol, which enables stateful, full-duplex communication
between client and server, and they route incoming messages based on message content.
So Amazon API Gateway supports stateless RESTful APIs as well as stateful WebSocket APIs. Therefore
this option is correct.
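As a hedged sketch (the API name and route selection expression value are hypothetical), a WebSocket API is created with the WEBSOCKET protocol type and routes messages based on their content:
**
import boto3

apigw = boto3.client("apigatewayv2")

apigw.create_api(
    Name="chat-app",                                   # hypothetical name
    ProtocolType="WEBSOCKET",
    RouteSelectionExpression="$request.body.action",   # route on message content
)
**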
Incorrect options:
Amazon API Gateway creates RESTful APIs that enable stateful client-server communication and
Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which
enables stateful, full-duplex communication between client and server
Amazon API Gateway creates RESTful APIs that enable stateless client-server communication and
Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which
enables stateless, full-duplex communication between client and server
Amazon API Gateway creates RESTful APIs that enable stateful client-server communication and
Amazon API Gateway also creates WebSocket APIs that adhere to the WebSocket protocol, which
enables stateless, full-duplex communication between client and server
These three options contradict the earlier details provided in the explanation. To summarize, Amazon
API Gateway supports stateless RESTful APIs and stateful WebSocket APIs. Hence these options are
incorrect.
Reference:
https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html
Domain
Which of the following options represents the best solution for this use case?
Deploy the Oracle database layer on multiple Amazon EC2 instances spread across two Availability
Zones (AZs). This deployment configuration guarantees high availability and also allows the Database
Administrator (DBA) to access and customize the database environment and the underlying operating
system
Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that allows the Database
Administrator (DBA) to access and customize the database environment and the underlying operating
system
Leverage cross AZ read-replica configuration of Amazon RDS for Oracle that allows the Database
Administrator (DBA) to access and customize the database environment and the underlying operating
system
Leverage multi-AZ configuration of Amazon RDS for Oracle that allows the Database Administrator
(DBA) to access and customize the database environment and the underlying operating system
Overall explanation
Correct option:
Leverage multi-AZ configuration of Amazon RDS Custom for Oracle that allows the Database
Administrator (DBA) to access and customize the database environment and the underlying operating
system
Amazon RDS is a managed service that makes it easy to set up, operate, and scale a relational database
in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database
administration tasks. Amazon RDS can automatically back up your database and keep your database
software up to date with the latest version. However, RDS does not allow you to access the host OS of
the database.
For the given use-case, you need to use Amazon RDS Custom for Oracle as it allows you to access and
customize your database server host and operating system, for example by applying special patches and
changing the database software settings to support third-party applications that require privileged
access. Amazon RDS Custom for Oracle facilitates these functionalities with minimum infrastructure
maintenance effort. You need to set up the RDS Custom for Oracle in multi-AZ configuration for high
availability.
via - https://aws.amazon.com/blogs/aws/amazon-rds-custom-for-oracle-new-control-capabilities-in-
database-environment/
Incorrect options:
Leverage multi-AZ configuration of Amazon RDS for Oracle that allows the Database Administrator
(DBA) to access and customize the database environment and the underlying operating system
Leverage cross AZ read-replica configuration of Amazon RDS for Oracle that allows the Database
Administrator (DBA) to access and customize the database environment and the underlying operating
system
Amazon RDS for Oracle does not allow you to access and customize your database server host and
operating system. Therefore, both these options are incorrect.
Deploy the Oracle database layer on multiple Amazon EC2 instances spread across two Availability
Zones (AZs). This deployment configuration guarantees high availability and also allows the Database
Administrator (DBA) to access and customize the database environment and the underlying operating
system - The use case requires that the best solution should involve minimum infrastructure
maintenance effort. When you use Amazon EC2 instances to host the databases, you need to manage
the server health, server maintenance, server patching, and database maintenance tasks yourself. In
addition, you will also need to manage the multi-AZ configuration by deploying Amazon EC2 instances
across two Availability Zones (AZs), perhaps by using an Auto Scaling group. These steps entail significant
maintenance effort. Hence this option is incorrect.
References:
https://aws.amazon.com/blogs/aws/amazon-rds-custom-for-oracle-new-control-capabilities-in-
database-environment/
https://aws.amazon.com/rds/faqs/
Domain
Correct answer
Use server-side encryption with AWS Key Management Service keys (SSE-KMS) to encrypt the user
data on Amazon S3
Use server-side encryption with customer-provided keys (SSE-C) to encrypt the user data on Amazon
S3
Use client-side encryption with client provided keys and then upload the encrypted user data to
Amazon S3
Use server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the user data on
Amazon S3
Overall explanation
Correct option:
Use server-side encryption with AWS Key Management Service keys (SSE-KMS) to encrypt the user
data on Amazon S3
AWS Key Management Service (AWS KMS) is a service that combines secure, highly available hardware
and software to provide a key management system scaled for the cloud. When you use server-side
encryption with AWS KMS (SSE-KMS), you can specify a customer-managed CMK that you have already
created. SSE-KMS provides you with an audit trail that shows when your CMK was used and by whom.
Therefore SSE-KMS is the correct solution for this use-case.
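A minimal sketch of uploading an object with SSE-KMS via boto3 (the bucket, key, body, and KMS key ID are hypothetical); each use of the customer-managed key is recorded in AWS CloudTrail, which provides the audit trail:
**
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="user-data-bucket",                          # hypothetical bucket
    Key="profiles/user-123.json",                       # hypothetical key
    Body=b'{"name": "example"}',
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="1234abcd-12ab-34cd-56ef-1234567890ab", # hypothetical key ID
)
**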
Incorrect options:
Use server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the user data on
Amazon S3 - When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3), each object
is encrypted with a unique key. However this option does not provide the ability to audit trail the usage
of the encryption keys.
Use server-side encryption with customer-provided keys (SSE-C) to encrypt the user data on Amazon
S3 - With Server-Side Encryption with Customer-Provided Keys (SSE-C), you manage the encryption keys
and Amazon S3 manages the encryption, as it writes to disks, and decryption when you access your
objects. However this option does not provide the ability to audit trail the usage of the encryption keys.
Use client-side encryption with client provided keys and then upload the encrypted user data to
Amazon S3 - Using client-side encryption is ruled out as the startup does not want to provide the
encryption keys.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
Domain
Correct answer
Versioning
Requester Pays
Server Access Logging
Static Website Hosting
Overall explanation
Correct option:
Versioning
Once you version-enable a bucket, it can never return to an unversioned state. Versioning can only be
suspended once it has been enabled.
Versioning Overview:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
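As a small illustrative sketch (the bucket name is hypothetical), versioning can be enabled and later only suspended, never disabled:
**
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-data-bucket",                       # hypothetical bucket
    VersioningConfiguration={"Status": "Enabled"},
)

# Later, versioning can only be suspended; existing object versions are retained
s3.put_bucket_versioning(
    Bucket="my-data-bucket",
    VersioningConfiguration={"Status": "Suspended"},
)
**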
Incorrect options:
Requester Pays
Server Access Logging, Static Website Hosting and Requester Pays features can be disabled even after
they have been enabled.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
Domain
The engineering team at a data analytics company has observed that its flagship application functions at
its peak performance when the underlying Amazon Elastic Compute Cloud (Amazon EC2) instances have
a CPU utilization of about 50%. The application is built on a fleet of Amazon EC2 instances managed
under an Auto Scaling group. The workflow requests are handled by an internal Application Load
Balancer that routes the requests to the instances.
As a solutions architect, what would you recommend so that the application runs near its peak
performance state?
Configure the Auto Scaling group to use simple scaling policy and set the CPU utilization as the target
metric with a target value of 50%
Configure the Auto Scaling group to use step scaling policy and set the CPU utilization as the target
metric with a target value of 50%
Correct answer
Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target
metric with a target value of 50%
Configure the Auto Scaling group to use an Amazon CloudWatch alarm triggered on a CPU utilization
threshold of 50%
Overall explanation
Correct option:
Configure the Auto Scaling group to use target tracking policy and set the CPU utilization as the target
metric with a target value of 50%
An Auto Scaling group contains a collection of Amazon EC2 instances that are treated as a logical
grouping for the purposes of automatic scaling and management. An Auto Scaling group also enables
you to use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies.
With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto
Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the
scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity
as required to keep the metric at, or close to, the specified target value.
Configure a target tracking scaling policy to keep the average aggregate CPU utilization of your Auto
Scaling group at 50 percent. This meets the requirements specified in the given use-case and therefore,
this is the correct option.
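A minimal sketch of such a target tracking policy with boto3 (the group and policy names are hypothetical):
**
import boto3

autoscaling = boto3.client("autoscaling")

# Keep the group's average CPU utilization at, or close to, 50%
autoscaling.put_scaling_policy(
    AutoScalingGroupName="analytics-asg",      # hypothetical group
    PolicyName="cpu-50-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
**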
Incorrect options:
Configure the Auto Scaling group to use step scaling policy and set the CPU utilization as the target
metric with a target value of 50%
Configure the Auto Scaling group to use simple scaling policy and set the CPU utilization as the target
metric with a target value of 50%
With step scaling and simple scaling, you choose scaling metrics and threshold values for the Amazon
CloudWatch alarms that trigger the scaling process. Neither step scaling nor simple scaling can be
configured to use a target metric for CPU utilization, hence both these options are incorrect.
Configure the Auto Scaling group to use an Amazon CloudWatch alarm triggered on a CPU utilization
threshold of 50% - An Auto Scaling group cannot directly use a CloudWatch alarm as the source for a
scale-in or scale-out event, hence this option is incorrect.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-simple-step.html
Domain
Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares
in Amazon FSx for Windows File Server. The applications deployed on AWS can access this data directly
from Amazon FSx in AWS
Use AWS Storage Gateway’s File Gateway to provide low-latency, on-premises access to fully
managed file shares in Amazon FSx for Windows File Server. The applications deployed on AWS can
access this data directly from Amazon FSx in AWS
Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares
in Amazon EFS. The applications deployed on AWS can access this data directly from Amazon EFS
Use AWS Storage Gateway’s File Gateway to provide low-latency, on-premises access to fully managed
file shares in Amazon S3. The applications deployed on AWS can access this data directly from Amazon
S3
Overall explanation
Correct option:
Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares
in Amazon FSx for Windows File Server. The applications deployed on AWS can access this data directly
from Amazon FSx in AWS
For user or team file shares, and file-based application migrations, Amazon FSx File Gateway provides
low-latency, on-premises access to fully managed file shares in Amazon FSx for Windows File Server. For
applications deployed on AWS, you may access your file shares directly from Amazon FSx in AWS.
For your native Windows workloads and users, or your SMB clients, Amazon FSx for Windows File Server
provides all of the benefits of a native Windows SMB environment that is fully managed and secured and
scaled like any other AWS service. You get detailed reporting, replication, backup, failover, and support
for native Windows tools like DFS and Active Directory.
Incorrect options:
Use AWS Storage Gateway’s File Gateway to provide low-latency, on-premises access to fully
managed file shares in Amazon FSx for Windows File Server. The applications deployed on AWS can
access this data directly from Amazon FSx in AWS - When you need to access Amazon S3 using a file
system protocol, you should use File Gateway. You get a local cache in the gateway that provides high
throughput and low latency over SMB. However, AWS Storage Gateway’s File Gateway does not support
file shares in Amazon FSx for Windows File Server, so this option is incorrect.
Use AWS Storage Gateway’s File Gateway to provide low-latency, on-premises access to fully managed
file shares in Amazon S3. The applications deployed on AWS can access this data directly from Amazon
S3 - The given use case requires low-latency access to data that needs to be stored on a file system
service after migration. Since Amazon S3 is an object storage service, this option is incorrect.
Use Amazon FSx File Gateway to provide low-latency, on-premises access to fully managed file shares
in Amazon EFS. The applications deployed on AWS can access this data directly from Amazon EFS -
Amazon FSx File Gateway provides access to fully managed file shares in Amazon FSx for Windows File
Server and it does not support EFS. You should also note that EFS uses the Network File System version 4
(NFS v4) protocol and it does not support SMB protocol. Therefore this option is incorrect for the given
use case.
References:
https://aws.amazon.com/storagegateway/file/fsx/
https://aws.amazon.com/storagegateway/faqs/
https://aws.amazon.com/blogs/storage/aws-reinvent-recap-choosing-storage-for-on-premises-file-
based-workloads/
Domain
As a solutions architect, which of the following would you suggest as the BEST possible solution to this
issue?
The engineering team needs to provision more servers running the AWS Lambda service
The engineering team needs to provision more servers running the Amazon SNS service
Correct answer
Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for AWS
Lambda, so the team needs to contact AWS support to raise the account limit
Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to raise the account
limit
Overall explanation
Correct option:
Amazon SNS message deliveries to AWS Lambda have crossed the account concurrency quota for AWS
Lambda, so the team needs to contact AWS support to raise the account limit
Amazon Simple Notification Service (Amazon SNS) is a highly available, durable, secure, fully managed
pub/sub messaging service that enables you to decouple microservices, distributed systems, and
serverless applications.
With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the
compute time that you consume—there’s no charge when your code isn’t running.
AWS Lambda currently supports 1000 concurrent executions per AWS account per region. If your
Amazon SNS message deliveries to AWS Lambda contribute to crossing these concurrency quotas, your
Amazon SNS message deliveries will be throttled. You need to contact AWS support to raise the account
limit. Therefore this option is correct.
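As a quick sanity check, the account-level concurrency quota can be inspected with boto3 before raising a limit-increase request; this is a minimal sketch assuming the AWS credentials are already configured:

```python
import boto3

lambda_client = boto3.client("lambda")

# AccountLimit.ConcurrentExecutions is the per-region concurrency quota;
# UnreservedConcurrentExecutions shows what is still available to functions
# that have no reserved concurrency.
settings = lambda_client.get_account_settings()
print(settings["AccountLimit"]["ConcurrentExecutions"])
print(settings["AccountLimit"]["UnreservedConcurrentExecutions"])
```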
Incorrect options:
Amazon SNS has hit a scalability limit, so the team needs to contact AWS support to raise the account
limit - Amazon SNS leverages the proven AWS cloud to dynamically scale with your application. You don't
need to contact AWS support, as SNS is a fully managed service, taking care of the heavy lifting related to
capacity planning, provisioning, monitoring, and patching. Therefore, this option is incorrect.
The engineering team needs to provision more servers running the Amazon SNS service
The engineering team needs to provision more servers running the AWS Lambda service
As both AWS Lambda and Amazon SNS are serverless and fully managed services, the engineering team
cannot provision more servers. Both of these options are incorrect.
References:
https://aws.amazon.com/sns/
https://aws.amazon.com/sns/faqs/
Domain
Which of the following correctly summarizes these capabilities for the given database?
Multi-AZ follows asynchronous replication and spans at least two Availability Zones (AZs) within a
single region. Read replicas follow synchronous replication and can be within an Availability Zone (AZ),
Cross-AZ, or Cross-Region
Correct answer
Multi-AZ follows synchronous replication and spans at least two Availability Zones (AZs) within a
single region. Read replicas follow asynchronous replication and can be within an Availability Zone
(AZ), Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans one Availability Zone (AZ) within a single region.
Read replicas follow synchronous replication and can be within an Availability Zone (AZ), Cross-AZ, or
Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones (AZs) within a
single region. Read replicas follow asynchronous replication and can be within an Availability Zone
(AZ), Cross-AZ, or Cross-Region
Overall explanation
Correct option:
Multi-AZ follows synchronous replication and spans at least two Availability Zones (AZs) within a
single region. Read replicas follow asynchronous replication and can be within an Availability Zone
(AZ), Cross-AZ, or Cross-Region
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB)
instances, making them a natural fit for production database workloads. When you provision a Multi-AZ
DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the
data to a standby instance in a different Availability Zone (AZ). Multi-AZ spans at least two Availability
Zones (AZs) within a single region.
Amazon RDS Read Replicas provide enhanced performance and durability for RDS database (DB)
instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB
instance for read-heavy database workloads. For the MySQL, MariaDB, PostgreSQL, Oracle, and SQL
Server database engines, Amazon RDS creates a second DB instance using a snapshot of the source DB
instance. It then uses the engines' native asynchronous replication to update the read replica whenever
there is a change to the source DB instance.
Amazon RDS replicates all databases in the source DB instance. Read replicas can be within an
Availability Zone (AZ), Cross-AZ, or Cross-Region.
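For illustration, a read replica can be created from an existing source instance with boto3; the instance identifiers below are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica; RDS seeds it from a snapshot of the source and then
# keeps it up to date using the engine's native asynchronous replication.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-read-replica",  # hypothetical replica name
    SourceDBInstanceIdentifier="mydb",         # hypothetical source instance
)
```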
Exam Alert:
Please review this comparison vis-a-vis Multi-AZ vs Read Replica for Amazon RDS:
via - https://aws.amazon.com/rds/features/multi-az/
Incorrect Options:
Multi-AZ follows asynchronous replication and spans one Availability Zone (AZ) within a single region.
Read replicas follow synchronous replication and can be within an Availability Zone (AZ), Cross-AZ, or
Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones (AZs) within a
single region. Read replicas follow synchronous replication and can be within an Availability Zone (AZ),
Cross-AZ, or Cross-Region
Multi-AZ follows asynchronous replication and spans at least two Availability Zones (AZs) within a
single region. Read replicas follow asynchronous replication and can be within an Availability Zone
(AZ), Cross-AZ, or Cross-Region
These three options contradict the earlier details provided in the explanation. To summarize, Multi-AZ
deployment follows synchronous replication for Amazon RDS. Hence these options are incorrect.
References:
https://aws.amazon.com/rds/features/multi-az/
https://aws.amazon.com/rds/features/read-replicas/
Domain
Can you spot the INVALID lifecycle transitions from the options below? (Select two)
Overall explanation
Correct options:
As the question wants to know about the INVALID lifecycle transitions, the following options are the
correct answers -
Following are the unsupported lifecycle transitions for S3 storage classes:
- Any storage class to the Amazon S3 Standard storage class
- Any storage class to the Reduced Redundancy storage class
- The Amazon S3 Intelligent-Tiering storage class to the Amazon S3 Standard-IA storage class
- The Amazon S3 One Zone-IA storage class to the Amazon S3 Standard-IA or Amazon S3 Intelligent-Tiering storage classes
Incorrect options:
Here are the supported lifecycle transitions for S3 storage classes:
- The S3 Standard storage class to any other storage class
- Any storage class to the S3 Glacier or S3 Glacier Deep Archive storage classes
- The S3 Standard-IA storage class to the S3 Intelligent-Tiering or S3 One Zone-IA storage classes
- The S3 Intelligent-Tiering storage class to the S3 One Zone-IA storage class
- The S3 Glacier storage class to the S3 Glacier Deep Archive storage class
Amazon S3 supports a waterfall model for transitioning between storage classes, as shown in the
diagram below:
via - https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-
considerations.html
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-transition-general-
considerations.html
Domain
As a solutions architect, which of the following steps would you recommend to solve this issue?
Contact AWS support to retrieve the AWS KMS key from their backup
The AWS KMS key can be recovered by the AWS root account user
The company should issue a notification on its web application informing the users about the loss of
their data
Correct answer
As the AWS KMS key was deleted a day ago, it must be in the 'pending deletion' status and hence you
can just cancel the KMS key deletion and recover the key
Overall explanation
Correct option:
As the AWS KMS key was deleted a day ago, it must be in the 'pending deletion' status and hence you
can just cancel the KMS key deletion and recover the key
AWS Key Management Service (KMS) makes it easy for you to create and manage cryptographic keys and
control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and
resilient service that uses hardware security modules that have been validated under FIPS 140-2.
Deleting an AWS KMS key in AWS Key Management Service (AWS KMS) is destructive and potentially
dangerous. Therefore, AWS KMS enforces a waiting period. To delete a KMS key in AWS KMS you
schedule key deletion. You can set the waiting period from a minimum of 7 days up to a maximum of 30
days. The default waiting period is 30 days. During the waiting period, the KMS key status and key state is
Pending deletion. To recover the KMS key, you can cancel key deletion before the waiting period ends.
After the waiting period ends you cannot cancel key deletion, and AWS KMS deletes the KMS key.
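As a minimal sketch, cancelling the scheduled deletion and re-enabling the key could look like this with boto3 (the key ID is a placeholder):

```python
import boto3

kms = boto3.client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder key ID

# Cancel the scheduled deletion; the key moves back to the Disabled state
kms.cancel_key_deletion(KeyId=key_id)

# Re-enable the key so it can be used for encrypt/decrypt operations again
kms.enable_key(KeyId=key_id)
```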
Incorrect options:
Contact AWS support to retrieve the AWS KMS key from their backup
The AWS KMS key can be recovered by the AWS root account user
The AWS root account user cannot recover the AWS KMS key and the AWS support does not have access
to KMS keys via any backups. Both these options just serve as distractors.
The company should issue a notification on its web application informing the users about the loss of
their data - This option is not required as the data can be recovered via the cancel key deletion feature.
Reference:
https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html
Domain
Which is the MOST effective way to address this issue so that such incidents do not recur?
Use permissions boundary to control the maximum permissions employees can grant to the IAM
principals
The CTO should review the permissions for each new developer's IAM user so that such incidents
don't recur
Remove full database access for all IAM users in the organization
Only root user should have full database access in the organization
Overall explanation
Correct option:
Use permissions boundary to control the maximum permissions employees can grant to the IAM
principals
A permissions boundary can be used to control the maximum permissions employees can grant to the
IAM principals (that is, users and roles) that they create and manage. As the IAM administrator, you can
define one or more permissions boundaries using managed policies and allow your employee to create a
principal with this boundary. The employee can then attach a permissions policy to this principal.
However, the effective permissions of the principal are the intersection of the permissions boundary and
permissions policy. As a result, the new principal cannot exceed the boundary that you defined.
Therefore, using the permissions boundary offers the right solution for this use-case.
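For illustration, attaching a managed policy as a permissions boundary to a developer's IAM user could look like this with boto3 (the user name and policy ARN are hypothetical):

```python
import boto3

iam = boto3.client("iam")

# The user's effective permissions become the intersection of this boundary
# and whatever identity policies are attached to the user.
iam.put_user_permissions_boundary(
    UserName="new-developer",                                           # hypothetical user
    PermissionsBoundary="arn:aws:iam::123456789012:policy/DevBoundary",  # hypothetical policy ARN
)
```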
Incorrect options:
Remove full database access for all IAM users in the organization - It is not practical to remove full
access for all IAM users in the organization because a select set of users need this access for database
administration. So this option is not correct.
The CTO should review the permissions for each new developer's IAM user so that such incidents
don't recur - Likewise the CTO is not expected to review the permissions for each new developer's IAM
user, as this is best done via an automated procedure. This option has been added as a distractor.
Only root user should have full database access in the organization - As a best practice, the root user
should not access the AWS account to carry out any administrative procedures. So this option is not
correct.
Reference:
https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-
permissions-boundaries/
Domain
In the event of a failover, Amazon Aurora will promote which of the following read replicas?
Correct answer
Overall explanation
Correct option:
Amazon Aurora features a distributed, fault-tolerant, self-healing storage system that auto-scales up to
128TB per database instance. It delivers high performance and availability with up to 15 low-latency read
replicas, point-in-time recovery, continuous backup to Amazon S3, and replication across three
Availability Zones (AZs).
For Amazon Aurora, each Read Replica is associated with a priority tier (0-15). In the event of a failover,
Amazon Aurora will promote the Read Replica that has the highest priority (the lowest numbered tier). If
two or more Aurora Replicas share the same priority, then Amazon RDS promotes the replica that is
largest in size. If two or more Aurora Replicas share the same priority and size, then Amazon Aurora
promotes an arbitrary replica in the same promotion tier.
Therefore, for this problem statement, the Tier-1 (32 terabytes) replica will be promoted.
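For illustration, the promotion tier of an Aurora replica can be set with boto3; the instance identifier is hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Promotion tier ranges from 0 to 15; the lowest number is promoted first
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-replica-1",  # hypothetical replica identifier
    PromotionTier=1,
    ApplyImmediately=True,
)
```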
Incorrect options:
Given the failover rules discussed earlier in the explanation, these three options are incorrect.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.ht
ml
Domain
Use an Amazon Aurora Global Database for the games table and use Amazon DynamoDB tables for
the users and games_played tables
Correct answer
Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for
the users and games_played tables
Use an Amazon DynamoDB global table for the games table and use Amazon Aurora for
the users and games_played tables
Use an Amazon DynamoDB global table for the games table and use Amazon DynamoDB tables for
the users and games_played tables
Overall explanation
Correct option:
Use an Amazon Aurora Global Database for the games table and use Amazon Aurora for
the users and games_played tables
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that
combines the performance and availability of traditional enterprise databases with the simplicity and
cost-effectiveness of open source databases. Amazon Aurora features a distributed, fault-tolerant, self-
healing storage system that auto-scales up to 128TB per database instance. Aurora is not an in-memory
database.
Amazon Aurora Global Database is designed for globally distributed applications, allowing a single
Amazon Aurora database to span multiple AWS regions. It replicates your data with no impact on
database performance, enables fast local reads with low latency in each region, and provides disaster
recovery from region-wide outages. Amazon Aurora Global Database is the correct choice for the given
use-case.
For the given use-case, we, therefore, need to have two Aurora clusters, one for the global table (games
table) and the other one for the local tables (users and games_played tables).
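As a rough sketch, an existing Aurora cluster holding the games table could be promoted into a Global Database with boto3 (the identifiers and source ARN are hypothetical):

```python
import boto3

rds = boto3.client("rds")

# Wrap the existing regional cluster in a global cluster; secondary regions can
# then be added to it for low-latency local reads.
rds.create_global_cluster(
    GlobalClusterIdentifier="games-global",  # hypothetical global cluster name
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:games-cluster",
)
```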
Incorrect options:
Use an Amazon Aurora Global Database for the games table and use Amazon DynamoDB tables for
the users and games_played tables
Use an Amazon DynamoDB global table for the games table and use Amazon Aurora for
the users and games_played tables
Use an Amazon DynamoDB global table for the games table and use Amazon DynamoDB tables for
the users and games_played tables
Here, we want minimal application refactoring. Amazon DynamoDB and Amazon Aurora have
completely different APIs, since Amazon Aurora is a SQL database and Amazon DynamoDB is NoSQL. So all
three options are incorrect, as they have Amazon DynamoDB as one of the components.
Reference:
https://aws.amazon.com/rds/aurora/faqs/
Domain
Which of the following options is the MOST cost-optimal and resource-efficient solution to build this
fleet of Amazon EC2 instances?
Use Amazon Elastic Block Store (Amazon EBS) based EC2 instances
Correct answer
Overall explanation
Correct option:
An instance store provides temporary block-level storage for your instance. This storage is located on
disks that are physically attached to the host instance. Instance store is ideal for the temporary storage
of information that changes frequently such as buffers, caches, scratch data, and other temporary
content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web
servers. Instance store volumes are included as part of the instance's usage cost.
As Instance Store based volumes provide high random I/O performance at low cost (as the storage is part
of the instance's usage cost) and the resilient architecture can adjust for the loss of any instance,
therefore you should use Instance Store based Amazon EC2 instances for this use-case.
Incorrect options:
Use Amazon Elastic Block Store (Amazon EBS) based EC2 instances - Amazon Elastic
Block Store (Amazon EBS) based volumes would need to use provisioned IOPS (io1) as the storage type
and that would incur additional costs. As we are looking for the most cost-optimal solution, this option is
ruled out.
Use Amazon EC2 instances with Amazon EFS mount points - Using Amazon Elastic File System (Amazon
EFS) implies that extra resources would have to be provisioned (compared to using instance store where
the storage is located on disks that are physically attached to the host instance itself). As we are looking
for the most resource-efficient solution, this option is also ruled out.
Use Amazon EC2 instances with access to Amazon S3 based storage - Using Amazon EC2 instances with
access to Amazon S3 based storage does not deliver high random I/O performance, this option is just
added as a distractor.
Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
Domain
As a solutions architect, which are the MOST time/resource efficient steps that you would recommend
so that the maintenance work can be completed at the earliest? (Select two)
Put the instance into the Standby state and then update the instance by applying the maintenance
patch. Once the instance is ready, you can exit the Standby state and then return the instance to
service
Suspend the ScheduledActions process type for the Auto Scaling group and apply the maintenance
patch to the instance. Once the instance is ready, you can manually set the instance's health
status back to healthy and activate the ScheduledActions process type again
Delete the Auto Scaling group and apply the maintenance fix to the given instance. Create a new Auto
Scaling group and add all the instances again using the manual scaling policy
Correct selection
Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance
patch to the instance. Once the instance is ready, you can manually set the instance's health status
back to healthy and activate the ReplaceUnhealthy process type again
Take a snapshot of the instance, create a new Amazon Machine Image (AMI) and then launch a new
instance using this AMI. Apply the maintenance patch to this new instance and then add it back to the
Auto Scaling Group by using the manual scaling policy. Terminate the earlier instance that had the
maintenance issue
Overall explanation
Correct options:
Put the instance into the Standby state and then update the instance by applying the maintenance
patch. Once the instance is ready, you can exit the Standby state and then return the instance to
service - You can put an instance that is in the InService state into the Standby state, update some
software or troubleshoot the instance, and then return the instance to service. Instances that are on
standby are still part of the Auto Scaling group, but they do not actively handle application traffic.
Suspend the ReplaceUnhealthy process type for the Auto Scaling group and apply the maintenance
patch to the instance. Once the instance is ready, you can manually set the instance's health status
back to healthy and activate the ReplaceUnhealthy process type again - The ReplaceUnhealthy process
terminates instances that are marked as unhealthy and then creates new instances to replace them.
When this process is suspended, Amazon EC2 Auto Scaling stops replacing instances that are marked as
unhealthy, although instances that fail EC2 or Elastic Load Balancing health checks are still marked as
unhealthy. As soon as you resume the ReplaceUnhealthy process, Amazon EC2 Auto Scaling replaces
instances that were marked unhealthy while this process was suspended.
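For illustration, both approaches could be scripted with boto3; the group name and instance ID below are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")
asg = "my-asg"                    # hypothetical Auto Scaling group name
instance = "i-0123456789abcdef0"  # hypothetical instance ID

# Option 1: move the instance to Standby, patch it, then return it to service
autoscaling.enter_standby(
    InstanceIds=[instance],
    AutoScalingGroupName=asg,
    ShouldDecrementDesiredCapacity=True,
)
# ... apply the maintenance patch ...
autoscaling.exit_standby(InstanceIds=[instance], AutoScalingGroupName=asg)

# Option 2: suspend ReplaceUnhealthy, patch, mark the instance healthy, resume
autoscaling.suspend_processes(AutoScalingGroupName=asg, ScalingProcesses=["ReplaceUnhealthy"])
# ... apply the maintenance patch ...
autoscaling.set_instance_health(InstanceId=instance, HealthStatus="Healthy")
autoscaling.resume_processes(AutoScalingGroupName=asg, ScalingProcesses=["ReplaceUnhealthy"])
```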
Incorrect options:
Take a snapshot of the instance, create a new Amazon Machine Image (AMI) and then launch a new
instance using this AMI. Apply the maintenance patch to this new instance and then add it back to the
Auto Scaling Group by using the manual scaling policy. Terminate the earlier instance that had the
maintenance issue - Taking the snapshot of the existing instance to create a new AMI and then creating
a new instance in order to apply the maintenance patch is not time/resource optimal, hence this option
is ruled out.
Delete the Auto Scaling group and apply the maintenance fix to the given instance. Create a new Auto
Scaling group and add all the instances again using the manual scaling policy - It's not recommended to
delete the Auto Scaling group just to apply a maintenance patch on a specific instance.
Suspend the ScheduledActions process type for the Auto Scaling group and apply the maintenance
patch to the instance. Once the instance is ready, you can manually set the instance's health
status back to healthy and activate the ScheduledActions process type again - Amazon EC2 Auto Scaling
does not execute scaling actions that are scheduled to run during the suspension period. This option is
not relevant to the given use-case.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-enter-exit-standby.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/health-checks-overview.html
Domain
As a solutions architect, what are your recommendations to address these guidelines? (Select two)
Change the configuration on Amazon S3 console so that the user needs to provide additional
confirmation while deleting any Amazon S3 object
Create an event trigger on deleting any Amazon S3 object. The event invokes an Amazon Simple
Notification Service (Amazon SNS) notification via email to the IT manager
Overall explanation
Correct options:
Versioning is a means of keeping multiple variants of an object in the same bucket. You can use
versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3
bucket. Versioning-enabled buckets enable you to recover objects from accidental deletion or overwrite.
For example:
If you overwrite an object, it results in a new object version in the bucket. You can always restore the
previous version. If you delete an object, instead of removing it permanently, Amazon S3 inserts a delete
marker, which becomes the current object version. You can always restore the previous version. Hence,
this is the correct option.
Versioning Overview:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
To provide additional protection, multi-factor authentication (MFA) delete can be enabled. MFA delete
requires secondary authentication to take place before objects can be permanently deleted from an
Amazon S3 bucket. Hence, this is the correct option.
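As a minimal sketch, enabling versioning together with MFA delete could look like this with boto3; note that MFA delete can only be enabled by the bucket owner's root credentials via the API/CLI, and the bucket name, MFA device serial and token below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-business-bucket",  # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    # Concatenation of the MFA device serial number, a space, and the current code
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
)
```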
Incorrect options:
Create an event trigger on deleting any Amazon S3 object. The event invokes an Amazon Simple
Notification Service (Amazon SNS) notification via email to the IT manager - Sending an event trigger
after object deletion does not meet the objective of preventing object deletion by mistake because the
object has already been deleted. So, this option is incorrect.
Establish a process to get managerial approval for deleting Amazon S3 objects - This option for getting
managerial approval is just a distractor.
Change the configuration on Amazon S3 console so that the user needs to provide additional
confirmation while deleting any Amazon S3 object - There is no provision to set up Amazon S3
configuration to ask for additional confirmation before deleting an object. This option is incorrect.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
Domain
Use Geo Restriction feature of Amazon CloudFront in an Amazon Virtual Private Cloud (Amazon VPC)
Correct answer
Configure AWS Web Application Firewall (AWS WAF) on the Application Load Balancer in an Amazon
Virtual Private Cloud (Amazon VPC)
Overall explanation
Correct option:
AWS Web Application Firewall (AWS WAF) is a web application firewall service that lets you monitor web
requests and protect your web applications from malicious requests. Use AWS WAF to block or allow
requests based on conditions that you specify, such as the IP addresses. You can also use AWS WAF
preconfigured protections to block common attacks like SQL injection or cross-site scripting.
Configure AWS Web Application Firewall (AWS WAF) on the Application Load Balancer in an Amazon
Virtual Private Cloud (Amazon VPC)
You can use AWS WAF with your Application Load Balancer to allow or block requests based on the rules
in a web access control list (web ACL). Geographic (Geo) Match Conditions in AWS WAF allows you to use
AWS WAF to restrict application access based on the geographic location of your viewers. With geo
match conditions you can choose the countries from which AWS WAF should allow access.
Geo match conditions are important for many customers. For example, legal and licensing requirements
restrict some customers from delivering their applications outside certain countries. These customers
can configure a whitelist that allows only viewers in those countries. Other customers need to prevent
the downloading of their encrypted software by users in certain countries. These customers can
configure a blacklist so that end-users from those countries are blocked from downloading their
software.
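For illustration, using the current AWS WAFv2 API, a whitelist-style web ACL for an Application Load Balancer could be sketched as below (names, countries, and account details are hypothetical); the ACL would then be attached to the load balancer with wafv2.associate_web_acl:

```python
import boto3

wafv2 = boto3.client("wafv2")

# Block everything by default and explicitly allow only the listed countries
wafv2.create_web_acl(
    Name="geo-allowlist-acl",
    Scope="REGIONAL",  # REGIONAL scope is used for Application Load Balancers
    DefaultAction={"Block": {}},
    Rules=[
        {
            "Name": "allow-selected-countries",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US", "CA"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allow-selected-countries",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "geo-allowlist-acl",
    },
)
```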
Incorrect options:
Use Geo Restriction feature of Amazon CloudFront in an Amazon Virtual Private Cloud (Amazon VPC) -
Geo Restriction feature of Amazon CloudFront helps in restricting traffic based on the user's geographic
location. But, CloudFront works from edge locations and doesn't belong to a VPC. Hence, this option
itself is incorrect and given only as a distractor.
Security Groups cannot restrict access based on the user's geographic location.
References:
https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/
https://aws.amazon.com/blogs/aws/aws-web-application-firewall-waf-for-application-load-balancers/
https://aws.amazon.com/about-aws/whats-new/2016/12/AWS-WAF-now-available-on-Application-Load-
Balancer/
Domain
Use on-demand Amazon EC2 instances for the production application and spot instances for the dev
application
Use Amazon EC2 reserved instance (RI) for the production application and spot instances for the dev
application
Correct answer
Use Amazon EC2 reserved instance (RI) for the production application and on-demand instances for
the dev application
Use Amazon EC2 reserved instance (RI) for the production application and spot block instances for the
dev application
Overall explanation
Correct option:
Use Amazon EC2 reserved instance (RI) for the production application and on-demand instances for
the dev application
There are multiple pricing options for EC2 instances, such as On-Demand, Savings Plans, Reserved
Instances, and Spot Instances.
Amazon EC2 Reserved Instances (RI) provide a significant discount (up to 72%) compared to On-Demand
pricing and provide a capacity reservation when used in a specific Availability Zone. You also have the
flexibility to change families, OS types, and tenancies while benefitting from RI pricing when you use
Convertible RIs.
via - https://aws.amazon.com/ec2/pricing/
For the given use case, you can use Amazon EC2 Reserved Instances for the production application as it is
run 24*7. This way, you can get up to a 72% discount if you opt for a 3-year term. You can use on-demand
instances for the dev application since it is only used for up to 8 hours per day. On-demand offers the
flexibility to only pay for the Amazon EC2 instance when it is being used (0 to 8 hours for the given use
case).
Incorrect options:
Use Amazon EC2 reserved instance (RI) for the production application and spot block instances for the
dev application - Spot blocks can only be used for a span of up to 6 hours, so this option does not meet
the requirements of the given use case where the dev application can be up and running up to 8 hours.
You should also note that AWS has stopped offering Spot blocks to new customers.
Use Amazon EC2 reserved instance (RI) for the production application and spot instances for the dev
application
Use on-demand Amazon EC2 instances for the production application and spot instances for the dev
application
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. Spot
Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot
Instances for various stateless, fault-tolerant, or flexible applications.
via - https://aws.amazon.com/ec2/spot/
Spot instances can be taken back by AWS with two minutes of notice, so spot instances cannot be
reliably used for running the dev application (which can be up and running for up to 8 hours). So both
these options are incorrect.
References:
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/blogs/aws/new-ec2-spot-blocks-for-defined-duration-workloads/
https://aws.amazon.com/ec2/spot/
Domain
Which of the following techniques will help the company meet this requirement?
Raise a service request with Amazon to completely delete the data from all their backups
Correct answer
Correct option:
Amazon GuardDuty offers threat detection that enables you to continuously monitor and protect your
AWS accounts, workloads, and data stored in Amazon S3. GuardDuty analyzes continuous streams of
meta-data generated from your account and network activity found in AWS CloudTrail Events, Amazon
VPC Flow Logs, and DNS Logs. It also uses integrated threat intelligence such as known malicious IP
addresses, anomaly detection, and machine learning to identify threats more accurately.
Disabling the service will delete all remaining data, including your findings and configurations before
relinquishing the service permissions and resetting the service. So, this is the correct option for our use
case.
Incorrect options:
Suspend the service in the general settings - You can stop Amazon GuardDuty from
analyzing your data sources at any time by choosing to suspend the service in the general settings. This
will immediately stop the service from analyzing data, but does not delete your existing findings or
configurations.
De-register the service under services tab - This is a made-up option, used only as a distractor.
Raise a service request with Amazon to completely delete the data from all their backups - There is no
need to create a service request as you can delete the existing findings by disabling the service.
Reference:
https://aws.amazon.com/guardduty/faqs/
Domain
Which of the following represents the best solution for the given scenario?
AWS Trusted Advisor publishes metrics about check results to Amazon CloudWatch. Create an alarm to
track status changes for checks in the Service Limits category for the APIs. The alarm will then notify
when the service quota is reached or exceeded
Run Amazon Athena SQL queries against AWS CloudTrail log files stored in Amazon S3 buckets. Use
Amazon QuickSight to generate reports for managerial dashboards
Create an Amazon CloudWatch metric filter that processes AWS CloudTrail logs having API call details
and looks at any errors by factoring in all the error codes that need to be tracked. Create an alarm
based on this metric's rate to send an Amazon SNS notification to the required team
Configure AWS CloudTrail to stream event data to Amazon Kinesis. Use Amazon Kinesis stream-level
metrics in Amazon CloudWatch to invoke an AWS Lambda function that triggers an error
workflow
Overall explanation
Correct option:
Create an Amazon CloudWatch metric filter that processes AWS CloudTrail logs having API call details
and looks at any errors by factoring in all the error codes that need to be tracked. Create an alarm
based on this metric's rate to send an Amazon SNS notification to the required team
AWS CloudTrail log data can be ingested into Amazon CloudWatch to monitor and identify your AWS
account activity against security threats, and create a governance framework for security best practices.
You can analyze log trail event data in CloudWatch using features such as Logs Insight, Contributor
Insights, Metric filters, and CloudWatch Alarms.
AWS CloudTrail integrates with the Amazon CloudWatch service to publish the API calls being made to
resources or services in the AWS account. The published event has invaluable information that can be
used for compliance, auditing, and governance of your AWS accounts. Below we introduce several
features available in CloudWatch to monitor API activity, analyze the logs at scale, and take action when
malicious activity is discovered, without provisioning your infrastructure.
For the AWS CloudTrail logs available in Amazon CloudWatch Logs, you can begin searching and filtering
the log data by creating one or more metric filters. Use these metric filters to turn log data into
numerical CloudWatch metrics that you can graph or set a CloudWatch Alarm on.
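For illustration, a metric filter plus alarm pair could be created with boto3 as sketched below; the log group name, error codes, threshold, and SNS topic ARN are all hypothetical:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count CloudTrail events whose errorCode matches the errors being tracked
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # hypothetical log group
    filterName="api-error-count",
    filterPattern='{ ($.errorCode = "AccessDenied*") || ($.errorCode = "ThrottlingException") }',
    metricTransformations=[
        {
            "metricName": "ApiErrorCount",
            "metricNamespace": "CloudTrailMetrics",
            "metricValue": "1",
        }
    ],
)

# Alarm on the metric's rate and notify the team through an SNS topic
cloudwatch.put_metric_alarm(
    AlarmName="api-error-rate",
    Namespace="CloudTrailMetrics",
    MetricName="ApiErrorCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-team"],  # hypothetical topic
)
```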
Note: AWS CloudTrail Insights helps AWS users identify and respond to unusual activity associated
with write API calls by continuously analyzing CloudTrail management events.
Insights events are logged when AWS CloudTrail detects unusual write management API activity in your
account. If you have AWS CloudTrail Insights enabled and CloudTrail detects unusual activity, Insights
events are delivered to the destination Amazon S3 bucket for your trail. You can also see the type of
insight and the incident time when you view Insights events on the CloudTrail console. Unlike other
types of events captured in a CloudTrail trail, Insights events are logged only when CloudTrail detects
changes in your account's API usage that differ significantly from the account's typical usage patterns.
Incorrect options:
Configure AWS CloudTrail to stream event data to Amazon Kinesis. Use Amazon Kinesis stream-level
metrics in Amazon CloudWatch to invoke an AWS Lambda function that triggers an error
workflow - AWS CloudTrail cannot stream data to Amazon Kinesis. Amazon S3 buckets and Amazon
CloudWatch logs are the only destinations possible.
Run Amazon Athena SQL queries against AWS CloudTrail log files stored in Amazon S3 buckets. Use
Amazon QuickSight to generate reports for managerial dashboards - Generating reports and
visualizations help in understanding and analyzing patterns but is not useful as a near-real-time
automatic solution for the given problem.
AWS Trusted Advisor publishes metrics about check results to Amazon CloudWatch. Create an alarm to
track status changes for checks in the Service Limits category for the APIs. The alarm will then notify
when the service quota is reached or exceeded - When AWS Trusted Advisor refreshes your checks,
Trusted Advisor publishes metrics about your check results to Amazon CloudWatch. You can view the
metrics in CloudWatch. You can also create alarms to detect status changes to Trusted Advisor checks
and status changes for resources, and service quota usage (formerly referred to as limits). The alarm will
then notify you when you reach or exceed a service quota for your AWS account. However, the alarm is
triggered only when the service limit is reached. We need a solution that raises an alarm when the
number of API calls randomly increases or an abnormal pattern is detected. Hence, this option is not the
right fit for the given use case.
References:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-
cloudtrail.html#cloudwatch-alarms-for-cloudtrail-authorization-failures
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-insights-events-with-
cloudtrail.html
https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-metrics-ta.html
Domain
Which of the following is the fastest way to upload the daily compressed file into Amazon S3?
Upload the compressed file using multipart upload with Amazon S3 Transfer Acceleration (Amazon
S3TA)
FTP the compressed file into an Amazon EC2 instance that runs in the same region as the Amazon S3
bucket. Then transfer the file from the Amazon EC2 instance into the Amazon S3 bucket
Overall explanation
Correct option:
Upload the compressed file using multipart upload with Amazon S3 Transfer Acceleration (Amazon
S3TA)
Amazon S3 Transfer Acceleration (Amazon S3TA) enables fast, easy, and secure transfers of files over long
distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon
CloudFront’s globally distributed edge locations. As the data arrives at an edge location, data is routed to
Amazon S3 over an optimized network path.
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion
of the object's data. You can upload these object parts independently and in any order. If transmission of
any part fails, you can retransmit that part without affecting other parts. After all parts of your object are
uploaded, Amazon S3 assembles these parts and creates the object. If you're uploading large objects
over a stable high-bandwidth network, use multipart uploading to maximize the use of your available
bandwidth by uploading object parts in parallel for multi-threaded performance. If you're uploading over
a spotty network, use multipart uploading to increase resiliency to network errors by avoiding upload
restarts.
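For illustration, a multipart upload over the Transfer Acceleration endpoint could be sketched with boto3 as below; this assumes Transfer Acceleration has already been enabled on the bucket, and the file and bucket names are hypothetical:

```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Route requests through the S3 Transfer Acceleration endpoint
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Multipart upload kicks in above the threshold and uploads parts in parallel
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # switch to multipart above 100 MB
    multipart_chunksize=64 * 1024 * 1024,   # 64 MB parts
    max_concurrency=10,
)

s3.upload_file(
    Filename="daily-export.gz",          # hypothetical local file
    Bucket="analytics-ingest-bucket",    # hypothetical S3TA-enabled bucket
    Key="uploads/daily-export.gz",
    Config=transfer_config,
)
```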
Incorrect options:
Upload the compressed file in a single operation - In general, when your object size reaches 100
megabytes, you should consider using multipart uploads instead of uploading the object in a single
operation. Multipart upload provides improved throughput - you can upload parts in parallel to improve
throughput. Therefore, this option is not correct.
Upload the compressed file using multipart upload - Although using multipart upload would certainly
speed up the process, combining with Amazon S3 Transfer Acceleration (Amazon S3TA) would further
improve the transfer speed. Therefore just using multipart upload is not the correct option.
FTP the compressed file into an Amazon EC2 instance that runs in the same region as the Amazon S3
bucket. Then transfer the file from the Amazon EC2 instance into the Amazon S3 bucket - This is a
roundabout process of getting the file into Amazon S3 and added as a distractor. Although it is
technically feasible to follow this process, it would involve a lot of scripting and certainly would not be
the fastest way to get the file into Amazon S3.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/uploadobjusingmpu.html
Domain
Which of the following AWS service is the MOST efficient solution for the given use-case?
Overall explanation
Correct option:
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually
unlimited cloud storage. The service provides three different types of gateways – Tape Gateway, File
Gateway, and Volume Gateway – that seamlessly connect on-premises applications to cloud storage,
caching data locally for low-latency access.
AWS Storage Gateway's file interface, or file gateway, offers you a seamless way to connect to the cloud
in order to store application data files and backup images as durable objects on Amazon S3 cloud
storage. File gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. As the
company wants to integrate data files from its analytical instruments into AWS via an NFS interface,
therefore AWS Storage Gateway - File Gateway is the correct answer.
via - https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html
Incorrect options:
AWS Storage Gateway - Volume Gateway - You can configure the AWS Storage Gateway service as a
Volume Gateway to present cloud-based iSCSI block storage volumes to your on-premises applications.
Volume Gateway does not support NFS interface, so this option is not correct.
AWS Storage Gateway - Tape Gateway - AWS Storage Gateway - Tape Gateway allows moving tape
backups to the cloud. Tape Gateway does not support NFS interface, so this option is not correct.
AWS Site-to-Site VPN - AWS Site-to-Site VPN enables you to securely connect your on-premises network
or branch office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend your
data center or branch office network to the cloud with an AWS Site-to-Site VPN (Site-to-Site VPN)
connection. It uses internet protocol security (IPSec) communications to create encrypted VPN tunnels
between two locations. You cannot use AWS Site-to-Site VPN to integrate data files via the NFS interface,
so this option is not correct.
References:
https://aws.amazon.com/storagegateway/
https://aws.amazon.com/storagegateway/volume/
https://aws.amazon.com/storagegateway/file/
https://aws.amazon.com/storagegateway/vtl/
Domain
Can you help the intern by identifying those storage volume types that CANNOT be used as boot
volumes while creating the instances? (Select two)
Correct selection
Instance Store
Overall explanation
Correct options:
Solid state drive (SSD) backed volumes optimized for transactional workloads involving frequent
read/write operations with small I/O size, where the dominant performance attribute is IOPS.
Hard disk drive (HDD) backed volumes optimized for large streaming workloads where throughput
(measured in MiB/s) is a better performance measure than IOPS.
Throughput Optimized HDD (st1) and Cold HDD (sc1) volume types CANNOT be used as a boot volume,
so these two options are correct.
Please see this detailed overview of the volume types for Amazon EBS volumes.
via - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
Incorrect options:
Instance Store
General Purpose SSD (gp2), Provisioned IOPS SSD (io1), and Instance Store can be used as a boot
volume.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
Domain
As an AWS Certified Solutions Architect – Associate, can you suggest a way to lower the storage costs
while fulfilling the business requirements?
Correct answer
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One
Zone-IA) after 30 days
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3
Standard-IA) after 30 days
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One
Zone-IA) after 7 days
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3
Standard-IA) after 7 days
Overall explanation
Correct option:
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One
Zone-IA) after 30 days
Amazon S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when
needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs),
Amazon S3 One Zone-IA stores data in a single Availability Zone (AZ) and costs 20% less than Amazon S3
Standard-IA. Amazon S3 One Zone-IA is ideal for customers who want a lower-cost option for
infrequently accessed and re-creatable data but do not require the availability and resilience of Amazon
S3 Standard or Amazon S3 Standard-IA. The minimum storage duration is 30 days before you can
transition objects from Amazon S3 Standard to Amazon S3 One Zone-IA.
Amazon S3 One Zone-IA offers the same high durability, high throughput, and low latency of Amazon S3
Standard, with a low per GB storage price and per GB retrieval fee. S3 Storage Classes can be configured
at the object level, and a single bucket can contain objects stored across Amazon S3 Standard, Amazon
S3 Intelligent-Tiering, Amazon S3 Standard-IA, and Amazon S3 One Zone-IA. You can also use S3 Lifecycle
policies to automatically transition objects between storage classes without any application changes.
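For illustration, such a lifecycle rule could be put in place with boto3 as sketched below; the bucket name and rule ID are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 One Zone-IA 30 days after creation
s3.put_bucket_lifecycle_configuration(
    Bucket="re-creatable-assets-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-one-zone-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to all objects
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)
```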
Constraints for Lifecycle storage class transitions:
via - https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-
considerations.html
Incorrect options:
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3
Standard-IA) after 7 days
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One
Zone-IA) after 7 days
As mentioned earlier, the minimum storage duration is 30 days before you can transition objects from
Amazon S3 Standard to Amazon S3 One Zone-IA or Amazon S3 Standard-IA, so both these options are
added as distractors.
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3
Standard-IA) after 30 days - Amazon S3 Standard-IA is for data that is accessed less frequently, but
requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low
latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee. This
combination of low cost and high performance makes Amazon S3 Standard-IA ideal for long-term
storage, backups, and as a data store for disaster recovery files. But, it costs more than Amazon S3 One
Zone-IA because of the redundant storage across Availability Zones (AZs). As the data is re-creatable,
you don't need to incur this additional cost.
References:
https://aws.amazon.com/s3/storage-classes/
https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html
Domain
As an AWS Certified Solutions Architect – Associate, which best practices would you recommend (Select
two)?
Configure AWS CloudTrail to log all AWS Identity and Access Management (AWS IAM) actions
Create a minimum number of accounts and share these account credentials among employees
Use user credentials to provide access specific permissions for Amazon EC2 instances
Overall explanation
Correct options:
As per the AWS best practices, it is better to enable Multi Factor Authentication (MFA) for privileged
users via an MFA-enabled mobile device or hardware MFA token.
Configure AWS CloudTrail to log all AWS Identity and Access Management (AWS IAM) actions
AWS recommends turning on AWS CloudTrail to log all IAM actions for monitoring and audit purposes.
Incorrect options:
Create a minimum number of accounts and share these account credentials among employees - AWS
recommends that user account credentials should not be shared between users. So, this option is
incorrect.
Grant maximum privileges to avoid assigning privileges again - AWS recommends granting the least
privileges required to complete a certain job and avoid giving excessive privileges which can be misused.
So, this option is incorrect.
Use user credentials to provide access specific permissions for Amazon EC2 instances - It is highly
recommended to use roles to grant access permissions for EC2 instances working on different AWS
services. So, this option is incorrect.
References:
https://aws.amazon.com/iam/
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
https://aws.amazon.com/cloudtrail/faqs/
Domain
Which is the most cost-effective solution to build a solution for the workflow?
Overall explanation
Correct option:
Amazon EC2 Spot instances allow you to request spare Amazon EC2 computing capacity for up to 90% off
the On-Demand price.
Spot instances are well suited for applications that have flexible start and end times, applications that are
feasible only at very low compute prices, and users with urgent computing needs for large amounts of
additional capacity.
For the given use case, spot instances offer the most cost-effective solution as the workflow can
withstand disruptions and can be started and stopped multiple times.
For example, considering a process that runs for an hour and needs about 1024 MB of memory, spot
instance pricing for a t2.micro instance (having 1024 MB of RAM) is $0.0035 per hour.
Contrast this with the pricing of a Lambda function (having 1024 MB of allocated memory), which comes
out to $0.0000000167 per 1ms or $0.06 per hour ($0.0000000167 * 1000 * 60 * 60 per hour).
Thus, a spot instance turns out to be about 20 times more cost-effective than a Lambda function to meet the
requirements of the given use case.
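The back-of-the-envelope arithmetic behind that comparison, using only the prices quoted above, looks like this:

```python
# Illustrative prices from the explanation above
spot_per_hour = 0.0035        # t2.micro spot instance (1024 MB RAM), per hour
lambda_per_ms = 0.0000000167  # Lambda with 1024 MB allocated, per 1 ms

lambda_per_hour = lambda_per_ms * 1000 * 60 * 60  # ~$0.06 per hour
print(f"Lambda per hour: ${lambda_per_hour:.4f}")
# Ratio comes out to roughly 17x, i.e. about 20 times cheaper on spot
print(f"Lambda vs spot ratio: {lambda_per_hour / spot_per_hour:.1f}x")
```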
Incorrect options:
Use AWS Lambda function to run the workflow processes - As mentioned in the explanation above, a
Lambda function turns out to be 20 times more expensive than a spot instance to meet the workflow
requirements of the given use case, so this option is incorrect. You should also note that the maximum
execution time of a Lambda function is 15 minutes, so the workflow process would certainly be disrupted.
On the other hand, it is certainly possible that the workflow process can be completed in a single run on
the spot instance (the average frequency of stop instance interruption across all Regions and instance
types is <10%).
You should note that both on-demand and reserved instances are more expensive than spot instances. In
addition, reserved instances have a term of 1 year or 3 years, so they are not suited for the given
workflow. Therefore, both these options are incorrect.
References:
https://aws.amazon.com/ec2/pricing/
https://aws.amazon.com/ec2/spot/pricing/
https://aws.amazon.com/lambda/pricing/
https://aws.amazon.com/ec2/spot/instance-advisor/
Domain
What is the correct order of the storage charges incurred for the test file on these three storage types?
Cost of test file storage on Amazon EFS < Cost of test file storage on Amazon S3 Standard < Cost of test
file storage on Amazon EBS
Cost of test file storage on Amazon S3 Standard < Cost of test file storage on Amazon EBS < Cost of test
file storage on Amazon EFS
Cost of test file storage on Amazon EBS < Cost of test file storage on Amazon S3 Standard < Cost of test
file storage on Amazon EFS
Correct answer
Cost of test file storage on Amazon S3 Standard < Cost of test file storage on Amazon EFS < Cost of test
file storage on Amazon EBS
Overall explanation
Correct option:
Cost of test file storage on Amazon S3 Standard < Cost of test file storage on Amazon EFS < Cost of test
file storage on Amazon EBS
With Amazon EFS, you pay only for the storage that you actually use. The Amazon EFS
Standard Storage pricing is $0.30 per GB per month. Therefore the cost for storing the test file on Amazon
EFS is $0.30 for the month.
For Amazon EBS General Purpose SSD (gp2) volumes, the charges are $0.10 per GB-month of provisioned
storage. Therefore, for a provisioned storage of 100GB for this use-case, the monthly cost on EBS is
$0.10*100 = $10. This cost is irrespective of how much storage is actually consumed by the test file.
For S3 Standard storage, the pricing is $0.023 per GB per month. Therefore, the monthly storage cost on
S3 for the test file is $0.023.
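Putting those numbers together (assuming a 1 GB test file, as implied by the per-GB costs above, and the 100 GB provisioned EBS volume):

```python
# Monthly storage cost of the test file under each option, using the prices above
efs_cost = 1 * 0.30    # EFS Standard: pay only for the 1 GB actually used
ebs_cost = 100 * 0.10  # EBS gp2: pay for the full 100 GB provisioned
s3_cost = 1 * 0.023    # S3 Standard: pay for the 1 GB stored

print(sorted([("S3", s3_cost), ("EFS", efs_cost), ("EBS", ebs_cost)], key=lambda c: c[1]))
# [('S3', 0.023), ('EFS', 0.3), ('EBS', 10.0)]
```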
Incorrect options:
Cost of test file storage on Amazon S3 Standard < Cost of test file storage on Amazon EBS < Cost of test
file storage on Amazon EFS
Cost of test file storage on Amazon EFS < Cost of test file storage on Amazon S3 Standard < Cost of test
file storage on Amazon EBS
Cost of test file storage on Amazon EBS < Cost of test file storage on Amazon S3 Standard < Cost of test
file storage on Amazon EFS
Following the computations shown earlier in the explanation, these three options are incorrect.
References:
https://aws.amazon.com/ebs/pricing/
https://aws.amazon.com/s3/pricing/
https://aws.amazon.com/efs/pricing/
Domain
Overall explanation
Correct option:
You can use Kinesis Data Analytics to transform and analyze streaming data in real-time with Apache
Flink. Kinesis Data Analytics enables you to quickly build end-to-end stream processing applications for
log analytics, clickstream analytics, Internet of Things (IoT), ad tech, gaming, etc. The four most common
use cases are streaming extract-transform-load (ETL), continuous metric generation, responsive real-time
analytics, and interactive querying of data streams. Kinesis Data Analytics for Apache Flink applications
provides your application 50 GB of running application storage per Kinesis Processing Unit (KPU).
Amazon API Gateway is a fully managed service that allows you to publish, maintain, monitor, and secure
APIs at any scale. Amazon API Gateway offers two options to create RESTful APIs, HTTP APIs and REST
APIs, as well as an option to create WebSocket APIs.
For the given use case, you can use Amazon API Gateway to create a REST API that handles incoming
requests having location data from the trucks and sends it to the Kinesis Data Analytics application on
the back end.
Incorrect options:
Leverage Amazon Athena with Amazon S3 - Amazon Athena is an interactive query service that makes it
easy to analyze data in Amazon S3 using standard SQL. Athena cannot be used to build a REST API to
consume data from the source. So this option is incorrect.
Leverage Amazon QuickSight with Amazon Redshift - QuickSight is a cloud-native, serverless business
intelligence service. Quicksight cannot be used to build a REST API to consume data from the source.
Redshift is a fully managed AWS cloud data warehouse. So this option is incorrect.
Leverage Amazon API Gateway with AWS Lambda - You cannot use Lambda to store and retrieve the
location data for analysis, so this option is incorrect.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/integrating-api-with-aws-services-
kinesis.html
https://aws.amazon.com/kinesis/data-analytics/
https://aws.amazon.com/kinesis/data-analytics/faqs/
Domain