
New SAA Practice Set 5    Total points: 39/65

The respondent's email ([email protected]) was recorded on submission of this form.

Name *
Gokul Upadhyay Guragain

Email *
[email protected]
1. An Internet-of-Things (IoT) company is planning on distributing a *1/1
master sensor in people's homes to measure the key metrics from its
smart devices. In order to provide adjustment commands for these
devices, the company would like a streaming system that supports ordering of
data based on the sensor's key and also sustains a high message throughput
(thousands of messages per second).

As a solutions architect, which of the following AWS services would you recommend for this use-case?

Amazon Simple Queue Service (Amazon SQS)

Amazon Simple Notification Service (Amazon SNS)

Amazon Kinesis Data Streams

AWS Lambda

Feedback

Amazon Kinesis Data Streams

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data
streaming service. KDS can continuously capture gigabytes of data per second from
hundreds of thousands of sources such as website clickstreams, database event streams,
financial transactions, social media feeds, IT logs, and location-tracking events. The
throughput of an Amazon Kinesis data stream is designed to scale without limits via
increasing the number of shards within a data stream.

However, there are certain limits you should keep in mind while using Amazon Kinesis
Data Streams:

A Kinesis data stream stores records for 24 hours by default, and retention can be extended up to 8,760 hours (365 days).

The maximum size of a data blob (the data payload before Base64-encoding) within one
record is 1 megabyte (MB). Each shard can support up to 1000 PUT records per second.

Kinesis is the right answer here, as by providing a partition key in your message, you can
guarantee ordered messages for a specific sensor, even if your stream is sharded.
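As a rough illustration in Python with boto3 (the stream name, payload, and sensor key below are hypothetical), a producer preserves per-sensor ordering by always using the sensor's key as the partition key:

    import json
    import boto3

    kinesis = boto3.client("kinesis")

    # Records that share a partition key land on the same shard,
    # so per-sensor ordering is preserved even when the stream is sharded.
    kinesis.put_record(
        StreamName="iot-sensor-metrics",                      # hypothetical stream name
        Data=json.dumps({"temp": 21.4, "ts": 1700000000}).encode("utf-8"),
        PartitionKey="sensor-42",                             # the master sensor's key
    )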
2. A media company is evaluating the possibility of moving its IT *0/1
infrastructure to the AWS Cloud. The company needs at least 10
terabytes of storage with the maximum possible I/O performance for
processing certain files which are mostly large videos. The company also
needs close to 450 terabytes of very durable storage for storing media
content and almost double of it, i.e. 900 terabytes for archival of legacy
data.

As a Solutions Architect, which set of services will you recommend to meet these requirements?

Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

Amazon S3 standard storage for maximum performance, Amazon S3 Intelligent-Tiering for intelligent, durable storage, and Amazon S3 Glacier Deep Archive for archival storage

Amazon EC2 instance store for maximum performance, Amazon S3 for durable
data storage, and Amazon S3 Glacier for archival storage

Amazon EC2 instance store for maximum performance, AWS Storage Gateway for
on-premises durable data access and Amazon S3 Glacier Deep Archive for archival
storage

Correct answer

Amazon EC2 instance store for maximum performance, Amazon S3 for durable
data storage, and Amazon S3 Glacier for archival storage

Feedback

Amazon S3 standard storage for maximum performance, Amazon S3 Intelligent-Tiering for intelligent, durable storage, and Amazon S3 Glacier Deep Archive for archival storage - Amazon EC2 instance store volumes provide the best I/O performance for low-latency requirements, as in the current use case, so Amazon S3 Standard is not the right pick for the high-performance processing tier. The Amazon S3 Intelligent-Tiering storage class is designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead.

Amazon S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports
long-term retention and digital preservation for data that may be accessed once or twice a
year. It is designed for customers — particularly those in highly-regulated industries, such
as the Financial Services, Healthcare, and Public Sectors — that retain data sets for 7-10
years or longer to meet regulatory compliance requirements.

Amazon EBS for maximum performance, Amazon S3 for durable data storage, and
Amazon S3 Glacier for archival storage - Amazon Elastic Block Store (Amazon EBS)
provides block-level storage volumes for use with EC2 instances. Amazon EBS volumes
are particularly well-suited for use as the primary storage for file systems, databases, or
for any applications that require fine granular updates and access to raw, unformatted,
block-level storage. For high I/O performance, instance store volumes are a better option.

Amazon EC2 instance store for maximum performance, AWS Storage Gateway for on-
premises durable data access and Amazon S3 Glacier Deep Archive for archival storage -
AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access
to virtually unlimited cloud storage. AWS Storage Gateway will be the right answer if the
customer wanted to retain the on-premises data storage and just move the applications to
AWS Cloud. In the absence of such requirements, instance store is a better option for high
performance and Amazon S3 for durable storage.

Reference:

https://aws.amazon.com/s3/storage-classes/
3. Your company is evolving towards a microservice approach for their *1/1
website. The company plans to expose the website from the same load
balancer, linked to different target groups with different URLs, that are
similar to these - checkout.mycorp.com, www.mycorp.com,
mycorp.com/profile, and mycorp.com/search.

As a Solutions Architect, which Load Balancer type do you recommend to achieve this routing feature with MINIMUM configuration and development effort?

Create an NGINX based load balancer on an Amazon EC2 instance to have advanced routing capabilities

Create a Network Load Balancer

Create an Application Load Balancer

Create a Classic Load Balancer

Feedback

Create an Application Load Balancer

Application Load Balancer can automatically distribute incoming application traffic across
multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda
functions. It can handle the varying load of your application traffic in a single Availability
Zone or across multiple Availability Zones.

If your application is composed of several individual services, an Application Load Balancer can route a request to a service based on the content of the request.

Here are the different types -

Host-based Routing: You can route a client request based on the Host field of the HTTP
header allowing you to route to multiple domains from the same load balancer. You can
use host conditions to define rules that route requests based on the hostname in the host
header (also known as host-based routing). This enables you to support multiple domains
using a single load balancer. Example hostnames: example.com, test.example.com, *.example.com. The rule *.example.com matches test.example.com but doesn't match example.com.

Path-based Routing: You can route a client request based on the URL path of the HTTP header. You can use path conditions to define rules that route requests based on the URL in the request (also known as path-based routing). Example path patterns: /img/* and /img/*/pics. The path pattern is used to route requests but does not alter them. For example, if a rule has a path pattern of /img/*, the rule would forward a request for /img/picture.jpg to the specified target group as a request for /img/picture.jpg. The path pattern is applied only to the path of the URL, not to its query parameters.
HTTP header-based routing: You can route a client request based on the value of any
standard or custom HTTP header.

HTTP method-based routing: You can route a client request based on any standard or
custom HTTP method.

Query string parameter-based routing: You can route a client request based on query string
or query parameters.

Source IP address CIDR-based routing: You can route a client request based on source IP
address CIDR from where the request originates.

Path based routing and host based routing are only available for the Application Load
Balancer (ALB). Therefore this is the correct option for the given use-case.
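As an illustrative sketch with boto3 (all ARNs, hostnames, and priorities below are placeholders), host-based and path-based rules can be attached to the same ALB listener so that one load balancer fans out to different target groups:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Route checkout.mycorp.com to the checkout target group (host-based routing).
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/...",        # placeholder
        Priority=10,
        Conditions=[{"Field": "host-header",
                     "HostHeaderConfig": {"Values": ["checkout.mycorp.com"]}}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/checkout/..."}],
    )

    # Route mycorp.com/search* to the search target group (path-based routing).
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/...",
        Priority=20,
        Conditions=[{"Field": "path-pattern",
                     "PathPatternConfig": {"Values": ["/search*"]}}],
        Actions=[{"Type": "forward",
                  "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/search/..."}],
    )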
4. An Internet-of-Things (IoT) company is looking for a database solution *1/1
on AWS Cloud that has Auto Scaling capabilities and is highly available.
The database should be able to handle any changes in data attributes
over time, in case the company updates the data feed from its IoT
devices. The database must provide the capability to output a continuous
stream with details of any changes to the underlying data.

As a Solutions Architect, which database will you recommend?

Amazon DynamoDB

Amazon Aurora

Amazon Relational Database Service (Amazon RDS)

Amazon Redshift

Feedback

Amazon DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-Region, multi-master,
durable database with built-in security, backup and restore, and in-memory caching for
internet-scale applications. DynamoDB can handle more than 10 trillion requests per day
and can support peaks of more than 20 million requests per second. DynamoDB is
serverless with no servers to provision, patch, or manage and no software to install,
maintain, or operate.

An Amazon DynamoDB stream is an ordered flow of information about changes to items in a DynamoDB table. When you enable a stream on a table, Amazon DynamoDB captures information about every modification to data items in the table.

Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams writes a stream record with the primary key attributes of the items that were modified. A stream record contains information about a data modification to a single item in a DynamoDB table. You can configure the stream so that the stream records capture additional information, such as the "before" and "after" images of modified items.

Amazon DynamoDB is horizontally scalable, has a DynamoDB Streams capability, and is multi-AZ by default. On top of that, the RCUs and WCUs can be adjusted automatically using Auto Scaling. This is the right choice for the current requirements.
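A minimal sketch of enabling such a stream at table creation time (the table and attribute names here are made up for illustration):

    import boto3

    dynamodb = boto3.client("dynamodb")

    dynamodb.create_table(
        TableName="DeviceReadings",                           # hypothetical table
        AttributeDefinitions=[{"AttributeName": "deviceId", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "deviceId", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
        # Emit a change record containing the "before" and "after" item images.
        StreamSpecification={"StreamEnabled": True,
                             "StreamViewType": "NEW_AND_OLD_IMAGES"},
    )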
5. A company's cloud architect has set up a solution that uses Amazon *1/1
Route 53 to configure the DNS records for the primary website with the
domain pointing to the Application Load Balancer (ALB). The company
wants a solution where users will be directed to a static error page,
configured as a backup, in case of unavailability of the primary website.

Which configuration will meet the company's requirements, while keeping the changes to a bare minimum?

Use Amazon Route 53 Latency-based routing. Create a latency record to point to the Amazon S3 bucket that holds the error page to be displayed

Set up Amazon Route 53 active-active type of failover routing policy. If Amazon Route 53 health check determines the Application Load Balancer endpoint as unhealthy, the traffic will be diverted to a static error page, hosted on Amazon S3 bucket

Set up Amazon Route 53 active-passive type of failover routing policy. If Amazon Route 53 health check determines the Application Load Balancer endpoint as unhealthy, the traffic will be diverted to a static error page, hosted on Amazon S3 bucket

Use Amazon Route 53 Weighted routing to give minimum weight to Amazon S3 bucket that holds the error page to be displayed. In case of primary failure, the requests get routed to the error page

Feedback

Set up Amazon Route 53 active-passive type of failover routing policy. If Amazon Route 53
health check determines the Application Load Balancer endpoint as unhealthy, the traffic
will be diverted to a static error page, hosted on Amazon S3 bucket

Use an active-passive failover configuration when you want a primary resource or group of
resources to be available the majority of the time and you want a secondary resource or
group of resources to be on standby in case all the primary resources become unavailable.
When responding to queries, Amazon Route 53 includes only healthy primary resources. If
all the primary resources are unhealthy, Route 53 begins to include only the healthy
secondary resources in response to DNS queries.
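A sketch of that active-passive setup with boto3 (the hosted zone ID, domain, health check ID, ALB DNS name, and S3 website endpoint are all placeholders): the primary record is an alias to the ALB guarded by a health check, the secondary is an alias to the S3 static website that serves the error page.

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0XXXXXXXXXXXX",                         # placeholder hosted zone
        ChangeBatch={"Changes": [
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "primary", "Failover": "PRIMARY",
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",   # ALB health check
                "AliasTarget": {"HostedZoneId": "Z35SXDOTRQ7X7K",          # placeholder
                                "DNSName": "my-alb-123.us-east-1.elb.amazonaws.com",
                                "EvaluateTargetHealth": True}}},
            {"Action": "UPSERT", "ResourceRecordSet": {
                "Name": "www.example.com", "Type": "A",
                "SetIdentifier": "secondary", "Failover": "SECONDARY",
                "AliasTarget": {"HostedZoneId": "Z3AQBSTGFYJSTF",          # placeholder
                                "DNSName": "s3-website-us-east-1.amazonaws.com",
                                "EvaluateTargetHealth": False}}},
        ]},
    )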
6. Which of the following is true regarding cross-zone load balancing as *1/1
seen in Application Load Balancer versus Network Load Balancer?

By default, cross-zone load balancing is disabled for Application Load Balancer and
enabled for Network Load Balancer

By default, cross-zone load balancing is disabled for both Application Load Balancer and Network Load Balancer

By default, cross-zone load balancing is enabled for both Application Load Balancer and Network Load Balancer

By default, cross-zone load balancing is enabled for Application Load Balancer and disabled for Network Load Balancer

Feedback

By default, cross-zone load balancing is enabled for Application Load Balancer and
disabled for Network Load Balancer

By default, cross-zone load balancing is enabled for Application Load Balancer and
disabled for Network Load Balancer. When cross-zone load balancing is enabled, each
load balancer node distributes traffic across the registered targets in all the enabled
Availability Zones. When cross-zone load balancing is disabled, each load balancer node
distributes traffic only across the registered targets in its Availability Zone.
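If cross-zone load balancing is wanted on a Network Load Balancer, it can be switched on explicitly per load balancer; a short sketch (the ARN is a placeholder):

    import boto3

    elbv2 = boto3.client("elbv2")

    # NLBs ship with cross-zone load balancing disabled; enable it at the load balancer level.
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/...",
        Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
    )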
7. A silicon valley based healthcare startup uses AWS Cloud for its IT *1/1
infrastructure. The startup stores patient health records on Amazon
Simple Storage Service (Amazon S3). The engineering team needs to
implement an archival solution based on Amazon S3 Glacier to enforce
regulatory and compliance controls on data access.

As a solutions architect, which of the following solutions would you recommend?

Use Amazon S3 Glacier to store the sensitive archived data and then use an
Amazon S3 lifecycle policy to enforce compliance controls

Use Amazon S3 Glacier to store the sensitive archived data and then use an
Amazon S3 Access Control List to enforce compliance controls

Use Amazon S3 Glacier vault to store the sensitive archived data and then use an
Amazon S3 Access Control List to enforce compliance controls

Use Amazon S3 Glacier vault to store the sensitive archived data and then use a
vault lock policy to enforce compliance controls

Feedback

Use Amazon S3 Glacier vault to store the sensitive archived data and then use a vault lock
policy to enforce compliance controls

Amazon S3 Glacier is a secure, durable, and extremely low-cost Amazon S3 cloud storage
class for data archiving and long-term backup. It is designed to deliver 99.999999999%
durability, and provide comprehensive security and compliance capabilities that can help
meet even the most stringent regulatory requirements.

An Amazon S3 Glacier vault is a container for storing archives. When you create a vault,
you specify a vault name and the AWS Region in which you want to create the vault.
Amazon S3 Glacier Vault Lock allows you to easily deploy and enforce compliance
controls for individual Amazon S3 Glacier vaults with a vault lock policy. You can specify
controls such as “write once read many” (WORM) in a vault lock policy and lock the policy
from future edits. Therefore, this is the correct option.
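A sketch of initiating and completing a vault lock with a WORM-style deny-delete policy (the account ID, region, vault name, and retention period below are placeholders); the lock must be completed with the returned lock ID within 24 hours:

    import json
    import boto3

    glacier = boto3.client("glacier")

    # Deny archive deletion until 365 days after archive creation (WORM-style control).
    policy = {"Version": "2012-10-17", "Statement": [{
        "Sid": "deny-early-delete",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:123456789012:vaults/patient-records",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}}}]}

    resp = glacier.initiate_vault_lock(
        accountId="-",                                  # "-" means the calling account
        vaultName="patient-records",
        policy={"Policy": json.dumps(policy)},
    )
    glacier.complete_vault_lock(accountId="-", vaultName="patient-records",
                                lockId=resp["lockId"])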
8. The content division at a digital media agency has an application that *1/1
generates a large number of files on Amazon S3, each approximately 10
megabytes in size. The agency mandates that the files be stored for 5
years before they can be deleted. The files are frequently accessed in the
first 30 days of the object creation but are rarely accessed after the first
30 days. The files contain critical business data that is not easy to
reproduce, therefore, immediate accessibility is always required.

Which solution is the MOST cost-effective for the given use case?

Set up an Amazon S3 bucket lifecycle policy to move files from Amazon S3 Standard to Amazon S3 Standard-IA 30 days after object creation. Archive the files to Amazon S3 Glacier Deep Archive 5 years after object creation

Set up an Amazon S3 bucket lifecycle policy to move files from Amazon S3 Standard to Amazon S3 Glacier Flexible Retrieval 30 days after object creation. Delete the files 5 years after object creation

Set up an Amazon S3 bucket lifecycle policy to move files from Amazon S3 Standard to Amazon S3 One Zone-IA 30 days after object creation. Delete the files 5 years after object creation

Set up an Amazon S3 bucket lifecycle policy to move files from Amazon S3 Standard to Amazon S3 Standard-IA 30 days after object creation. Delete the files 5 years after object creation

Feedback

Set up an Amazon S3 bucket lifecycle policy to move files from Amazon S3 Standard to
Amazon S3 Standard-IA 30 days after object creation. Delete the files 5 years after object
creation

Amazon S3 Standard-IA class is for data that is accessed less frequently but requires rapid access when needed. Amazon S3 Standard-IA offers the high durability, high throughput, and low latency of S3 Standard, with a low per-GB storage price and per-GB retrieval charge.

via - https://aws.amazon.com/s3/storage-classes/

For the given use case, you can set up an Amazon S3 lifecycle configuration and create a
transition action to move objects from Amazon S3 Standard to Amazon S3 Standard-IA 30
days after object creation. You can set up an expiration action to delete the object 5 years
after object creation.
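A sketch of that lifecycle configuration in boto3 (the bucket name is hypothetical; 1825 days approximates 5 years):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="media-agency-files",                          # hypothetical bucket
        LifecycleConfiguration={"Rules": [{
            "ID": "standard-ia-then-delete",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},                         # apply to all objects
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            "Expiration": {"Days": 1825},                     # delete ~5 years after creation
        }]},
    )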
9. A big data analytics company is using Amazon Kinesis Data Streams *0/1
(KDS) to process IoT data from the field devices of an agricultural
sciences company. Multiple consumer applications are using the
incoming data streams and the engineers have noticed a performance
lag for the data delivery speed between producers and consumers of the
data streams.

As a solutions architect, which of the following would you recommend for improving the performance for the given use-case?

Swap out Amazon Kinesis Data Streams with Amazon SQS Standard queues

Use Enhanced Fanout feature of Amazon Kinesis Data Streams

Swap out Amazon Kinesis Data Streams with Amazon Kinesis Data Firehose

Swap out Amazon Kinesis Data Streams with Amazon SQS FIFO queues

Correct answer

Use Enhanced Fanout feature of Amazon Kinesis Data Streams

Feedback

Swap out Amazon Kinesis Data Streams with Amazon Kinesis Data Firehose - Amazon
Kinesis Data Firehose is the easiest way to reliably load streaming data into data lakes,
data stores, and analytics tools. It is a fully managed service that automatically scales to
match the throughput of your data and requires no ongoing administration. It can also
batch, compress, transform, and encrypt the data before loading it, minimizing the amount
of storage used at the destination and increasing security. Amazon Kinesis Data Firehose
can only write to Amazon S3, Amazon Redshift, Amazon Elasticsearch or Splunk. You can't
have applications consuming data streams from Amazon Kinesis Data Firehose, that's the
job of Amazon Kinesis Data Streams. Therefore this option is not correct.

Swap out Amazon Kinesis Data Streams with Amazon SQS Standard queues

Swap out Amazon Kinesis Data Streams with Amazon SQS FIFO queues

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing
service that enables you to decouple and scale microservices, distributed systems, and
serverless applications. Amazon SQS offers two types of message queues. Standard
queues offer maximum throughput, best-effort ordering, and at-least-once delivery.
Amazon SQS FIFO queues are designed to guarantee that messages are processed
exactly once, in the exact order that they are sent. As multiple applications are consuming
the same stream concurrently, both Amazon SQS Standard and Amazon SQS FIFO are not
the right fit for the given use-case.

Exam Alert:
Please understand the differences between the capabilities of Amazon Kinesis Data
Streams vs Amazon SQS, as you may be asked scenario-based questions on this topic in
the exam.

References:

https://aws.amazon.com/blogs/aws/kds-enhanced-fanout/

https://aws.amazon.com/kinesis/data-streams/faqs/
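Enhanced fan-out helps here because each registered consumer gets its own dedicated read throughput of up to 2 MB/s per shard, pushed over HTTP/2, instead of all consumers sharing the per-shard read limit. A minimal sketch of registering a consumer (the stream ARN and consumer name are placeholders):

    import boto3

    kinesis = boto3.client("kinesis")

    # Each registered consumer gets its own 2 MB/s-per-shard pipe via HTTP/2 push,
    # so multiple consuming applications no longer contend for shared read throughput.
    consumer = kinesis.register_stream_consumer(
        StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/field-devices",   # placeholder
        ConsumerName="analytics-app",
    )["Consumer"]
    # The application then reads via SubscribeToShard using consumer["ConsumerARN"].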
10. An IT company has built a custom data warehousing solution for a *1/1
retail organization by using Amazon Redshift. As part of the cost
optimizations, the company wants to move any historical data (any data
older than a year) into Amazon S3, as the daily analytical reports
consume data for just the last one year. However the analysts want to
retain the ability to cross-reference this historical data along with the
daily reports.

The company wants to develop a solution with the LEAST amount of effort and MINIMUM cost. As a solutions architect, which option would you recommend to facilitate this use-case?

Use the Amazon Redshift COPY command to load the Amazon S3 based historical
data into Amazon Redshift. Once the ad-hoc queries are run for the historic data, it
can be removed from Amazon Redshift

Use Amazon Redshift Spectrum to create Amazon Redshift cluster tables pointing to the underlying historical data in Amazon S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift

Use AWS Glue ETL job to load the Amazon S3 based historical data into Redshift.
Once the ad-hoc queries are run for the historic data, it can be removed from
Amazon Redshift

Setup access to the historical data via Amazon Athena. The analytics team can run
historical data queries on Amazon Athena and continue the daily reporting on
Amazon Redshift. In case the reports need to be cross-referenced, the analytics
team need to export these in flat files and then do further analysis

Feedback

Use Amazon Redshift Spectrum to create Amazon Redshift cluster tables pointing to the
underlying historical data in Amazon S3. The analytics team can then query this historical
data to cross-reference with the daily reports from Redshift

Amazon Redshift is a fully-managed petabyte-scale cloud-based data warehouse product designed for large scale data set storage and analysis.

Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and
semistructured data from files in Amazon S3 without having to load the data into Amazon
Redshift tables.

Amazon Redshift Spectrum resides on dedicated Amazon Redshift servers that are
independent of your cluster. Redshift Spectrum pushes many compute-intensive tasks,
such as predicate filtering and aggregation, down to the Redshift Spectrum layer. Thus,
Amazon Redshift Spectrum queries use much less of your cluster's processing capacity
than other queries.
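As an illustration (the cluster, database, IAM role, and schema names are all assumptions), the historical data in Amazon S3 can be exposed as an external schema via the AWS Glue Data Catalog and then joined with the warehouse tables that hold the last year of data; here the DDL is run through the Redshift Data API:

    import boto3

    rsd = boto3.client("redshift-data")

    ddl = """
    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_hist
    FROM DATA CATALOG DATABASE 'historical'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
    CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """

    # Analysts can then join spectrum_hist.* tables (backed by S3) with local tables.
    rsd.execute_statement(
        ClusterIdentifier="retail-dwh",                       # hypothetical cluster
        Database="analytics",
        DbUser="admin",
        Sql=ddl,
    )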
11. A financial services company is moving its IT infrastructure to AWS *1/1
Cloud and wants to enforce adequate data protection mechanisms on
Amazon Simple Storage Service (Amazon S3) to meet compliance
guidelines. The engineering team has hired you as a solutions architect to
build a solution for this requirement.

Can you help the team identify the INCORRECT option from the choices
below?

Amazon S3 can encrypt object metadata by using Server-Side Encryption

Amazon S3 can encrypt data in transit using HTTPS (TLS)

Amazon S3 can protect data at rest using Client-Side Encryption

Amazon S3 can protect data at rest using Server-Side Encryption

Feedback

Amazon S3 can encrypt object metadata by using Server-Side Encryption

Amazon S3 is a simple key-value store designed to store as many objects as you want.
You store these objects in one or more buckets, and each object can be up to 5 TB in size.

An object consists of the following:

Key – The name that you assign to an object. You use the object key to retrieve the object.

Version ID – Within a bucket, a key and version ID uniquely identify an object.

Value – The content that you are storing.

Metadata – A set of name-value pairs with which you can store information regarding the
object.

Subresources – Amazon S3 uses the subresource mechanism to store object-specific additional information.

Access Control Information – You can control access to the objects you store in Amazon
S3.

Metadata, which can be included with the object, is not encrypted while being stored on
Amazon S3. Therefore, AWS recommends that customers not place sensitive information
in Amazon S3 metadata.
12. Computer vision researchers at a university are trying to optimize the *1/1
I/O bound processes for a proprietary algorithm running on Amazon EC2
instances. The ideal storage would facilitate high-performance IOPS
when doing file processing in a temporary storage space before
uploading the results back into Amazon S3.

As a solutions architect, which of the following AWS storage options would you recommend as the MOST performant as well as cost-optimal?

Use Amazon EC2 instances with Amazon EBS Provisioned IOPS SSD (io1) as the
storage option

Use Amazon EC2 instances with Amazon EBS General Purpose SSD (gp2) as the
storage option

Use Amazon EC2 instances with Instance Store as the storage option

Use Amazon EC2 instances with Amazon EBS Throughput Optimized HDD (st1) as
the storage option

Feedback

Use Amazon EC2 instances with Instance Store as the storage option

An instance store provides temporary block-level storage for your instance. This storage is
located on disks that are physically attached to the host computer. Instance store is ideal
for the temporary storage of information that changes frequently, such as buffers, caches,
scratch data, and other temporary content, or for data that is replicated across a fleet of
instances, such as a load-balanced pool of web servers. Some instance types use NVMe
or SATA-based solid-state drives (SSD) to deliver high random I/O performance. This is a
good option when you need storage with very low latency, but you don't need the data to
persist when the instance terminates or you can take advantage of fault-tolerant
architectures.

As Instance Store delivers high random I/O performance, it can act as a temporary storage
space, and these volumes are included as part of the instance's usage cost, therefore this
is the correct option.
13. The data engineering team at an e-commerce company has set up a *1/1
workflow to ingest the clickstream data into the raw zone of the Amazon
S3 data lake. The team wants to run some SQL based data sanity checks
on the raw zone of the data lake.

What AWS services would you recommend for this use-case such that
the solution is cost-effective and easy to maintain?

Load the incremental raw zone data into Amazon Redshift on an hourly basis and
run the SQL based sanity checks

Load the incremental raw zone data into Amazon RDS on an hourly basis and run
the SQL based sanity checks

Load the incremental raw zone data into an Amazon EMR based Spark Cluster on
an hourly basis and use SparkSQL to run the SQL based sanity checks

Use Amazon Athena to run SQL based analytics against Amazon S3 data

Feedback

Use Amazon Athena to run SQL based analytics against Amazon S3 data

Amazon Athena is an interactive query service that makes it easy to analyze data directly
in Amazon S3 using standard SQL. Amazon Athena is serverless, so there is no
infrastructure to set up or manage, and customers pay only for the queries they run. You
can use Athena to process logs, perform ad-hoc analysis, and run interactive queries.
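A sketch of running one sanity check with Athena (the database, table, and results bucket are hypothetical, and the table is assumed to already be defined over the raw zone, for example via the AWS Glue Data Catalog):

    import boto3

    athena = boto3.client("athena")

    athena.start_query_execution(
        QueryString="""
            SELECT count(*) AS null_user_ids
            FROM raw_zone.clickstream
            WHERE user_id IS NULL
        """,
        QueryExecutionContext={"Database": "raw_zone"},
        ResultConfiguration={"OutputLocation": "s3://datalake-athena-results/sanity/"},
    )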
14. A company wants to publish an event into an Amazon Simple Queue *0/1
Service (Amazon SQS) queue whenever a new object is uploaded on
Amazon S3.

Which of the following statements are true regarding this functionality?

Both Standard Amazon SQS queue and FIFO SQS queue are allowed as an
Amazon S3 event notification destination

Only FIFO Amazon SQS queue is allowed as an Amazon S3 event notification destination, whereas Standard SQS queue is not allowed

Only Standard Amazon SQS queue is allowed as an Amazon S3 event notification destination, whereas FIFO SQS queue is not allowed

Neither Standard Amazon SQS queue nor FIFO SQS queue are allowed as an Amazon S3 event notification destination

Correct answer

Only Standard Amazon SQS queue is allowed as an Amazon S3 event notification destination, whereas FIFO SQS queue is not allowed

Feedback

Both Standard Amazon SQS queue and FIFO SQS queue are allowed as an Amazon S3
event notification destination

Neither Standard Amazon SQS queue nor FIFO SQS queue are allowed as an Amazon S3
event notification destination

Only FIFO Amazon SQS queue is allowed as an Amazon S3 event notification destination,
whereas Standard SQS queue is not allowed

These three options contradict the supported Amazon S3 event notification destinations. To summarize, only a Standard Amazon SQS queue is allowed as an Amazon S3 event notification destination, whereas a FIFO SQS queue is not allowed. Hence these three options are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
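A sketch of wiring the notification to a Standard queue (the bucket name and queue ARN are placeholders); the queue's access policy must also allow s3.amazonaws.com to send messages:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_notification_configuration(
        Bucket="uploads-bucket",                              # hypothetical bucket
        NotificationConfiguration={"QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:new-object-events",  # Standard queue
            "Events": ["s3:ObjectCreated:*"],
        }]},
    )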
15. A silicon valley based startup helps its users legally sign highly *0/1
confidential contracts. To meet the compliance guidelines, the startup
must ensure that the signed contracts are encrypted using the AES-256
algorithm via an encryption key that is generated as well as managed
internally. The startup is now migrating to AWS Cloud and would like the
data to be encrypted on AWS. The startup wants to continue using their
existing encryption key generation as well as key management
mechanism.

What do you recommend?

SSE-C

Client-Side Encryption

SSE-KMS

SSE-S3

Correct answer

SSE-C

Feedback

SSE-KMS - AWS Key Management Service (AWS KMS) is a service that combines secure,
highly available hardware and software to provide a key management system scaled for
the cloud. When you use server-side encryption with AWS KMS (SSE-KMS), you can specify
a customer-managed CMK that you have already created. But, you never get to know the
actual key here.

SSE-S3 - When you use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3),
each object is encrypted with a unique key. However, this option does not provide the
ability to audit trail the usage of the encryption keys.

Client-Side Encryption - Client-side encryption is the act of encrypting data before sending it to Amazon S3. To enable client-side encryption, you have the following options: use an AWS KMS key stored in AWS Key Management Service (AWS KMS), or use a master key you store within your application. Since the startup wants the data to be encrypted on AWS (server-side) while continuing to use its own key management, client-side encryption is not the right fit.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
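With SSE-C, the client supplies its own 256-bit key on each request and Amazon S3 performs the AES-256 encryption server-side without storing the key. A sketch (bucket, object key, and key material are made up; in practice the key would come from the startup's internal key management system):

    import boto3

    s3 = boto3.client("s3")
    customer_key = b"0" * 32          # placeholder: a 256-bit key from the internal key manager

    s3.put_object(
        Bucket="signed-contracts",
        Key="contracts/2024/contract-001.pdf",
        Body=b"...pdf bytes...",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,  # boto3 computes the required key MD5 automatically
    )
    # The same key must be supplied again on every GET for this object.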
16. A gaming company is doing pre-launch testing for its new product. *1/1
The company runs its production database on an Aurora MySQL DB
cluster and the performance testing team wants access to multiple test
databases that must be re-created from production data. The company
has hired you as an AWS Certified Solutions Architect - Associate to
deploy a solution to create these test databases quickly with the LEAST
required effort.

What would you suggest to address this use case?

Take a backup of the Aurora MySQL database instance using the mysqldump utility,
create multiple new test database instances and restore each test database from
the backup

Enable database Backtracking on the production database and let the testing team
use the production database

Set up binlog replication in the Aurora MySQL database instance to create multiple
new test database instances

Use database cloning to create multiple clones of the production database and
use each clone as a test database

Feedback

Use database cloning to create multiple clones of the production database and use each
clone as a test database

You can quickly create clones of an Aurora DB by using the database cloning feature. In
addition, database cloning uses a copy-on-write protocol, in which data is copied only at
the time the data changes, either on the source database or the clone database. Cloning is
much faster than a manual snapshot of the DB cluster.

For the given use case, the most optimal solution is to clone the DB cluster. This would
allow the performance testing team to have quick access to the production data in an
isolated way. The team can iterate over the various test phases by deleting existing test
databases and then cloning the production DB to create new test databases.

You cannot clone databases across AWS regions. The clone databases must be created in
the same region as the source databases. Currently, you are limited to 15 clones based on
a copy, including clones based on other clones. After that, only copies can be created.
However, each copy can also have up to 15 clones.
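A sketch of creating one clone with the copy-on-write restore type (all identifiers and the instance class are placeholders); this would be repeated once per test database needed:

    import boto3

    rds = boto3.client("rds")

    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier="perf-test-clone-1",              # new clone cluster
        SourceDBClusterIdentifier="prod-aurora-mysql",        # production cluster
        RestoreType="copy-on-write",                          # clone, not a full copy
        UseLatestRestorableTime=True,
    )
    # A clone cluster still needs at least one DB instance before it can be queried.
    rds.create_db_instance(
        DBInstanceIdentifier="perf-test-clone-1-instance-1",
        DBClusterIdentifier="perf-test-clone-1",
        Engine="aurora-mysql",
        DBInstanceClass="db.r6g.large",
    )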
17. Reporters at a news agency upload/download video files (about 500 *1/1
megabytes each) to/from an Amazon S3 bucket as part of their daily
work. As the agency has started offices in remote locations, it has
resulted in poor latency for uploading and accessing data to/from the
given Amazon S3 bucket. The agency wants to continue using a
serverless storage solution such as Amazon S3 but wants to improve the
performance.

As a solutions architect, which of the following solutions do you propose to address this issue? (Select two)

Create new Amazon S3 buckets in every region where the agency has a remote
office, so that each office can maintain its storage for the media assets

Use Amazon CloudFront distribution with origin as the Amazon S3 bucket. This
would speed up uploads as well as downloads for the video files

Enable Amazon S3 Transfer Acceleration (Amazon S3TA) for the Amazon S3 bucket. This would speed up uploads as well as downloads for the video files

Move Amazon S3 data into Amazon Elastic File System (Amazon EFS) created in a
US region, connect to Amazon EFS file system from Amazon EC2 instances in other
AWS regions using an inter-region VPC peering connection

Spin up Amazon EC2 instances in each region where the agency has a remote
office. Create a daily job to transfer Amazon S3 data into Amazon EBS volumes
attached to the Amazon EC2 instances

Feedback

Use Amazon CloudFront distribution with origin as the Amazon S3 bucket. This would
speed up uploads as well as downloads for the video files

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers
data, videos, applications, and APIs to customers globally with low latency, high transfer
speeds, within a developer-friendly environment. When an object from Amazon S3 that is
set up with Amazon CloudFront CDN is requested, the request would come through the
Edge Location transfer paths only for the first request. Thereafter, it would be served from
the nearest edge location to the users until it expires. So in this way, you can speed up
uploads as well as downloads for the video files.

Following is a good reference blog for a deep-dive:

https://aws.amazon.com/blogs/aws/amazon-cloudfront-content-uploads-post-put-other-
methods/

Enable Amazon S3 Transfer Acceleration (Amazon S3TA) for the Amazon S3 bucket. This
would speed up uploads as well as downloads for the video files
Amazon S3 Transfer Acceleration (Amazon S3TA) can speed up content transfers to and
from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects.
Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge
locations. As the data arrives at an edge location, data is routed to Amazon S3 over an
optimized network path. So this option is also correct.
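A sketch of enabling Transfer Acceleration and then uploading through the accelerate endpoint (the bucket and file names are hypothetical):

    import boto3
    from botocore.config import Config

    boto3.client("s3").put_bucket_accelerate_configuration(
        Bucket="newsroom-videos",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Point the client at the s3-accelerate endpoint for subsequent transfers.
    s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    s3_accel.upload_file("story.mp4", "newsroom-videos", "raw/story.mp4")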

18. You have just terminated an instance in the us-west-1a Availability *1/1
Zone (AZ). The attached Amazon EBS volume is now available for
attachment to other instances. An intern launches a new Linux Amazon
EC2 instance in the us-west-1b Availability Zone (AZ) and is attempting to
attach the Amazon EBS volume. The intern informs you that it is not
possible and needs your help.

Which of the following explanations would you provide to them?

Amazon EBS volumes are region locked

Amazon EBS volumes are Availability Zone (AZ) locked

The required IAM permissions are missing

The Amazon EBS volume is encrypted

Feedback

Amazon EBS volumes are Availability Zone (AZ) locked

An Amazon EBS volume is a durable, block-level storage device that you can attach to your
instances. After you attach a volume to an instance, you can use it as you would use a
physical hard drive. Amazon EBS volumes are flexible. For current-generation volumes
attached to current-generation instance types, you can dynamically increase size, modify
the provisioned IOPS capacity, and change volume type on live production volumes.

When you create an Amazon EBS volume, it is automatically replicated within its
Availability Zone to prevent data loss due to the failure of any single hardware component.
You can attach an Amazon EBS volume to an Amazon EC2 instance in the same
Availability Zone (AZ).
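If the data is actually needed in us-west-1b, the usual workaround (volume ID and AZ values below are placeholders) is to snapshot the volume and create a new volume from that snapshot in the target Availability Zone, since snapshots are regional:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-1")

    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="move data to us-west-1b")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # A new volume can be carved out of the snapshot in any AZ of the region.
    ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-west-1b")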
19. The engineering team at a weather tracking company wants to *1/1
enhance the performance of its relational database and is looking for a
caching solution that supports geospatial data.

As a solutions architect, which of the following solutions will you suggest?

Use Amazon DynamoDB Accelerator (DAX)

Use AWS Global Accelerator

Use Amazon ElastiCache for Memcached

Use Amazon ElastiCache for Redis

Feedback

Use Amazon ElastiCache for Redis

Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale a
distributed in-memory data store or cache environment in the cloud. Redis, which stands
for Remote Dictionary Server, is a fast, open-source, in-memory key-value data store for
use as a database, cache, message broker, and queue. Redis now delivers sub-millisecond
response times enabling millions of requests per second for real-time applications in
Gaming, Ad-Tech, Financial Services, Healthcare, and IoT. Redis is a popular choice for
caching, session management, gaming, leaderboards, real-time analytics, geospatial, ride-
hailing, chat/messaging, media streaming, and pub/sub apps.

All Redis data resides in the server’s main memory, in contrast to databases such as
PostgreSQL, Cassandra, MongoDB and others that store most data on disk or on SSDs. In
comparison to traditional disk based databases where most operations require a roundtrip
to disk, in-memory data stores such as Redis don’t suffer the same penalty. They can
therefore support an order of magnitude more operations and faster response times. The result is blazing-fast performance, with average read or write operations taking less than a millisecond and support for millions of operations per second.

Redis has purpose-built commands for working with real-time geospatial data at scale.
You can perform operations like finding the distance between two elements (for example
people or places) and finding all elements within a given distance of a point.
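For instance, with the redis-py client against an ElastiCache for Redis endpoint (the hostname and key names are assumptions, and GEOSEARCH requires Redis engine 6.2 or later), the geospatial commands look like this:

    import redis

    r = redis.Redis(host="my-cluster.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

    # GEOADD stores (longitude, latitude, member) entries in a sorted set.
    r.geoadd("stations", (-122.4194, 37.7749, "station-sf"))
    r.geoadd("stations", (-122.2711, 37.8044, "station-oakland"))

    # Find every station within 25 km of a given point.
    nearby = r.geosearch("stations", longitude=-122.4, latitude=37.79,
                         radius=25, unit="km")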
20. A company has noticed that its application performance has *0/1
deteriorated after a new Auto Scaling group was deployed a few days
back. Upon investigation, the team found out that the Launch
Configuration selected for the Auto Scaling group is using the incorrect
instance type that is not optimized to handle the application workflow.

As a solutions architect, what would you recommend to provide a long-term resolution for this issue?

Create a new launch configuration to use the correct instance type. Modify the Auto
Scaling group to use this new launch configuration. Delete the old launch
configuration as it is no longer needed

No need to modify the launch configuration. Just modify the Auto Scaling group to
use more number of existing instance types. More instances may offset the loss of
performance

No need to modify the launch configuration. Just modify the Auto Scaling group to
use the correct instance type

Modify the launch configuration to use the correct instance type and continue
to use the existing Auto Scaling group

Correct answer

Create a new launch configuration to use the correct instance type. Modify the Auto
Scaling group to use this new launch configuration. Delete the old launch
configuration as it is no longer needed

Feedback

Modify the launch configuration to use the correct instance type and continue to use the existing Auto Scaling group - A launch configuration is immutable; it is not possible to modify a launch configuration once it is created. Hence, this option is incorrect.

No need to modify the launch configuration. Just modify the Auto Scaling group to use the
correct instance type - You cannot use an Auto Scaling group to directly modify the
instance type of the underlying instances. Hence, this option is incorrect.

No need to modify the launch configuration. Just modify the Auto Scaling group to use more number of existing instance types. More instances may offset the loss of performance - Using the Auto Scaling group to increase the number of instances to cover up for the performance loss is not recommended as it does not address the root cause of the problem. The application workflow requires a certain instance type that is optimized to handle its workload. Hence, this option is incorrect.

Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html
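A sketch of the recommended fix (the names, AMI, security group, and instance type are placeholders): create a new launch configuration, point the Auto Scaling group at it, then delete the old one.

    import boto3

    asg = boto3.client("autoscaling")

    # Launch configurations are immutable, so create a corrected copy...
    asg.create_launch_configuration(
        LaunchConfigurationName="app-lc-v2",
        ImageId="ami-0123456789abcdef0",
        InstanceType="c5.2xlarge",                            # the correct, workload-optimized type
        SecurityGroups=["sg-0123456789abcdef0"],
    )
    # ...switch the Auto Scaling group over to it...
    asg.update_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        LaunchConfigurationName="app-lc-v2",
    )
    # ...and remove the old configuration once it is no longer referenced.
    asg.delete_launch_configuration(LaunchConfigurationName="app-lc-v1")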
21. A leading media company wants to do an accelerated online *0/1
migration of hundreds of terabytes of files from their on-premises data
center to Amazon S3 and then establish a mechanism to access the
migrated data for ongoing updates from the on-premises applications.

As a solutions architect, which of the following would you select as the MOST performant solution for the given use-case?

Use AWS DataSync to migrate existing data to Amazon S3 and then use File
Gateway to retain access to the migrated data for ongoing updates from the on-
premises applications

Use File Gateway configuration of AWS Storage Gateway to migrate data to Amazon S3 and then use Amazon S3 Transfer Acceleration (Amazon S3TA) for ongoing updates from the on-premises applications

Use Amazon S3 Transfer Acceleration (Amazon S3TA) to migrate existing data to Amazon S3 and then use AWS DataSync for ongoing updates from the on-premises applications

Use AWS DataSync to migrate existing data to Amazon S3 as well as access the
Amazon S3 data for ongoing updates

Correct answer

Use AWS DataSync to migrate existing data to Amazon S3 and then use File
Gateway to retain access to the migrated data for ongoing updates from the on-
premises applications

Feedback

Use AWS DataSync to migrate existing data to Amazon S3 as well as access the Amazon
S3 data for ongoing updates - AWS DataSync is used to easily transfer data to and from
AWS with up to 10x faster speeds. It is used to transfer data and cannot be used to
facilitate ongoing updates to the migrated files from the on-premises applications.

Use File Gateway configuration of AWS Storage Gateway to migrate data to Amazon S3
and then use Amazon S3 Transfer Acceleration (Amazon S3TA) for ongoing updates from
the on-premises applications - File Gateway can be used to move on-premises data to
AWS Cloud, but it is not an optimal solution for high volumes. Migration services such as
AWS DataSync are best suited for this purpose. Amazon S3 Transfer Acceleration cannot
facilitate ongoing updates to the migrated files from the on-premises applications.

Use Amazon S3 Transfer Acceleration (Amazon S3TA) to migrate existing data to Amazon
S3 and then use AWS DataSync for ongoing updates from the on-premises applications - If
your application is already integrated with the Amazon S3 API, and you want higher
throughput for transferring large files to Amazon S3, Amazon S3 Transfer Acceleration can
be used. However AWS DataSync cannot be used to facilitate ongoing updates to the
migrated files from the on-premises applications.
Reference:

https://aws.amazon.com/datasync/features/
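A rough sketch of the DataSync leg of the migration (the agent ARN, NFS share, bucket, and IAM role are all placeholders); File Gateway would then be configured separately for the ongoing on-premises access:

    import boto3

    datasync = boto3.client("datasync")

    nfs_loc = datasync.create_location_nfs(
        ServerHostname="nas.corp.local",                      # on-premises NFS share
        Subdirectory="/exports/media",
        OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:123456789012:agent/agent-0abc"]},
    )["LocationArn"]

    s3_loc = datasync.create_location_s3(
        S3BucketArn="arn:aws:s3:::media-archive",
        S3Config={"BucketAccessRoleArn": "arn:aws:iam::123456789012:role/DataSyncS3Role"},
    )["LocationArn"]

    datasync.create_task(SourceLocationArn=nfs_loc,
                         DestinationLocationArn=s3_loc,
                         Name="onprem-to-s3-migration")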
22. An online gaming company wants to block access to its application *1/1
from specific countries; however, the company wants to allow its remote
development team (from one of the blocked countries) to have access to
the application. The application is deployed on Amazon EC2 instances
running under an Application Load Balancer with AWS Web Application
Firewall (AWS WAF).

As a solutions architect, which of the following solutions can be combined to address the given use-case? (Select two)

Use Application Load Balancer geo match statement listing the countries that you
want to block

Create a deny rule for the blocked countries in the network access control list
(network ACL) associated with each of the Amazon EC2 instances

Use AWS WAF geo match statement listing the countries that you want to block

Use AWS WAF IP set statement that specifies the IP addresses that you want to
allow through

Use Application Load Balancer IP set statement that specifies the IP addresses
that you want to allow through

Feedback

Use AWS WAF geo match statement listing the countries that you want to block

Use AWS WAF IP set statement that specifies the IP addresses that you want to allow
through

AWS WAF is a web application firewall that helps protect your web applications or APIs
against common web exploits that may affect availability, compromise security, or
consume excessive resources. AWS WAF gives you control over how traffic reaches your
applications by enabling you to create security rules that block common attack patterns
and rules that filter out specific traffic patterns you define.

You can deploy AWS WAF on Amazon CloudFront as part of your CDN solution, the
Application Load Balancer that fronts your web servers or origin servers running on
Amazon EC2, or Amazon API Gateway for your APIs.

To block specific countries, you can create a AWS WAF geo match statement listing the
countries that you want to block, and to allow traffic from IPs of the remote development
team, you can create a WAF IP set statement that specifies the IP addresses that you want
to allow through. You can combine the two rules.
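As an illustration of how those two statements fit together in an AWS WAFv2 web ACL (the country codes, IP set ARN, and priorities are placeholders), expressed as the rule objects you would pass to create_web_acl or update_web_acl:

    # Rule 1: allow the remote development team's addresses explicitly.
    allow_dev_team = {
        "Name": "allow-dev-team-ips",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {
            "ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/ipset/dev-team/..."}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "allowDevTeam"},
    }

    # Rule 2: block everything else that originates from the listed countries.
    block_countries = {
        "Name": "block-countries",
        "Priority": 1,
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["XX", "YY"]}},   # placeholder codes
        "Action": {"Block": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "blockCountries"},
    }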
23. A retail company wants to establish encrypted network connectivity *1/1
between its on-premises data center and AWS Cloud. The company
wants to get the solution up and running in the fastest possible time and
it should also support encryption in transit.

As a solutions architect, which of the following solutions would you suggest to the company?

Use AWS DataSync to establish encrypted network connectivity between the on-premises data center and AWS Cloud

Use AWS Secrets Manager to establish encrypted network connectivity between the on-premises data center and AWS Cloud

Use AWS Direct Connect to establish encrypted network connectivity between the on-premises data center and AWS Cloud

Use AWS Site-to-Site VPN to establish encrypted network connectivity between the on-premises data center and AWS Cloud

Feedback

Use AWS Site-to-Site VPN to establish encrypted network connectivity between the on-
premises data center and AWS Cloud

AWS Site-to-Site VPN enables you to securely connect your on-premises network or branch
office site to your Amazon Virtual Private Cloud (Amazon VPC). You can securely extend
your data center or branch office network to the cloud with an AWS Site-to-Site VPN
connection. A VPC VPN Connection utilizes IPSec to establish encrypted network
connectivity between your on-premises network and Amazon VPC over the Internet. IPsec
is a protocol suite for securing IP communications by authenticating and encrypting each
IP packet in a data stream.
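A sketch of the pieces involved (the customer gateway public IP, ASN, and VPC ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Represents the on-premises VPN device.
    cgw = ec2.create_customer_gateway(
        BgpAsn=65000, PublicIp="203.0.113.10", Type="ipsec.1")["CustomerGateway"]

    vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
    ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId="vpc-0123456789abcdef0")

    # The VPN connection carries IPsec-encrypted traffic over the internet.
    ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
        Type="ipsec.1",
        Options={"StaticRoutesOnly": True},
    )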
24. Your firm has implemented a multi-tiered networking structure within *1/1
the VPC - with two public and two private subnets. The public subnets are
used to deploy the Application Load Balancers, while the two private
subnets are used to deploy the application on Amazon EC2 instances.
The development team wants the Amazon EC2 instances to have access
to the internet. The solution has to be fully managed by AWS and needs
to work over IPv4.

What will you recommend?

Internet Gateways deployed in your private subnet

Egress-Only Internet Gateways deployed in your private subnet

NAT Gateways deployed in your public subnet

NAT Instances deployed in your public subnet

Feedback

NAT Gateways deployed in your public subnet

You can use a network address translation (NAT) gateway to enable instances in a private
subnet to connect to the internet or other AWS services, but prevent the internet from
initiating a connection with those instances. A NAT gateway has the following
characteristics and limitations:

A NAT gateway supports 5 Gbps of bandwidth and automatically scales up to 45 Gbps.
You can associate exactly one Elastic IP address with a NAT gateway.
A NAT gateway supports the following protocols: TCP, UDP, and ICMP.
You cannot associate a security group with a NAT gateway.
You can use a network access control list (network ACL) to control the traffic to and from the subnet in which the NAT gateway is located.
A NAT gateway can support up to 55,000 simultaneous connections to each unique destination.
Therefore you must use a NAT Gateway in your public subnet in order to provide internet
access to your instances in your private subnets. You are charged for creating and using a
NAT gateway in your account. NAT gateway hourly usage and data processing rates apply.
25. A leading video streaming provider is migrating to AWS Cloud *1/1
infrastructure for delivering its content to users across the world. The
company wants to make sure that the solution supports at least a million
requests per second for its Amazon EC2 server farm.

As a solutions architect, which type of Elastic Load Balancing would you recommend as part of the solution stack?

Network Load Balancer

Infrastructure Load Balancer

Application Load Balancer

Classic Load Balancer

Feedback

Network Load Balancer

Network Load Balancer is best suited for use-cases involving low latency and high
throughput workloads that involve scaling to millions of requests per second. Network
Load Balancer operates at the connection level (Layer 4), routing connections to targets -
Amazon EC2 instances, microservices, and containers – within Amazon Virtual Private
Cloud (Amazon VPC) based on IP protocol data.
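Creating one is a single call (the name and subnet IDs below are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")

    # Layer 4 load balancer capable of handling millions of requests per second.
    elbv2.create_load_balancer(
        Name="video-edge-nlb",
        Type="network",
        Scheme="internet-facing",
        Subnets=["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
    )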
26. A developer in your team has set up a classic 3 tier architecture *1/1
composed of an Application Load Balancer, an Auto Scaling group
managing a fleet of Amazon EC2 instances, and an Amazon Aurora
database. As a Solutions Architect, you would like to adhere to the
security pillar of the well-architected framework.

How do you configure the security group of the Aurora database to only
allow traffic coming from the Amazon EC2 instances?

Add a rule authorizing the Amazon EC2 security group

Add a rule authorizing the Amazon Aurora security group

Add a rule authorizing the Auto Scaling group subnets CIDR

Add a rule authorizing the Elastic Load Balancing security group

Feedback

Add a rule authorizing the Amazon EC2 security group

A security group acts as a virtual firewall that controls the traffic for one or more
instances. When you launch an instance, you can specify one or more security groups;
otherwise, we use the default security group. You can add rules to each security group that
allow traffic to or from its associated instances. You can modify the rules for a security
group at any time; the new rules are automatically applied to all instances that are
associated with the security group. When we decide whether to allow traffic to reach an
instance, we evaluate all the rules from all the security groups that are associated with the
instance.

The following are the characteristics of security group rules:

By default, security groups allow all outbound traffic.

Security group rules are always permissive; you can't create rules that deny access.

Security groups are stateful.

For the given scenario, the Amazon EC2 instances that are part of the Auto Scaling Group
are the ones accessing the database layer. The correct response is to add a rule to the
security group attached to Aurora authorizing the Amazon EC2 instance's security group.
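A sketch of that rule (the security group IDs are placeholders, and port 3306 is assumed for Aurora MySQL): the database security group admits traffic only from members of the EC2 instances' security group.

    import boto3

    ec2 = boto3.client("ec2")

    ec2.authorize_security_group_ingress(
        GroupId="sg-0aaaa1111bbbb2222c",                      # security group on the Aurora cluster
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
            # Reference the EC2 instances' security group instead of CIDR ranges.
            "UserIdGroupPairs": [{"GroupId": "sg-0dddd3333eeee4444f"}],
        }],
    )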
27. The engineering team at an online fashion retailer uses AWS Cloud to *1/1
manage its technology infrastructure. The Amazon EC2 server fleet is
behind an Application Load Balancer and the fleet strength is managed
by an Auto Scaling group. Based on the historical data, the team is
anticipating a huge traffic spike during the upcoming Thanksgiving sale.

As an AWS solutions architect, what feature of the Auto Scaling group would you leverage so that the potential surge in traffic can be preemptively addressed?

Auto Scaling group target tracking scaling policy

Auto Scaling group lifecycle hook

Auto Scaling group scheduled action

Auto Scaling group step scaling policy

Feedback

Auto Scaling group scheduled action

The engineering team can create a scheduled action for the Auto Scaling group to pre-
emptively provision additional instances for the sale duration. This makes sure that
adequate instances are ready before the sale goes live. The scheduled action tells
Amazon EC2 Auto Scaling to perform a scaling action at specified times. To create a
scheduled scaling action, you specify the start time when the scaling action should take
effect, and the new minimum, maximum, and desired sizes for the scaling action. At the
specified time, Amazon EC2 Auto Scaling updates the group with the values for minimum,
maximum, and desired size that are specified by the scaling action.
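A sketch of such a scheduled action (the group name, capacities, and start time are placeholders):

    import boto3

    asg = boto3.client("autoscaling")

    # Scale out ahead of the Thanksgiving sale; normal policies take over afterwards.
    asg.put_scheduled_update_group_action(
        AutoScalingGroupName="web-fleet-asg",
        ScheduledActionName="thanksgiving-scale-out",
        StartTime="2024-11-28T06:00:00Z",
        MinSize=20, MaxSize=60, DesiredCapacity=30,
    )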
28. The infrastructure team at a company maintains 5 different VPCs *1/1
(let's call these VPCs A, B, C, D, E) for resource isolation. Due to the
changed organizational structure, the team wants to interconnect all
VPCs together. To facilitate this, the team has set up VPC peering
connection between VPC A and all other VPCs in a hub and spoke model
with VPC A at the center. However, the team has still failed to establish
connectivity between all VPCs.

As a solutions architect, which of the following would you recommend as the MOST resource-efficient and scalable solution?

Use AWS transit gateway to interconnect the VPCs

Use an internet gateway to interconnect the VPCs

Use a VPC endpoint to interconnect the VPCs

Establish VPC peering connections between all VPCs

Feedback

Use AWS transit gateway to interconnect the VPCs

An AWS transit gateway is a network transit hub that you can use to interconnect your
virtual private clouds (VPC) and on-premises networks.

A VPC peering connection is a networking connection between two VPCs that enables you
to route traffic between them using private IPv4 addresses or IPv6 addresses. Transitive peering does not work for VPC peering connections: if you have a VPC peering connection between VPC A and VPC B (pcx-aaaabbbb), and between VPC A and VPC C (pcx-aaaacccc), there is still no VPC peering connection between VPC B and VPC C.
Instead of using VPC peering, you can use an AWS Transit Gateway that acts as a network
transit hub, to interconnect your VPCs or connect your VPCs with on-premises networks.
Therefore this is the correct option.
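
As a rough boto3 sketch of that approach (the VPC and subnet IDs below are placeholders), the hub-and-spoke peering setup could be replaced with one transit gateway and one VPC attachment per VPC:

```python
import boto3

ec2 = boto3.client("ec2")

# One transit gateway acts as the central routing hub for all VPCs.
tgw = ec2.create_transit_gateway(Description="hub for VPCs A-E")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# In practice, wait until the transit gateway state is "available" before attaching.
vpcs = {
    "vpc-0aaaa1111aaaa1111": ["subnet-0aaaa1111aaaa1111"],   # placeholder IDs
    "vpc-0bbbb2222bbbb2222": ["subnet-0bbbb2222bbbb2222"],
}
for vpc_id, subnet_ids in vpcs.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )
```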
29. A medium-sized business has a taxi dispatch application deployed on *0/1
an Amazon EC2 instance. Because of an unknown bug, the application
causes the instance to freeze regularly. Then, the instance has to be
manually restarted via the AWS management console.

Which of the following is the MOST cost-optimal and resource-efficient way to implement
an automated solution until a permanent fix is delivered by the development team?

Use Amazon EventBridge events to trigger an AWS Lambda function to reboot the
instance every 5 minutes

Setup an Amazon CloudWatch alarm to monitor the health status of the instance. In case of
an Instance Health Check failure, Amazon CloudWatch Alarm can publish to an Amazon
Simple Notification Service (Amazon SNS) event which can then trigger an AWS Lambda
function. The AWS Lambda function can use Amazon EC2 API to reboot the instance

Setup an Amazon CloudWatch alarm to monitor the health status of the instance.
In case of an Instance Health Check failure, an EC2 Reboot CloudWatch Alarm
Action can be used to reboot the instance

Use Amazon EventBridge events to trigger an AWS Lambda function to check the
instance status every 5 minutes. In the case of Instance Health Check failure, the
AWS Lambda function can use Amazon EC2 API to reboot the instance

Correct answer

Setup an Amazon CloudWatch alarm to monitor the health status of the instance. In
case of an Instance Health Check failure, an EC2 Reboot CloudWatch Alarm Action
can be used to reboot the instance

Feedback

Setup an Amazon CloudWatch alarm to monitor the health status of the instance. In case
of an Instance Health Check failure, Amazon CloudWatch Alarm can publish to an Amazon
Simple Notification Service (Amazon SNS) event which can then trigger an AWS Lambda
function. The AWS Lambda function can use Amazon EC2 API to reboot the instance

Use Amazon EventBridge events to trigger an AWS Lambda function to check the instance
status every 5 minutes. In the case of Instance Health Check failure, the AWS Lambda
function can use Amazon EC2 API to reboot the instance

Use Amazon EventBridge events to trigger an AWS Lambda function to reboot the instance
every 5 minutes

Using an Amazon EventBridge event or an Amazon CloudWatch alarm to trigger an AWS
Lambda function, directly or indirectly, is wasteful of resources. You should just use the EC2
Reboot CloudWatch Alarm Action to reboot the instance. So all the options that trigger an
AWS Lambda function are incorrect.

Reference:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/UsingAlarmActions.html
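
For reference, a minimal boto3 sketch of the EC2 Reboot alarm action described above; the instance ID and Region are assumptions.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="reboot-on-instance-check-failure",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_Instance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # assumed instance ID
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # Built-in EC2 alarm action: reboot the instance when the alarm fires.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:reboot"],
)
```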
30. A mobile chat application uses Amazon DynamoDB as its database *0/1
service to provide low latency chat updates. A new developer has joined
the team and is reviewing the configuration settings for Amazon
DynamoDB which have been tweaked for certain technical requirements.
AWS CloudTrail service has been enabled on all the resources used for
the project. Yet, Amazon DynamoDB encryption details are nowhere to be
found.

Which of the following options can explain the root cause for the given
issue?

By default, all Amazon DynamoDB tables are encrypted using Data keys, which do
not write to AWS CloudTrail logs

By default, all Amazon DynamoDB tables are encrypted under AWS managed
Keys, which do not write to AWS CloudTrail logs

By default, all Amazon DynamoDB tables are encrypted under Customer managed
keys, which do not write to AWS CloudTrail logs

By default, all Amazon DynamoDB tables are encrypted using AWS owned keys,
which do not write to AWS CloudTrail logs

Correct answer

By default, all Amazon DynamoDB tables are encrypted using AWS owned keys,
which do not write to AWS CloudTrail logs

Feedback

By default, all Amazon DynamoDB tables are encrypted under AWS managed Keys, which
do not write to AWS CloudTrail logs

By default, all Amazon DynamoDB tables are encrypted under Customer managed keys,
which do not write to AWS CloudTrail logs

By default, all Amazon DynamoDB tables are encrypted using Data keys, which do not
write to AWS CloudTrail logs

These three options contradict the explanation provided above, so these options are
incorrect.

References:

https://docs.aws.amazon.com/kms/latest/developerguide/services-dynamodb.html

https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys
31. An application hosted on Amazon EC2 contains sensitive personal *1/1
information about all its customers and needs to be protected from all
types of cyber-attacks. The company is considering using the AWS Web
Application Firewall (AWS WAF) to handle this requirement.

Can you identify the correct solution leveraging the capabilities of AWS
WAF?

AWS WAF can be directly configured on Amazon EC2 instances for ensuring the
security of the underlying application data

AWS WAF can be directly configured only on an Application Load Balancer or an
Amazon API Gateway. One of these two services can then be configured with
Amazon EC2 to build the needed secure architecture

Create Amazon CloudFront distribution for the application on Amazon EC2
instances. Deploy AWS WAF on Amazon CloudFront to provide the necessary
safety measures

Configure an Application Load Balancer (ALB) to balance the workload for all the
Amazon EC2 instances. Configure Amazon CloudFront to distribute from an
Application Load Balancer since AWS WAF cannot be directly configured on ALB.
This configuration not only provides necessary safety but is scalable too

Feedback

Create Amazon CloudFront distribution for the application on Amazon EC2 instances.
Deploy AWS WAF on Amazon CloudFront to provide the necessary safety measures

When you use AWS WAF with Amazon CloudFront, you can protect your applications
running on any HTTP webserver, whether it's a webserver that's running in Amazon Elastic
Compute Cloud (Amazon EC2) or a web server that you manage privately. You can also
configure Amazon CloudFront to require HTTPS between CloudFront and your own
webserver, as well as between viewers and Amazon CloudFront.

AWS WAF is tightly integrated with Amazon CloudFront and the Application Load Balancer
(ALB), services that AWS customers commonly use to deliver content for their websites
and applications. When you use AWS WAF on Amazon CloudFront, your rules run in all
AWS Edge Locations, located around the world close to your end-users. This means
security doesn’t come at the expense of performance. Blocked requests are stopped
before they reach your web servers. When you use AWS WAF on Application Load
Balancer, your rules run in the region and can be used to protect internet-facing as well as
internal load balancers.
32. The engineering team at a retail company manages 3 Amazon EC2 *1/1
instances that make read-heavy database requests to the Amazon RDS
for the PostgreSQL database instance. As an AWS Certified Solutions
Architect - Associate, you have been tasked to make the database
instance resilient from a disaster recovery perspective.

Which of the following features will help you in disaster recovery of the
database? (Select two)

Use cross-Region Read Replicas

Use Amazon RDS Provisioned IOPS (SSD) Storage in place of General Purpose
(SSD) Storage

Enable the automated backup feature of Amazon RDS in a multi-AZ deployment
that creates backups in a single AWS Region

Use the database cloning feature of the Amazon RDS Database cluster

Enable the automated backup feature of Amazon RDS in a multi-AZ deployment
that creates backups across multiple Regions

Feedback

Use cross-Region Read Replicas

In addition to using Read Replicas to reduce the load on your source database instance,
you can also use Read Replicas to implement a DR solution for your production DB
environment. If the source DB instance fails, you can promote your Read Replica to a
standalone source server. Read Replicas can also be created in a different Region than the
source database. Using a cross-Region Read Replica can help ensure that you get back up
and running if you experience a regional availability issue.

Enable the automated backup feature of Amazon RDS in a multi-AZ deployment that
creates backups across multiple Regions

Amazon RDS provides high availability and failover support for database instances using
Multi-AZ deployments. Amazon RDS uses several different technologies to provide failover
support. Multi-AZ deployments for MariaDB, MySQL, Oracle, and PostgreSQL DB instances
use Amazon's failover technology.

The automated backup feature of Amazon RDS enables point-in-time recovery for your
database instance. Amazon RDS will back up your database and transaction logs and
store both for a user-specified retention period. If it’s a Multi-AZ configuration, backups
occur on standby to reduce the I/O impact on the primary. Amazon RDS supports Cross-
Region Automated Backups. Manual snapshots and Read Replicas are also supported
across multiple Regions.
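
A minimal boto3 sketch of creating a cross-Region Read Replica for disaster recovery is shown below; the instance identifiers, account ID, Regions, and instance class are assumptions.

```python
import boto3

# Create the replica in the DR Region (us-west-2 in this sketch).
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-west",
    # Cross-Region replicas reference the source instance by its full ARN.
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:orders-db",
    DBInstanceClass="db.r5.large",
    SourceRegion="us-east-1",   # lets boto3 presign the cross-Region request
)
```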
33. A pharma company is working on developing a vaccine for the COVID- *0/1
19 virus. The researchers at the company want to process the reference
healthcare data in a highly available as well as HIPAA compliant in-
memory database that supports caching results of SQL queries.

As a solutions architect, which of the following AWS services would you recommend for
this task?

Amazon DynamoDB Accelerator (DAX)

Amazon DynamoDB

Amazon ElastiCache for Redis/Memcached

Amazon DocumentDB

Correct answer

Amazon ElastiCache for Redis/Memcached

Feedback

Amazon DynamoDB Accelerator (DAX) - Amazon DynamoDB is a key-value and document
database that delivers single-digit millisecond performance at any scale. It's a fully
managed, multi-region, multi-master, durable database with built-in security, backup and
restore, and in-memory caching for internet-scale applications. DAX is a DynamoDB-
compatible caching service that enables you to benefit from fast in-memory performance
for demanding applications. DAX does not support SQL query caching.

Amazon DynamoDB - Amazon DynamoDB is a key-value and document database that
delivers single-digit millisecond performance at any scale. It's a fully managed, multi-
region, multi-master, durable database with built-in security, backup and restore, and in-
memory caching (via DAX) for internet-scale applications. Amazon DynamoDB is not an in-
memory database, so this option is incorrect.

Amazon DocumentDB - Amazon DocumentDB is a fast, scalable, highly available, and fully
managed document database service that supports MongoDB workloads. As a document
database, Amazon DocumentDB makes it easy to store, query, and index JSON data.
Amazon DocumentDB is not an in-memory database, so this option is incorrect.

References:

https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-elasticache-for-redis-is-now-hipaa-eligible-to-help-you-power-secure-healthcare-applications-with-sub-millisecond-latency/

https://aws.amazon.com/elasticache/redis/

https://aws.amazon.com/about-aws/whats-new/2022/08/amazon-elasticache-memcached-hipaa-eligible/

https://aws.amazon.com/blogs/database/automating-sql-caching-for-amazon-elasticache-and-amazon-rds/

34. As a Solutions Architect, you have been hired to work with the *1/1
engineering team at a company to create a REST API using the serverless
architecture.

Which of the following solutions will you recommend to move the company to the
serverless architecture?

AWS Fargate with AWS Lambda at the front

Amazon Route 53 with Amazon EC2 as backend

Public-facing Application Load Balancer with Amazon Elastic Container Service
(Amazon ECS) on Amazon EC2

Amazon API Gateway exposing AWS Lambda Functionality

Feedback

Amazon API Gateway exposing AWS Lambda Functionality

Amazon API Gateway is a fully managed service that makes it easy for developers to
create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front
door" for applications to access data, business logic, or functionality from your backend
services.

AWS Lambda lets you run code without provisioning or managing servers. You pay only for
the compute time you consume.

Amazon API Gateway can expose AWS Lambda functionality through RESTful APIs. Both
are serverless options offered by AWS and hence the right choice for this scenario,
considering all the functionality they offer.
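
As an illustration of the pattern, a minimal AWS Lambda handler that could sit behind an Amazon API Gateway REST API; the payload shape assumes Lambda proxy integration is enabled, and the parameter name is made up for the example.

```python
import json

def lambda_handler(event, context):
    # With proxy integration, query string parameters arrive in the event payload.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```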
35. A multi-national company is looking at optimizing their AWS *1/1
resources across various countries and regions. They want to understand
the best practices on cost optimization, performance, and security for
their system architecture spanning across multiple business units.

Which AWS service is the best fit for their requirements?

AWS Trusted Advisor

AWS Config

AWS Systems Manager

AWS Management Console

Feedback

AWS Trusted Advisor

AWS Trusted Advisor is an online tool that draws upon best practices learned from AWS’s
aggregated operational history of serving hundreds of thousands of AWS customers. AWS
Trusted Advisor inspects your AWS environment and makes recommendations for saving
money, improving system performance, or closing security gaps. It scans your AWS
infrastructure and compares it to AWS Best practices in five categories (Cost Optimization,
Performance, Security, Fault Tolerance, Service limits) and then provides
recommendations.
36. A retail company maintains an AWS Direct Connect connection to *0/1
AWS and has recently migrated its data warehouse to AWS. The data
analysts at the company query the data warehouse using a visualization
tool. The average size of a query returned by the data warehouse is 60
megabytes and the query responses returned by the data warehouse are
not cached in the visualization tool. Each webpage returned by the
visualization tool is approximately 600 kilobytes.

Which of the following options offers the LOWEST data transfer egress
cost for the company?

Deploy the visualization tool on-premises. Query the data warehouse directly
over an AWS Direct Connect connection at a location in the same AWS region

Deploy the visualization tool in the same AWS region as the data warehouse.
Access the visualization tool over a Direct Connect connection at a location in the
same region

Deploy the visualization tool on-premises. Query the data warehouse over the
internet at a location in the same AWS region

Deploy the visualization tool in the same AWS region as the data warehouse.
Access the visualization tool over the internet at a location in the same region

Correct answer

Deploy the visualization tool in the same AWS region as the data warehouse.
Access the visualization tool over a Direct Connect connection at a location in the
same region

Feedback

Deploy the visualization tool in the same AWS region as the data warehouse. Access the
visualization tool over the internet at a location in the same region

Deploy the visualization tool on-premises. Query the data warehouse over the internet at a
location in the same AWS region

Data transfer pricing over AWS Direct Connect is lower than data transfer pricing over the
internet, so both of these options are incorrect.

Deploy the visualization tool on-premises. Query the data warehouse directly over an AWS
Direct Connect connection at a location in the same AWS region - If you deploy the
visualization tool on-premises, then you pay data transfer out (DTO) charges on the 60
megabytes of query response returned by the data warehouse, instead of only on the 600
kilobytes of each webpage returned by the visualization tool. So this option is incorrect.

References:

https://aws.amazon.com/directconnect/pricing/

https://aws.amazon.com/getting-started/hands-on/connect-data-center-to-aws/services-costs/

https://aws.amazon.com/directconnect/faqs/
37. A media company wants to get out of the business of owning and *0/1
maintaining its own IT infrastructure. As part of this digital
transformation, the media company wants to archive about 5 petabytes
of data in its on-premises data center to durable long term storage.

As a solutions architect, what is your recommendation to migrate this data in the MOST
cost-optimal way?

Setup AWS direct connect between the on-premises data center and AWS
Cloud. Use this connection to transfer the data into Amazon S3 Glacier

Setup AWS Site-to-Site VPN connection between the on-premises data center and
AWS Cloud. Use this connection to transfer the data into Amazon S3 Glacier

Transfer the on-premises data into multiple AWS Snowball Edge Storage Optimized
devices. Copy the AWS Snowball Edge data into Amazon S3 Glacier

Transfer the on-premises data into multiple AWS Snowball Edge Storage Optimized
devices. Copy the AWS Snowball Edge data into Amazon S3 and create a lifecycle
policy to transition the data into Amazon S3 Glacier

Correct answer

Transfer the on-premises data into multiple AWS Snowball Edge Storage Optimized
devices. Copy the AWS Snowball Edge data into Amazon S3 and create a lifecycle
policy to transition the data into Amazon S3 Glacier

Feedback

Transfer the on-premises data into multiple AWS Snowball Edge Storage Optimized
devices. Copy the AWS Snowball Edge data into Amazon S3 Glacier - As mentioned earlier,
you can't directly copy data from AWS Snowball Edge devices into Amazon S3 Glacier.
Hence, this option is incorrect.

Setup AWS direct connect between the on-premises data center and AWS Cloud. Use this
connection to transfer the data into Amazon S3 Glacier - AWS Direct Connect lets you
establish a dedicated network connection between your network and one of the AWS
Direct Connect locations. Using industry-standard 802.1q VLANs, this dedicated
connection can be partitioned into multiple virtual interfaces. Direct Connect involves
significant monetary investment and takes more than a month to set up, therefore it's not
the correct fit for this use-case where just a one-time data transfer has to be done.

Setup AWS Site-to-Site VPN connection between the on-premises data center and AWS
Cloud. Use this connection to transfer the data into Amazon S3 Glacier - AWS Site-to-Site
VPN enables you to securely connect your on-premises network or branch office site to
your Amazon Virtual Private Cloud (Amazon VPC). VPN Connections are a good solution if
you have an immediate need, and have low to modest bandwidth requirements. Because
of the high data volume for the given use-case, Site-to-Site VPN is not the correct choice.
Reference:

https://aws.amazon.com/snowball/
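
For illustration, a minimal boto3 sketch of the lifecycle policy from the correct option, assuming the Snowball Edge data has already been imported into a bucket (the bucket name is a placeholder).

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="media-archive-bucket",   # assumed bucket holding the imported data
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply the rule to every object
                "Transitions": [
                    {"Days": 0, "StorageClass": "GLACIER"}   # transition to S3 Glacier
                ],
            }
        ]
    },
)
```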
38. A pharmaceutical company is considering moving to AWS Cloud to *0/1
accelerate the research and development process. Most of the daily
workflows would be centered around running batch jobs on Amazon EC2
instances with storage on Amazon Elastic Block Store (Amazon EBS)
volumes. The CTO is concerned about meeting HIPAA compliance norms
for sensitive data stored on Amazon EBS.

Which of the following options outline the correct capabilities of an encrypted Amazon EBS
volume? (Select three)

Any snapshot created from the volume is encrypted

Any snapshot created from the volume is NOT encrypted

Data moving between the volume and the instance is encrypted

Data at rest inside the volume is NOT encrypted

Data moving between the volume and the instance is NOT encrypted

Data at rest inside the volume is encrypted

Correct answer

Any snapshot created from the volume is encrypted

Data moving between the volume and the instance is encrypted

Data at rest inside the volume is encrypted

Feedback

Data moving between the volume and the instance is NOT encrypted

Any snapshot created from the volume is NOT encrypted

Data at rest inside the volume is NOT encrypted

These three options are incorrect: when an Amazon EBS volume is encrypted, data at rest
inside the volume, data moving between the volume and the instance, and all snapshots
created from the volume (as well as volumes created from those snapshots) are encrypted.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
39. A junior developer is learning to build websites using HTML, CSS, and *0/1
JavaScript. He has created a static website and then deployed it on
Amazon S3. Now he can't seem to figure out the endpoint for his super
cool website.

As a solutions architect, can you help him figure out the allowed formats
for the Amazon S3 website endpoints? (Select two)

http://bucket-name.s3-website.Region.amazonaws.com

http://bucket-name.Region.s3-website.amazonaws.com

http://bucket-name.s3-website-Region.amazonaws.com

http://s3-website-Region.bucket-name.amazonaws.com

http://s3-website.Region.bucket-name.amazonaws.com

Correct answer

http://bucket-name.s3-website.Region.amazonaws.com

http://bucket-name.s3-website-Region.amazonaws.com

Feedback

http://s3-website-Region.bucket-name.amazonaws.com

http://s3-website.Region.bucket-name.amazonaws.com

http://bucket-name.Region.s3-website.amazonaws.com

These three options do not meet the specifications for the Amazon S3 website endpoints
format, so these are incorrect.
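
For context, a minimal boto3 sketch of enabling static website hosting (the bucket name and Region are assumptions); the resulting website endpoint then follows one of the two allowed formats above, depending on the Region.

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

s3.put_bucket_website(
    Bucket="my-static-site-bucket",   # assumed bucket configured for public website access
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Depending on the Region, the endpoint uses a dash or a dot before the Region name:
#   http://bucket-name.s3-website-Region.amazonaws.com
#   http://bucket-name.s3-website.Region.amazonaws.com
```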
40. A cyber security company is running a mission critical application *1/1
using a single Spread placement group of Amazon EC2 instances. The
company needs 15 Amazon EC2 instances for optimal performance.

How many Availability Zones (AZs) will the company need to deploy
these Amazon EC2 instances per the given use-case?

3

14

15

Feedback

When you launch a new Amazon EC2 instance, the EC2 service attempts to place the
instance in such a way that all of your instances are spread out across underlying
hardware to minimize correlated failures. You can use placement groups to influence the
placement of a group of interdependent instances to meet the needs of your workload.
Depending on the type of workload, you can create a placement group using one of the
following placement strategies:

Cluster placement group

Partition placement group

Spread placement group.

A Spread placement group is a group of instances that are each placed on distinct racks,
with each rack having its own network and power source.

Spread placement groups are recommended for applications that have a small number of
critical instances that should be kept separate from each other. Launching instances in a
spread placement group reduces the risk of simultaneous failures that might occur when
instances share the same racks.

A spread placement group can span multiple Availability Zones in the same Region. You
can have a maximum of seven running instances per Availability Zone per group.
Therefore, to deploy 15 Amazon EC2 instances in a single Spread placement group, the
company needs to use 3 Availability Zones.
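
A minimal boto3 sketch of the placement described above; the AMI ID, subnet IDs, and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="critical-spread-pg", Strategy="spread")

# A spread placement group allows at most 7 running instances per AZ,
# so 15 instances need at least 3 Availability Zones (e.g. 7 + 7 + 1).
for subnet_id, count in [("subnet-az1", 7), ("subnet-az2", 7), ("subnet-az3", 1)]:
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="m5.large",
        MinCount=count,
        MaxCount=count,
        SubnetId=subnet_id,                # each subnet lives in a different AZ
        Placement={"GroupName": "critical-spread-pg"},
    )
```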
41. A company wants to store business-critical data on Amazon Elastic *1/1
Block Store (Amazon EBS) volumes which provide persistent storage
independent of Amazon EC2 instances. During a test run, the
development team found that on terminating an Amazon EC2 instance,
the attached Amazon EBS volume was also lost, which was contrary to
their assumptions.

As a solutions architect, could you explain this issue?

The Amazon EBS volumes were not backed up on Amazon S3 storage, resulting in
the loss of volume

The Amazon EBS volumes were not backed up on Amazon EFS file system storage,
resulting in the loss of volume

On termination of an Amazon EC2 instance, all the attached Amazon EBS volumes
are always terminated

The Amazon EBS volume was configured as the root volume of Amazon EC2
instance. On termination of the instance, the default behavior is to also
terminate the attached root volume

Feedback

The Amazon EBS volume was configured as the root volume of Amazon EC2 instance. On
termination of the instance, the default behavior is to also terminate the attached root
volume

Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage
service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput
and transaction-intensive workloads at any scale.

When you launch an instance, the root device volume contains the image used to boot the
instance. You can choose between AMIs backed by Amazon EC2 instance store and AMIs
backed by Amazon EBS.

By default, the root volume for an AMI backed by Amazon EBS is deleted when the
instance terminates. You can change the default behavior to ensure that the volume
persists after the instance terminates. Non-root EBS volumes remain available even after
you terminate an instance to which the volumes were attached. Therefore, this option is
correct.
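
A minimal boto3 sketch of overriding that default for a running instance; the instance ID and root device name are assumptions (many Linux AMIs expose the root volume as /dev/xvda).

```python
import boto3

ec2 = boto3.client("ec2")

# Keep the root EBS volume when the instance is terminated.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",   # assumed instance ID
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",   # assumed root device name
            "Ebs": {"DeleteOnTermination": False},
        }
    ],
)
```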
42. While troubleshooting, a cloud architect realized that the Amazon EC2 *0/1
instance is unable to connect to the internet using the Internet Gateway.

Which conditions should be met for internet connectivity to be established? (Select two)

The route table in the instance’s subnet should have a route to an Internet
Gateway

The instance's subnet is associated with multiple route tables with conflicting
configurations

The subnet has been configured to be public and has no access to the internet

The network access control list (network ACL) associated with the subnet must
have rules to allow inbound and outbound traffic

The instance's subnet is not associated with any route table

Correct answer

The route table in the instance’s subnet should have a route to an Internet Gateway

The network access control list (network ACL) associated with the subnet must
have rules to allow inbound and outbound traffic

Feedback

The instance's subnet is not associated with any route table - This is an incorrect
statement. A subnet is implicitly associated with the main route table if it is not explicitly
associated with a particular route table. So, a subnet is always associated with some
route table.

The instance's subnet is associated with multiple route tables with conflicting
configurations - This is an incorrect statement. A subnet can only be associated with one
route table at a time.

The subnet has been configured to be public and has no access to the internet - This is an
incorrect statement. Public subnets have access to the internet via Internet Gateway.

Reference:

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html
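
For illustration, the route-table side of the fix as a minimal boto3 sketch (the route table and internet gateway IDs are placeholders); the subnet's network ACL must separately allow the relevant inbound and outbound traffic.

```python
import boto3

ec2 = boto3.client("ec2")

# Send all non-local traffic from the subnet's route table to the internet gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",   # placeholder route table ID
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId="igw-0123456789abcdef0",      # placeholder internet gateway ID
)
```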
43. A solutions architect has been tasked to design a low-latency solution *0/1
for a static, single-page application, accessed by users through a custom
domain name. The solution must be serverless, provide in-transit data
encryption and needs to be cost-effective.

Which AWS services can be combined to build the simplest possible solution for the
company's requirement?

Host the application on Amazon EC2 instance with instance store volume for high
performance and low latency access to users

Host the application on AWS Fargate and front it with Elastic Load Balancing for an
improved performance

Configure Amazon S3 to store the static data and use AWS Fargate for hosting
the application

Use Amazon S3 to host the static website and Amazon CloudFront to distribute the
content for low latency access

Correct answer

Use Amazon S3 to host the static website and Amazon CloudFront to distribute the
content for low latency access

Feedback

Host the application on Amazon EC2 instance with instance store volume for high
performance and low latency access to users - Since the use case speaks about a
serverless solution, Amazon EC2 cannot be the answer, since Amazon EC2 is not
serverless.

Host the application on AWS Fargate and front it with Elastic Load Balancing for an
improved performance - AWS Fargate is a serverless compute engine for containers that
works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes
Service (EKS). Elastic Load Balancing can spread the incoming requests across a fleet of
Amazon EC2 instances. This added complexity is not needed since we are looking at a
static single-page webpage.

Configure Amazon S3 to store the static data and use AWS Fargate for hosting the
application - AWS Fargate is overkill for hosting a static single-page webpage.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
44. An e-commerce company uses a two-tier architecture with *0/1
application servers in the public subnet and an Amazon RDS MySQL DB
in a private subnet. The development team can use a bastion host in the
public subnet to access the MySQL database and run queries from the
bastion host. However, end-users are reporting application errors. Upon
inspecting application logs, the team notices several "could not connect
to server: connection timed out" error messages.

Which of the following options represent the root cause for this issue?

The security group configuration for the database instance does not have the
correct rules to allow inbound connections from the application servers

The database user credentials (username and password) configured for the
application are incorrect

The security group configuration for the application servers does not have the
correct rules to allow inbound connections from the database instance

The database user credentials (username and password) configured for the
application do not have the required privilege for the given database

Correct answer

The security group configuration for the database instance does not have the
correct rules to allow inbound connections from the application servers

Feedback

The security group configuration for the application servers does not have the correct
rules to allow inbound connections from the database instance - As mentioned in the
explanation above, the application servers don't need inbound connections from the
database instance, rather the database instance needs the correct inbound rule with
application servers' security group as the source.

The database user credentials (username and password) configured for the application
are incorrect

The database user credentials (username and password) configured for the application do
not have the required privilege for the given database

These two options have been added as a distractor since the error mentions a "connection
timeout" issue rather than an "access denied" error.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html

https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect/
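
A minimal boto3 sketch of the missing rule, assuming a MySQL listener on port 3306 and placeholder security group IDs for the database and application tiers.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow MySQL traffic into the database security group only from instances
# that belong to the application servers' security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db00db00db00db00",   # placeholder: database security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-0a110a110a110a110"}],  # placeholder: app tier SG
        }
    ],
)
```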
45. A financial services firm uses a high-frequency trading system and *0/1
wants to write the log files into Amazon S3. The system will also read
these log files in parallel on a near real-time basis. The engineering team
wants to address any data discrepancies that might arise when the
trading system overwrites an existing log file and then tries to read that
specific log file.

Which of the following options BEST describes the capabilities of Amazon S3 relevant to
this scenario?

A process replaces an existing object and immediately tries to read it. Until the
change is fully propagated, Amazon S3 might return the previous data

A process replaces an existing object and immediately tries to read it. Until the
change is fully propagated, Amazon S3 might return the new data

A process replaces an existing object and immediately tries to read it. Until the
change is fully propagated, Amazon S3 does not return any data

A process replaces an existing object and immediately tries to read it. Amazon S3
always returns the latest version of the object

Correct answer

A process replaces an existing object and immediately tries to read it. Amazon S3
always returns the latest version of the object

Feedback

A process replaces an existing object and immediately tries to read it. Until the change is
fully propagated, Amazon S3 might return the previous data

A process replaces an existing object and immediately tries to read it. Until the change is
fully propagated, Amazon S3 does not return any data

A process replaces an existing object and immediately tries to read it. Until the change is
fully propagated, Amazon S3 might return the new data

These three options contradict the earlier details provided in the explanation.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction.html#ConsistencyModel

https://aws.amazon.com/s3/faqs/
46. A financial services company has to retain the activity logs for each *1/1
of their customers to meet compliance guidelines. Depending on the
business line, the company wants to retain the logs for 5-10 years in
highly available and durable storage on AWS. The overall data size is
expected to be in Petabytes. In case of an audit, the data would need to
be accessible within a timeframe of up to 48 hours.

Which AWS storage option is the MOST cost-effective for the given
compliance requirements?

Third party tape storage

Amazon S3 Glacier Deep Archive

Amazon S3 Glacier

Amazon S3 Standard storage

Feedback

Amazon S3 Glacier Deep Archive

Amazon S3 Glacier and Amazon S3 Glacier Deep Archive are secure, durable, and
extremely low-cost Amazon S3 cloud storage classes for data archiving and long-term
backup. They are designed to deliver 99.999999999% durability, and provide
comprehensive security and compliance capabilities that can help meet even the most
stringent regulatory requirements.

Amazon S3 Glacier Deep Archive is a new Amazon S3 storage class that provides secure
and durable object storage for long-term retention of data that is accessed once or twice
in a year. From just $0.00099 per GB-month (less than one-tenth of one cent, or about $1
per TB-month), Amazon S3 Glacier Deep Archive offers the lowest cost storage in the
cloud, at prices significantly lower than storing and maintaining data in on-premises
magnetic tape libraries or archiving data off-site.

Amazon S3 Glacier Deep Archive is up to 75% less expensive than Amazon S3 Glacier and
provides retrieval within 12 hours using the Standard retrieval speed. You may also reduce
retrieval costs by selecting Bulk retrieval, which will return data within 48 hours.

Therefore, Amazon S3 Glacier Deep Archive is the correct choice.
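
For illustration, requesting a Bulk retrieval from the archive could look like the boto3 sketch below; the bucket, key, and retention days are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Bulk retrievals from S3 Glacier Deep Archive typically complete within 48 hours.
s3.restore_object(
    Bucket="compliance-activity-logs",             # assumed bucket
    Key="2021/customer-123/activity.log.gz",       # assumed object key
    RestoreRequest={
        "Days": 7,                                 # keep the restored copy for 7 days
        "GlacierJobParameters": {"Tier": "Bulk"},  # lowest-cost retrieval tier
    },
)
```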


47. A company hires experienced specialists to analyze the customer *0/1
service calls attended by its call center representatives. Now, the
company wants to move to AWS Cloud and is looking at an automated
solution to analyze customer service calls for sentiment analysis via ad-
hoc SQL queries.

As a Solutions Architect, which of the following solutions would you recommend?

Use Amazon Kinesis Data Streams to read the audio files and Amazon Alexa to
convert them into text. Amazon Kinesis Data Analytics can be used to analyze
these files and Amazon Quicksight can be used to visualize and display the output

Use Amazon Transcribe to convert audio files to text and Amazon Athena to
perform SQL based analysis to understand the underlying customer sentiments

Use Amazon Kinesis Data Streams to read the audio files and machine learning
(ML) algorithms to convert the audio files into text and run customer sentiment
analysis

Use Amazon Transcribe to convert audio files to text and Amazon Quicksight to
perform SQL based analysis on these text files to understand the underlying
patterns. Visualize and display them onto user Dashboards for reporting
purposes

Correct answer

Use Amazon Transcribe to convert audio files to text and Amazon Athena to
perform SQL based analysis to understand the underlying customer sentiments

Feedback

Use Amazon Kinesis Data Streams to read the audio files and machine learning (ML)
algorithms to convert the audio files into text and run customer sentiment analysis -
Amazon Kinesis can be used to stream real-time data for further analysis and storage.
Kinesis Data Streams cannot read audio files. You will still need to use Amazon Transcribe
for automatic speech recognition (ASR).

Use Amazon Kinesis Data Streams to read the audio files and Amazon Alexa to convert
them into text. Amazon Kinesis Data Analytics can be used to analyze these files and
Amazon Quicksight can be used to visualize and display the output - Amazon Kinesis Data
Streams cannot read audio files. Amazon Alexa cannot be used as an Automatic Speech
Recognition (ASR) service, though Alexa internally uses ASR for its working.

Use Amazon Transcribe to convert audio files to text and Amazon Quicksight to perform
SQL based analysis on these text files to understand the underlying patterns. Visualize and
display them onto user Dashboards for reporting purposes - Amazon Quicksight is used
for the visual representation of data through dashboards. However, it is not an SQL query
based analysis tool like Amazon Athena. So, this option is incorrect.

References:

https://aws.amazon.com/blogs/machine-learning/automating-the-analysis-of-multi-speaker-audio-files-using-amazon-transcribe-and-amazon-athena

https://aws.amazon.com/athena
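
A rough boto3 sketch of the two steps in the recommended pipeline; the bucket names, job name, and the Athena database/table (which would be defined separately, for example with an AWS Glue crawler) are assumptions.

```python
import boto3

transcribe = boto3.client("transcribe")
athena = boto3.client("athena")

# 1. Convert a recorded call to text with Amazon Transcribe.
transcribe.start_transcription_job(
    TranscriptionJobName="call-20240101-0001",
    Media={"MediaFileUri": "s3://call-recordings-bucket/2024/call-0001.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
    OutputBucketName="call-transcripts-bucket",
)

# 2. Run an ad-hoc SQL query over the transcripts with Amazon Athena.
athena.start_query_execution(
    QueryString="SELECT call_id, transcript FROM call_transcripts LIMIT 10",
    QueryExecutionContext={"Database": "call_center"},
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},
)
```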
48. A global media company uses a fleet of Amazon EC2 instances *1/1
(behind an Application Load Balancer) to power its video streaming
application. To improve the performance of the application, the
engineering team has also created an Amazon CloudFront distribution
with the Application Load Balancer as the custom origin. The security
team at the company has noticed a spike in the number and types of SQL
injection and cross-site scripting attack vectors on the application.

As a solutions architect, which of the following solutions would you recommend as the
MOST effective in countering these malicious attacks?

Use AWS Firewall Manager with CloudFront distribution

Use AWS Security Hub with Amazon CloudFront distribution

Use AWS Web Application Firewall (AWS WAF) with Amazon CloudFront
distribution

Use Amazon Route 53 with Amazon CloudFront distribution

Feedback

Use AWS Web Application Firewall (AWS WAF) with Amazon CloudFront distribution

AWS WAF is a web application firewall that helps protect your web applications or APIs
against common web exploits that may affect availability, compromise security, or
consume excessive resources. AWS WAF gives you control over how traffic reaches your
applications by enabling you to create security rules that block common attack patterns,
such as SQL injection or cross-site scripting, and rules that filter out specific traffic
patterns you define.

A web access control list (web ACL) gives you fine-grained control over the web requests
that your Amazon CloudFront distribution, Amazon API Gateway API, or Application Load
Balancer responds to.

When you create a web ACL, you can specify one or more Amazon CloudFront
distributions that you want AWS WAF to inspect. AWS WAF starts to allow, block, or count
web requests for those distributions based on the conditions that you identify in the web
ACL. Therefore, combining AWS WAF with Amazon CloudFront can prevent SQL injection
and cross-site scripting attacks. So this is the correct option.
49. You have built an application that is deployed with Elastic Load *0/1
Balancing and an Auto Scaling Group. As a Solutions Architect, you have
configured aggressive Amazon CloudWatch alarms, making your Auto
Scaling Group (ASG) scale in and out very quickly, renewing your fleet of
Amazon EC2 instances on a daily basis. A production bug appeared two
days ago, but the team is unable to SSH into the instance to debug the
issue, because the instance has already been terminated by the Auto
Scaling Group. The log files are saved on the Amazon EC2 instance.

How will you resolve the issue and make sure it doesn't happen again?

Install an Amazon CloudWatch Logs agents on the Amazon EC2 instances to send
logs to Amazon CloudWatch

Make a snapshot of the Amazon EC2 instance just before it gets terminated

Use AWS Lambda to regularly SSH into the Amazon EC2 instances and copy the log
files to Amazon S3

Disable the Termination from the Auto Scaling Group any time a user reports an
issue

Correct answer

Install an Amazon CloudWatch Logs agents on the Amazon EC2 instances to send
logs to Amazon CloudWatch

Feedback

Disable the Termination from the Auto Scaling Group any time a user reports an issue -
Disabling termination in the Auto Scaling group would prevent the group from being elastic
and would increase costs. Therefore this option is incorrect.

Make a snapshot of the Amazon EC2 instance just before it gets terminated - Making a
snapshot of the Amazon EC2 instance before it gets terminated could work but it's
tedious, not elastic and very expensive, since our interest is just the log files. Therefore
this option is not the best fit for the given use-case.

You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-
time snapshots. Snapshots are incremental backups, which means that only the blocks on
the device that have changed after your most recent snapshot are saved. This minimizes
the time required to create the snapshot and saves on storage costs by not duplicating
data.

Use AWS Lambda to regularly SSH into the Amazon EC2 instances and copy the log files
to Amazon S3 - AWS Lambda lets you run code without provisioning or managing servers.
However, using AWS Lambda to SSH into instances and copy log files would be brittle and
hard to maintain, and it is not a production-grade approach to log collection. Therefore this
option is not the best fit for the given use-case.

Reference:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html

50. A company is looking for a technology that allows its mobile app *1/1
users to connect through a Google login and have the capability to turn
on AWS Multi-Factor Authentication (AWS MFA) to have maximum
security. Ideally, the solution should be fully managed by AWS.

Which technology do you recommend for managing the users' accounts?

AWS Identity and Access Management (AWS IAM)

Enable the AWS Google Login Service

Amazon Cognito

Write an AWS Lambda function with Auth0 3rd party integration

Feedback

Amazon Cognito

Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and
mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports
sign-in with social identity providers, such as Facebook, Google, and Amazon, and
enterprise identity providers via SAML 2.0. Here Cognito is the best technology choice for
managing mobile user accounts.
51. A medical devices company uses Amazon S3 buckets to store critical *1/1
data. Hundreds of buckets are used to keep the data segregated and well
organized. Recently, the development team noticed that the lifecycle
policies on the Amazon S3 buckets have not been applied optimally,
resulting in higher costs.

As a Solutions Architect, can you recommend a solution to reduce storage costs on
Amazon S3 while keeping the IT team's involvement to a minimum?

Use Amazon S3 Intelligent-Tiering storage class to optimize the Amazon S3 storage costs

Use Amazon S3 Outposts storage class to reduce the costs on Amazon S3 storage
by storing the data on-premises

Use Amazon S3 One Zone-Infrequent Access to reduce the costs on Amazon S3 storage

Configure Amazon EFS to provide a fast, cost-effective and sharable storage service

Feedback

Use Amazon S3 Intelligent-Tiering storage class to optimize the Amazon S3 storage costs

The Amazon S3 Intelligent-Tiering storage class is designed to optimize costs by
automatically moving data to the most cost-effective access tier, without performance
impact or operational overhead. It works by storing objects in two access tiers: one tier
that is optimized for frequent access and another lower-cost tier that is optimized for
infrequent access.

For a small monthly monitoring and automation fee per object, Amazon S3 monitors
access patterns of the objects in Amazon S3 Intelligent-Tiering and moves the ones that
have not been accessed for 30 consecutive days to the infrequent access tier. If an object
in the infrequent access tier is accessed, it is automatically moved back to the frequent
access tier. There are no retrieval fees when using the Amazon S3 Intelligent-Tiering
storage class, and no additional tiering fees when objects are moved between access
tiers. It is the ideal storage class for long-lived data with access patterns that are unknown
or unpredictable.

Amazon S3 Storage Classes can be configured at the object level and a single bucket can
contain objects stored in Amazon S3 Standard, Amazon S3 Intelligent-Tiering, Amazon S3
Standard-IA, and Amazon S3 One Zone-IA. You can upload objects directly to Amazon S3
Intelligent-Tiering, or use S3 Lifecycle policies to transfer objects from Amazon S3
Standard and Amazon S3 Standard-IA to Amazon S3 Intelligent-Tiering. You can also
archive objects from Amazon S3 Intelligent-Tiering to Amazon S3 Glacier.
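
For illustration, new objects can be written straight to the Intelligent-Tiering storage class; the bucket and key below are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="medical-device-data",                 # placeholder bucket
    Key="telemetry/device-001/2024-01-01.json",   # placeholder key
    Body=b'{"reading": 42}',
    StorageClass="INTELLIGENT_TIERING",           # S3 manages the access tier automatically
)
```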
52. A streaming solutions company is building a video streaming product *0/1
by using an Application Load Balancer (ALB) that routes the requests to
the underlying Amazon EC2 instances. The engineering team has noticed
a peculiar pattern. The Application Load Balancer removes an instance
from its pool of healthy instances whenever it is detected as unhealthy
but the Auto Scaling group fails to kick-in and provision the replacement
instance.

What could explain this anomaly?

The Auto Scaling group is using Amazon EC2 based health check and the
Application Load Balancer is using ALB based health check

Both the Auto Scaling group and Application Load Balancer are using Amazon EC2
based health check

The Auto Scaling group is using ALB based health check and the Application
Load Balancer is using Amazon EC2 based health check

Both the Auto Scaling group and Application Load Balancer are using ALB based
health check

Correct answer

The Auto Scaling group is using Amazon EC2 based health check and the
Application Load Balancer is using ALB based health check

Feedback

The Auto Scaling group is using ALB based health check and the Application Load
Balancer is using Amazon EC2 based health check - Application Load Balancer cannot use
EC2 based health checks, so this option is incorrect.

Both the Auto Scaling group and Application Load Balancer are using ALB based health
check - It is recommended to use ALB based health checks for both Auto Scaling group
and Application Load Balancer. If both the Auto Scaling group and Application Load
Balancer use ALB based health checks, then you will be able to avoid the scenario
mentioned in the question.

Both the Auto Scaling group and Application Load Balancer are using Amazon EC2 based
health check - Application Load Balancer cannot use EC2 based health checks, so this
option is incorrect.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-health-checks.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/health-checks-overview.html

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-elb-healthcheck.html
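
For reference, switching the Auto Scaling group to the load balancer's health checks is a one-line change in boto3 (the group name is an assumption).

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="video-streaming-asg",   # assumed group name
    HealthCheckType="ELB",          # honor the ALB target group health checks
    HealthCheckGracePeriod=300,     # give new instances time to warm up
)
```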

53. A DevOps engineer at an organization is debugging issues related to *0/1
an Amazon EC2 instance. The engineer has SSH'ed into the instance and
he needs to retrieve the instance public IP from within a shell script
running on the instance command line.

Can you identify the correct URL path to get the instance public IP?

http://254.169.254.169/latest/meta-data/public-ipv4

http://169.254.169.254/latest/meta-data/public-ipv4

http://169.254.169.254/latest/user-data/public-ipv4

http://254.169.254.169/latest/user-data/public-ipv4

Correct answer

http://169.254.169.254/latest/meta-data/public-ipv4

Feedback

http://169.254.169.254/latest/user-data/public-ipv4

http://254.169.254.169/latest/meta-data/public-ipv4

http://254.169.254.169/latest/user-data/public-ipv4

These three options do not meet the specification for the URL path to get the instance
public IP, so these are incorrect.

Reference:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-add-user-data.html
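
Run from the instance itself, a minimal Python sketch using IMDSv2 (session tokens) against the fixed instance-metadata address; only the standard library is used.

```python
import urllib.request

BASE = "http://169.254.169.254/latest"

# IMDSv2: request a short-lived session token first.
token_req = urllib.request.Request(
    f"{BASE}/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()

# Use the token to read the instance's public IPv4 address.
ip_req = urllib.request.Request(
    f"{BASE}/meta-data/public-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(ip_req).read().decode())
```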
54. An e-commerce company uses Amazon Simple Queue Service *1/1
(Amazon SQS) queues to decouple their application architecture. The
engineering team has observed message processing failures for some
customer orders.

As a solutions architect, which of the following solutions would you recommend for
handling such message failures?

Use long polling to handle message processing failures

Use short polling to handle message processing failures

Use a temporary queue to handle message processing failures

Use a dead-letter queue to handle message processing failures

Feedback

Use a dead-letter queue to handle message processing failures

Dead-letter queues can be used by other queues (source queues) as a target for messages
that can't be processed (consumed) successfully. Dead-letter queues are useful for
debugging your application or messaging system because they let you isolate problematic
messages to determine why their processing doesn't succeed. Sometimes, messages
can’t be processed because of a variety of possible issues, such as when a user
comments on a story but it remains unprocessed because the original story itself is
deleted by the author while the comments were being posted. In such a case, the dead-
letter queue can be used to handle message processing failures.
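
A minimal boto3 sketch of attaching a dead-letter queue to the source queue; the queue names and maxReceiveCount are assumptions.

```python
import json
import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

orders_url = sqs.create_queue(QueueName="orders")["QueueUrl"]

# After 5 failed receives, SQS moves the message to the dead-letter queue.
sqs.set_queue_attributes(
    QueueUrl=orders_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```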
55. An application with global users across AWS Regions had suffered an *1/1
issue when the Elastic Load Balancing (ELB) in a Region malfunctioned
thereby taking down the traffic with it. The manual intervention cost the
company significant time and resulted in major revenue loss.

What should a solutions architect recommend to reduce internet latency and add automatic
failover across AWS Regions?

Set up AWS Direct Connect as the backbone for each of the AWS Regions where
the application is deployed

Create Amazon S3 buckets in different AWS Regions and configure Amazon CloudFront to
pick the nearest edge location to the user

Set up an Amazon Route 53 geoproximity routing policy to route traffic

Set up AWS Global Accelerator and add endpoints to cater to users in different
geographic locations

Feedback

Set up AWS Global Accelerator and add endpoints to cater to users in different geographic
locations

As your application architecture grows, so does the complexity, with longer user-facing IP
lists and more nuanced traffic routing logic. AWS Global Accelerator solves this by
providing you with two static IPs that are anycast from our globally distributed edge
locations, giving you a single entry point to your application, regardless of how many AWS
Regions it’s deployed in. This allows you to add or remove origins, Availability Zones or
Regions without reducing your application availability. Your traffic routing is managed
manually, or in console with endpoint traffic dials and weights. If your application endpoint
has a failure or availability issue, AWS Global Accelerator will automatically redirect your
new connections to a healthy endpoint within seconds.

By using AWS Global Accelerator, you can:

1. Associate the static IP addresses provided by AWS Global Accelerator to regional AWS
resources or endpoints, such as Network Load Balancers, Application Load Balancers, EC2
Instances, and Elastic IP addresses. The IP addresses are anycast from AWS edge
locations so they provide onboarding to the AWS global network close to your users.

2. Easily move endpoints between Availability Zones or AWS Regions without needing to
update your DNS configuration or change client-facing applications.

3. Dial traffic up or down for a specific AWS Region by configuring a traffic dial percentage
for your endpoint groups. This is especially useful for testing performance and releasing
updates.
4. Control the proportion of traffic directed to each endpoint within an endpoint group by
assigning weights across the endpoints.

56. A company wants to ensure high availability for its Amazon RDS *1/1
database. The development team wants to opt for Multi-AZ deployment
and they would like to understand what happens when the primary
instance of the Multi-AZ configuration goes down.

As a Solutions Architect, which of the following will you identify as the outcome of the
scenario?

An email will be sent to the System Administrator asking for manual intervention

The application will be down until the primary database has recovered itself

The CNAME record will be updated to point to the standby database

The URL to access the database will change to the standby database

Feedback

The CNAME record will be updated to point to the standby database

Amazon RDS provides high availability and failover support for DB instances using Multi-
AZ deployments. Amazon RDS uses several different technologies to provide failover
support. Multi-AZ deployments for MariaDB, MySQL, Oracle, and PostgreSQL DB instances
use Amazon's failover technology. SQL Server DB instances use SQL Server Database
Mirroring (DBM) or Always On Availability Groups (AGs).

In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a
synchronous standby replica in a different Availability Zone. The primary DB instance is
synchronously replicated across Availability Zones to a standby replica to provide data
redundancy, eliminate I/O freezes, and minimize latency spikes during system backups.
Running a DB instance with high availability can enhance availability during planned
system maintenance, and help protect your databases against DB instance failure and
Availability Zone disruption.

Failover is automatically handled by Amazon RDS so that you can resume database
operations as quickly as possible without administrative intervention. When failing over,
Amazon RDS simply flips the canonical name record (CNAME) for your DB instance to
point at the standby, which is in turn promoted to become the new primary. Multi-AZ
means the URL is the same, the failover is automated, and the CNAME will automatically
be updated to point to the standby database.
57. A development team has deployed a microservice to the Amazon *1/1
Elastic Container Service (Amazon ECS). The application layer is in a
Docker container that provides both static and dynamic content through
an Application Load Balancer. With increasing load, the Amazon ECS
cluster is experiencing higher network usage. The development team has
looked into the network usage and found that 90% of it is due to
distributing static content of the application.

As a Solutions Architect, what do you recommend to improve the application's network
usage and decrease costs?

Distribute the dynamic content through Amazon S3

Distribute the static content through Amazon S3

Distribute the dynamic content through Amazon EFS

Distribute the static content through Amazon EFS

Feedback

Distribute the static content through Amazon S3

You can use Amazon S3 to host a static website. On a static website, individual web pages
include static content. They might also contain client-side scripts. To host a static website
on Amazon S3, you configure an Amazon S3 bucket for website hosting and then upload
your website content to the bucket. When you configure a bucket as a static website, you
must enable website hosting, set permissions, and create and add an index document.
Depending on your website requirements, you can also configure redirects, web traffic
logging, and a custom error document.

Distributing the static content through Amazon S3 allows us to offload most of the
network usage to Amazon S3 and free up our applications running on Amazon ECS.
58. A company needs a massive PostgreSQL database and the *1/1
engineering team would like to retain control over managing the patches,
version upgrades for the database, and consistent performance with high
IOPS. The team wants to install the database on an Amazon EC2
instance with the optimal storage type on the attached Amazon EBS
volume.

As a solutions architect, which of the following configurations would you suggest to the
engineering team?

Amazon EC2 with Amazon EBS volume of Throughput Optimized HDD (st1) type

Amazon EC2 with Amazon EBS volume of General Purpose SSD (gp2) type

Amazon EC2 with Amazon EBS volume of cold HDD (sc1) type

Amazon EC2 with Amazon EBS volume of Provisioned IOPS SSD (io1) type

Feedback

Amazon EC2 with Amazon EBS volume of Provisioned IOPS SSD (io1) type

Amazon EBS provides the following volume types, which differ in performance
characteristics and price so that you can tailor your storage performance and cost to the
needs of your applications.

The volume types fall into two categories:

SSD-backed volumes, optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS

HDD-backed volumes, optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS

The Provisioned IOPS SSD type supports critical business applications that require sustained IOPS performance, or more than 16,000 IOPS or 250 MiB/s of throughput per volume. Examples are large database workloads such as MongoDB, Cassandra, Microsoft SQL Server, MySQL, PostgreSQL, and Oracle.

Therefore, Amazon EC2 with Amazon EBS volume of Provisioned IOPS SSD (io1) type is
the right fit for the given use-case.
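
For illustration only, the sketch below creates and attaches an io1 volume with boto3; the Availability Zone, size, IOPS figure, and instance ID are placeholders rather than values taken from the question.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a Provisioned IOPS SSD (io1) volume sized for the PostgreSQL data directory.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",   # must match the EC2 instance's AZ
    Size=1000,                       # GiB
    VolumeType="io1",
    Iops=20000,                      # sustained IOPS well beyond the gp2 baseline
)

# Wait until the volume is ready, then attach it to the database instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance running PostgreSQL
    Device="/dev/sdf",
)
```
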
59. You are a cloud architect at an IT company. The company has *0/1
multiple enterprise customers that manage their own mobile applications
that capture and send data to Amazon Kinesis Data Streams. They have
been getting a ProvisionedThroughputExceededException exception. You
have been contacted to help and upon analysis, you notice that
messages are being sent one by one at a high rate.

Which of the following options will help with the exception while keeping
costs at a minimum?

Decrease the Stream retention duration

Use batch messages

Increase the number of shards

Use Exponential Backoff

Correct answer

Use batch messages

Feedback

Use Exponential Backoff - While this may help in the short term, as soon as the request
rate increases, you will see the ProvisionedThroughputExceededException exception
again.

Increase the number of shards - Increasing the number of shards could be a short-term fix but will substantially increase the cost, so this option is ruled out.

Decrease the Stream retention duration - This operation may result in data loss and won't
help with the exceptions, so this option is incorrect.

References:

https://aws.amazon.com/blogs/big-data/implementing-efficient-and-reliable-producers-
with-the-amazon-kinesis-producer-library/

https://aws.amazon.com/kinesis/data-streams/
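
For completeness, here is a minimal boto3 sketch of the recommended batching approach using the PutRecords API, which accepts up to 500 records per request instead of one API call per message; the stream name and the device_id partition key field are assumptions for the example.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def send_batch(stream_name, messages):
    """Send up to 500 messages in a single PutRecords call instead of one call per message."""
    records = [
        {
            "Data": json.dumps(msg).encode("utf-8"),
            "PartitionKey": str(msg["device_id"]),   # hypothetical ordering key
        }
        for msg in messages
    ]
    response = kinesis.put_records(StreamName=stream_name, Records=records)
    # Throttled records are reported per batch and can be retried selectively.
    return response["FailedRecordCount"]

failed = send_batch("mobile-app-stream", [{"device_id": 1, "reading": 42}])
print(f"{failed} record(s) still need to be retried")
```
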
60. The engineering team at an e-commerce company uses an AWS *0/1
Lambda function to write the order data into an Amazon Aurora cluster with a single
DB instance. The team has noticed that many order-writes to
its Aurora cluster are getting missed during peak load times. The
diagnostics data has revealed that the database is experiencing high CPU
and memory consumption during traffic spikes. The team also wants to
enhance the availability of the Aurora DB.

Which of the following steps would you combine to address the given
scenario? (Select two)

Use Amazon EC2 instances behind an Application Load Balancer to write the order
data into Amazon Aurora cluster

Increase the concurrency of the AWS Lambda function so that the order-writes do
not get missed during traffic spikes

Create a standby Aurora instance in another Availability Zone to improve the availability as the standby can serve as a failover target

Create a replica Aurora instance in another Availability Zone to improve the availability as the replica can serve as a failover target

Handle all read operations for your application by connecting to the reader
endpoint of the Amazon Aurora cluster so that Aurora can spread the load for
read-only connections across the Aurora replica

Correct answer

Create a replica Aurora instance in another Availability Zone to improve the availability as the replica can serve as a failover target

Handle all read operations for your application by connecting to the reader endpoint
of the Amazon Aurora cluster so that Aurora can spread the load for read-only
connections across the Aurora replica

Feedback

Create a standby Aurora instance in another Availability Zone to improve the availability as
the standby can serve as a failover target - There are no standby instances in Aurora.
Aurora performs an automatic failover to a read replica when a problem is detected. So
this option is incorrect.

Increase the concurrency of the AWS Lambda function so that the order-writes do not get
missed during traffic spikes - Increasing the concurrency of the AWS Lambda function
would not resolve the issue since the bottleneck is at the database layer, as exhibited by
the high CPU and memory consumption for the Aurora instance. This option has been
added as a distractor.

Use Amazon EC2 instances behind an Application Load Balancer to write the order data
into Amazon Aurora cluster - Using Amazon EC2 instances behind an Application Load
Balancer would not resolve the issue since the bottleneck is at the database layer, as
exhibited by the high CPU and memory consumption for the Aurora instance. This option
has been added as a distractor.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.h
tml

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHig
hAvailability.html

https://aws.amazon.com/rds/features/read-replicas/
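
The two correct options work together: an Aurora Replica in a second Availability Zone acts as the failover target, and routing reads to the cluster's reader endpoint takes load off the writer. The boto3 sketch below illustrates both steps; the cluster identifier, instance class, and Availability Zone are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Add an Aurora Replica in a different AZ; it serves reads and is the failover target.
rds.create_db_instance(
    DBInstanceIdentifier="orders-aurora-replica-1",
    DBClusterIdentifier="orders-aurora-cluster",   # existing cluster (hypothetical name)
    Engine="aurora-mysql",
    DBInstanceClass="db.r5.large",
    AvailabilityZone="us-east-1b",                 # different AZ from the writer
)

# The application writes to the cluster (writer) endpoint and sends read-only
# queries to the reader endpoint, which spreads load across the replicas.
cluster = rds.describe_db_clusters(DBClusterIdentifier="orders-aurora-cluster")["DBClusters"][0]
print("Writer endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])
```
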
61. A DevOps engineer at an IT company was recently added to the *1/1
admin group of the company's AWS account.
The AdministratorAccess managed policy is attached to this group.
Can you identify the AWS tasks that the DevOps engineer CANNOT
perform even though he has full Administrator privileges (Select two)?

Change the password for his own IAM user account

Delete the IAM user for his manager

Configure an Amazon S3 bucket to enable AWS Multi-Factor Authentication (AWS MFA) delete

Delete an Amazon S3 bucket from the production environment

Close the company's AWS account

Feedback

Configure an Amazon S3 bucket to enable AWS Multi-Factor Authentication (AWS MFA) delete

Close the company's AWS account

An IAM user with full administrator access can perform almost all AWS tasks except a few tasks designated only for the root account user. Some of the AWS tasks that only a root account user can do are as follows: change the account name, root password, or root email address; change the AWS support plan; close the AWS account; enable AWS Multi-Factor Authentication (AWS MFA) delete on an Amazon S3 bucket; create a CloudFront key pair; and register for GovCloud. Even though the DevOps engineer is part of the admin group, he cannot configure an Amazon S3 bucket to enable AWS MFA delete or close the company's AWS account.
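
As an illustrative sketch of the first point: the MFA delete setting is configured through PutBucketVersioning, but the call only succeeds when signed with the root user's credentials and the root account's MFA device. All credential values, the bucket name, and the MFA serial/token below are placeholders.

```python
import boto3

# Must be a session built from ROOT user credentials; an IAM administrator's
# credentials are rejected when enabling MFA delete (placeholder values shown).
session = boto3.Session(
    aws_access_key_id="ROOT_ACCESS_KEY_ID",
    aws_secret_access_key="ROOT_SECRET_ACCESS_KEY",
)
s3 = session.client("s3")

s3.put_bucket_versioning(
    Bucket="production-audit-logs",   # hypothetical bucket
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",  # serial + current code
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```
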
62. A Customer relationship management (CRM) application is facing *0/1
user experience issues with users reporting frequent sign-in requests
from the application. The application is currently hosted on multiple
Amazon EC2 instances behind an Application Load Balancer. The
engineering team has identified the root cause as unhealthy servers
causing session data to be lost. The team would like to implement a
distributed in-memory cache-based session management solution.

As a solutions architect, which of the following solutions would you recommend?

Use Amazon RDS for distributed in-memory cache based session management

Use Amazon DynamoDB for distributed in-memory cache based session management

Use Amazon ElastiCache for distributed in-memory cache based session management

Use Application Load Balancer sticky sessions

Correct answer

Use Amazon ElastiCache for distributed in-memory cache based session management

Feedback

Use Amazon RDS for distributed in-memory cache based session management - Amazon
Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a
relational database in the cloud. It cannot be used as a distributed in-memory cache for
session management, hence this option is incorrect.

Use Amazon DynamoDB for distributed in-memory cache based session management -
Amazon DynamoDB is a key-value and document database that delivers single-digit
millisecond performance at any scale. Amazon DynamoDB is a NoSQL database and is
not the right fit for a distributed in-memory cache-based session management solution.

Use Application Load Balancer sticky sessions - Although sticky sessions ensure that each user interacts with one server and one server only, if that server becomes unhealthy, all of its session data is lost as well. Therefore, Amazon ElastiCache-powered distributed in-memory cache-based session management is a better solution.

References:

https://aws.amazon.com/getting-started/hands-on/building-fast-session-caching-with-
amazon-elasticache-for-redis/
https://aws.amazon.com/elasticache/
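
To show what the recommended approach looks like in application code, here is a minimal session-store sketch using the redis-py client against an ElastiCache for Redis endpoint; the endpoint hostname, key naming, and TTL are assumptions for the example.

```python
import json
import redis

# Shared, distributed in-memory session store on ElastiCache for Redis.
cache = redis.Redis(
    host="my-sessions.abc123.0001.use1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
)

SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes

def save_session(session_id, data):
    # Any EC2 instance behind the ALB can write the session.
    cache.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id):
    # Any other instance can read it back, so an unhealthy server no longer
    # forces the user to sign in again.
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

save_session("abc-123", {"user_id": 42, "signed_in": True})
print(load_session("abc-123"))
```
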
63. A startup has created a cost-effective backup solution in another *0/1
AWS Region. The application runs in warm standby mode and is fronted by an
Application Load Balancer (ALB). The current
failover process is manual and requires updating the DNS alias record to
point to the secondary Application Load Balancer in another Region in
case of failure of the primary Application Load Balancer.

As a Solutions Architect, what will you recommend to automate the failover process?

Configure AWS Trusted Advisor to check on unhealthy instances

Enable an Amazon Route 53 health check

Enable an Amazon EC2 instance health check

Enable an ALB health check

Correct answer

Enable an Amazon Route 53 health check

Feedback

Enable an ALB health check - An ELB health check verifies that a specified TCP port on an instance is accepting connections or that a specified page returns an HTTP 200 status code. It cannot update DNS records to redirect traffic to another Region, so it is not useful for the given failover scenario.

Enable an Amazon EC2 instance health check - Instance status checks monitor the
software and network configuration of your instance. It is not intelligent enough to
understand if the application on the instance is working correctly. Hence, this is not the
right choice for the given use-case.

Configure AWS Trusted Advisor to check on unhealthy instances - AWS Trusted Advisor
examines the health check configuration for Auto Scaling groups. If Elastic Load Balancing
is being used for an Auto Scaling group, the recommended configuration is to enable an
Elastic Load Balancing health check. AWS Trusted Advisor recommends certain
configuration changes by comparing your system configurations to AWS Best practices. It
cannot handle a failover the way Amazon Route 53 does.

References:

https://aws.amazon.com/blogs/aws/amazon-route-53-elb-integration-dns-failover/

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-system-instance-
status-check.html

https://docs.aws.amazon.com/elasticloadbalancing/latest/application/target-group-
health-checks.html
https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-
checklist/
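
Enable an Amazon Route 53 health check - with DNS failover routing, Route 53 monitors the primary ALB and automatically answers queries with the secondary record when the health check fails, removing the manual DNS update. The boto3 sketch below shows the primary side of such a setup; the domain name, hosted zone IDs, and ALB DNS names are placeholders, and a matching SECONDARY record (not shown) would point at the ALB in the other Region.

```python
import boto3

route53 = boto3.client("route53")

# Health check against the primary ALB's health endpoint.
health_check = route53.create_health_check(
    CallerReference="primary-alb-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary-alb-123.us-east-1.elb.amazonaws.com",
        "Port": 443,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

# PRIMARY failover alias record; Route 53 serves the SECONDARY record whenever
# this health check reports the primary ALB as unhealthy.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                       # hosted zone of app.example.com
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",
                "HealthCheckId": health_check["HealthCheck"]["Id"],
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ALB's own hosted zone ID (region-specific)
                    "DNSName": "primary-alb-123.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        }],
    },
)
```
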
64. A financial services company runs its flagship web application on *0/1
AWS. The application serves thousands of users during peak hours. The
company needs a scalable near-real-time solution to share hundreds of
thousands of financial transactions with multiple internal applications.
The solution should also remove sensitive details from the transactions
before storing the cleansed transactions in a document database for low-
latency retrieval.

As an AWS Certified Solutions Architect Associate, which of the following would you recommend?

Batch process the raw transactions data into Amazon S3 flat files. Use S3 events to
trigger an AWS Lambda function to remove sensitive data from the raw
transactions in the flat file and then store the cleansed transactions in Amazon
DynamoDB. Leverage DynamoDB Streams to share the transactions data with the
internal applications

Feed the streaming transactions into Amazon Kinesis Data Firehose. Leverage AWS
Lambda integration to remove sensitive data from every transaction and then store
the cleansed transactions in Amazon DynamoDB. The internal applications can
consume the raw transactions off the Amazon Kinesis Data Firehose

Feed the streaming transactions into Amazon Kinesis Data Streams. Leverage AWS
Lambda integration to remove sensitive data from every transaction and then store
the cleansed transactions in Amazon DynamoDB. The internal applications can
consume the raw transactions off the Amazon Kinesis Data Stream

Persist the raw transactions into Amazon DynamoDB. Configure a rule in Amazon DynamoDB to update the transaction by removing sensitive data whenever any new raw transaction is written. Leverage Amazon DynamoDB Streams to share the transactions data with the internal applications

Correct answer

Feed the streaming transactions into Amazon Kinesis Data Streams. Leverage AWS
Lambda integration to remove sensitive data from every transaction and then store
the cleansed transactions in Amazon DynamoDB. The internal applications can
consume the raw transactions off the Amazon Kinesis Data Stream

Feedback

Batch process the raw transactions data into Amazon S3 flat files. Use S3 events to trigger
an AWS Lambda function to remove sensitive data from the raw transactions in the flat file
and then store the cleansed transactions in Amazon DynamoDB. Leverage DynamoDB
Streams to share the transactions data with the internal applications - The use case
requires a near-real-time solution for cleansing, processing and storing the transactions,
so using a batch process would be incorrect.

Feed the streaming transactions into Amazon Kinesis Data Firehose. Leverage AWS
Lambda integration to remove sensitive data from every transaction and then store the
cleansed transactions in Amazon DynamoDB. The internal applications can consume the
raw transactions off the Amazon Kinesis Data Firehose - Amazon Kinesis Data Firehose is
an extract, transform, and load (ETL) service that reliably captures, transforms, and
delivers streaming data to data lakes, data stores, and analytics services. You cannot set up multiple consumers for an Amazon Kinesis Data Firehose delivery stream, as it delivers data to only a single destination at a time, so this option is incorrect.

Persist the raw transactions into Amazon DynamoDB. Configure a rule in Amazon
DynamoDB to update the transaction by removing sensitive data whenever any new raw
transaction is written. Leverage Amazon DynamoDB Streams to share the transactions
data with the internal applications - There is no such rule within Amazon DynamoDB that
can auto-update every time a new item is written in a DynamoDB table. You would need to
use an Amazon DynamoDB trigger to invoke an external service like an AWS Lambda function on
every new write, which can then cleanse and update the item. In addition, this process
introduces inefficiency in the workflow as the same item is written and then updated for
cleansing purposes. Therefore this option is incorrect.
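
The recommended option works because a Kinesis data stream supports multiple consumers reading the same records, while the Lambda integration cleanses each transaction before it lands in DynamoDB. Below is an illustrative Lambda handler for the cleansing step; the table name and sensitive field names are assumptions, not details from the question.

```python
import base64
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("cleansed-transactions")      # hypothetical table name

SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}     # assumed sensitive attributes

def handler(event, context):
    # Lambda receives batches of Kinesis records; each payload is base64-encoded.
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Drop sensitive attributes before persisting for low-latency retrieval.
        cleansed = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
        table.put_item(Item=cleansed)
    return {"processed": len(event["Records"])}
```
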
65. A health-care company manages its web application on Amazon EC2 *1/1
instances running in an Auto Scaling group (ASG). The company
provides ambulances for critical patients and needs the application to be
reliable. The company's workload can be handled by 2 Amazon EC2 instances and can
require up to 6 instances when traffic peaks.

As a Solutions Architect, which of the following configurations would you select as the best fit for these requirements?

The Auto Scaling group should be configured with the minimum capacity set to 2,
with 1 instance each in two different Availability Zones. The maximum capacity of
the Auto Scaling group should be set to 6

The Auto Scaling group should be configured with the minimum capacity set to
4, with 2 instances each in two different Availability Zones. The maximum
capacity of the Auto Scaling group should be set to 6

The Auto Scaling group should be configured with the minimum capacity set to 2
and the maximum capacity set to 6 in a single Availability Zone

The Auto Scaling group should be configured with the minimum capacity set to 4,
with 2 instances each in two different AWS Regions. The maximum capacity of the
Auto Scaling group should be set to 6

Feedback

The Auto Scaling group should be configured with the minimum capacity set to 4, with 2
instances each in two different Availability Zones. The maximum capacity of the Auto
Scaling group should be set to 6

You configure the size of your Auto Scaling group by setting the minimum, maximum, and
desired capacity. The minimum and maximum capacity are required to create an Auto
Scaling group, while the desired capacity is optional. If you do not define your desired
capacity upfront, it defaults to your minimum capacity.

Amazon EC2 Auto Scaling enables you to take advantage of the safety and reliability of
geographic redundancy by spanning Auto Scaling groups across multiple Availability
Zones within a Region. When one Availability Zone becomes unhealthy or unavailable,
Auto Scaling launches new instances in an unaffected Availability Zone. When the
unhealthy Availability Zone returns to a healthy state, Auto Scaling automatically
redistributes the application instances evenly across all of the designated Availability
Zones. Since the application is extremely critical and needs to have a reliable architecture
to support it, the Amazon EC2 instances should be maintained in at least two Availability
Zones (AZs) for uninterrupted service.

Amazon EC2 Auto Scaling attempts to distribute instances evenly between the Availability
Zones that are enabled for your Auto Scaling group. This is why the minimum capacity
should be 4 instances and not 2. The Auto Scaling group will launch 2 instances in each of the two AZs, and this redundancy is needed to keep the service available at all times.
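
For reference, a boto3 sketch of such an Auto Scaling group is shown below; the launch template name and subnet IDs (one per Availability Zone) are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ambulance-web-asg",                 # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-app-template",  # hypothetical template
                    "Version": "$Latest"},
    MinSize=4,            # 2 instances in each of two AZs, so one AZ failure still leaves 2
    MaxSize=6,            # peak capacity
    DesiredCapacity=4,
    VPCZoneIdentifier="subnet-0aaa1111bbb2222cc,subnet-0ddd3333eee4444ff",  # two different AZs
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,
)
```
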
