Course: AWS Certified Solutions Architect Associate Practice Exams

Assessment: AWS CSAA Practice Exam 4


Username: Jaganmohanarao Siddela
e-mail: [email protected]

A company runs a large batch processing job at the end of every quarter. The
processing job runs for 5 days and uses 15 Amazon EC2 instances. The processing
must run uninterrupted for 5 hours per day. The company is investigating ways to
reduce the cost of the batch processing job.

Which pricing model should the company choose?

◉ Reserved Instances
○ Spot Instances
○ On-Demand Instances
○ Dedicated Instances

Correct answer
On-Demand Instances

Feedback

Explanation:

Each EC2 instance runs for 5 hours a day for 5 days per quarter, or 20 days per year.
This time duration is insufficient to warrant Reserved Instances, as these require a
commitment of a minimum of 1 year and the discounts would not outweigh the costs of
having the reservations unused for a large percentage of the time. In this case, there are no
options presented that can reduce the cost and therefore On-Demand Instances should
be used.

CORRECT: "On-Demand Instances" is the correct answer.

INCORRECT: "Reserved Instances" is incorrect. Reserved instances are good for


continuously running workloads that run for a period of 1 or 3 years.

INCORRECT: "Spot Instances" is incorrect. Spot instances may be interrupted and this
is not acceptable. Note that Spot Block is deprecated and unavailable to new
customers.

INCORRECT: "Dedicated Instances" is incorrect. These do not provide any cost


advantages and will actually be more expensive.

References:

https://aws.amazon.com/ec2/pricing/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/

A company plans to make an Amazon EC2 Linux instance unavailable outside of
business hours to save costs. The instance is backed by an Amazon EBS volume.
There is a requirement that the contents of the instance’s memory must be
preserved when it is made unavailable.

How can a solutions architect meet these requirements?

○ Stop the instance outside business hours. Start the instance again when required.
◉ Hibernate the instance outside business hours. Start the instance again when required.
○ Use Auto Scaling to scale down the instance outside of business hours. Scale up the
instance when required.
○ Terminate the instance outside business hours. Recover the instance again when required.
Correct answer
Hibernate the instance outside business hours. Start the instance again when required.

Feedback

Explanation:

When you hibernate an instance, Amazon EC2 signals the operating system to perform
hibernation (suspend-to-disk). Hibernation saves the contents from the instance
memory (RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume.
Amazon EC2 persists the instance's EBS root volume and any attached EBS data
volumes. When you start your instance:

The EBS root volume is restored to its previous state
The RAM contents are reloaded
The processes that were previously running on the instance are resumed
Previously attached data volumes are reattached and the instance retains its instance ID
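
As an illustration, hibernation is requested through the standard stop API by setting the Hibernate flag. The following is a minimal boto3 sketch; the instance ID and Region are placeholders, and it assumes the instance was launched with hibernation enabled and an encrypted root volume large enough to hold the RAM contents.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder Region

    # Hibernate the instance outside business hours (suspend-to-disk).
    # Requires an instance launched with HibernationOptions={'Configured': True}.
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)

    # Start it again when required; RAM contents are restored from the EBS root volume.
    ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])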

CORRECT: "Hibernate the instance outside business hours. Start the instance again
when required" is the correct answer.

INCORRECT: "Stop the instance outside business hours. Start the instance again when
required" is incorrect. When an instance is stopped the operating system is shut down
and the contents of memory will be lost.

INCORRECT: "Use Auto Scaling to scale down the instance outside of business hours.
Scale out the instance when required" is incorrect. Auto Scaling scales does not scale
up and down, it scales in by terminating instances and out by launching instances.
When scaling out new instances are launched and no state will be available from
terminated instances.

INCORRECT: "Terminate the instance outside business hours. Recover the instance
again when required" is incorrect. You cannot recover terminated instances, you can
recover instances that have become impaired in some circumstances.
References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/

A company hosts a multiplayer game on AWS. The application uses Amazon EC2
instances in a single Availability Zone and users connect over Layer 4. A solutions
architect has been tasked with making the architecture highly available and also
more cost-effective.

How can the solutions architect best meet these requirements? (Select TWO.)

○ Configure an Auto Scaling group to add or remove instances in the Availability Zone
automatically
○ Increase the number of instances and use smaller EC2 instance types
○ Configure a Network Load Balancer in front of the EC2 instances
◉ Configure an Application Load Balancer in front of the EC2 instances
◉ Configure an Auto Scaling group to add or remove instances in multiple Availability Zones
automatically

Correct answers

Configure a Network Load Balancer in front of the EC2 instances


Configure an Auto Scaling group to add or remove instances in multiple Availability
Zones automatically

Feedback
Explanation:

The solutions architect must enable high availability for the architecture and ensure it is
cost-effective. To enable high availability an Amazon EC2 Auto Scaling group should be
created to add and remove instances across multiple availability zones.

In order to distribute the traffic to the instances the architecture should use a Network
Load Balancer which operates at Layer 4. This architecture will also be cost-effective as
the Auto Scaling group will ensure the right number of instances are running based on
demand.
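
A minimal boto3 sketch of the two correct actions, assuming a launch template, a VPC, and subnets in two Availability Zones already exist (all names, IDs, and the game port below are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")
    autoscaling = boto3.client("autoscaling")

    # Layer 4 load balancer (NLB) across two Availability Zones.
    nlb = elbv2.create_load_balancer(
        Name="game-nlb",
        Type="network",
        Scheme="internet-facing",
        Subnets=["subnet-aaa111", "subnet-bbb222"],  # placeholder subnets in different AZs
    )

    tg = elbv2.create_target_group(
        Name="game-tg",
        Protocol="TCP",
        Port=7777,                       # placeholder game port
        VpcId="vpc-0123456789abcdef0",   # placeholder VPC
        TargetType="instance",
    )

    elbv2.create_listener(
        LoadBalancerArn=nlb["LoadBalancers"][0]["LoadBalancerArn"],
        Protocol="TCP",
        Port=7777,
        DefaultActions=[{"Type": "forward",
                         "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
    )

    # Auto Scaling group spanning multiple AZs, registered with the NLB target group.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="game-asg",
        LaunchTemplate={"LaunchTemplateName": "game-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",
        TargetGroupARNs=[tg["TargetGroups"][0]["TargetGroupArn"]],
    )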

CORRECT: "Configure a Network Load Balancer in front of the EC2 instances" is a


correct answer.

CORRECT: "Configure an Auto Scaling group to add or remove instances in multiple


Availability Zones automatically" is also a correct answer.

INCORRECT: "Increase the number of instances and use smaller EC2 instance types"
is incorrect as this is not the most cost-effective option. Auto Scaling should be used to
maintain the right number of active instances.

INCORRECT: "Configure an Auto Scaling group to add or remove instances in the


Availability Zone automatically" is incorrect as this is not highly available as it’s a single
AZ.

INCORRECT: "Configure an Application Load Balancer in front of the EC2 instances" is


incorrect as an ALB operates at Layer 7 rather than Layer 4.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/

https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/
A company requires a solution to allow customers to customize images that are
stored in an online catalog. The image customization parameters will be sent in
requests to Amazon API Gateway. The customized image will then be generated on-
demand and can be accessed online.

The solutions architect requires a highly available solution. Which solution will be
MOST cost-effective?

○ Use Amazon EC2 instances to manipulate the original images into the requested
customization. Store the original and manipulated images in Amazon S3. Configure an Elastic
Load Balancer in front of the EC2 instances
◉ Use AWS Lambda to manipulate the original images to the requested customization. Store
the original and manipulated images in Amazon S3. Configure an Amazon CloudFront
distribution with the S3 bucket as the origin
○ Use AWS Lambda to manipulate the original images to the requested customization. Store
the original images in Amazon S3 and the manipulated images in Amazon DynamoDB.
Configure an Elastic Load Balancer in front of the Amazon EC2 instances
○ Use Amazon EC2 instances to manipulate the original images into the requested
customization. Store the original images in Amazon S3 and the manipulated images in Amazon
DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin

Correct answer
Use AWS Lambda to manipulate the original images to the requested customization. Store
the original and manipulated images in Amazon S3. Configure an Amazon CloudFront
distribution with the S3 bucket as the origin

Feedback

Explanation:

All solutions presented are highly available. The key requirement that must be satisfied
is that the solution should be cost-effective and you must choose the most cost-effective
option.

Therefore, it’s best to eliminate services such as Amazon EC2 and ELB as these
require ongoing costs even when they’re not used. Instead, a fully serverless solution
should be used. AWS Lambda, Amazon S3 and CloudFront are the best services to use
for these requirements.

CORRECT: "Use AWS Lambda to manipulate the original images to the requested
customization. Store the original and manipulated images in Amazon S3. Configure an
Amazon CloudFront distribution with the S3 bucket as the origin" is the correct answer.

INCORRECT: "Use Amazon EC2 instances to manipulate the original images into the
requested customization. Store the original and manipulated images in Amazon S3.
Configure an Elastic Load Balancer in front of the EC2 instances" is incorrect. This is
not the most cost-effective option as the ELB and EC2 instances will incur costs even
when not used.

INCORRECT: "Use AWS Lambda to manipulate the original images to the requested
customization. Store the original images in Amazon S3 and the manipulated images in
Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2
instances" is incorrect. This is not the most cost-effective option as the ELB will incur
costs even when not used. Also, Amazon DynamoDB will incur RCU/WCUs when
running and is not the best choice for storing images.

INCORRECT: "Use Amazon EC2 instances to manipulate the original images into the
requested customization. Store the original images in Amazon S3 and the manipulated
images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the
S3 bucket as the origin" is incorrect. This is not the most cost-effective option as the
EC2 instances will incur costs even when not used.

References:

https://aws.amazon.com/serverless/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/
https://digitalcloud.training/aws-lambda/

https://digitalcloud.training/amazon-cloudfront/

A company's application is running on Amazon EC2 instances in a single Region. In
the event of a disaster, a solutions architect needs to ensure that the resources can
also be deployed to a second Region.

Which combination of actions should the solutions architect take to accomplish this?
(Select TWO.)

○ Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second
Region
◉ Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region
○ Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the
new instance
◉ Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region
for the destination
○ Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an
EC2 instance in the second Region using that EBS volume

Correct answers

Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second
Region
Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second
Region for the destination

Feedback
Explanation:

You can copy an Amazon Machine Image (AMI) within or across AWS Regions using
the AWS Management Console, the AWS Command Line Interface or SDKs, or the
Amazon EC2 API, all of which support the CopyImage action.

Using the copied AMI, the solutions architect would then be able to launch a new EC2
instance in the second Region.

Note: the AMIs are stored on Amazon S3, however you cannot view them in the S3
management console or work with them programmatically using the S3 API.
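
For example, a minimal boto3 sketch of the CopyImage action followed by a launch in the destination Region; the AMI ID, Regions, and instance type are placeholders:

    import boto3

    # Run the copy from the destination Region, referencing the source Region and AMI.
    ec2_dest = boto3.client("ec2", region_name="us-west-2")   # placeholder destination Region

    copy = ec2_dest.copy_image(
        Name="app-server-dr-copy",
        SourceImageId="ami-0123456789abcdef0",  # placeholder source AMI
        SourceRegion="us-east-1",               # placeholder source Region
    )

    # Once the copied AMI is available, launch an instance from it in the second Region.
    ec2_dest.run_instances(
        ImageId=copy["ImageId"],
        InstanceType="t3.micro",   # placeholder instance type
        MinCount=1,
        MaxCount=1,
    )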

CORRECT: "Copy an Amazon Machine Image (AMI) of an EC2 instance and specify
the second Region for the destination" is a correct answer.

CORRECT: "Launch a new EC2 instance from an Amazon Machine Image (AMI) in the
second Region" is also a correct answer.

INCORRECT: "Detach a volume on an EC2 instance and copy it to an Amazon S3


bucket in the second Region" is incorrect. You cannot copy EBS volumes directly from
EBS to Amazon S3.

INCORRECT: "Launch a new EC2 instance in the second Region and copy a volume
from Amazon S3 to the new instance" is incorrect. You cannot create an EBS volume
directly from Amazon S3.

INCORRECT: "Copy an Amazon Elastic Block Store (Amazon EBS) volume from
Amazon S3 and launch an EC2 instance in the second Region using that EBS volume"
is incorrect. You cannot create an EBS volume directly from Amazon S3.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ebs/
A web application runs in public and private subnets. The application architecture
consists of a web tier and database tier running on Amazon EC2 instances. Both
tiers run in a single Availability Zone (AZ).

Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)

○ Create new public and private subnets in the same AZ for high availability
○ Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning
multiple AZs
○ Add the existing web application instances to an Auto Scaling group behind an Application
Load Balancer (ALB)
○ Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in
one AZ
◉ Create new public and private subnets in the same VPC, each in a new AZ. Migrate the
database to an Amazon RDS multi-AZ deployment

Correct answers

Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB)
spanning multiple AZs
Create new public and private subnets in the same VPC, each in a new AZ. Migrate the
database to an Amazon RDS multi-AZ deployment

Feedback

Explanation:

To add high availability to this architecture both the web tier and database tier require
changes. For the web tier an Auto Scaling group across multiple AZs with an ALB will
ensure there are always instances running and traffic is being distributed to them.
The database tier should be migrated from the EC2 instances to Amazon RDS to take
advantage of a managed database with Multi-AZ functionality. This will ensure that if
there is an issue preventing access to the primary database a secondary database can
take over.

CORRECT: "Create an Amazon EC2 Auto Scaling group and Application Load
Balancer (ALB) spanning multiple AZs" is the correct answer.

CORRECT: "Create new public and private subnets in the same VPC, each in a new
AZ. Migrate the database to an Amazon RDS multi-AZ deployment" is the correct
answer.

INCORRECT: "Create new public and private subnets in the same AZ for high
availability" is incorrect as this would not add high availability.

INCORRECT: "Add the existing web application instances to an Auto Scaling group
behind an Application Load Balancer (ALB)" is incorrect because the existing servers
are in a single subnet. For HA we need to instances in multiple subnets.

INCORRECT: "Create new public and private subnets in a new AZ. Create a database
using Amazon EC2 in one AZ" is incorrect because we also need HA for the database
layer.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-
balancer.html

https://aws.amazon.com/rds/features/multi-az/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/

https://digitalcloud.training/amazon-ec2-auto-scaling/

https://digitalcloud.training/amazon-rds/
A solutions architect is designing the infrastructure to run an application on Amazon
EC2 instances. The application requires high availability and must dynamically scale
based on demand to be cost efficient.

What should the solutions architect do to meet these requirements?

◉ Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances
to multiple Regions
○ Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy
instances to multiple Regions
○ Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances
to multiple Availability Zones
○ Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances
to multiple Availability Zones

Correct answer
Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances
to multiple Availability Zones

Feedback

Explanation:

The Amazon EC2-based application must be highly available and elastically scalable.
Auto Scaling can provide the elasticity by dynamically launching and terminating
instances based on demand. This can take place across availability zones for high
availability.

Incoming connections can be distributed to the instances by using an Application Load Balancer (ALB).

CORRECT: "Configure an Application Load Balancer in front of an Auto Scaling group


to deploy instances to multiple Availability Zones" is the correct answer.
INCORRECT: "Configure an Amazon API Gateway API in front of an Auto Scaling
group to deploy instances to multiple Availability Zones" is incorrect as API gateway is
not used for load balancing connections to Amazon EC2 instances.

INCORRECT: "Configure an Application Load Balancer in front of an Auto Scaling


group to deploy instances to multiple Regions" is incorrect as you cannot launch
instances in multiple Regions from a single Auto Scaling group.

INCORRECT: "Configure an Amazon CloudFront distribution in front of an Auto Scaling


group to deploy instances to multiple Regions" is incorrect as you cannot launch
instances in multiple Regions from a single Auto Scaling group.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-
scaling.html

https://aws.amazon.com/elasticloadbalancing/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2-auto-scaling/

https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/

Amazon EC2 instances in a development environment run between 9am and 5pm
Monday-Friday. Production instances run 24/7. Which pricing models should be
used to optimize cost and ensure capacity is available? (Select TWO.)

○ Use Spot instances for the development environment


○ Use Reserved instances for the development environment
◉ On-demand capacity reservations for the development environment
◉ Use Reserved instances for the production environment
○ Use On-Demand instances for the production environment
Correct answers

On-demand capacity reservations for the development environment


Use Reserved instances for the production environment

Feedback

Explanation:

Capacity reservations have no commitment and can be created and canceled as
needed. This is ideal for the development environment as it will ensure the capacity is
available. There is no price advantage, but none of the other options provide a price
advantage whilst also ensuring capacity is available.

Reserved instances are a good choice for workloads that run continuously. This is a
good option for the production environment.
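
A minimal boto3 sketch of creating an On-Demand Capacity Reservation for the development environment and cancelling it at the end of the working day; the instance type, AZ, and count are placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    # Reserve capacity for the development instances (no billing discount, no commitment).
    reservation = ec2.create_capacity_reservation(
        InstanceType="m5.large",          # placeholder instance type
        InstancePlatform="Linux/UNIX",
        AvailabilityZone="us-east-1a",    # placeholder Availability Zone
        InstanceCount=5,                  # placeholder instance count
        EndDateType="unlimited",
    )

    # Cancel the reservation when the development environment is shut down for the day.
    ec2.cancel_capacity_reservation(
        CapacityReservationId=reservation["CapacityReservation"]["CapacityReservationId"]
    )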

CORRECT: "On-demand capacity reservations for the development environment" is a


correct answer.

CORRECT: "Use Reserved instances for the production environment" is also a correct
answer.

INCORRECT: "Use Spot instances for the development environment" is incorrect. Spot
Instances are a cost-effective choice if you can be flexible about when your applications
run and if your applications can be interrupted. Spot instances are not suitable for the
development environment as important work may be interrupted.

INCORRECT: "Use Reserved instances for the development environment" is incorrect


as they require a long-term commitment which is not ideal for a development
environment.

INCORRECT: "Use On-Demand instances for the production environment" is incorrect.


There is no long-term commitment required when you purchase On-Demand Instances.
However, you do not get any discount and therefore this is the most expensive option.
References:

https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instance-purchasing-
options.html

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-
reservations.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/

An application running on an Amazon ECS container instance using the EC2 launch
type needs permissions to write data to Amazon DynamoDB.

How can you assign these permissions only to the specific ECS task that is running
the application?

○ Create an IAM policy with permissions to DynamoDB and attach it to the container instance
◉ Create an IAM policy with permissions to DynamoDB and assign it to a task using the
taskRoleArn parameter
○ Use a security group to allow outbound connections to DynamoDB and assign it to the
container instance
○ Modify the AmazonECSTaskExecutionRolePolicy policy to add permissions for DynamoDB

Correct answer
Create an IAM policy with permissions to DynamoDB and assign it to a task using the
taskRoleArn parameter

Feedback

Explanation:
To specify permissions for a specific task on Amazon ECS you should use IAM Roles
for Tasks. The permissions policy can be applied to tasks when creating the task
definition, or by using an IAM task role override using the AWS CLI or SDKs. The
taskRoleArn parameter is used to specify the policy.
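
A minimal boto3 sketch of registering a task definition with a task role that grants DynamoDB write access; the role ARN, family name, and container image are placeholders, and the IAM role is assumed to already exist with a DynamoDB write policy attached:

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="app-task",                                                   # placeholder family
        taskRoleArn="arn:aws:iam::123456789012:role/AppDynamoDBWriteRole",   # placeholder role
        containerDefinitions=[
            {
                "name": "app",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",  # placeholder
                "memory": 512,
                "essential": True,
            }
        ],
    )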

CORRECT: "Create an IAM policy with permissions to DynamoDB and assign It to a


task using the taskRoleArn parameter" is the correct answer.

INCORRECT: "Create an IAM policy with permissions to DynamoDB and attach it to the
container instance" is incorrect. You should not apply the permissions to the container
instance as they will then apply to all tasks running on the instance as well as the
instance itself.

INCORRECT: "Use a security group to allow outbound connections to DynamoDB and


assign it to the container instance" is incorrect. Though you will need a security group to
allow outbound connections to DynamoDB, the question is asking how to assign
permissions to write data to DynamoDB and a security group cannot provide those
permissions.

INCORRECT: "Modify the AmazonECSTaskExecutionRolePolicy policy to add


permissions for DynamoDB" is incorrect. The AmazonECSTaskExecutionRolePolicy
policy is the Task Execution IAM Role. This is used by the container agent to be able to
pull container images, write log file etc.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ecs-and-eks/

A legacy tightly-coupled High Performance Computing (HPC) application will be migrated to AWS. Which network adapter type should be used?

○ Elastic Network Interface (ENI)


○ Elastic Network Adapter (ENA)
◉ Elastic Fabric Adapter (EFA)
○ Elastic IP Address

Correct answer
Elastic Fabric Adapter (EFA)

Feedback

Explanation:

An Elastic Fabric Adapter is an AWS Elastic Network Adapter (ENA) with added
capabilities. The EFA lets you apply the scale, flexibility, and elasticity of the AWS
Cloud to tightly-coupled HPC apps. It is ideal for tightly coupled apps as it supports the
Message Passing Interface (MPI).
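
As an illustration, an EFA is attached at launch by setting the network interface type to efa. The following is a minimal boto3 sketch with placeholder IDs; an EFA-supported instance type is required and a cluster placement group is assumed for low latency:

    import boto3

    ec2 = boto3.client("ec2")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # placeholder HPC AMI
        InstanceType="c5n.18xlarge",         # an EFA-supported instance type
        MinCount=1,
        MaxCount=1,
        NetworkInterfaces=[
            {
                "DeviceIndex": 0,
                "SubnetId": "subnet-aaa111",         # placeholder subnet
                "Groups": ["sg-0123456789abcdef0"],  # placeholder security group
                "InterfaceType": "efa",              # attach an Elastic Fabric Adapter
            }
        ],
        Placement={"GroupName": "hpc-cluster-pg"},   # placeholder cluster placement group
    )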

CORRECT: "Elastic Fabric Adapter (EFA)" is the correct answer.

INCORRECT: "Elastic Network Interface (ENI)" is incorrect. The ENI is a basic type of
adapter and is not the best choice for this use case.

INCORRECT: "Elastic Network Adapter (ENA)" is incorrect. The ENA, which provides
Enhanced Networking, does provide high bandwidth and low inter-instance latency but
it does not support the features for a tightly-coupled app that the EFA does.

INCORRECT: "Elastic IP Address" is incorrect. An Elastic IP address is just a static


public IP address, it is not a type of network adapter.

References:

https://aws.amazon.com/blogs/aws/now-available-elastic-fabric-adapter-efa-for-tightly-
coupled-hpc-workloads/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/
A company runs several NFS file servers in an on-premises data center. The NFS
servers must run periodic backups to Amazon S3 using automatic synchronization
for small volumes of data.

Which solution meets these requirements and is MOST cost-effective?

○ Set up AWS Glue to extract the data from the NFS shares and load it into Amazon S3.
◉ Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon
S3.
○ Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to
Amazon S3.
○ Set up an AWS Direct Connect connection between the on-premises data center and AWS
and copy the data to Amazon S3.

Correct answer
Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon
S3.

Feedback

Explanation:

AWS DataSync is an online data transfer service that simplifies, automates, and
accelerates copying large amounts of data between on-premises systems and AWS
Storage services, as well as between AWS Storage services. DataSync can copy data
between Network File System (NFS) shares, or Server Message Block (SMB)
shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service
(Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and
Amazon FSx for Windows File Server file systems.

This is the most cost-effective solution from the answer options available.
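
A minimal boto3 sketch of the DataSync configuration after the on-premises agent has been deployed and activated; the agent ARN, NFS server, bucket, and IAM role are placeholders:

    import boto3

    datasync = boto3.client("datasync")

    # Source: the on-premises NFS share, reached through the DataSync agent.
    nfs_location = datasync.create_location_nfs(
        ServerHostname="nfs1.example.internal",   # placeholder NFS server
        Subdirectory="/exports/backups",          # placeholder export path
        OnPremConfig={"AgentArns": [
            "arn:aws:datasync:us-east-1:123456789012:agent/agent-0123456789abcdef0"  # placeholder
        ]},
    )

    # Destination: the Amazon S3 bucket.
    s3_location = datasync.create_location_s3(
        S3BucketArn="arn:aws:s3:::example-backup-bucket",   # placeholder bucket
        S3Config={"BucketAccessRoleArn":
                  "arn:aws:iam::123456789012:role/DataSyncS3Role"},  # placeholder role
    )

    # Task tying source and destination together; it can then be started on a schedule.
    datasync.create_task(
        SourceLocationArn=nfs_location["LocationArn"],
        DestinationLocationArn=s3_location["LocationArn"],
        Name="nfs-to-s3-backup",
    )
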
CORRECT: "Set up an AWS DataSync agent on the on-premises servers and sync the
data to Amazon S3" is the correct answer.

INCORRECT: "Set up an SFTP sync using AWS Transfer for SFTP to sync data from
on premises to Amazon S3" is incorrect. This solution does not provide the scheduled
synchronization features of AWS DataSync and is more expensive.

INCORRECT: "Set up AWS Glue to extract the data from the NFS shares and load it
into Amazon S3" is incorrect. AWS Glue is an ETL service and cannot be used for
copying data to Amazon S3 from NFS shares.

INCORRECT: "Set up an AWS Direct Connect connection between the on-premises


data center and AWS and copy the data to Amazon S3" is incorrect. An AWS Direct
Connect connection is an expensive option and no solution is provided for automatic
synchronization.

References:

https://aws.amazon.com/datasync/features/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-migration-services/

A company has uploaded some highly critical data to an Amazon S3 bucket.
Management are concerned about data availability and require that steps are taken
to protect the data from accidental deletion. The data should still be accessible, and
a user should be able to delete the data intentionally.

Which combination of steps should a solutions architect take to accomplish this? (Select TWO.)

◉ Enable versioning on the S3 bucket.


◉ Enable MFA Delete on the S3 bucket.
○ Create a bucket policy on the S3 bucket.
○ Enable default encryption on the S3 bucket.
○ Create a lifecycle policy for the objects in the S3 bucket.

Correct answers

Enable versioning on the S3 bucket.


Enable MFA Delete on the S3 bucket.

Feedback

Explanation:

Multi-factor authentication (MFA) delete adds an additional step before an object can be
deleted from a versioning-enabled bucket.

With MFA delete the bucket owner must include the x-amz-mfa request header in
requests to permanently delete an object version or change the versioning state of the
bucket.
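
A minimal boto3 sketch, assuming the request is made with the bucket owner's (root) credentials and a configured MFA device; the bucket name, MFA device serial, and token are placeholders. MFA Delete can only be enabled together with versioning:

    import boto3

    s3 = boto3.client("s3")

    # Enable versioning and MFA Delete in a single call (root credentials required for MFA Delete).
    s3.put_bucket_versioning(
        Bucket="example-critical-data-bucket",   # placeholder bucket
        VersioningConfiguration={
            "Status": "Enabled",
            "MFADelete": "Enabled",
        },
        # Format: "<serial number of the MFA device> <current token code>" (placeholders)
        MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    )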

CORRECT: "Enable versioning on the S3 bucket" is a correct answer.

CORRECT: "Enable MFA Delete on the S3 bucket" is also a correct answer.

INCORRECT: "Create a bucket policy on the S3 bucket" is incorrect. A bucket policy is


not required to enable MFA delete.

INCORRECT: "Enable default encryption on the S3 bucket" is incorrect. Encryption


does not protect against deletion.

INCORRECT: "Create a lifecycle policy for the objects in the S3 bucket" is incorrect. A
lifecycle policy will move data to another storage class but does not protect against
deletion.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html

Save time with our AWS cheat sheets:


https://digitalcloud.training/amazon-s3-and-glacier/

Storage capacity has become an issue for a company that runs application servers
on-premises. The servers are connected to a combination of block storage and NFS
storage solutions. The company requires a solution that supports local caching
without re-architecting its existing applications.

Which combination of changes can the company make to meet these requirements?
(Select TWO.)

◉ Use an AWS Storage Gateway file gateway to replace the NFS storage.
○ Use the mount command on servers to mount Amazon S3 buckets using NFS.
○ Use AWS Direct Connect and mount an Amazon FSx for Windows File Server using iSCSI.
◉ Use an AWS Storage Gateway volume gateway to replace the block storage.
○ Use Amazon Elastic File System (EFS) volumes to replace the block storage.

Correct answers

Use an AWS Storage Gateway file gateway to replace the NFS storage.
Use an AWS Storage Gateway volume gateway to replace the block storage.

Feedback

Explanation:

In this scenario the company should use cloud storage to replace the existing storage
solutions that are running out of capacity. The on-premises servers mount the existing
storage using block protocols (iSCSI) and file protocols (NFS). As there is a
requirement to avoid re-architecting existing applications these protocols must be used
in the revised solution.
The AWS Storage Gateway volume gateway should be used to replace the block-based
storage systems as it is mounted over iSCSI and the file gateway should be used to
replace the NFS file systems as it uses NFS.

CORRECT: "Use an AWS Storage Gateway file gateway to replace the NFS storage" is
a correct answer.

CORRECT: "Use an AWS Storage Gateway volume gateway to replace the block
storage" is a correct answer.

INCORRECT: "Use the mount command on servers to mount Amazon S3 buckets


using NFS" is incorrect. You cannot mount S3 buckets using NFS as it is an object-
based storage system (not file-based) and uses an HTTP REST API.

INCORRECT: "Use AWS Direct Connect and mount an Amazon FSx for Windows File
Server using iSCSI" is incorrect. You cannot mount FSx for Windows File Server file
systems using iSCSI, you must use SMB.

INCORRECT: "Use Amazon Elastic File System (EFS) volumes to replace the block
storage" is incorrect. You cannot use EFS to replace block storage as it uses NFS
rather than iSCSI.

References:

https://docs.aws.amazon.com/storagegateway/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-storage-gateway/

A solutions architect is creating a document submission application for a school. The
application will use an Amazon S3 bucket for storage. The solution must prevent
accidental deletion of the documents and ensure that all versions of the documents
are available. Users must be able to upload and modify the documents.

Which combination of actions should be taken to meet these requirements? (Select
TWO.)

○ Set read-only permissions on the bucket


◉ Enable versioning on the bucket
○ Attach an IAM policy to the bucket
◉ Enable MFA Delete on the bucket
○ Encrypt the bucket using AWS SSE-S3

Correct answers

Enable versioning on the bucket


Enable MFA Delete on the bucket

Feedback

Explanation:

None of the options present a good solution for specifying permissions required to write
and modify objects so that requirement needs to be taken care of separately. The other
requirements are to prevent accidental deletion and to ensure that all versions of the
document are available.

The two solutions for these requirements are versioning and MFA delete. Versioning will
retain a copy of each version of the document and multi-factor authentication delete
(MFA delete) will prevent any accidental deletion as you need to supply a second factor
when attempting a delete.

CORRECT: "Enable versioning on the bucket" is a correct answer.

CORRECT: "Enable MFA Delete on the bucket" is also a correct answer.

INCORRECT: "Set read-only permissions on the bucket" is incorrect as this will also
prevent any writing to the bucket which is not desired.
INCORRECT: "Attach an IAM policy to the bucket" is incorrect as users need to modify
documents which will also allow delete. Therefore, a method must be implemented to
just control deletes.

INCORRECT: "Encrypt the bucket using AWS SSE-S3" is incorrect as encryption


doesn’t stop you from deleting an object.

References:

https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

A solutions architect needs to backup some application log files from an online
ecommerce store to Amazon S3. It is unknown how often the logs will be accessed
or which logs will be accessed the most. The solutions architect must keep costs as
low as possible by using the appropriate S3 storage class.

Which S3 storage class should be implemented to meet these requirements?

○ S3 Glacier
◉ S3 Intelligent-Tiering
○ S3 Standard-Infrequent Access (S3 Standard-IA)
○ S3 One Zone-Infrequent Access (S3 One Zone-IA)

Correct answer
S3 Intelligent-Tiering

Feedback
Explanation:

The S3 Intelligent-Tiering storage class is designed to optimize costs by automatically
moving data to the most cost-effective access tier, without performance impact or
operational overhead.

It works by storing objects in two access tiers: one tier that is optimized for frequent
access and another lower-cost tier that is optimized for infrequent access. This is an
ideal use case for intelligent-tiering as the access patterns for the log files are not
known.
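
A minimal boto3 sketch of uploading a log file directly into the S3 Intelligent-Tiering storage class; the bucket, key, and local file names are placeholders:

    import boto3

    s3 = boto3.client("s3")

    with open("app.log", "rb") as f:   # placeholder local log file
        s3.put_object(
            Bucket="example-log-backup-bucket",   # placeholder bucket
            Key="logs/2024/06/01/app.log",        # placeholder key
            Body=f,
            StorageClass="INTELLIGENT_TIERING",
        )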

CORRECT: "S3 Intelligent-Tiering" is the correct answer.

INCORRECT: "S3 Standard-Infrequent Access (S3 Standard-IA)" is incorrect as if the


data is accessed often retrieval fees could become expensive.

INCORRECT: "S3 One Zone-Infrequent Access (S3 One Zone-IA)" is incorrect as if the
data is accessed often retrieval fees could become expensive.

INCORRECT: "S3 Glacier" is incorrect as if the data is accessed often retrieval fees
could become expensive. Glacier also requires more work in retrieving the data from
the archive and quick access requirements can add further costs.

References:

https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

A team are planning to run analytics jobs on log files each day and require a storage
solution. The size and number of logs is unknown and data will persist for 24 hours
only.
What is the MOST cost-effective solution?

○ Amazon S3 Glacier Deep Archive


◉ Amazon S3 Standard
○ Amazon S3 Intelligent-Tiering
○ Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)

Correct answer
Amazon S3 Standard

Feedback

Explanation:

S3 standard is the best choice in this scenario for a short term storage solution. In this
case the size and number of logs is unknown and it would be difficult to fully assess the
access patterns at this stage. Therefore, using S3 standard is best as it is cost-effective,
provides immediate access, and there are no retrieval fees or minimum capacity charge
per object.
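
Since the logs only need to persist for 24 hours, one option (an assumption here, not something the question requires) is to pair S3 Standard with a lifecycle rule that expires objects after one day. A minimal boto3 sketch with a placeholder bucket name:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-daily-logs-bucket",   # placeholder bucket
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-logs-after-1-day",
                    "Filter": {"Prefix": ""},     # apply to all objects in the bucket
                    "Status": "Enabled",
                    "Expiration": {"Days": 1},    # delete objects roughly a day after creation
                }
            ]
        },
    )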

CORRECT: "Amazon S3 Standard" is the correct answer.

INCORRECT: "Amazon S3 Intelligent-Tiering" is incorrect as there is an additional fee


for using this service and for a short-term requirement it may not be beneficial.

INCORRECT: "Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)" is incorrect


as this storage class has a minimum capacity charge per object (128 KB) and a per GB
retrieval fee.

INCORRECT: "Amazon S3 Glacier Deep Archive" is incorrect as this storage class is


used for archiving data. There are retrieval fees and it take hours to retrieve data from
an archive.

References:

https://aws.amazon.com/s3/storage-classes/
Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/

A solutions architect is designing a new service that will use an Amazon API
Gateway API on the frontend. The service will need to persist data in a backend
database using key-value requests. Initially, the data requirements will be around 1
GB and future growth is unknown. Requests can range from 0 to over 800 requests
per second.

Which combination of AWS services would meet these requirements? (Select TWO.)

○ AWS Fargate
◉ AWS Lambda
◉ Amazon DynamoDB
○ Amazon EC2 Auto Scaling
○ Amazon RDS

Correct answers

AWS Lambda
Amazon DynamoDB

Feedback

Explanation:

In this case AWS Lambda can perform the computation and store the data in an
Amazon DynamoDB table. Lambda can scale concurrent executions to meet demand
easily and DynamoDB is built for key-value data storage requirements and is also
serverless and easily scalable. This is therefore a cost effective solution for
unpredictable workloads.
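
A minimal sketch of a Lambda handler (Python) that persists a key-value item sent through API Gateway into a DynamoDB table; the table name, environment variable, and event shape are assumptions:

    import json
    import os

    import boto3

    # Table name is assumed to be supplied through an environment variable.
    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table(os.environ.get("TABLE_NAME", "example-kv-table"))  # placeholder table

    def handler(event, context):
        # Assumes an API Gateway proxy integration: the key-value pair arrives in the request body.
        body = json.loads(event.get("body") or "{}")
        table.put_item(Item={"pk": body["key"], "value": body["value"]})
        return {"statusCode": 200, "body": json.dumps({"stored": body["key"]})}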

CORRECT: "AWS Lambda" is a correct answer.

CORRECT: "Amazon DynamoDB" is also a correct answer.

INCORRECT: "AWS Fargate" is incorrect as containers run constantly and therefore


incur costs even when no requests are being made.

INCORRECT: "Amazon EC2 Auto Scaling" is incorrect as this uses EC2 instances
which will incur costs even when no requests are being made.

INCORRECT: "Amazon RDS" is incorrect as this is a relational database not a No-SQL


database. It is therefore not suitable for key-value data storage requirements.

References:

https://aws.amazon.com/lambda/features/

https://aws.amazon.com/dynamodb/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

https://digitalcloud.training/amazon-dynamodb/

A company runs a web application that serves weather updates. The application
runs on a fleet of Amazon EC2 instances in a Multi-AZ Auto scaling group behind an
Application Load Balancer (ALB). The instances store data in an Amazon Aurora
database. A solutions architect needs to make the application more resilient to
sporadic increases in request rates.

Which architecture should the solutions architect implement? (Select TWO.)


○ Add an AWS WAF in front of the ALB
◉ Add Amazon Aurora Replicas
○ Add an AWS Transit Gateway to the Availability Zones
◉ Add an AWS Global Accelerator endpoint
○ Add an Amazon CloudFront distribution in front of the ALB

Correct answers

Add Amazon Aurora Replicas


Add an Amazon CloudFront distribution in front of the ALB

Feedback

Explanation:

The architecture is already highly resilient but it may be subject to performance
degradation if there are sudden increases in request rates. To resolve this situation
Amazon Aurora Read Replicas can be used to serve read traffic which offloads
requests from the main database. On the frontend an Amazon CloudFront distribution
can be placed in front of the ALB and this will cache content for better performance and
also offloads requests from the backend.
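
A minimal boto3 sketch of adding an Aurora Replica (a reader instance) to an existing Aurora cluster; the cluster identifier, engine, and instance class are placeholders. The CloudFront distribution would then be configured separately with the ALB as a custom origin.

    import boto3

    rds = boto3.client("rds")

    # Adding an instance to an existing Aurora cluster creates an Aurora Replica (reader).
    rds.create_db_instance(
        DBInstanceIdentifier="weather-aurora-reader-1",   # placeholder identifier
        DBClusterIdentifier="weather-aurora-cluster",     # placeholder existing cluster
        Engine="aurora-mysql",                            # assumed engine
        DBInstanceClass="db.r5.large",                    # placeholder instance class
    )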

CORRECT: "Add Amazon Aurora Replicas" is the correct answer.

CORRECT: "Add an Amazon CloudFront distribution in front of the ALB" is the correct
answer.

INCORRECT: "Add and AWS WAF in front of the ALB" is incorrect. A web application
firewall protects applications from malicious attacks. It does not improve performance.

INCORRECT: "Add an AWS Transit Gateway to the Availability Zones" is incorrect as


this is used to connect on-premises networks to VPCs.

INCORRECT: "Add an AWS Global Accelerator endpoint" is incorrect as this service is


used for directing users to different instances of the application in different regions
based on latency.
References:

https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.
html

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.ht
ml

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-aurora/

https://digitalcloud.training/amazon-cloudfront/

An Amazon RDS Read Replica is being deployed in a separate region. The master
database is not encrypted but all data in the new region must be encrypted. How can
this be achieved?

○ Enable encryption using Key Management Service (KMS) when creating the cross-region
Read Replica
○ Encrypt a snapshot from the master DB instance, create an encrypted cross-region Read
Replica from the snapshot
○ Enable encryption on the master DB instance, then create an encrypted cross-region Read
Replica
◉ Encrypt a snapshot from the master DB instance, create a new encrypted master DB
instance, and then create an encrypted cross-region Read Replica

Correct answer
Encrypt a snapshot from the master DB instance, create a new encrypted master DB
instance, and then create an encrypted cross-region Read Replica

Feedback
Explanation:

You cannot create an encrypted Read Replica from an unencrypted master DB
instance. You also cannot enable encryption after launch time for the master DB
instance. Therefore, you must create a new master DB by taking a snapshot of the
existing DB, encrypting it, and then creating the new DB from the snapshot. You can
then create the encrypted cross-region Read Replica of the master DB.
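
A minimal boto3 sketch of this sequence; all identifiers and KMS key ARNs are placeholders, and waits for each step to complete are omitted for brevity:

    import boto3

    rds_src = boto3.client("rds", region_name="us-east-1")   # placeholder source Region
    rds_dst = boto3.client("rds", region_name="eu-west-1")   # placeholder destination Region

    # 1. Create an encrypted copy of a snapshot taken from the unencrypted master DB instance.
    rds_src.copy_db_snapshot(
        SourceDBSnapshotIdentifier="master-db-snapshot",        # placeholder snapshot
        TargetDBSnapshotIdentifier="master-db-snapshot-enc",
        KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1111-2222",  # placeholder source KMS key
    )

    # 2. Restore a new, encrypted master DB instance from the encrypted snapshot.
    rds_src.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="master-db-encrypted",
        DBSnapshotIdentifier="master-db-snapshot-enc",
    )

    # 3. Create the encrypted cross-region Read Replica from the new encrypted master.
    rds_dst.create_db_instance_read_replica(
        DBInstanceIdentifier="master-db-replica-eu",
        SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:master-db-encrypted",
        KmsKeyId="arn:aws:kms:eu-west-1:123456789012:key/3333-4444",  # key in the destination Region
        SourceRegion="us-east-1",   # boto3 uses this to generate the required pre-signed URL
    )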

CORRECT: "Encrypt a snapshot from the master DB instance, create a new encrypted
master DB instance, and then create an encrypted cross-region Read Replica" is the
correct answer.

INCORRECT: "Enable encryption using Key Management Service (KMS) when creating
the cross-region Read Replica" is incorrect. All other options will not work due to the
limitations explained above.

INCORRECT: "Encrypt a snapshot from the master DB instance, create an encrypted


cross-region Read Replica from the snapshot" is incorrect. All other options will not
work due to the limitations explained above.

INCORRECT: "Enabled encryption on the master DB instance, then create an


encrypted cross-region Read Replica" is incorrect. All other options will not work due to
the limitations explained above.

References:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-rds/
An Amazon RDS PostgreSQL database is configured as Multi-AZ. A solutions
architect needs to scale read performance and the solution must be configured for
high availability. What is the most cost-effective solution?

◉ Create a read replica as a Multi-AZ DB instance


○ Deploy a read replica in a different AZ to the master DB instance
○ Deploy a read replica using Amazon ElastiCache
○ Deploy a read replica in the same AZ as the master DB instance

Correct answer
Create a read replica as a Multi-AZ DB instance

Feedback

Explanation:

CORRECT: "Create a read replica as a Multi-AZ DB instance" is the correct answer.

INCORRECT: "Deploy a read replica in a different AZ to the master DB instance" is


incorrect as this does not provide high availability for the read replica

INCORRECT: "Deploy a read replica using Amazon ElastiCache" is incorrect as


ElastiCache is not used to create read replicas of RDS database.

INCORRECT: "Deploy a read replica in the same AZ as the master DB instance" is


incorrect as this solution does not include HA for the read replica.

References:

https://aws.amazon.com/about-aws/whats-new/2018/01/amazon-rds-read-replicas-now-
support-multi-az-deployments/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-rds/
A company has acquired another business and needs to migrate their 50TB of data
into AWS within 1 month. They also require a secure, reliable and private connection
to the AWS cloud.

How are these requirements best accomplished?

○ Provision an AWS Direct Connect connection and migrate the data over the link
◉ Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct
Connect link
○ Launch a Virtual Private Gateway (VPG) and migrate the data over the AWS VPN
○ Provision an AWS VPN CloudHub connection and migrate the data over redundant links

Correct answer
Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct
Connect link

Feedback

Explanation:

AWS Direct Connect provides a secure, reliable and private connection. However, lead
times are often longer than 1 month so it cannot be used to migrate data within the
timeframes. Therefore, it is better to use AWS Snowball to move the data and order a
Direct Connect connection to satisfy the other requirement later on. In the meantime the
organization can use an AWS VPN for secure, private access to their VPC.

CORRECT: "Migrate data using AWS Snowball. Provision an AWS VPN initially and
order a Direct Connect link" is the correct answer.

INCORRECT: "Provision an AWS Direct Connect connection and migrate the data over
the link" is incorrect due to the lead time for installation.

INCORRECT: "Launch a Virtual Private Gateway (VPG) and migrate the data over the
AWS VPN" is incorrect. A VPG is the AWS-side of an AWS VPN. A VPN does not
provide a private connection and is not reliable as you can never guarantee the latency
over the Internet

INCORRECT: "Provision an AWS VPN CloudHub connection and migrate the data over
redundant links" is incorrect. AWS VPN CloudHub is a service for connecting multiple
sites into your VPC over VPN connections. It is not used for aggregating links and the
limitations of Internet bandwidth from the company where the data is stored will still be
an issue. It also uses the public Internet so is not a private or reliable connection.

References:

https://aws.amazon.com/snowball/

https://aws.amazon.com/directconnect/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-direct-connect/

https://digitalcloud.training/aws-migration-services/

An organization has a large amount of data on Windows (SMB) file shares in their
on-premises data center. The organization would like to move data into Amazon S3.
They would like to automate the migration of data over their AWS Direct Connect
link.

Which AWS service can assist them?

○ AWS Database Migration Service (DMS)


○ AWS CloudFormation
○ AWS Snowball
◉ AWS DataSync

Correct answer
AWS DataSync

Feedback

Explanation:

AWS DataSync can be used to move large amounts of data online between on-premises
storage and Amazon S3 or Amazon Elastic File System (Amazon EFS).
DataSync eliminates or automatically handles many of the tasks involved, including scripting
copy jobs, scheduling and monitoring transfers, validating data, and optimizing network
utilization. The source datastore can be Server Message Block (SMB) file servers.

CORRECT: "AWS DataSync" is the correct answer.

INCORRECT: "AWS Database Migration Service (DMS)" is incorrect. AWS Database


Migration Service (DMS) is used for migrating databases, not data on file shares.

INCORRECT: "AWS CloudFormation" is incorrect. AWS CloudFormation can be used


for automating infrastructure provisioning. This is not the best use case for
CloudFormation as DataSync is designed specifically for this scenario.

INCORRECT: "AWS Snowball" is incorrect. AWS Snowball is a hardware device that is


used for migrating data into AWS. The organization plan to use their Direct Connect link
for migrating data rather than sending it in via a physical device. Also, Snowball will not
automate the migration.

References:

https://aws.amazon.com/datasync/faqs/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-migration-services/
A company hosts an application on Amazon EC2 instances behind Application Load
Balancers in several AWS Regions. Distribution rights for the content require that
users in different geographies must be served content from specific regions.

Which configuration meets these requirements?

◉ Create Amazon Route 53 records with a geolocation routing policy.


○ Create Amazon Route 53 records with a geoproximity routing policy.
○ Configure Amazon CloudFront with multiple origins and AWS WAF.
○ Configure Application Load Balancers with multi-Region routing.

Correct answer
Create Amazon Route 53 records with a geolocation routing policy.

Feedback

Explanation:

To protect the distribution rights of the content and ensure that users are directed to the
appropriate AWS Region based on the location of the user, the geolocation routing
policy can be used with Amazon Route 53.

Geolocation routing lets you choose the resources that serve your traffic based on the
geographic location of your users, meaning the location that DNS queries originate
from.

When you use geolocation routing, you can localize your content and present some or
all of your website in the language of your users. You can also use geolocation routing
to restrict distribution of content to only the locations in which you have distribution
rights.
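
A minimal boto3 sketch of two geolocation records for the same name, one for users in the United States and a default record for everyone else; the hosted zone ID, domain, and ALB DNS names are placeholders:

    import boto3

    route53 = boto3.client("route53")

    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEFGHIJ",   # placeholder hosted zone
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "content.example.com",
                        "Type": "CNAME",
                        "TTL": 60,
                        "SetIdentifier": "us-users",
                        "GeoLocation": {"CountryCode": "US"},
                        "ResourceRecords": [{"Value": "us-alb.example.com"}],  # placeholder US ALB
                    },
                },
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "content.example.com",
                        "Type": "CNAME",
                        "TTL": 60,
                        "SetIdentifier": "default",
                        "GeoLocation": {"CountryCode": "*"},   # default record for unmatched users
                        "ResourceRecords": [{"Value": "eu-alb.example.com"}],  # placeholder default ALB
                    },
                },
            ]
        },
    )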

CORRECT: "Create Amazon Route 53 records with a geolocation routing policy" is the
correct answer.
INCORRECT: "Create Amazon Route 53 records with a geoproximity routing policy" is
incorrect. Use this routing policy when you want to route traffic based on the location of
your resources and, optionally, shift traffic from resources in one location to resources
in another.

INCORRECT: "Configure Amazon CloudFront with multiple origins and AWS WAF" is
incorrect. AWS WAF protects against web exploits but will not assist with directing
users to different content (from different origins).

INCORRECT: "Configure Application Load Balancers with multi-Region routing" is


incorrect. There is no such thing as multi-Region routing for ALBs.

References:

https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-route-53/

A company delivers content to subscribers distributed globally from an application
running on AWS. The application uses a fleet of Amazon EC2 instances in a private
subnet behind an Application Load Balancer (ALB). Due to an update in copyright
restrictions, it is necessary to block access for specific countries.

What is the EASIEST method to meet this requirement?

○ Modify the ALB security group to deny incoming traffic from blocked countries
○ Modify the security group for EC2 instances to deny incoming traffic from blocked countries
◉ Use Amazon CloudFront to serve the application and deny access to blocked countries
○ Use a network ACL to block the IP address ranges associated with the specific countries

Correct answer
Use Amazon CloudFront to serve the application and deny access to blocked countries
Feedback

Explanation:

When a user requests your content, CloudFront typically serves the requested content
regardless of where the user is located. If you need to prevent users in specific
countries from accessing your content, you can use the CloudFront geo restriction
feature to do one of the following:

Allow your users to access your content only if they're in one of the countries on a
whitelist of approved countries.
Prevent your users from accessing your content if they're in one of the countries
on a blacklist of banned countries.

For example, if a request comes from a country where, for copyright reasons, you are
not authorized to distribute your content, you can use CloudFront geo restriction to
block the request.
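
As an illustration, geo restriction is set within the distribution configuration. The fragment below is a minimal sketch of the Restrictions section as it would appear in a boto3 DistributionConfig; the rest of the configuration and the country codes are placeholders.

    # Fragment of a CloudFront DistributionConfig dict used with
    # cloudfront.create_distribution() or cloudfront.update_distribution().
    restrictions = {
        "GeoRestriction": {
            "RestrictionType": "blacklist",   # block the listed countries
            "Quantity": 2,
            "Items": ["XX", "YY"],            # placeholder ISO 3166-1 alpha-2 country codes
        }
    }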

This is the easiest and most effective way to implement a geographic restriction for the
delivery of content.

CORRECT: "Use Amazon CloudFront to serve the application and deny access to
blocked countries" is the correct answer.

INCORRECT: "Use a Network ACL to block the IP address ranges associated with the
specific countries" is incorrect as this would be extremely difficult to manage.

INCORRECT: "Modify the ALB security group to deny incoming traffic from blocked
countries" is incorrect as security groups cannot block traffic by country.

INCORRECT: "Modify the security group for EC2 instances to deny incoming traffic
from blocked countries" is incorrect as security groups cannot block traffic by country.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestriction
s.html
Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudfront/

An organization wants to share regular updates about their charitable work using
static webpages. The pages are expected to generate a large number of views from
around the world. The files are stored in an Amazon S3 bucket. A solutions architect
has been asked to design an efficient and effective solution.

Which action should the solutions architect take to accomplish this?

○ Generate presigned URLs for the files


○ Use cross-Region replication to all Regions
○ Use the geoproximity feature of Amazon Route 53
◉ Use Amazon CloudFront with the S3 bucket as its origin

Correct answer
Use Amazon CloudFront with the S3 bucket as its origin

Feedback

Explanation:

Amazon CloudFront can be used to cache the files in edge locations around the world
and this will improve the performance of the webpages.

To serve a static website hosted on Amazon S3, you can deploy a CloudFront
distribution using one of these configurations:

Using a REST API endpoint as the origin with access restricted by an origin
access identity (OAI)
Using a website endpoint as the origin with anonymous (public) access allowed
Using a website endpoint as the origin with access restricted by a Referer header

CORRECT: "Use Amazon CloudFront with the S3 bucket as its origin" is the correct
answer.

INCORRECT: "Generate presigned URLs for the files" is incorrect as this is used to
restrict access which is not a requirement.

INCORRECT: "Use cross-Region replication to all Regions" is incorrect as this does not
provide a mechanism for directing users to the closest copy of the static webpages.

INCORRECT: "Use the geoproximity feature of Amazon Route 53" is incorrect as this
does not include a solution for having multiple copies of the data in different geographic
locations.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-
website/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudfront/

An application is running on Amazon EC2 behind an Elastic Load Balancer (ELB).
Content is being published using Amazon CloudFront and you need to restrict the
ability for users to circumvent CloudFront and access the content directly through the
ELB.

How can you configure this solution?

◉ Create an Origin Access Identity (OAI) and associate it with the distribution
○ Use signed URLs or signed cookies to limit access to the content
○ Use a Network ACL to restrict access to the ELB
○ Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the
CloudFront internal service IP addresses when they change

Correct answer
Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the
CloudFront internal service IP addresses when they change

Feedback

Explanation:

The only way to get this working is by using a VPC Security Group for the ELB that is
configured to allow only the internal service IP ranges associated with CloudFront. As
these are updated from time to time, you can use AWS Lambda to automatically update
the addresses. The Lambda function is triggered by the notification that AWS publishes
to an SNS topic whenever the CloudFront address ranges change.
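
A minimal sketch of such a Lambda function is shown below. It assumes the function is subscribed to the AmazonIpSpaceChanged SNS topic and that the ELB security group ID is a placeholder; the AWS reference implementation linked below is more complete (it also removes stale ranges and verifies the file hash):

import json
import urllib.request
import boto3

ec2 = boto3.client('ec2')
SECURITY_GROUP_ID = 'sg-0123456789abcdef0'  # placeholder: the ELB's security group

def lambda_handler(event, context):
    # The SNS notification includes the URL of the updated ip-ranges.json file
    message = json.loads(event['Records'][0]['Sns']['Message'])
    ip_ranges = json.loads(urllib.request.urlopen(message['url']).read())

    # Keep only the IPv4 prefixes used by the CloudFront service
    cloudfront_cidrs = [p['ip_prefix'] for p in ip_ranges['prefixes']
                        if p['service'] == 'CLOUDFRONT']

    # Simplified: add HTTPS ingress for each range (no removal of stale rules,
    # no handling of duplicate-rule errors or per-group rule limits)
    for cidr in cloudfront_cidrs:
        ec2.authorize_security_group_ingress(
            GroupId=SECURITY_GROUP_ID,
            IpProtocol='tcp', FromPort=443, ToPort=443, CidrIp=cidr)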

CORRECT: "Create a VPC Security Group for the ELB and use AWS Lambda to
automatically update the CloudFront internal service IP addresses when they change" is
the correct answer.

INCORRECT: "Create an Origin Access Identity (OAI) and associate it with the
distribution" is incorrect. You can use an OAI to restrict access to content in Amazon
S3 but not on EC2 or ELB.

INCORRECT: "Use signed URLs or signed cookies to limit access to the content" is
incorrect. Signed cookies and URLs are used to limit access to files but this does not
stop people from circumventing CloudFront and accessing the ELB directly.

INCORRECT: "Use a Network ACL to restrict access to the ELB" is incorrect. A
Network ACL can be used to restrict access to an ELB but it is recommended to use
security groups and this solution is incomplete as it does not account for the fact that
the internal service IP ranges change over time.

References:
https://aws.amazon.com/blogs/security/how-to-automatically-update-your-security-
groups-for-amazon-cloudfront-and-aws-waf-by-using-aws-lambda/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudfront/

A company has divested a single business unit and needs to move the AWS account
owned by the business unit to another AWS Organization. How can this be
achieved?

○ Create a new account in the destination AWS Organization and migrate resources
○ Create a new account in the destination AWS Organization and share the original resources
using AWS Resource Access Manager
○ Migrate the account using AWS CloudFormation
◉ Migrate the account using the AWS Organizations console

Correct answer
Migrate the account using the AWS Organizations console

Feedback

Explanation:

Accounts can be migrated between organizations. To do this you must have root or IAM
access to both the member and master accounts. Resources will remain under the
control of the migrated account.
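
As a rough sketch (the account ID is a placeholder), the move can also be scripted with the Organizations API using boto3; each call must be made with credentials for the appropriate account:

import boto3

# 1. Using the source organization's management account: remove the member
boto3.client('organizations').remove_account_from_organization(
    AccountId='111122223333')

# 2. Using the destination organization's management account: invite the account
handshake = boto3.client('organizations').invite_account_to_organization(
    Target={'Id': '111122223333', 'Type': 'ACCOUNT'})

# 3. Using the member account's credentials: accept the invitation
boto3.client('organizations').accept_handshake(
    HandshakeId=handshake['Handshake']['Id'])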

CORRECT: "Migrate the account using the AWS Organizations console" is the correct
answer.
INCORRECT: "Create a new account in the destination AWS Organization and migrate
resources" is incorrect. You do not need to create a new account in the destination
AWS Organization as you can just migrate the existing account.

INCORRECT: "Create a new account in the destination AWS Organization and share
the original resources using AWS Resource Access Manager" is incorrect. You do not
need to create a new account in the destination AWS Organization as you can just
migrate the existing account.

INCORRECT: "Migrate the account using AWS CloudFormation" is incorrect. You do
not need to use AWS CloudFormation. The Organizations API or AWS CLI can be used
when there are many accounts to migrate, and CloudFormation could add further
automation, but neither is necessary for this scenario.

References:

https://aws.amazon.com/premiumsupport/knowledge-center/organizations-move-
accounts/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-organizations/

A website is running on Amazon EC2 instances and access is restricted to a limited
set of IP ranges. A solutions architect is planning to migrate static content from the
website to an Amazon S3 bucket configured as an origin for an Amazon CloudFront
distribution. Access to the static content must be restricted to the same set of IP
addresses.

Which combination of steps will meet these requirements? (Select TWO.)

◉ Create an origin access identity (OAI) and associate it with the distribution. Change the
permissions in the bucket policy so that only the OAI can read the objects.
○ Create an origin access identity (OAI) and associate it with the distribution. Generate
presigned URLs that limit access to the OAI.
◉ Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2
security group. Associate this new web ACL with the Amazon S3 bucket.
○ Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2
security group. Associate this new web ACL with the CloudFront distribution.
○ Attach the existing security group that contains the IP restrictions to the Amazon CloudFront
distribution.

Correct answers

Create an origin access identity (OAI) and associate it with the distribution. Change the
permissions in the bucket policy so that only the OAI can read the objects.
Create an AWS WAF web ACL that includes the same IP restrictions that exist in the
EC2 security group. Associate this new web ACL with the CloudFront distribution.

Feedback

Explanation:

To prevent users from circumventing the controls implemented on CloudFront (using
WAF or presigned URLs / signed cookies) you can use an origin access identity (OAI).
An OAI is a special CloudFront user that you associate with a distribution.

The next step is to change the permissions either on your Amazon S3 bucket or on the
files in your bucket so that only the origin access identity has read permission (or read
and download permission). This can be implemented through a bucket policy.

To control access at the CloudFront layer the AWS Web Application Firewall (WAF) can
be used. With WAF you must create an ACL that includes the IP restrictions required
and then associate the web ACL with the CloudFront distribution.
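
As an illustrative sketch (the bucket name and OAI ID are placeholders), the bucket policy that limits reads to the OAI could be applied like this with boto3:

import json
import boto3

s3 = boto3.client('s3')
BUCKET = 'example-static-content'   # placeholder bucket name
OAI_ID = 'E2EXAMPLE51Z7LL'          # placeholder OAI ID

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*"
    }]
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))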

CORRECT: "Create an origin access identity (OAI) and associate it with the distribution.
Change the permissions in the bucket policy so that only the OAI can read the objects"
is a correct answer.
CORRECT: "Create an AWS WAF web ACL that includes the same IP restrictions that
exist in the EC2 security group. Associate this new web ACL with the CloudFront
distribution" is also a correct answer.

INCORRECT: "Create an origin access identity (OAI) and associate it with the
distribution. Generate presigned URLs that limit access to the OAI" is incorrect.
Presigned URLs can be used to protect access to CloudFront but they cannot be used
to limit access to an OAI.

INCORRECT: "Create an AWS WAF web ACL that includes the same IP restrictions
that exist in the EC2 security group. Associate this new web ACL with the Amazon S3
bucket" is incorrect. The Web ACL should be associated with CloudFront, not S3.

INCORRECT: "Attach the existing security group that contains the IP restrictions to the
Amazon CloudFront distribution" is incorrect. You cannot attach a security group to a
CloudFront distribution.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-
content-restricting-access-to-s3.html

https://docs.aws.amazon.com/waf/latest/developerguide/cloudfront-features.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-waf-shield/

An application has been deployed on Amazon EC2 instances behind an Application
Load Balancer (ALB). A Solutions Architect must improve the security posture of the
application and minimize the impact of a DDoS attack on resources.

Which of the following solutions is MOST effective?


◉ Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application
Load Balancer.
○ Create a custom AWS Lambda function that monitors for suspicious traffic and modifies a
network ACL when a potential DDoS attack is identified.
○ Enable VPC Flow Logs and store them in Amazon S3. Use Amazon Athena to parse the logs
and identify and block potential DDoS attacks.
○ Enable access logs on the Application Load Balancer and configure Amazon CloudWatch to
monitor the access logs and trigger a Lambda function when potential attacks are identified.
Configure the Lambda function to modify the ALBs security group and block the attack.

Correct answer
Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application
Load Balancer.

Feedback

Explanation:

A rate-based rule tracks the rate of requests for each originating IP address, and
triggers the rule action on IPs with rates that go over a limit. You set the limit as the
number of requests per 5-minute time span.

You can use this type of rule to put a temporary block on requests from an IP address
that's sending excessive requests. By default, AWS WAF aggregates requests based
on the IP address from the web request origin, but you can configure the rule to use an
IP address from an HTTP header, like X-Forwarded-For, instead.
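
A hedged boto3 sketch of such a rule is shown below (names, the limit and the ARNs are placeholders); the web ACL must use the REGIONAL scope to be associated with an ALB:

import boto3

wafv2 = boto3.client('wafv2')

acl = wafv2.create_web_acl(
    Name='rate-limit-acl', Scope='REGIONAL',
    DefaultAction={'Allow': {}},
    Rules=[{
        'Name': 'rate-limit-per-ip', 'Priority': 0,
        'Statement': {'RateBasedStatement': {'Limit': 2000,          # requests per 5 minutes
                                             'AggregateKeyType': 'IP'}},
        'Action': {'Block': {}},
        'VisibilityConfig': {'SampledRequestsEnabled': True,
                             'CloudWatchMetricsEnabled': True,
                             'MetricName': 'RateLimitPerIP'}
    }],
    VisibilityConfig={'SampledRequestsEnabled': True,
                      'CloudWatchMetricsEnabled': True,
                      'MetricName': 'RateLimitACL'})

# Associate the web ACL with the ALB (ARN is a placeholder)
wafv2.associate_web_acl(
    WebACLArn=acl['Summary']['ARN'],
    ResourceArn='arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123')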

CORRECT: "Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL
on the Application Load Balancer" is the correct answer.

INCORRECT: "Create a custom AWS Lambda function that monitors for suspicious
traffic and modifies a network ACL when a potential DDoS attack is identified" is
incorrect. There’s not description here of how Lambda is going to monitor for traffic.
INCORRECT: "Enable VPC Flow Logs and store them in Amazon S3. Use Amazon
Athena to parse the logs and identify and block potential DDoS attacks" is incorrect.
Amazon Athena is not able to block DDoS attacks, another service would be needed.

INCORRECT: "Enable access logs on the Application Load Balancer and configure
Amazon CloudWatch to monitor the access logs and trigger a Lambda function when
potential attacks are identified. Configure the Lambda function to modify the ALBs
security group and block the attack" is incorrect. Access logs are exported to S3 but not
to CloudWatch. Also, it would not be possible to block an attack from a specific IP using
a security group (while still allowing any other source access) as they do not support
deny rules.

References:

https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-
based.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-waf-shield/

A website runs on Amazon EC2 instances in an Auto Scaling group behind an
Application Load Balancer (ALB) which serves as an origin for an Amazon
CloudFront distribution. An AWS WAF is being used to protect against SQL injection
attacks. A review of security logs revealed an external malicious IP that needs to be
blocked from accessing the website.

What should a solutions architect do to protect the application?

○ Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP
address
◉ Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP
address
○ Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny
the malicious IP address
○ Modify the security groups for the EC2 instances in the target groups behind the ALB to deny
the malicious IP address.

Correct answer
Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP
address

Feedback

Explanation:

A new version of the AWS Web Application Firewall was released in November 2019.
With AWS WAF classic you create “IP match conditions”, whereas with AWS WAF (new
version) you create “IP set match statements”. Look out for wording on the exam.

The IP match condition / IP set match statement inspects the IP address of a web
request's origin against a set of IP addresses and address ranges. Use this to allow or
block web requests based on the IP addresses that the requests originate from.

AWS WAF supports all IPv4 and IPv6 address ranges. An IP set can hold up to 10,000
IP addresses or IP address ranges to check.
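
A minimal boto3 sketch (the IP address and names are placeholders) of creating the IP set and referencing it from a blocking rule:

import boto3

# CLOUDFRONT-scoped WAF resources must be created in us-east-1
wafv2 = boto3.client('wafv2', region_name='us-east-1')

ip_set = wafv2.create_ip_set(
    Name='blocked-ips', Scope='CLOUDFRONT',
    IPAddressVersion='IPV4',
    Addresses=['203.0.113.45/32'])          # the malicious IP (placeholder)

# Rule statement to add to the existing web ACL on the distribution
block_rule = {
    'Name': 'block-malicious-ip', 'Priority': 0,
    'Statement': {'IPSetReferenceStatement': {'ARN': ip_set['Summary']['ARN']}},
    'Action': {'Block': {}},
    'VisibilityConfig': {'SampledRequestsEnabled': True,
                         'CloudWatchMetricsEnabled': True,
                         'MetricName': 'BlockMaliciousIP'}
}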

CORRECT: "Modify the configuration of AWS WAF to add an IP match condition to
block the malicious IP address" is the correct answer.

INCORRECT: "Modify the network ACL on the CloudFront distribution to add a deny
rule for the malicious IP address" is incorrect as CloudFront does not sit within a subnet
so network ACLs do not apply to it.

INCORRECT: "Modify the network ACL for the EC2 instances in the target groups
behind the ALB to deny the malicious IP address" is incorrect as the source IP
addresses of the data in the EC2 instances’ subnets will be the ELB IP addresses.
INCORRECT: "Modify the security groups for the EC2 instances in the target groups
behind the ALB to deny the malicious IP address." is incorrect as you cannot create
deny rules with security groups.

References:

https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-ipset-
match.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-waf-shield/

An IoT sensor is being rolled out to thousands of a company’s existing customers.
The sensors will stream high volumes of data each second to a central location. A
solution must be designed to ingest and store the data for analytics. The solution
must provide near-real time performance and millisecond responsiveness.

Which solution should a Solutions Architect recommend?

○ Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda
function and then store the data in Amazon RedShift.
○ Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda
function and then store the data in Amazon DynamoDB.
○ Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda
function and then store the data in Amazon DynamoDB.
◉ Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda
function and then store the data in Amazon RedShift.

Correct answer
Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda
function and then store the data in Amazon DynamoDB.
Feedback

Explanation:

A Kinesis data stream is a set of shards. Each shard contains a sequence of data
records. A consumer is an application that processes the data from a Kinesis data
stream. You can map a Lambda function to a shared-throughput consumer (standard
iterator), or to a dedicated-throughput consumer with enhanced fan-out.

Amazon DynamoDB is the best database for this use case as it supports near-real time
performance and millisecond responsiveness.
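
A minimal Lambda consumer might look like the sketch below; the table name and payload fields are assumptions for illustration only:

import base64
import json
import boto3

table = boto3.resource('dynamodb').Table('SensorReadings')   # placeholder table name

def lambda_handler(event, context):
    # Kinesis delivers record data base64-encoded in the invocation event
    for record in event['Records']:
        payload = json.loads(base64.b64decode(record['kinesis']['data']))
        table.put_item(Item={
            'sensor_id': payload['sensor_id'],     # partition key (assumed field)
            'event_time': payload['event_time'],   # sort key (assumed field)
            'reading': str(payload['reading'])     # stored as a string for simplicity
        })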

CORRECT: "Ingest the data into an Amazon Kinesis Data Stream. Process the data
with an AWS Lambda function and then store the data in Amazon DynamoDB" is the
correct answer.

INCORRECT: "Ingest the data into an Amazon Kinesis Data Stream. Process the data
with an AWS Lambda function and then store the data in Amazon RedShift" is incorrect.
Amazon RedShift cannot provide millisecond responsiveness.

INCORRECT: "Ingest the data into an Amazon SQS queue. Process the data using an
AWS Lambda function and then store the data in Amazon RedShift" is incorrect.
Amazon SQS does not provide near real-time performance and RedShift does not
provide millisecond responsiveness.

INCORRECT: "Ingest the data into an Amazon SQS queue. Process the data using an
AWS Lambda function and then store the data in Amazon DynamoDB" is incorrect.
Amazon SQS does not provide near real-time performance.

References:

https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-kinesis/
An automotive company plans to implement IoT sensors in manufacturing equipment
that will send data to AWS in real time. The solution must receive events in an
ordered manner from each asset and ensure that the data is saved for future
processing.

Which solution would be MOST efficient?

◉ Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment
asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
○ Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment
asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS.
○ Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment
asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
○ Use an Amazon SQS standard queue for real-time events with one queue for each
equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to
Amazon S3.

Correct answer
Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment
asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.

Feedback

Explanation:

Amazon Kinesis Data Streams is the ideal service for receiving streaming data. The
Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the
same record processor, making it easier to build multiple applications reading from the
same Amazon Kinesis data stream. Therefore, a separate partition (rather than shard)
should be used for each equipment asset.

Amazon Kinesis Firehose can be used to receive streaming data from Data Streams
and then load the data into Amazon S3 for future processing.
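
A producer-side sketch is shown below (the stream name and payload are placeholders). Using the asset ID as the partition key keeps each asset's events ordered, while a separate Kinesis Data Firehose delivery stream configured with the data stream as its source handles delivery to S3:

import json
import boto3

kinesis = boto3.client('kinesis')

def publish_reading(asset_id, reading):
    kinesis.put_record(
        StreamName='equipment-telemetry',            # placeholder stream name
        PartitionKey=asset_id,                       # preserves per-asset ordering
        Data=json.dumps(reading).encode('utf-8'))
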
CORRECT: "Use Amazon Kinesis Data Streams for real-time events with a partition for
each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3"
is the correct answer.

INCORRECT: "Use Amazon Kinesis Data Streams for real-time events with a shard for
each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon
EBS" is incorrect. A partition should be used rather than a shard as explained above.

INCORRECT: "Use an Amazon SQS FIFO queue for real-time events with one queue
for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save
data to Amazon EFS" is incorrect. Amazon SQS cannot be used for real-time use
cases.

INCORRECT: "Use an Amazon SQS standard queue for real-time events with one
queue for each equipment asset. Trigger an AWS Lambda function from the SQS
queue to save data to Amazon S3" is incorrect. Amazon SQS cannot be used for real-
time use cases.

References:

https://aws.amazon.com/kinesis/data-streams/faqs/

https://aws.amazon.com/kinesis/data-firehose/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-kinesis/

A Solutions Architect has been tasked with re-deploying an application running on
AWS to enable high availability. The application processes messages that are
received in an ActiveMQ queue running on a single Amazon EC2 instance.
Messages are then processed by a consumer application running on Amazon EC2.
After processing the messages the consumer application writes results to a MySQL
database running on Amazon EC2.
Which architecture offers the highest availability and low operational complexity?

○ Deploy a second Active MQ server to another Availability Zone. Launch an additional
consumer EC2 instance in another Availability Zone. Use MySQL database replication to
another Availability Zone.
◉ Deploy Amazon MQ with active/standby brokers configured across two Availability Zones.
Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL
database replication to another Availability Zone.
○ Deploy Amazon MQ with active/standby brokers configured across two Availability Zones.
Launch an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for
MySQL with Multi-AZ enabled.
○ Deploy Amazon MQ with active/standby brokers configured across two Availability Zones.
Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones.
Use an Amazon RDS MySQL database with Multi-AZ enabled.

Correct answer
Deploy Amazon MQ with active/standby brokers configured across two Availability Zones.
Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones.
Use an Amazon RDS MySQL database with Multi-AZ enabled.

Feedback

Explanation:

The correct answer offers the highest availability as it includes Amazon MQ
active/standby brokers across two AZs, an Auto Scaling group across two AZs, and a
Multi-AZ Amazon RDS MySQL database deployment.

This architecture not only offers the highest availability it is also operationally simple as
it maximizes the usage of managed services.

CORRECT: "Deploy Amazon MQ with active/standby brokers configured across two
Availability Zones. Create an Auto Scaling group for the consumer EC2 instances
across two Availability Zones. Use an Amazon RDS MySQL database with Multi-AZ
enabled" is the correct answer.

INCORRECT: "Deploy a second Active MQ server to another Availability Zone. Launch
an additional consumer EC2 instance in another Availability Zone. Use MySQL
database replication to another Availability Zone" is incorrect. This architecture does not
offer the highest availability as it does not use Auto Scaling. It is also not the most
operationally efficient architecture as it does not use AWS managed services.

INCORRECT: "Deploy Amazon MQ with active/standby brokers configured across two
Availability Zones. Launch an additional consumer EC2 instance in another Availability
Zone. Use MySQL database replication to another Availability Zone" is incorrect. This
architecture does not use Auto Scaling for best HA or the RDS managed service.

INCORRECT: "Deploy Amazon MQ with active/standby brokers configured across two
Availability Zones. Launch an additional consumer EC2 instance in another Availability
Zone. Use Amazon RDS for MySQL with Multi-AZ enabled" is incorrect. This solution
does not use Auto Scaling.

References:

https://aws.amazon.com/architecture/well-architected/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

https://digitalcloud.training/amazon-ec2-auto-scaling/

https://digitalcloud.training/amazon-rds/

A solutions architect is designing an application on AWS. The compute layer will run
in parallel across EC2 instances. The compute layer should scale based on the
number of jobs to be processed. The compute layer is stateless. The solutions
architect must ensure that the application is loosely coupled and the job items are
durably stored.
Which design should the solutions architect use?

○ Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon
EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling
group to add and remove nodes based on CPU usage
○ Create an Amazon SQS queue to hold the jobs that need to be processed. Create an
Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto
Scaling group to add and remove nodes based on network usage
◉ Create an Amazon SQS queue to hold the jobs that need to be processed. Create an
Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto
Scaling group to add and remove nodes based on the number of items in the SQS queue
○ Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon
EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling
group to add and remove nodes based on the number of messages published to the SNS topic

Correct answer
Create an Amazon SQS queue to hold the jobs that need to be processed. Create an
Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the
Auto Scaling group to add and remove nodes based on the number of items in the SQS
queue

Feedback

Explanation:

In this case we need to find a durable and loosely coupled solution for storing jobs.
Amazon SQS is ideal for this use case and can be configured to use dynamic scaling
based on the number of jobs waiting in the queue.

To configure this scaling you can use the backlog per instance metric with the target
value being the acceptable backlog per instance to maintain. You can calculate
these numbers as follows:
Backlog per instance: To calculate your backlog per instance, start with
the ApproximateNumberOfMessages queue attribute to determine the length of
the SQS queue (number of messages available for retrieval from the queue).
Divide that number by the fleet's running capacity, which for an Auto Scaling
group is the number of instances in the InService state, to get the backlog per
instance.
Acceptable backlog per instance: To calculate your target value, first determine
what your application can accept in terms of latency. Then, take the acceptable
latency value and divide it by the average time that an EC2 instance takes to
process a message.

This solution will scale EC2 instances using Auto Scaling based on the number of jobs
waiting in the SQS queue.
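
A hedged sketch of publishing that custom metric is below (the queue URL and Auto Scaling group name are placeholders); a target tracking policy on the Auto Scaling group would then track BacklogPerInstance against the acceptable value:

import boto3

sqs = boto3.client('sqs')
autoscaling = boto3.client('autoscaling')
cloudwatch = boto3.client('cloudwatch')

QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/jobs'  # placeholder
ASG_NAME = 'compute-layer-asg'                                      # placeholder

def publish_backlog_per_instance():
    # Number of messages available for retrieval from the queue
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=['ApproximateNumberOfMessages'])
    backlog = int(attrs['Attributes']['ApproximateNumberOfMessages'])

    # Running capacity: InService instances in the Auto Scaling group
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[ASG_NAME])['AutoScalingGroups'][0]
    in_service = sum(1 for i in group['Instances']
                     if i['LifecycleState'] == 'InService') or 1

    # Publish the metric used by the target tracking scaling policy
    cloudwatch.put_metric_data(
        Namespace='Custom/SQSScaling',
        MetricData=[{'MetricName': 'BacklogPerInstance',
                     'Value': backlog / in_service}])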

CORRECT: "Create an Amazon SQS queue to hold the jobs that need to be
processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on the
number of items in the SQS queue" is the correct answer.

INCORRECT: "Create an Amazon SQS queue to hold the jobs that need to be
processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on
network usage" is incorrect as scaling on network usage does not relate to the number
of jobs waiting to be processed.

INCORRECT: "Create an Amazon SNS topic to send the jobs that need to be
processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on CPU
usage" is incorrect. Amazon SNS is a notification service so it delivers notifications to
subscribers. It does store data durably but is less suitable than SQS for this use case.
Scaling on CPU usage is not the best solution as it does not relate to the number of
jobs waiting to be processed.

INCORRECT: "Create an Amazon SNS topic to send the jobs that need to be
processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on the
number of messages published to the SNS topic" is incorrect. Amazon SNS is a
notification service so it delivers notifications to subscribers. It does store data durably
but is less suitable than SQS for this use case. Scaling on the number of notifications in
SNS is not possible.

References:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2-auto-scaling/

https://digitalcloud.training/aws-application-integration-services/

An application running on Amazon EC2 needs to asynchronously invoke an AWS
Lambda function to perform data processing. The services should be decoupled.

Which service can be used to decouple the compute services?

○ AWS Config
○ Amazon SNS
◉ Amazon MQ
○ AWS Step Functions

Correct answer
Amazon SNS

Feedback

Explanation:
You can use a Lambda function to process Amazon Simple Notification Service
notifications. Amazon SNS supports Lambda functions as a target for messages sent to
a topic. This solution decouples the Amazon EC2 application from Lambda and ensures
the Lambda function is invoked.
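
A minimal sketch of wiring this up with boto3 (the topic and function ARNs are placeholders):

import boto3

sns = boto3.client('sns')
lambda_client = boto3.client('lambda')

TOPIC_ARN = 'arn:aws:sns:us-east-1:123456789012:data-processing'          # placeholder
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:processor' # placeholder

# Subscribe the Lambda function to the topic
sns.subscribe(TopicArn=TOPIC_ARN, Protocol='lambda', Endpoint=FUNCTION_ARN)

# Allow SNS to invoke the function
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId='sns-invoke',
    Action='lambda:InvokeFunction',
    Principal='sns.amazonaws.com',
    SourceArn=TOPIC_ARN)

# The EC2 application then simply publishes messages to the topic:
# sns.publish(TopicArn=TOPIC_ARN, Message='{"job_id": "123"}')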

CORRECT: "Amazon SNS" is the correct answer.

INCORRECT: "AWS Config" is incorrect. AWS Config is a service that is used for
continuous compliance, not application decoupling.

INCORRECT: "Amazon MQ" is incorrect. Amazon MQ is similar to SQS but is used for
existing applications that are being migrated into AWS. SQS should be used for new
applications being created in the cloud.

INCORRECT: "AWS Step Functions" is incorrect. AWS Step Functions is a workflow
service. It is not the best solution for this scenario.

References:

https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html

https://aws.amazon.com/sns/features/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

https://digitalcloud.training/aws-glue/

https://digitalcloud.training/aws-application-integration-services/

A new application will run across multiple Amazon ECS tasks. Front-end application
logic will process data and then pass that data to a back-end ECS task to perform
further processing and write the data to a datastore. The Architect would like to
reduce interdependencies so failures do not impact other components.
Which solution should the Architect use?

○ Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data
to the stream and the back-end to read data from the stream
○ Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3
bucket, configure the front-end to write data to the stream and the back-end to read data from
Amazon S3
○ Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-
end to add messages to the queue
◉ Create an Amazon SQS queue and configure the front-end to add messages to the queue
and the back-end to poll the queue for messages

Correct answer
Create an Amazon SQS queue and configure the front-end to add messages to the queue
and the back-end to poll the queue for messages

Feedback

Explanation:

This is a good use case for Amazon SQS. SQS is a service that is used for decoupling
applications, thus reducing interdependencies, through a message bus. The front-end
application can place messages on the queue and the back-end can then poll the
queue for new messages. Please remember that Amazon SQS is pull-based (polling)
not push-based (use SNS for push-based).
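
A minimal sketch of both sides using boto3 (the queue URL and processing logic are placeholders):

import boto3

sqs = boto3.client('sqs')
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/work-items'  # placeholder

def process(body):
    print('processing', body)    # placeholder for back-end logic and datastore write

# Front-end task: add a message to the queue
def submit(item):
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=item)

# Back-end task: poll the queue, process each message, then delete it
def worker():
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL,
                                   MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)   # long polling
        for msg in resp.get('Messages', []):
            process(msg['Body'])
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg['ReceiptHandle'])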

CORRECT: "Create an Amazon SQS queue and configure the front-end to add
messages to the queue and the back-end to poll the queue for messages" is the correct
answer.

INCORRECT: "Create an Amazon Kinesis Firehose delivery stream and configure the
front-end to add data to the stream and the back-end to read data from the stream" is
incorrect. Amazon Kinesis Firehose is used for streaming data. With Firehose the data
is immediately loaded into a destination that can be Amazon S3, RedShift,
Elasticsearch, or Splunk. This is not an ideal use case for Firehose as this is not
streaming data and there is no need to load data into an additional AWS service.

INCORRECT: "Create an Amazon Kinesis Firehose delivery stream that delivers data to
an Amazon S3 bucket, configure the front-end to write data to the stream and the back-
end to read data from Amazon S3" is incorrect as per the previous explanation.

INCORRECT: "Create an Amazon SQS queue that pushes messages to the back-end.
Configure the front-end to add messages to the queue " is incorrect as SQS is pull-
based, not push-based. EC2 instances must poll the queue to find jobs to process.

References:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/common_use_cases.
html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-kinesis/

https://digitalcloud.training/aws-application-integration-services/

A retail organization sends coupons out twice a week and this results in a
predictable surge in sales traffic. The application runs on Amazon EC2 instances
behind an Elastic Load Balancer. The organization is looking for ways to lower costs
while ensuring they meet the demands of their customers.

How can they achieve this goal?

◉ Use capacity reservations with savings plans
○ Use a mixture of spot instances and on demand instances
○ Increase the instance size of the existing EC2 instances
○ Purchase Amazon EC2 dedicated hosts

Correct answer
Use capacity reservations with savings plans

Feedback

Explanation:

On-Demand Capacity Reservations enable you to reserve compute capacity for your
Amazon EC2 instances in a specific Availability Zone for any duration. By creating
Capacity Reservations, you ensure that you always have access to EC2 capacity when
you need it, for as long as you need it. When used in combination with savings plans,
you can also gain the advantages of cost reduction.
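
As an illustrative sketch (the instance type, AZ, count and dates are placeholders), a capacity reservation covering the surge window could be created with boto3; the Savings Plan itself is purchased separately through the Savings Plans console or Cost Explorer:

import boto3

ec2 = boto3.client('ec2')

ec2.create_capacity_reservation(
    InstanceType='m5.large',
    InstancePlatform='Linux/UNIX',
    AvailabilityZone='us-east-1a',
    InstanceCount=10,
    EndDateType='limited',
    EndDate='2025-06-30T23:59:59Z')   # release the reserved capacity after the surge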

CORRECT: "Use capacity reservations with savings plans" is the correct answer.

INCORRECT: "Use a mixture of spot instances and on demand instances" is incorrect.
You can mix Spot and On-Demand Instances in an Auto Scaling group. However, there is
a risk that the Spot price may not be favorable, and this regular, predictable increase in
traffic is better served by reserved capacity.

INCORRECT: "Increase the instance size of the existing EC2 instances" is incorrect.
This would add more cost all the time rather than catering for the temporary increases
in traffic.

INCORRECT: "Purchase Amazon EC2 dedicated hosts" is incorrect. This is not a way
to save cost as dedicated hosts are much more expensive than shared hosts.

References:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-
reservations.html#capacity-reservations-differences

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-ec2/
A Solutions Architect is designing an application that consists of AWS Lambda and
Amazon RDS Aurora MySQL. The Lambda function must use database credentials
to authenticate to MySQL and security policy mandates that these credentials must
not be stored in the function code.

How can the Solutions Architect securely store the database credentials and make
them available to the function?

○ Store the credentials in AWS Key Management Service and use environment variables in the
function code pointing to KMS
◉ Store the credentials in Systems Manager Parameter Store and update the function code and
execution role
○ Use the AWSAuthenticationPlugin and associate an IAM user account in the MySQL
database
○ Create an IAM policy and store the credentials in the policy. Attach the policy to the Lambda
function execution role

Correct answer
Store the credentials in Systems Manager Parameter Store and update the function code and
execution role

Feedback

Explanation:

In this case the scenario requires that credentials are used for authenticating to MySQL.
The credentials need to be securely stored outside of the function code. Systems
Manager Parameter Store provides secure, hierarchical storage for configuration data
management and secrets management.

You can easily reference the parameters from services including AWS Lambda.
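
A hedged sketch of a Lambda function reading the credentials at runtime is shown below (the parameter names, database endpoint and client library are assumptions); the execution role needs ssm:GetParameter and, for SecureString values, kms:Decrypt permissions:

import boto3
import pymysql   # assumes the PyMySQL client library is packaged with the function

ssm = boto3.client('ssm')

def lambda_handler(event, context):
    # Parameter names are placeholders; the password is a SecureString parameter
    user = ssm.get_parameter(Name='/app/db/username')['Parameter']['Value']
    password = ssm.get_parameter(Name='/app/db/password',
                                 WithDecryption=True)['Parameter']['Value']

    conn = pymysql.connect(
        host='aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com',  # placeholder
        user=user, password=password, database='app')
    # ... run queries ...
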
CORRECT: "Store the credentials in Systems Manager Parameter Store and update
the function code and execution role" is the correct answer.

INCORRECT: "Store the credentials in AWS Key Management Service and use
environment variables in the function code pointing to KMS" is incorrect. You cannot
store credentials in KMS, it is used for creating and managing encryption keys

INCORRECT: "Use the AWSAuthenticationPlugin and associate an IAM user account in
the MySQL database" is incorrect. This is a great way to securely authenticate to RDS
using IAM users or roles. However, in this case the scenario requires database
credentials to be used by the function.

INCORRECT: "Create an IAM policy and store the credentials in the policy. Attach the
policy to the Lambda function execution role" is incorrect. You cannot store credentials
in IAM policies.

References:

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-
parameter-store.html
An application that runs a computational fluid dynamics workload uses a tightly-
coupled HPC architecture that uses the MPI protocol and runs across many nodes.
A service-managed deployment is required to minimize operational overhead.

Which deployment option is MOST suitable for provisioning and managing the
resources required for this use case?

○ Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets
○ Use AWS CloudFormation to deploy a Cluster Placement Group on EC2
◉ Use AWS Batch to deploy a multi-node parallel job
○ Use AWS Elastic Beanstalk to provision and manage the EC2 instances

Correct answer
Use AWS Batch to deploy a multi-node parallel job

Feedback

Explanation:

AWS Batch Multi-node parallel jobs enable you to run single jobs that span multiple
Amazon EC2 instances. With AWS Batch multi-node parallel jobs, you can run large-
scale, tightly coupled, high performance computing applications and distributed GPU
model training without the need to launch, configure, and manage Amazon EC2
resources directly.

An AWS Batch multi-node parallel job is compatible with any framework that supports
IP-based, internode communication, such as Apache MXNet, TensorFlow, Caffe2, or
Message Passing Interface (MPI).

This is the most efficient approach to deploy the resources required and supports the
application requirements most effectively.
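
As a rough sketch (names, the container image and resource sizes are placeholders), a multi-node parallel job definition could be registered with boto3 like this:

import boto3

batch = boto3.client('batch')

batch.register_job_definition(
    jobDefinitionName='cfd-mpi-job',
    type='multinode',
    nodeProperties={
        'numNodes': 8,
        'mainNode': 0,
        'nodeRangeProperties': [{
            'targetNodes': '0:',   # apply the same container properties to all nodes
            'container': {
                'image': '123456789012.dkr.ecr.us-east-1.amazonaws.com/cfd-mpi:latest',
                'vcpus': 36,
                'memory': 60000
            }
        }]
    })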

CORRECT: "Use AWS Batch to deploy a multi-node parallel job" is the correct answer.
INCORRECT: "Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets "
is incorrect. This is not the best solution for a tightly-coupled HPC workload with specific
requirements such as MPI support.

INCORRECT: "Use AWS CloudFormation to deploy a Cluster Placement Group on
EC2" is incorrect. This would deploy a cluster placement group but not manage it. AWS
Batch is a better fit for large scale workloads such as this.

INCORRECT: "Use AWS Elastic Beanstalk to provision and manage the EC2
instances" is incorrect. You can certainly provision and manage EC2 instances with
Elastic Beanstalk but this scenario is for a specific workload that requires MPI support
and managing a HPC deployment across a large number of nodes. AWS Batch is more
suitable.

References:

https://d1.awsstatic.com/whitepapers/architecture/AWS-HPC-Lens.pdf

https://docs.aws.amazon.com/batch/latest/userguide/multi-node-parallel-jobs.html

A HR application stores employment records on Amazon S3. Regulations mandate
the records are retained for seven years. Once created the records are accessed
infrequently for the first three months and then must be available within 10 minutes if
required thereafter.

Which lifecycle action meets the requirements whilst MINIMIZING cost?

○ Store the data in S3 Standard for 3 months, then transition to S3 Glacier
◉ Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier
○ Store the data in S3 Standard for 3 months, then transition to S3 Standard-IA
○ Store the data in S3 Intelligent Tiering for 3 months, then transition to S3 Standard-IA

Correct answer
Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier

Feedback

Explanation:

The most cost-effective solution is to first store the data in S3 Standard-IA where it will
be infrequently accessed for the first three months. Then, after three months expires,
transition the data to S3 Glacier where it can be stored at lower cost for the remainder
of the seven year period. Expedited retrieval can bring retrieval times down to 1-5
minutes.
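
An illustrative lifecycle rule is shown below (the bucket name and exact day counts are placeholders; objects are uploaded directly to S3 Standard-IA by setting StorageClass='STANDARD_IA' at PUT time):

import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='employment-records',                     # placeholder bucket
    LifecycleConfiguration={'Rules': [{
        'ID': 'archive-then-expire',
        'Status': 'Enabled',
        'Filter': {'Prefix': ''},                    # apply to all objects
        'Transitions': [{'Days': 90, 'StorageClass': 'GLACIER'}],
        'Expiration': {'Days': 2555}                 # roughly seven years
    }]})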

CORRECT: "Store the data in S3 Standard-IA for 3 months, then transition to S3
Glacier" is the correct answer.

INCORRECT: "Store the data in S3 Standard for 3 months, then transition to S3
Glacier" is incorrect. S3 Standard is more costly than S3 Standard-IA and the data is
only accessed infrequently.

INCORRECT: "Store the data in S3 Standard for 3 months, then transition to S3
Standard-IA" is incorrect. Neither storage class in this answer is the most cost-effective
option.

INCORRECT: "Store the data in S3 Intelligent Tiering for 3 months, then transition to S3
Standard-IA" is incorrect. Intelligent tiering moves data between tiers based on access
patterns; this option is more costly and better suited to use cases where access patterns
are unknown or unpredictable.

References:

https://aws.amazon.com/s3/storage-classes/

https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-
steps.html#api-downloading-an-archive-two-steps-retrieval-options

Save time with our AWS cheat sheets:


https://digitalcloud.training/amazon-s3-and-glacier/

Over 500 TB of data must be analyzed using standard SQL business intelligence
tools. The dataset consists of a combination of structured data and unstructured
data. The unstructured data is small and stored on Amazon S3. Which AWS
services are most suitable for performing analytics on the data?

○ Amazon RDS MariaDB with Amazon Athena
○ Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)
○ Amazon ElastiCache for Redis with cluster mode enabled
◉ Amazon Redshift with Amazon Redshift Spectrum

Correct answer
Amazon Redshift with Amazon Redshift Spectrum

Feedback

Explanation:

Amazon Redshift is an enterprise-level, petabyte scale, fully managed data
warehousing service. An Amazon Redshift data warehouse is an enterprise-class
relational database query and management system. Redshift supports client
connections with many types of applications, including business intelligence (BI),
reporting, data, and analytics tools.

Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and
semistructured data from files in Amazon S3 without having to load the data into
Amazon Redshift tables. Redshift Spectrum queries employ massive parallelism to
execute very fast against large datasets.
Used together, RedShift and RedShift Spectrum are suitable for running massive
analytics jobs on both the structured (RedShift data warehouse) and unstructured
(Amazon S3) data.
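
As a hedged sketch (the cluster, database, IAM role and schema names are placeholders), the external schema for Spectrum can be created and queried through the Redshift Data API:

import boto3

rsd = boto3.client('redshift-data')

# Map an external schema to a Glue Data Catalog database describing the S3 files
rsd.execute_statement(
    ClusterIdentifier='analytics-cluster', Database='dev', DbUser='awsuser',
    Sql="""
        CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
        FROM DATA CATALOG DATABASE 'spectrum_db'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
        CREATE EXTERNAL DATABASE IF NOT EXISTS;
    """)

# Local (warehouse) tables and external (S3) tables can then be joined in SQL,
# e.g. SELECT ... FROM spectrum.events e JOIN public.customers c ON ...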

CORRECT: "Amazon Redshift with Amazon Redshift Spectrum" is the correct answer.

INCORRECT: "Amazon RDS MariaDB with Amazon Athena" is incorrect. Amazon RDS
is not suitable for analytics (OLAP) use cases as it is designed for transactional (OLTP)
use cases. Athena can however be used for running SQL queries on data on S3.

INCORRECT: "Amazon DynamoDB with Amazon DynamoDB Accelerator (DAX)" is
incorrect. This is an example of a non-relational DB with a caching layer and is not
suitable for an OLAP use case.

INCORRECT: "Amazon ElastiCache for Redis with cluster mode enabled" is incorrect.
This is an example of an in-memory caching service. It is good for performance for
transactional use cases.

References:
https://docs.aws.amazon.com/redshift/latest/dg/c_redshift_system_overview.html

https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-redshift/

Every time an item in an Amazon DynamoDB table is modified a record must be
retained for compliance reasons. What is the most efficient solution to recording this
information?

○ Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log
files and record deleted item data to an Amazon S3 bucket
◉ Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and
record the modified item data to an Amazon S3 bucket
○ Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the
CloudTrail log files and record changed items in another DynamoDB table
○ Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and
save the output directly to an Amazon S3 bucket

Correct answer
Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and
record the modified item data to an Amazon S3 bucket

Feedback

Explanation:

Amazon DynamoDB Streams captures a time-ordered sequence of item-level
modifications in any DynamoDB table and stores this information in a log for up to 24
hours. Applications can access this log and view the data items as they appeared
before and after they were modified, in near-real time.

For example, a DynamoDB stream can be consumed by a Lambda function which
processes the item data and records a record in CloudWatch Logs.
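
For this scenario, a minimal sketch of the stream consumer might look like the following (the bucket name is a placeholder, and the table's stream is assumed to use the NEW_AND_OLD_IMAGES view type):

import json
import boto3

s3 = boto3.client('s3')
BUCKET = 'compliance-change-log'   # placeholder bucket name

def lambda_handler(event, context):
    # Each stream record contains the item images before and after the change
    for record in event['Records']:
        if record['eventName'] == 'MODIFY':
            change = {
                'keys': record['dynamodb']['Keys'],
                'old_image': record['dynamodb'].get('OldImage'),
                'new_image': record['dynamodb'].get('NewImage')
            }
            s3.put_object(Bucket=BUCKET,
                          Key=f"changes/{record['eventID']}.json",
                          Body=json.dumps(change).encode('utf-8'))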

CORRECT: "Enable DynamoDB Streams. Configure an AWS Lambda function to poll
the stream and record the modified item data to an Amazon S3 bucket" is the correct
answer.

INCORRECT: "Enable Amazon CloudWatch Logs. Configure an AWS Lambda function
to monitor the log files and record deleted item data to an Amazon S3 bucket" is
incorrect. The deleted item data will not be recorded in CloudWatch Logs.

INCORRECT: "Enable Amazon CloudTrail. Configure an Amazon EC2 instance to
monitor activity in the CloudTrail log files and record changed items in another
DynamoDB table" is incorrect. CloudTrail records API actions so it will not record the
data from the item that was modified.

INCORRECT: "Enable DynamoDB Global Tables. Enable DynamoDB streams on the
multi-region table and save the output directly to an Amazon S3 bucket" is incorrect.
Global Tables is used for creating a multi-region, multi-master database. It is of no
additional value for this requirement as you could just enable DynamoDB streams on
the main table. You also cannot save modified data straight to an S3 bucket.
References:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-dynamodb/

An application in a private subnet needs to query data in an Amazon DynamoDB
table. Use of the DynamoDB public endpoints must be avoided. What is the most
EFFICIENT and secure method of enabling access to the table?

○ Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)
◉ Create a gateway VPC endpoint and add an entry to the route table
○ Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN
○ Create a software VPN between DynamoDB and the application in the private subnet

Correct answer
Create a gateway VPC endpoint and add an entry to the route table

Feedback

Explanation:

A VPC endpoint enables you to privately connect your VPC to supported AWS services
and VPC endpoint services powered by AWS PrivateLink without requiring an internet
gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Instances in your VPC do not require public IP addresses to communicate with
resources in the service. Traffic between your VPC and the other service does not leave
the Amazon network.
With a gateway endpoint you configure your route table to point to the endpoint.
Amazon S3 and DynamoDB use gateway endpoints.

The key difference between the two types of VPC endpoint is that gateway endpoints
(used by Amazon S3 and DynamoDB) are reached via route table entries, whereas
interface endpoints place an elastic network interface with a private IP address into your
subnets.
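
A minimal sketch of creating the gateway endpoint with boto3 (the VPC, Region and route table IDs are placeholders):

import boto3

ec2 = boto3.client('ec2')

ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0123456789abcdef0',
    ServiceName='com.amazonaws.us-east-1.dynamodb',
    RouteTableIds=['rtb-0123456789abcdef0'])   # route table of the private subnet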

CORRECT: "Create a gateway VPC endpoint and add an entry to the route table" is the
correct answer.

INCORRECT: "Create an interface VPC endpoint in the VPC with an Elastic Network
Interface (ENI)" is incorrect. This would be used for services that are supported by
interface endpoints, not gateway endpoints.

INCORRECT: "Create a private Amazon DynamoDB endpoint and connect to it using
an AWS VPN" is incorrect. You cannot create an Amazon DynamoDB private endpoint
and connect to it over VPN. Private endpoints are VPC endpoints and are connected to
by instances in subnets via route table entries or via ENIs (depending on which
service).

INCORRECT: "Create a software VPN between DynamoDB and the application in the
private subnet" is incorrect. You cannot create a software VPN between DynamoDB
and an application.

References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-vpc/

A company is deploying an Amazon ElastiCache for Redis cluster. To enhance
security a password should be required to access the database. What should the
solutions architect use?

○ AWS Directory Service
○ AWS IAM Policy
◉ Redis AUTH command
○ VPC Security Group

Correct answer
Redis AUTH command

Feedback

Explanation:

Redis authentication tokens enable Redis to require a token (password) before allowing
clients to execute commands, thereby improving data security.

You can require that users enter a token on a token-protected Redis server. To do this,
include the parameter --auth-token (API: AuthToken) with the correct token when you
create your replication group or cluster. Also include it in all subsequent commands to
the replication group or cluster.
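
An illustrative boto3 sketch (IDs, the node type and the token are placeholders); note that an AUTH token requires in-transit encryption to be enabled:

import boto3

elasticache = boto3.client('elasticache')

elasticache.create_replication_group(
    ReplicationGroupId='secure-redis',
    ReplicationGroupDescription='Redis with AUTH enabled',
    Engine='redis',
    CacheNodeType='cache.t3.micro',
    NumCacheClusters=2,
    AutomaticFailoverEnabled=True,
    TransitEncryptionEnabled=True,            # required when using AuthToken
    AuthToken='example-token-at-least-16-chars')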

CORRECT: "Redis AUTH command" is the correct answer.


INCORRECT: "AWS Directory Service" is incorrect. This is a managed Microsoft Active
Directory service and cannot add password protection to Redis.

INCORRECT: "AWS IAM Policy" is incorrect. You cannot use an IAM policy to enforce
a password on Redis.

INCORRECT: "VPC Security Group" is incorrect. A security group protects at the
network layer; it does not affect application authentication.

References:

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-elasticache/

A large MongoDB database running on-premises must be migrated to Amazon
DynamoDB within the next few weeks. The database is too large to migrate over the
company’s limited internet bandwidth so an alternative solution must be used. What
should a Solutions Architect recommend?

○ Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the
AWS Database Migration Service (DMS)
◉ Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball
Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon
DynamoDB
○ Enable compression on the MongoDB database and use the AWS Database Migration
Service (DMS) to directly migrate the database to Amazon DynamoDB
○ Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS
Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the
AWS Cloud

Correct answer
Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball
Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to
Amazon DynamoDB

Feedback

Explanation:

Larger data migrations with AWS DMS can include many terabytes of information. This
process can be cumbersome due to network bandwidth limits or just the sheer amount
of data. AWS DMS can use Snowball Edge and Amazon S3 to migrate large databases
more quickly than by other methods.

When you're using an Edge device, the data migration process has the following
stages:

1. You use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally
and move it to an Edge device.
2. You ship the Edge device or devices back to AWS.
3. After AWS receives your shipment, the Edge device automatically loads its data
into an Amazon S3 bucket.
4. AWS DMS takes the files and migrates the data to the target data store. If you are
using change data capture (CDC), those updates are written to the Amazon S3
bucket and then applied to the target data store.

CORRECT: "Use the Schema Conversion Tool (SCT) to extract and load the data to an
AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to
migrate the data to Amazon DynamoDB" is the correct answer.

INCORRECT: "Setup an AWS Direct Connect and migrate the database to Amazon
DynamoDB using the AWS Database Migration Service (DMS)" is incorrect as Direct
Connect connections can take several weeks to implement.

INCORRECT: "Enable compression on the MongoDB database and use the AWS
Database Migration Service (DMS) to directly migrate the database to Amazon
DynamoDB" is incorrect. It is unlikely that compression alone would make enough of a
difference, and the company wants to avoid the internet link as stated in the scenario.

INCORRECT: "Use the AWS Database Migration Service (DMS) to extract and load the
data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB
using AWS DMS in the AWS Cloud" is incorrect. This is the wrong method, the
Solutions Architect should use the SCT to extract and load to Snowball Edge and then
AWS DMS in the AWS Cloud.

References:

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-migration-services/

To increase performance and redundancy for an application a company has decided
to run multiple implementations in different AWS Regions behind network load
balancers. The company currently advertise the application using two public IP
addresses from separate /24 address ranges and would prefer not to change these.
Users should be directed to the closest available application endpoint.

Which actions should a solutions architect take? (Select TWO.)

○ Create an Amazon Route 53 geolocation based routing policy
◉ Create an AWS Global Accelerator and attach endpoints in each AWS Region
○ Assign new static anycast IP addresses and modify any existing pointers
◉ Migrate both public IP addresses to the AWS Global Accelerator
○ Create PTR records to map existing public IP addresses to an Alias

Correct answers
Create an AWS Global Accelerator and attach endpoints in each AWS Region
Migrate both public IP addresses to the AWS Global Accelerator

Feedback

Explanation:

AWS Global Accelerator uses static IP addresses as fixed entry points for your
application. You can migrate up to two /24 IPv4 address ranges and choose which /32
IP addresses to use when you create your accelerator.

This solution ensures the company can continue using the same IP addresses and they
are able to direct traffic to the application endpoint in the AWS Region closest to the
end user. Traffic is sent over the AWS global network for consistent performance.
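
At a high level (all values are placeholders), after the /24 ranges have been provisioned and advertised through the BYOIP workflow, the accelerator can be created with boto3; listeners and per-Region endpoint groups pointing at each Network Load Balancer are then added:

import boto3

# The Global Accelerator API is served from the us-west-2 Region
ga = boto3.client('globalaccelerator', region_name='us-west-2')

accelerator = ga.create_accelerator(
    Name='multi-region-app',
    IpAddressType='IPV4',
    IpAddresses=['203.0.113.10', '198.51.100.10'],   # addresses from the BYOIP ranges
    Enabled=True)

# create_listener and create_endpoint_group are then used to attach the
# Network Load Balancer in each Region to the accelerator.
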
CORRECT: "Create an AWS Global Accelerator and attach endpoints in each AWS
Region" is a correct answer.

CORRECT: "Migrate both public IP addresses to the AWS Global Accelerator" is also a
correct answer.

INCORRECT: "Create an Amazon Route 53 geolocation based routing policy" is
incorrect. With this solution new IP addresses will be required as there will be
application endpoints in different regions.

INCORRECT: "Assign new static anycast IP addresses and modify any existing
pointers" is incorrect. This is unnecessary as you can bring your own IP addresses to
AWS Global Accelerator and this is preferred in this scenario.

INCORRECT: "Create PTR records to map existing public IP addresses to an Alias" is
incorrect. This is not a workable solution for mapping existing IP addresses to an
Amazon Route 53 Alias.

References:

https://aws.amazon.com/global-accelerator/features/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-global-accelerator/

Three Amazon VPCs are used by a company in the same region. The company has
two AWS Direct Connect connections to two separate company offices and wishes
to share these with all three VPCs. A Solutions Architect has created an AWS Direct
Connect gateway. How can the required connectivity be configured?

○ Associate the Direct Connect gateway to a transit gateway


○ Associate the Direct Connect gateway to a virtual private gateway in each VPC
◉ Create a VPC peering connection between the VPCs and route entries for the Direct Connect
Gateway
○ Create a transit virtual interface between the Direct Connect gateway and each VPC

Correct answer
Associate the Direct Connect gateway to a transit gateway

Feedback

Explanation:

You can manage a single connection for multiple VPCs or VPNs that are in the same
Region by associating a Direct Connect gateway to a transit gateway. The solution
involves the following components:

A transit gateway that has VPC attachments.
A Direct Connect gateway.
An association between the Direct Connect gateway and the transit gateway.
A transit virtual interface that is attached to the Direct Connect gateway.

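A minimal boto3 sketch of the association, assuming the VPC attachments and the transit virtual interface are created separately; the names, ASN, and prefixes are placeholders:

```python
import boto3

dx = boto3.client("directconnect")
ec2 = boto3.client("ec2")

# Direct Connect gateway (a global resource).
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
    amazonSideAsn=64512,
)
dxgw_id = dxgw["directConnectGateway"]["directConnectGatewayId"]

# Transit gateway that the three VPCs are attached to (attachments are
# created separately with create_transit_gateway_vpc_attachment).
tgw = ec2.create_transit_gateway(Description="shared-tgw")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Associate the Direct Connect gateway with the transit gateway and list the
# prefixes to advertise to the on-premises networks (placeholder CIDR).
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw_id,
    gatewayId=tgw_id,
    addAllowedPrefixesToDirectConnectGateway=[{"cidr": "10.0.0.0/8"}],
)
```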

CORRECT: "Associate the Direct Connect gateway to a transit gateway" is the correct
answer.
INCORRECT: "Associate the Direct Connect gateway to a virtual private gateway in
each VPC" is incorrect. For VPCs in the same region a VPG is not necessary. A transit
gateway can instead be configured.

INCORRECT: "Create a VPC peering connection between the VPCs and route entries
for the Direct Connect Gateway" is incorrect. You cannot add route entries for a Direct
Connect gateway to each VPC and enable routing. Use a transit gateway instead.

INCORRECT: "Create a transit virtual interface between the Direct Connect gateway
and each VPC" is incorrect. The transit virtual interface is attached to the Direct
Connect gateway on the connection side, not the VPC/transit gateway side.

References:

https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-
intro.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-direct-connect/

A Solutions Architect needs to select a low-cost, short-term option for adding
resilience to an AWS Direct Connect connection. What is the MOST cost-effective
solution to provide a backup for the Direct Connect connection?

○ Implement a second AWS Direct Connection


◉ Implement an IPSec VPN connection and use the same BGP prefix
○ Configure AWS Transit Gateway with an IPSec VPN backup
○ Configure an IPSec VPN connection over the Direct Connect link

Correct answer
Implement an IPSec VPN connection and use the same BGP prefix

Feedback
Explanation:

This is the most cost-effective solution. With this option both the Direct Connect
connection and IPSec VPN are active and being advertised using the Border Gateway
Protocol (BGP). The Direct Connect link will always be preferred unless it is
unavailable.

CORRECT: "Implement an IPSec VPN connection and use the same BGP prefix" is the
correct answer.

INCORRECT: "Implement a second AWS Direct Connection" is incorrect. This is not a


short-term or low-cost option as it takes time to implement and is costly.

INCORRECT: "Configure AWS Transit Gateway with an IPSec VPN backup" is


incorrect. This is a workable solution and provides some advantages. However, you do
need to pay for the Transit Gateway so it is not the most cost-effective option and
probably not suitable for a short-term need.

INCORRECT: "Configure an IPSec VPN connection over the Direct Connect link" is
incorrect. This is not a solution to the problem as the VPN connection is going over the
Direct Connect link. This is something you might do to add encryption to Direct Connect
but it doesn’t make it more resilient.

References:

https://docs.aws.amazon.com/whitepapers/latest/hybrid-connectivity/vpn-connection-as-
a-backup-to-aws-dx-connection-example.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-direct-connect/

A highly elastic application consists of three tiers. The application tier runs in an Auto
Scaling group and processes data and writes it to an Amazon RDS MySQL
database. The Solutions Architect wants to restrict access to the database tier to
only accept traffic from the instances in the application tier. However, instances in
the application tier are being constantly launched and terminated.

How can the Solutions Architect configure secure access to the database tier?

◉ Configure the database security group to allow traffic only from the application security group
○ Configure the database security group to allow traffic only from port 3306
○ Configure a Network ACL on the database subnet to deny all traffic to ports other than 3306
○ Configure a Network ACL on the database subnet to allow all traffic from the application
subnet

Correct answer
Configure the database security group to allow traffic only from the application security group

Feedback

Explanation:

The best option is to configure the database security group to only allow traffic that
originates from the application security group. You can also define the destination port
as the database port. This setup will allow any instance that is launched and attached to
this security group to connect to the database.
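A minimal boto3 sketch of the rule; the security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder security group IDs for the application tier and database tier.
app_sg_id = "sg-0123456789abcdef0"
db_sg_id = "sg-0fedcba9876543210"

# Allow MySQL traffic into the database security group only when the source
# is a resource that carries the application tier's security group.
ec2.authorize_security_group_ingress(
    GroupId=db_sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": app_sg_id}],
        }
    ],
)
```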

CORRECT: "Configure the database security group to allow traffic only from the
application security group" is the correct answer.

INCORRECT: "Configure the database security group to allow traffic only from port
3306" is incorrect. Port 3306 for MySQL should be the destination port, not the source.

INCORRECT: "Configure a Network ACL on the database subnet to deny all traffic to
ports other than 3306" is incorrect. This does not restrict access specifically to the
application instances.

INCORRECT: "Configure a Network ACL on the database subnet to allow all traffic from
the application subnet" is incorrect. This does not restrict access specifically to the
application instances.

References:

https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-vpc/

An application is being monitored using Amazon GuardDuty. A Solutions Architect
needs to be notified by email of medium to high severity events. How can this be
achieved?

○ Configure an Amazon CloudWatch alarm that triggers based on a GuardDuty metric


◉ Create an Amazon CloudWatch events rule that triggers an Amazon SNS topic
○ Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda function
○ Configure an Amazon CloudTrail alarm that triggers based on GuardDuty API activity

Correct answer
Create an Amazon CloudWatch events rule that triggers an Amazon SNS topic

Feedback

Explanation:

A CloudWatch Events rule can be used to set up automatic email notifications for
Medium to High Severity findings to the email address of your choice. You simply create
an Amazon SNS topic and then associate it with an Amazon CloudWatch events rule.

Note: step by step procedures for how to set this up can be found in the article linked in
the references below.
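A hedged boto3 sketch of the rule and SNS target; the names, email address, and severity filter are illustrative:

```python
import json
import boto3

events = boto3.client("events")
sns = boto3.client("sns")

# SNS topic with an email subscription (address is a placeholder).
# The topic also needs a resource policy allowing events.amazonaws.com to
# publish to it (not shown here).
topic_arn = sns.create_topic(Name="guardduty-alerts")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="[email protected]")

# Rule that matches GuardDuty findings with severity 4.0 and above
# (medium and high), using an EventBridge numeric content filter.
events.put_rule(
    Name="guardduty-medium-high",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 4]}]},
    }),
)

# Send matching findings to the SNS topic.
events.put_targets(
    Rule="guardduty-medium-high",
    Targets=[{"Id": "sns-email", "Arn": topic_arn}],
)
```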
CORRECT: "Create an Amazon CloudWatch events rule that triggers an Amazon SNS
topic" is the correct answer.

INCORRECT: "Configure an Amazon CloudWatch alarm that triggers based on a


GuardDuty metric" is incorrect. There is no metric for GuardDuty that can be used for
specific findings.

INCORRECT: "Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda
function" is incorrect. CloudWatch logs is not the right CloudWatch service to use.
CloudWatch events is used for reacting to changes in service state.

INCORRECT: "Configure an Amazon CloudTrail alarm the triggers based on GuardDuty


API activity" is incorrect. CloudTrail cannot be used to trigger alarms based on
GuardDuty API activity.

References:

https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudwatch/

A company is migrating a decoupled application to AWS. The application uses a
message broker based on the MQTT protocol. The application will be migrated to
Amazon EC2 instances and the solution for the message broker must not require
rewriting application code.

Which AWS service can be used for the migrated message broker?

○ Amazon SQS
○ Amazon SNS
◉ Amazon MQ
○ AWS Step Functions
Correct answer
Amazon MQ

Feedback

Explanation:

Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it
easy to set up and operate message brokers in the cloud. Connecting current
applications to Amazon MQ is easy because it uses industry-standard APIs and
protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and
WebSocket. Using standards means that in most cases, there’s no need to rewrite any
messaging code when you migrate to AWS.
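To illustrate the "no rewrite" point, a sketch of an MQTT producer using the paho-mqtt library (1.x-style API); the broker endpoint, credentials, and topic are placeholders:

```python
import paho.mqtt.client as mqtt

# Placeholder values: the broker's MQTT endpoint (shown in the Amazon MQ
# console) and a user defined on the broker.
BROKER_HOST = "b-1234abcd-1.mq.us-east-1.amazonaws.com"
BROKER_PORT = 8883  # Amazon MQ exposes MQTT over TLS on this port

# paho-mqtt 1.x style constructor; 2.x additionally requires a callback API version.
client = mqtt.Client(client_id="telemetry-producer")
client.username_pw_set("mq_user", "mq_password")
client.tls_set()  # use the system CA bundle for the TLS connection

client.connect(BROKER_HOST, BROKER_PORT)
client.loop_start()

# The application publishes exactly as it did against the on-premises broker.
client.publish("sensors/temperature", payload='{"value": 21.5}', qos=1)

client.loop_stop()
client.disconnect()
```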

CORRECT: "Amazon MQ" is the correct answer.

INCORRECT: "Amazon SQS" is incorrect. This is an Amazon proprietary service and


does not support industry-standard messaging APIs and protocols.

INCORRECT: "Amazon SNS" is incorrect. This is a notification service not a message


bus.

INCORRECT: "AWS Step Functions" is incorrect. This is a workflow orchestration


service, not a message bus.

References:

https://aws.amazon.com/amazon-mq/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-application-integration-services/

A Solutions Architect is rearchitecting an application with decoupling. The application
will send batches of up to 1000 messages per second that must be received in the
correct order by the consumers.

Which action should the Solutions Architect take?

○ Create an Amazon SQS Standard queue


○ Create an Amazon SNS topic
◉ Create an Amazon SQS FIFO queue
○ Create an AWS Step Functions state machine

Correct answer
Create an Amazon SQS FIFO queue

Feedback

Explanation:

Only FIFO queues guarantee the ordering of messages and therefore a standard queue
would not work. The FIFO queue supports up to 3,000 messages per second with
batching so this is a supported scenario.
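A minimal boto3 sketch; the queue name and message group are placeholders:

```python
import boto3

sqs = boto3.client("sqs")

# FIFO queues must have a name ending in ".fifo".
queue_url = sqs.create_queue(
    QueueName="orders.fifo",
    Attributes={
        "FifoQueue": "true",
        # Let SQS deduplicate based on a SHA-256 hash of the message body.
        "ContentBasedDeduplication": "true",
    },
)["QueueUrl"]

# Messages that share a MessageGroupId are delivered in order.
sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[
        {"Id": str(i), "MessageBody": f"event-{i}", "MessageGroupId": "order-123"}
        for i in range(10)
    ],
)
```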

CORRECT: "Create an Amazon SQS FIFO queue" is the correct answer.

INCORRECT: "Create an Amazon SQS Standard queue" is incorrect as it does not


guarantee ordering of messages.

INCORRECT: "Create an Amazon SNS topic" is incorrect. SNS is a notification service


and a message queue is a better fit for this use case.

INCORRECT: "Create an AWS Step Functions state machine" is incorrect. Step


Functions is a workflow orchestration service and is not useful for this scenario.

References:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-quotas.html

Save time with our AWS cheat sheets:


https://digitalcloud.training/aws-application-integration-services/

A Solutions Architect is designing an application that will run on an Amazon EC2
instance. The application must asynchronously invoke an AWS Lambda function to
analyze thousands of .CSV files. The services should be decoupled.

Which service can be used to decouple the compute services?

○ Amazon SWF
◉ Amazon SNS
○ Amazon Kinesis
○ Amazon OpsWorks

Correct answer
Amazon SNS

Feedback

Explanation:

You can use a Lambda function to process Amazon Simple Notification Service
notifications. Amazon SNS supports Lambda functions as a target for messages sent to
a topic. This solution decouples the Amazon EC2 application from Lambda and ensures
the Lambda function is invoked.
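A minimal boto3 sketch of the wiring; the function ARN and topic name are placeholders:

```python
import boto3

sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

# Placeholder ARN of the Lambda function that analyzes the .CSV files.
function_arn = "arn:aws:lambda:us-east-1:111122223333:function:analyze-csv"

# Topic the EC2 application publishes to.
topic_arn = sns.create_topic(Name="csv-analysis")["TopicArn"]

# Allow SNS to invoke the function, then subscribe the function to the topic.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="AllowSNSInvoke",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)

# The EC2 application publishes a message per file; the Lambda function is
# invoked asynchronously for each notification.
sns.publish(TopicArn=topic_arn, Message='{"bucket": "my-bucket", "key": "data/file1.csv"}')
```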

CORRECT: "Amazon SNS" is the correct answer.

INCORRECT: "Amazon SWF" is incorrect. The Simple Workflow Service (SWF) is used
for process automation. It is not well suited to this requirement.

INCORRECT: "Amazon Kinesis" is incorrect as this service is used for ingesting and
processing real time streaming data, it is not a suitable service to be used solely for
invoking a Lambda function.

INCORRECT: "Amazon OpsWorks" is incorrect as this service is used for configuration


management of systems using Chef or Puppet.

References:

https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-glue/
https://digitalcloud.training/aws-application-integration-services/

https://digitalcloud.training/aws-lambda/

A company is finalizing its disaster recovery plan. A limited set of core services
will be replicated to the DR site, ready to seamlessly take over in the event of a
disaster. All other services will be switched off.

Which DR strategy is the company using?

○ Backup and restore


◉ Pilot light
○ Warm standby
○ Multi-site

Correct answer
Pilot light

Feedback

Explanation:
In this DR approach, you simply replicate part of your IT structure for a limited set of
core services so that the AWS cloud environment seamlessly takes over in the event of
a disaster.

A small part of your infrastructure is always running, simultaneously syncing mutable
data (such as databases or documents), while other parts of your infrastructure are switched
off and used only during testing.

Unlike a backup and recovery approach, you must ensure that your most critical core
elements are already configured and running in AWS (the pilot light). When the time
comes for recovery, you can rapidly provision a full-scale production environment
around the critical core.

CORRECT: "Pilot light" is the correct answer.

INCORRECT: "Backup and restore" is incorrect. This is the lowest cost DR approach
that simply entails creating online backups of all data and applications.

INCORRECT: "Warm standby" is incorrect. The term warm standby is used to describe
a DR scenario in which a scaled-down version of a fully functional environment is
always running in the cloud.

INCORRECT: "Multi-site" is incorrect. A multi-site solution runs on AWS as well as on


your existing on-site infrastructure in an active- active configuration.

References:

https://aws.amazon.com/blogs/publicsector/rapidly-recover-mission-critical-systems-in-
a-disaster/

A Solutions Architect has been tasked with building an application which stores
images to be used for a website. The website will be accessed by thousands of
customers. The images within the application need to be able to be transformed and
processed as they are being retrieved. The solutions architect would prefer to use
managed services to achieve this, and the solution should be highly available and
scalable, and be able to serve users from around the world with low latency.

Which scenario represents the easiest solution for this task?

○ Store the images in a DynamoDB table, with DynamoDB Global Tables enabled. Provision a
Lambda function to process the data on demand as it leaves the table.
◉ Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Event Notifications
to connect to a Lambda function to process and transform the images when a GET request is
initiated on an object.
○ Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to
transform and process the images whenever a GET request is initiated on an object.
○ Store the images in a DynamoDB table, with DynamoDB Accelerator enabled. Use Amazon
EventBridge to pass the data into an event bus as it is retrieved from DynamoDB and use AWS
Lambda to process the data.

Correct answer
Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to
transform and process the images whenever a GET request is initiated on an object.

Feedback

Explanation:

With S3 Object Lambda you can add your own code to S3 GET requests to modify and
process data as it is returned to an application. For the first time, you can use custom
code to modify the data returned by standard S3 GET requests to filter rows,
dynamically resize images, redact confidential data, and much more. Powered by AWS
Lambda functions, your code runs on infrastructure that is fully managed by AWS,
eliminating the need to create and store derivative copies of your data or to run
expensive proxies, all with no changes required to your applications.
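A hedged sketch of what an Object Lambda handler might look like; the image transform itself is a hypothetical placeholder:

```python
import urllib.request
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Invoked for each GET request made through the Object Lambda access point."""
    ctx = event["getObjectContext"]

    # Fetch the original object via the presigned URL supplied in the event.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read()

    # Transform the object, e.g. resize the image (placeholder logic below).
    transformed = transform_image(original)

    # Return the transformed bytes to the requesting client.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=transformed,
    )
    return {"status_code": 200}

def transform_image(data: bytes) -> bytes:
    # Hypothetical placeholder for the real image-processing code.
    return data
```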

CORRECT: "Store the images in Amazon S3, behind a CloudFront distribution. Use S3
Object Lambda to transform and process the images whenever a GET request is
initiated on an object” is the correct answer (as explained above.)

INCORRECT: "Store the images in a DynamoDB table, with DynamoDB Global Tables
enabled. Provision a Lambda function to process the data on demand as it leaves the
table” is incorrect. DynamoDB is not as well designed for Write Once Read Many
workloads and adding a Lambda function to the DynamoDB table takes more manual
provisioning of resources than using S3 Object Lambda.

INCORRECT: "Store the images in Amazon S3, behind a CloudFront distribution. Use
S3 Event Notifications to connect to a Lambda function to process and transform the
images when a GET request is initiated on an object” is incorrect. This would work;
however it is easier to use S3 Object Lambda as this manages the Lambda function for
you.

INCORRECT: "Store the images in a DynamoDB table, with DynamoDB Accelerator


enabled. Use Amazon EventBridge to pass the data into an event bus as it is retrieved
from DynamoDB and use AWS Lambda to process the data” is incorrect. DynamoDB is
not as well designed for Write Once Read Many workloads and adding a Lambda
function to the DynamoDB table takes more manual provisioning of resources than
using S3 Object Lambda.

References:

https://aws.amazon.com/s3/features/object-lambda/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-lambda/

The Chief Financial Officer of a large corporation is looking for an AWS native tool
which will help reduce their cloud spend. After receiving a budget alarm, the
company has decided that they need to reduce their spend across their different
areas of compute and need insights into their spend to decide where they can
reduce cost.
What is the easiest way to achieve this goal?

○ AWS Trusted Advisor


○ Cost and Usage Reports
◉ AWS Compute Optimizer
○ AWS Cost Explorer

Correct answer
AWS Compute Optimizer

Feedback

Explanation:

AWS Compute Optimizer helps you identify the optimal AWS resource configurations,
such as Amazon Elastic Compute Cloud (EC2) instance types, Amazon Elastic Block
Store (EBS) volume configurations, and AWS Lambda function memory sizes, using
machine learning to analyze historical utilization metrics. AWS Compute Optimizer
provides a set of APIs and a console experience to help you reduce costs and increase
workload performance by recommending the optimal AWS resources for your AWS
workloads.
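A minimal boto3 sketch of pulling the EC2 rightsizing recommendations; treat the response field names as illustrative:

```python
import boto3

co = boto3.client("compute-optimizer")

# List rightsizing recommendations for EC2 instances in the account.
response = co.get_ec2_instance_recommendations()

for rec in response.get("instanceRecommendations", []):
    current = rec["currentInstanceType"]
    finding = rec["finding"]  # e.g. OVER_PROVISIONED, UNDER_PROVISIONED, OPTIMIZED
    options = [o["instanceType"] for o in rec.get("recommendationOptions", [])]
    print(f"{rec['instanceArn']}: {current} is {finding}; consider {options}")
```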
CORRECT: "AWS Compute Optimizer" is the correct answer (as explained above.)

INCORRECT: "AWS Trusted Advisor" is incorrect. Whilst you will get some cost
recommendations using Trusted Advisor, when working with reducing cost for compute
specifically, AWS Compute Optimizer is a better choice.

INCORRECT: "Cost and Usage Reports" is incorrect. Cost and Usage Reports are a
highly detailed report of your spend and usage across your entire AWS Environment.
Whilst it can be used to understand cost, it does not make recommendations.

INCORRECT: "AWS Cost Explorer" is incorrect. Cost Explorer gives you insight into
your spend and usage in a graphical format, which can be filtered and grouped by
parameters like Region, instance type and can use Tags to further group resources. It
does not however make any recommendations on how to reduce spend.

References:

https://aws.amazon.com/compute-optimizer/faqs/

Save time with our AWS cheat sheets:

https://digitalcloud.training/aws-billing-and-pricing/
A large customer services company is planning to build a highly scalable and
durable application designed to aggregate data across their support
communications, and extract sentiment on how successfully they are helping their
customers. These communications are generated across chat, social media, emails
and more. They need a solution which stores output from these communication
channels, which then processes the text for sentiment analysis. The outputs must
then be stored in a data warehouse for future use.

Which series of AWS services will provide the functionality the company is looking
for?

○ Use an Amazon S3 Data Lake as the original data store for the output from the support
communications. Use Amazon Textract to process the text for sentiment analysis. Then store
the outputs in Amazon RedShift.
○ Use an Amazon S3 Data Lake as the original data store for the output from the support
communications. Use Amazon Comprehend to process the text for sentiment analysis. Then
store the outputs in Amazon RedShift.
◉ Use DynamoDB as the original data store for the output from the support communications.
Use Amazon Comprehend to process the text for sentiment analysis. Then store the outputs in
Amazon RedShift.
○ Use DynamoDB as the original data store for the output from the support communications.
Use Amazon Kendra to process the text for sentiment analysis. Then store the outputs in
Amazon RedShift.

Correct answer
Use an Amazon S3 Data Lake as the original data store for the output from the support
communications. Use Amazon Comprehend to process the text for sentiment analysis. Then
store the outputs in Amazon RedShift.

Feedback
Explanation:

Amazon Comprehend is a natural-language processing (NLP) service that uses
machine learning to uncover valuable insights and connections in text.

You could easily use Amazon Comprehend to detect customer sentiment and analyze
customer interactions and automatically extract insights from customer surveys to
improve your products. An S3 Data Lake also acts as an ideal data repository for
Machine Learning data used by many different business units and applications.
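A minimal boto3 sketch of the sentiment step; the text is a placeholder:

```python
import boto3

comprehend = boto3.client("comprehend")

# Example support message pulled from the S3 data lake (placeholder text).
text = "Thanks so much, the agent resolved my billing issue in minutes!"

result = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(result["Sentiment"])        # e.g. POSITIVE
print(result["SentimentScore"])   # per-class confidence scores
```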

CORRECT: "Use an Amazon S3 Data Lake as the original date store for the output
from the support communications. Use Amazon Comprehend to process the text for
sentiment analysis. Then store the outputs in Amazon RedShift” is the correct answer
(as explained above.)

INCORRECT: "Use an Amazon S3 Data Lake as the original date store for the output
from the support communications. Use Amazon Textract to process the text for
sentiment analysis. Then store the outputs in Amazon RedShift” is incorrect. Amazon
Textract is a machine learning (ML) service that automatically extracts text, handwriting,
and data from scanned documents, and does not output any sentiment.

INCORRECT: "Use DynamoDB as the original data store for the output from the
support communications. Use Amazon Comprehend to process the text for sentiment
analysis. Then store the outputs in Amazon RedShift” is incorrect. DynamoDB is not as
suitable of a data repository for machine learning data like an Amazon S3 Data Lake
would be.

INCORRECT: "Use DynamoDB as the original data store for the output from the
support communications. Use Amazon Kendra to process the text for sentiment
analysis. Then store the outputs in Amazon RedShift” is incorrect. DynamoDB is not as
suitable of a data repository for machine learning data like an Amazon S3 Data Lake
would be, and Amazon Kendra is a highly accurate intelligent search service powered
by machine learning and does not work to understand sentiment.

References:

https://aws.amazon.com/comprehend/

A Solutions Architect is migrating a distributed application from their on-premises
environment into AWS. This application consists of an Apache Cassandra NoSQL
database, with a containerized SUSE Linux compute layer with an additional storage
layer made up of multiple Microsoft SQL Server databases. Once in the cloud the
company wants to have as little operational overhead as possible, with no schema
conversion during the migration and the company wants to host the architecture in a
highly available and durable way.

Which of the following groups of services will provide the solutions architect with the
best solution?

○ Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on EC2.
Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
○ Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on
Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
○ Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS
on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
◉ Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS
on Fargate. Use Amazon Aurora to host the second storage layer.

Correct answer
Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS
on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.

Feedback

Explanation:

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and
managed Apache Cassandra–compatible database service. This, combined with a
containerized, serverless compute layer on Amazon ECS on Fargate and an RDS for
Microsoft SQL Server database layer, is a fully managed version of what currently exists
on premises.
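A hedged boto3 sketch of provisioning a keyspace and table (existing CQL drivers can also connect largely unchanged); the names and schema are placeholders:

```python
import boto3

keyspaces = boto3.client("keyspaces")

# Keyspace and table to receive the migrated Cassandra data (placeholder names).
keyspaces.create_keyspace(keyspaceName="support_app")

keyspaces.create_table(
    keyspaceName="support_app",
    tableName="sessions",
    schemaDefinition={
        "allColumns": [
            {"name": "session_id", "type": "text"},
            {"name": "payload", "type": "text"},
        ],
        "partitionKeys": [{"name": "session_id"}],
    },
)
```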

CORRECT: "Run the NoSQL database on Amazon Keyspaces, and the compute layer
on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the
second storage layer” is the correct answer (as explained above.)

INCORRECT: "Run the NoSQL database on DynamoDB, and the compute layer on
Amazon ECS on EC2. Use Amazon RDS for Microsoft SQL Server to host the second
storage layer” is incorrect. DynamoDB is not a managed version of DynamoDB
therefore it is not the correct answer.

INCORRECT: "Run the NoSQL database on DynamoDB, and the compute layer on
Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the
second storage layer” is incorrect. DynamoDB is not a managed version of DynamoDB
therefore it is not the correct answer.
INCORRECT: "Run the NoSQL database on Amazon Keyspaces, and the compute
layer on Amazon ECS on Fargate. Use Amazon Aurora to host the second storage
layer” is incorrect. Amazon Aurora does not have an option to run a Microsoft SQL
Server database, therefore this answer is not correct.

References:

https://aws.amazon.com/keyspaces/

Save time with our AWS cheat sheets:

https://digitalcloud.training/category/aws-cheat-sheets/aws-database/

A Solutions Architect is tasked with designing a fully Serverless, Microservices
based web application which requires the use of a GraphQL API to provide a single
entry point to the application.

Which AWS managed service could the Solutions Architect use?

○ API Gateway
○ Amazon Athena
◉ AWS AppSync
○ AWS Lambda

Correct answer
AWS AppSync

Feedback

Explanation:
AWS AppSync is a serverless GraphQL and Pub/Sub API service that simplifies
building modern web and mobile applications.

AWS AppSync GraphQL APIs simplify application development by providing a single
endpoint to securely query or update data from multiple databases, microservices, and
APIs.
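A hedged sketch of calling an AppSync GraphQL endpoint over HTTPS, assuming API-key authorization; the endpoint, key, and query are placeholders:

```python
import json
import urllib.request

# Placeholder AppSync GraphQL endpoint and API key (API_KEY auth mode assumed).
APPSYNC_URL = "https://example1234.appsync-api.us-east-1.amazonaws.com/graphql"
API_KEY = "da2-examplekey"

# Hypothetical query against the application's schema.
payload = {"query": "query ListOrders { listOrders { id status total } }"}

request = urllib.request.Request(
    APPSYNC_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["data"])
```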

CORRECT: "AWS AppSync" is the correct answer (as explained above.)

INCORRECT: "API Gateway" is incorrect. You cannot create GraphQL APIs on API
Gateway.

INCORRECT: "Amazon Athena" is incorrect. Amazon Athena is a Serverless query


service where you can query S3 using SQL statements.

INCORRECT: "AWS Lambda" is incorrect. AWS Lambda is a serverless compute


service and is not designed to build APIs.

References:

https://aws.amazon.com/appsync/

Save time with our AWS cheat sheets:

https://digitalcloud.training/category/aws-cheat-sheets/aws-networking-content-delivery/

A telecommunications company is looking to expand its 5G coverage nationwide,
and as a result needs to provision and build its own private cellular network with
the help of AWS.

Which solution does AWS provide to help with this?

○ AWS Wavelength
○ AWS Outposts
◉ AWS Private 5G
○ AWS CloudHSM

Correct answer
AWS Private 5G

Feedback

Explanation:

AWS Private 5G is a managed service that makes it easy to deploy, operate, and scale
your own private cellular network, with all required hardware and software provided by
AWS.
CORRECT: "AWS Private 5G" is the correct answer (as explained above.)

INCORRECT: "AWS Wavelength" is incorrect. AWS Wavelength embeds AWS


compute and storage services within 5G networks, providing mobile edge computing
infrastructure for developing, deploying, and scaling ultra-low-latency applications.

INCORRECT: "AWS CloudHSM" is incorrect. AWS CloudHSM is a cloud-based


hardware security module (HSM) that enables you to easily generate and use your own
encryption keys on the AWS Cloud and has nothing to do with 5G.
INCORRECT: "AWS Outposts" is incorrect. AWS Outposts is a family of fully managed
solutions delivering AWS infrastructure and services to virtually any on-premises or
edge location for a truly consistent hybrid experience. It is not related to 5G.

References:

https://aws.amazon.com/private5g/

A Solutions Architect has placed an Amazon CloudFront distribution in front of their
web server, which is serving up a highly accessed website, serving content globally.
The Solutions Architect needs to dynamically route the user to a new URL
depending on where the user is accessing from, through running a particular script.
This dynamic routing will happen on every request, and as a result requires the code
to run at extremely low latency, and low cost.

What solution will best achieve this goal?

○ Redirect traffic by running your code within a Lambda function using Lambda@Edge.
◉ At the Edge Location, run your code with CloudFront Functions.
○ Use Path Based Routing to route each user to the appropriate webpage behind an
Application Load Balancer.
○ Use Route 53 Geo Proximity Routing to route users’ traffic to your resources based on their
geographic location.

Correct answer
At the Edge Location, run your code with CloudFront Functions.

Feedback

Explanation:
With CloudFront Functions in Amazon CloudFront, you can write lightweight functions in
JavaScript for high-scale, latency-sensitive CDN customizations. Your functions can
manipulate the requests and responses that flow through CloudFront, perform basic
authentication and authorization, generate HTTP responses at the edge, and more.
CloudFront Functions is approximately 1/6th the cost of Lambda@Edge and is
extremely low latency as the functions are run on the host in the edge location, instead
of running on a Lambda function elsewhere.
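A hedged sketch of creating and publishing such a function with boto3; the embedded JavaScript and the redirect logic are purely illustrative:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# The function body itself is JavaScript; it is embedded here as a byte string
# only so it can be created and published through the CloudFront API.
function_code = b"""
function handler(event) {
    var request = event.request;
    var country = request.headers['cloudfront-viewer-country'];
    if (country && country.value === 'DE') {
        return {
            statusCode: 302,
            statusDescription: 'Found',
            headers: { location: { value: 'https://de.example.com' + request.uri } }
        };
    }
    return request;
}
"""

created = cloudfront.create_function(
    Name="geo-redirect",
    FunctionConfig={"Comment": "Redirect by viewer country", "Runtime": "cloudfront-js-1.0"},
    FunctionCode=function_code,
)

# Publish to the LIVE stage so the function can be associated with a
# distribution's viewer-request event.
cloudfront.publish_function(Name="geo-redirect", IfMatch=created["ETag"])
```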

CORRECT: "At the Edge Location, run your code with CloudFront Functions” is the
correct answer (as explained above.)

INCORRECT: "Redirect traffic by running your code within a Lambda function using
Lambda@Edge” is incorrect. Although you could achieve this using Lambda@Edge, the
question states the need for the lowest latency possible, and comparatively the lowest
latency option is CloudFront Functions.

INCORRECT: "Use Path Based Routing to route each user to the appropriate webpage
behind an Application Load Balancer” is incorrect. This architecture does not account
for the fact that custom code needs to be run to make this happen.

INCORRECT: "Use Route 53 Geo Proximity Routing to route users’ traffic to your
resources based on their geographic location.'' is incorrect. This may work, however
again it does not account for the fact that custom code needs to be run to make this
happen.

References:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-
functions.html

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-cloudfront/
As part of a company’s shift to the AWS cloud, they need to gain an insight into their
total on-premises footprint. They have discovered that they are currently struggling
with managing their software licenses. They would like to maintain a hybrid cloud
setup, with some of their licenses stored in the cloud with some stored on-premises.

What actions should be taken to ensure they are managing the licenses
appropriately going forward?

○ Use AWS Secrets Manager to store the licenses as secrets to ensure they are stored
securely
○ Use the AWS Key Management Service to treat the license key safely and store it securely
◉ Use AWS License Manager to manage the software licenses
○ Use Amazon S3 with governance lock to manage the storage of the licenses

Correct answer
Use AWS License Manager to manage the software licenses

Feedback

Explanation:

AWS License Manager makes it easier to manage your software licenses from vendors
such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments.
AWS License Manager lets administrators create customized licensing rules that mirror
the terms of their licensing agreements.

CORRECT: "Use AWS License Manager to manage the software licenses" is the
correct answer (as explained above.)

INCORRECT: "Use AWS Secrets Manager to store the licenses as secrets to ensure
they are stored securely" is incorrect. AWS Secrets Manager helps you protect secrets
needed to access your applications, services, and IT resources. This does not include
license keys.
INCORRECT: "Use the AWS Key Management Service to treat the license key safely
and store it securely" is incorrect. AWS Key Management Service (AWS KMS) makes it
easy for you to create and manage cryptographic keys and control their use across a
wide range of AWS services and in your applications, not license keys.

INCORRECT: "Use Amazon S3 with governance lock to manage the storage of the
licenses" is incorrect. Amazon S3 is not designed to store software licenses.

References:

https://aws.amazon.com/license-manager/

A financial institution with many departments wants to migrate to the AWS Cloud
from their data center. Each department should have its own AWS account with
preconfigured, limited access to authorized services based on each team's needs,
following the principle of least privilege.

What actions should be taken to ensure compliance with these security
requirements?

○ Use AWS CloudFormation to create new member accounts and networking and use IAM
roles to allow access to approved AWS services.
◉ Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use
the Landing Zone to create new member accounts and networking. Grant the department's AWS
power user permissions on the created accounts.
○ Configure AWS Organizations with SCPs and create new member accounts. Use AWS
CloudFormation templates to configure the member account networking.
○ Deploy a Landing Zone within AWS Organizations. Allow department administrators to use
the Landing Zone to create new member accounts and networking. Grant the department's AWS
power user permissions on the created accounts.

Correct answer
Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use
the Landing Zone to create new member accounts and networking. Grant the department's
AWS power user permissions on the created accounts.

Feedback

Explanation:

AWS Control Tower automates the setup of a new landing zone using best practices
blueprints for identity, federated access, and account structure.

The account factory automates provisioning of new accounts in your organization. As a
configurable account template, it helps you standardize the provisioning of new
accounts with pre-approved account configurations. You can configure your account
factory with pre-approved network configuration and region selections.

CORRECT: "Deploy a Landing Zone within AWS Control Tower. Allow department
administrators to use the Landing Zone to create new member accounts and
networking. Grant the department's AWS power user permissions on the created
accounts” is the correct answer (as explained above.)

INCORRECT: "Use AWS CloudFormation to create new member accounts and


networking and use IAM roles to allow access to approved AWS services” is incorrect.
Although you could perhaps make new AWS Accounts with AWS CloudFormation, the
easiest way to do that is by using AWS Control Tower.

INCORRECT: "Configure AWS Organizations with SCPs and create new member
accounts. Use AWS CloudFormation templates to configure the member account
networking” is incorrect. You can make new accounts using AWS Organizations
however the easiest way to do this is by using the AWS Control Tower service.

INCORRECT: "Deploy a Landing Zone within AWS Organizations. Allow department


administrators to use the Landing Zone to create new member accounts and
networking. Grant the department's AWS power user permissions on the created
accounts” is incorrect. Landing Zones do not get deployed within AWS Organizations.
References:

https://aws.amazon.com/controltower/

A computer scientist working for a university is looking to build a machine learning
application which will use telemetry data to predict weather for a given area at a
given time. This application would benefit from using managed services and will
need to find a solution which uses third party data within the application.

Which of the following combinations of services will deliver the best solution?

○ Use Amazon SageMaker to build the machine learning part of the application and use AWS
DataSync to gain access to the third-party telemetry data.
○ Use a TensorFlow AMI from the AWS Marketplace to build the machine learning part of the
application and use AWS DataSync to gain access to the third-party telemetry data.
○ Use a TensorFlow AMI from the AWS Marketplace to build the machine learning part of the
application and use AWS Data Exchange to gain access to the third-party telemetry data.
◉ Use Amazon SageMaker to build the machine learning part of the application and use AWS
Data Exchange to gain access to the third-party telemetry data.

Correct answer
Use Amazon SageMaker to build the machine learning part of the application and use AWS
Data Exchange to gain access to the third-party telemetry data.

Feedback

Explanation:

Amazon SageMaker allows you to build, train, and deploy machine learning models for
any use case with fully managed infrastructure, tools, and workflows. AWS Data
Exchange allows you to gain access to third party data sets across Automotive,
Financial Services, Gaming, Healthcare & Life Sciences, Manufacturing, Marketing,
Media & Entertainment, Retail, and many more industries.

CORRECT: "Use Amazon SageMaker to build the machine learning part of the
application and use AWS Data Exchange to gain access to the third-party telemetry
data” is the correct answer (as explained above.)

INCORRECT: "Use Amazon SageMaker to build the machine learning part of the
application and use AWS DataSync to gain access to the third-party telemetry data” is
incorrect. AWS DataSync is a secure, online service that automates and accelerates
moving data between on-premises and AWS storage services. It does not give access
to third party data.

INCORRECT: "Use a TensorFlow AMI from the AWS Marketplace to build the machine
learning part of the application and use AWS DataSync to gain access to the third-party
telemetry data” is incorrect. Building an EC2 instance from a TensorFlow AMI would not
involve using managed services and AWS DataSync is a secure, online service that
automates and accelerates moving data between on-premises and AWS storage
services. It does not give access to third party data.

INCORRECT: "Use a TensorFlow AMI from the AWS Marketplace to build the machine
learning part of the application and use AWS Data Exchange to gain access to the third-
party telemetry data” is incorrect. Building an EC2 instance from a TensorFlow AMI
would not involve using managed services.

References:

https://aws.amazon.com/data-exchange/

A Solutions Architect for a large banking company is configuring access control
within the organization for an Amazon S3 bucket containing thousands of financial
records. There are 20 different teams which need to have access to this bucket,
however they all need different permissions. These 20 teams correspond to 20
accounts within the banking company who are currently using AWS Organizations.

What is the simplest way to achieve this, whilst adhering to the principle of least
privilege?

○ Create a new AWS Organization. Assign each team to a different Organizational Unit and
apply the appropriate permissions granting access to the appropriate resources in the bucket.
○ Copy the items from the bucket to create separate versions of each. Separate the items in the
bucket into new buckets. Administer bucket policies to allow each account to access the
appropriate bucket.
◉ Use S3 Access points to administer different access policies to each team, and control
access points using Service Control Policies within AWS Organizations.
○ Create the S3 Bucket in an individual account. Configure an IAM Role for each user to enable
cross account access for the S3 Bucket with a permissions policy to only access the appropriate
items within the bucket.

Correct answer
Use S3 Access points to administer different access policies to each team, and control
access points using Service Control Policies within AWS Organizations.

Feedback

Explanation:

Amazon S3 Access Points, a feature of S3, simplify data access for any AWS service or
customer application that stores data in S3. With S3 Access Points, customers can
create unique access control policies for each access point to easily control access to
shared datasets. You can also control access point usage using AWS Organizations
support for AWS SCPs.
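A hedged boto3 sketch of creating a per-team access point and attaching its policy; the account ID, bucket, role ARN, and prefix are placeholders:

```python
import json
import boto3

s3control = boto3.client("s3control")

account_id = "111122223333"          # placeholder member account ID
bucket = "financial-records-bucket"  # placeholder shared bucket

# One access point per team, each with its own policy.
s3control.create_access_point(
    AccountId=account_id,
    Name="team-payments-ap",
    Bucket=bucket,
)

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:role/team-payments"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:us-east-1:111122223333:accesspoint/team-payments-ap/object/payments/*",
    }],
}

s3control.put_access_point_policy(
    AccountId=account_id,
    Name="team-payments-ap",
    Policy=json.dumps(policy),
)
```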

CORRECT: "Use S3 Access points to administer different access policies to each team,
and control access points using Service Control Policies within AWS Organizations” is
the correct answer (as explained above.)
INCORRECT: "Create a new AWS Organizations. Assign each team to a different
Organizational Unit and apply to appropriate permissions granting access to the
appropriate resources in the bucket” is incorrect. This would not only be incredibly time
consuming but totally unnecessary as you can use the preexisting AWS Organizations
and the Service Control policies to control access via S3 Access Points.

INCORRECT: "Copy the items from the bucket to create separate versions of each
Separate the items in the bucket into new buckets. Administer Bucket policies to allow
each account to access the appropriate bucket” is incorrect. This involves a lot of
operational overhead and would be prone to significant error when administering the
correct permissions to each account.

INCORRECT: "Create the S3 Bucket in an individual account. Configure an IAM Role


for each user to enable cross account access for the S3 Bucket with a permissions
policy to only access the appropriate items within the bucket” is incorrect. This is an
unnecessary complexity as it would be much easier to provision separate policies per
team using S3 Access Points.

References:

https://aws.amazon.com/s3/features/access-points/

Save time with our AWS cheat sheets:

https://digitalcloud.training/amazon-s3-and-glacier/
