AWS CSAA Practice Exam 4
A company runs a large batch processing job at the end of every quarter. The
processing job runs for 5 days and uses 15 Amazon EC2 instances. The processing
must run uninterrupted for 5 hours per day. The company is investigating ways to
reduce the cost of the batch processing job.
◉ Reserved Instances
○ Spot Instances
○ On-Demand Instances
○ Dedicated Instances
Correct answer
On-Demand Instances
Feedback
Explanation:
Each EC2 instance runs for 5 hours a day for 5 days per quarter, or 20 days per year.
This duration is insufficient to warrant Reserved Instances, as these require a
commitment of at least 1 year and the discounts would not outweigh the cost of
having the reservations unused for a large percentage of the time. In this case, none of
the options presented can reduce the cost further, and therefore On-Demand Instances should
be used.
INCORRECT: "Spot Instances" is incorrect. Spot instances may be interrupted and this
is not acceptable. Note that Spot Block is deprecated and unavailable to new
customers.
References:
https://aws.amazon.com/ec2/pricing/
https://digitalcloud.training/amazon-ec2/
○ Stop the instance outside business hours. Start the instance again when required.
◉ Hibernate the instance outside business hours. Start the instance again when required.
○ Use Auto Scaling to scale down the instance outside of business hours. Scale up the
instance when required.
○ Terminate the instance outside business hours. Recover the instance again when required.
Correct answer
Hibernate the instance outside business hours. Start the instance again when required.
Feedback
Explanation:
When you hibernate an instance, Amazon EC2 signals the operating system to perform
hibernation (suspend-to-disk). Hibernation saves the contents of the instance
memory (RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume.
Amazon EC2 persists the instance's EBS root volume and any attached EBS data
volumes. When you start your instance, the EBS root volume is restored to its previous
state, the RAM contents are reloaded, and the processes that were previously running
on the instance are resumed.
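As an illustration, here is a minimal boto3 sketch of the hibernate/start workflow. It assumes an AMI, instance type and encrypted EBS root volume that support hibernation; the AMI ID, Region and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an instance with hibernation enabled (the AMI, instance type and
# encrypted EBS root volume must all support hibernation).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)
instance_id = response["Instances"][0]["InstanceId"]

# Outside business hours: hibernate instead of a normal stop,
# so the RAM contents are saved to the EBS root volume.
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)

# When required again: a normal start resumes the instance from hibernation.
ec2.start_instances(InstanceIds=[instance_id])
```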
CORRECT: "Hibernate the instance outside business hours. Start the instance again
when required" is the correct answer.
INCORRECT: "Stop the instance outside business hours. Start the instance again when
required" is incorrect. When an instance is stopped the operating system is shut down
and the contents of memory will be lost.
INCORRECT: "Use Auto Scaling to scale down the instance outside of business hours.
Scale out the instance when required" is incorrect. Auto Scaling scales does not scale
up and down, it scales in by terminating instances and out by launching instances.
When scaling out new instances are launched and no state will be available from
terminated instances.
INCORRECT: "Terminate the instance outside business hours. Recover the instance
again when required" is incorrect. You cannot recover terminated instances, you can
recover instances that have become impaired in some circumstances.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html
https://digitalcloud.training/amazon-ec2/
A company hosts a multiplayer game on AWS. The application uses Amazon EC2
instances in a single Availability Zone and users connect over Layer 4. A solutions
architect has been tasked with making the architecture highly available and also
more cost-effective.
How can the solutions architect best meet these requirements? (Select TWO.)
○ Configure an Auto Scaling group to add or remove instances in the Availability Zone
automatically
○ Increase the number of instances and use smaller EC2 instance types
○ Configure a Network Load Balancer in front of the EC2 instances
◉ Configure an Application Load Balancer in front of the EC2 instances
◉ Configure an Auto Scaling group to add or remove instances in multiple Availability Zones
automatically
Correct answers
Configure a Network Load Balancer in front of the EC2 instances
Configure an Auto Scaling group to add or remove instances in multiple Availability Zones
automatically
Feedback
Explanation:
The solutions architect must enable high availability for the architecture and ensure it is
cost-effective. To enable high availability an Amazon EC2 Auto Scaling group should be
created to add and remove instances across multiple availability zones.
In order to distribute the traffic to the instances the architecture should use a Network
Load Balancer which operates at Layer 4. This architecture will also be cost-effective as
the Auto Scaling group will ensure the right number of instances are running based on
demand.
INCORRECT: "Increase the number of instances and use smaller EC2 instance types"
is incorrect as this is not the most cost-effective option. Auto Scaling should be used to
maintain the right number of active instances.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
https://digitalcloud.training/amazon-ec2/
https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/
A company requires a solution to allow customers to customize images that are
stored in an online catalog. The image customization parameters will be sent in
requests to Amazon API Gateway. The customized image will then be generated on-
demand and can be accessed online.
The solutions architect requires a highly available solution. Which solution will be
MOST cost-effective?
○ Use Amazon EC2 instances to manipulate the original images into the requested
customization. Store the original and manipulated images in Amazon S3. Configure an Elastic
Load Balancer in front of the EC2 instances
◉ Use AWS Lambda to manipulate the original images to the requested customization. Store
the original and manipulated images in Amazon S3. Configure an Amazon CloudFront
distribution with the S3 bucket as the origin
○ Use AWS Lambda to manipulate the original images to the requested customization. Store
the original images in Amazon S3 and the manipulated images in Amazon DynamoDB.
Configure an Elastic Load Balancer in front of the Amazon EC2 instances
○ Use Amazon EC2 instances to manipulate the original images into the requested
customization. Store the original images in Amazon S3 and the manipulated images in Amazon
DynamoDB. Configure an Amazon CloudFront distribution with the S3 bucket as the origin
Correct answer
Use AWS Lambda to manipulate the original images to the requested customization. Store
the original and manipulated images in Amazon S3. Configure an Amazon CloudFront
distribution with the S3 bucket as the origin
Feedback
Explanation:
All of the solutions presented are highly available, so the key requirement to satisfy is
cost-effectiveness: you must choose the most cost-effective option.
Therefore, it’s best to eliminate services such as Amazon EC2 and ELB as these
require ongoing costs even when they’re not used. Instead, a fully serverless solution
should be used. AWS Lambda, Amazon S3 and CloudFront are the best services to use
for these requirements.
CORRECT: "Use AWS Lambda to manipulate the original images to the requested
customization. Store the original and manipulated images in Amazon S3. Configure an
Amazon CloudFront distribution with the S3 bucket as the origin" is the correct answer.
INCORRECT: "Use Amazon EC2 instances to manipulate the original images into the
requested customization. Store the original and manipulated images in Amazon S3.
Configure an Elastic Load Balancer in front of the EC2 instances" is incorrect. This is
not the most cost-effective option as the ELB and EC2 instances will incur costs even
when not used.
INCORRECT: "Use AWS Lambda to manipulate the original images to the requested
customization. Store the original images in Amazon S3 and the manipulated images in
Amazon DynamoDB. Configure an Elastic Load Balancer in front of the Amazon EC2
instances" is incorrect. This is not the most cost-effective option as the ELB will incur
costs even when not used. Also, Amazon DynamoDB will incur RCU/WCUs when
running and is not the best choice for storing images.
INCORRECT: "Use Amazon EC2 instances to manipulate the original images into the
requested customization. Store the original images in Amazon S3 and the manipulated
images in Amazon DynamoDB. Configure an Amazon CloudFront distribution with the
S3 bucket as the origin" is incorrect. This is not the most cost-effective option as the
EC2 instances will incur costs even when not used.
References:
https://aws.amazon.com/serverless/
https://digitalcloud.training/amazon-s3-and-glacier/
https://digitalcloud.training/aws-lambda/
https://digitalcloud.training/amazon-cloudfront/
Which combination of actions should the solutions architect take to accomplish this?
(Select TWO.)
○ Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second
Region
◉ Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region
○ Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the
new instance
◉ Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region
for the destination
○ Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an
EC2 instance in the second Region using that EBS volume
Correct answers
Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second
Region
Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second
Region for the destination
Feedback
Explanation:
You can copy an Amazon Machine Image (AMI) within or across AWS Regions using
the AWS Management Console, the AWS Command Line Interface or SDKs, or the
Amazon EC2 API, all of which support the CopyImage action.
Using the copied AMI the solutions architect would then be able to launch a new EC2
instance in the second Region.
Note: the AMIs are stored on Amazon S3, however you cannot view them in the S3
management console or work with them programmatically using the S3 API.
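For illustration, a minimal boto3 sketch of copying an AMI to a second Region and launching an instance from the copy; the AMI ID, Regions and instance type are placeholder values.

```python
import boto3

# Call the EC2 API in the destination (second) Region and reference the source AMI.
ec2_dest = boto3.client("ec2", region_name="eu-west-1")

copy = ec2_dest.copy_image(
    Name="my-app-ami-copy",                # placeholder name
    SourceImageId="ami-0123456789abcdef0", # placeholder AMI ID in the source Region
    SourceRegion="us-east-1",
)

# Wait for the copied AMI to become available, then launch a new instance from it
# in the destination Region.
ec2_dest.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])
ec2_dest.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
```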
CORRECT: "Copy an Amazon Machine Image (AMI) of an EC2 instance and specify
the second Region for the destination" is a correct answer.
CORRECT: "Launch a new EC2 instance from an Amazon Machine Image (AMI) in the
second Region" is also a correct answer.
INCORRECT: "Launch a new EC2 instance in the second Region and copy a volume
from Amazon S3 to the new instance" is incorrect. You cannot create an EBS volume
directly from Amazon S3.
INCORRECT: "Copy an Amazon Elastic Block Store (Amazon EBS) volume from
Amazon S3 and launch an EC2 instance in the second Region using that EBS volume"
is incorrect. You cannot create an EBS volume directly from Amazon S3.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
https://digitalcloud.training/amazon-ebs/
A web application runs in public and private subnets. The application architecture
consists of a web tier and database tier running on Amazon EC2 instances. Both
tiers run in a single Availability Zone (AZ).
○ Create new public and private subnets in the same AZ for high availability
○ Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning
multiple AZs
○ Add the existing web application instances to an Auto Scaling group behind an Application
Load Balancer (ALB)
○ Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in
one AZ
◉ Create new public and private subnets in the same VPC, each in a new AZ. Migrate the
database to an Amazon RDS multi-AZ deployment
Correct answers
Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB)
spanning multiple AZs
Create new public and private subnets in the same VPC, each in a new AZ. Migrate the
database to an Amazon RDS multi-AZ deployment
Feedback
Explanation:
To add high availability to this architecture both the web tier and database tier require
changes. For the web tier an Auto Scaling group across multiple AZs with an ALB will
ensure there are always instances running and traffic is being distributed to them.
The database tier should be migrated from the EC2 instances to Amazon RDS to take
advantage of a managed database with Multi-AZ functionality. This will ensure that if
there is an issue preventing access to the primary database a secondary database can
take over.
CORRECT: "Create an Amazon EC2 Auto Scaling group and Application Load
Balancer (ALB) spanning multiple AZs" is the correct answer.
CORRECT: "Create new public and private subnets in the same VPC, each in a new
AZ. Migrate the database to an Amazon RDS multi-AZ deployment" is the correct
answer.
INCORRECT: "Create new public and private subnets in the same AZ for high
availability" is incorrect as this would not add high availability.
INCORRECT: "Add the existing web application instances to an Auto Scaling group
behind an Application Load Balancer (ALB)" is incorrect because the existing servers
are in a single subnet. For HA we need to instances in multiple subnets.
INCORRECT: "Create new public and private subnets in a new AZ. Create a database
using Amazon EC2 in one AZ" is incorrect because we also need HA for the database
layer.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-
balancer.html
https://aws.amazon.com/rds/features/multi-az/
https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/
https://digitalcloud.training/amazon-ec2-auto-scaling/
https://digitalcloud.training/amazon-rds/
A solutions architect is designing the infrastructure to run an application on Amazon
EC2 instances. The application requires high availability and must dynamically scale
based on demand to be cost efficient.
◉ Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances
to multiple Regions
○ Configure an Amazon CloudFront distribution in front of an Auto Scaling group to deploy
instances to multiple Regions
○ Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances
to multiple Availability Zones
○ Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy instances
to multiple Availability Zones
Correct answer
Configure an Application Load Balancer in front of an Auto Scaling group to deploy instances
to multiple Availability Zones
Feedback
Explanation:
The Amazon EC2-based application must be highly available and elastically scalable.
Auto Scaling can provide the elasticity by dynamically launching and terminating
instances based on demand. This can take place across availability zones for high
availability.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-
scaling.html
https://aws.amazon.com/elasticloadbalancing/
https://digitalcloud.training/amazon-ec2-auto-scaling/
https://digitalcloud.training/aws-elastic-load-balancing-aws-elb/
Amazon EC2 instances in a development environment run between 9am and 5pm
Monday-Friday. Production instances run 24/7. Which pricing models should be
used to optimize cost and ensure capacity is available? (Select TWO.)
Feedback
Explanation:
Reserved instances are a good choice for workloads that run continuously. This is a
good option for the production environment.
CORRECT: "Use Reserved instances for the production environment" is also a correct
answer.
INCORRECT: "Use Spot instances for the development environment" is incorrect. Spot
Instances are a cost-effective choice if you can be flexible about when your applications
run and if your applications can be interrupted. Spot instances are not suitable for the
development environment as important work may be interrupted.
References:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instance-purchasing-
options.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-
reservations.html
https://digitalcloud.training/amazon-ec2/
An application running on an Amazon ECS container instance using the EC2 launch
type needs permissions to write data to Amazon DynamoDB.
How can you assign these permissions only to the specific ECS task that is running
the application?
○ Create an IAM policy with permissions to DynamoDB and attach it to the container instance
◉ Create an IAM policy with permissions to DynamoDB and assign it to a task using the
taskRoleArn parameter
○ Use a security group to allow outbound connections to DynamoDB and assign it to the
container instance
○ Modify the AmazonECSTaskExecutionRolePolicy policy to add permissions for DynamoDB
Correct answer
Create an IAM policy with permissions to DynamoDB and assign it to a task using the
taskRoleArn parameter
Feedback
Explanation:
To specify permissions for a specific task on Amazon ECS you should use IAM Roles
for Tasks. The permissions policy can be applied to tasks when creating the task
definition, or by using an IAM task role override using the AWS CLI or SDKs. The
taskRoleArn parameter is used to specify the policy.
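A hedged boto3 sketch of registering a task definition that uses the taskRoleArn parameter; the family name, role ARN and container image are hypothetical placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition that references an IAM role (taskRoleArn) whose
# policy grants only the DynamoDB permissions the application task needs.
ecs.register_task_definition(
    family="orders-app",                                                   # placeholder family
    taskRoleArn="arn:aws:iam::123456789012:role/OrdersDynamoDBTaskRole",   # placeholder role
    containerDefinitions=[
        {
            "name": "orders-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-app:latest",
            "memory": 512,
            "essential": True,
        }
    ],
)
```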
INCORRECT: "Create an IAM policy with permissions to DynamoDB and attach it to the
container instance" is incorrect. You should not apply the permissions to the container
instance as they will then apply to all tasks running on the instance as well as the
instance itself.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
https://digitalcloud.training/amazon-ecs-and-eks/
Correct answer
Elastic Fabric Adapter (EFA)
Feedback
Explanation:
An Elastic Fabric Adapter is an AWS Elastic Network Adapter (ENA) with added
capabilities. The EFA lets you apply the scale, flexibility, and elasticity of the AWS
Cloud to tightly-coupled HPC apps. It is ideal for tightly coupled apps as it supports the
Message Passing Interface (MPI).
INCORRECT: "Elastic Network Interface (ENI)" is incorrect. The ENI is a basic type of
adapter and is not the best choice for this use case.
INCORRECT: "Elastic Network Adapter (ENA)" is incorrect. The ENA, which provides
Enhanced Networking, does provide high bandwidth and low inter-instance latency but
it does not support the features for a tightly-coupled app that the EFA does.
References:
https://aws.amazon.com/blogs/aws/now-available-elastic-fabric-adapter-efa-for-tightly-
coupled-hpc-workloads/
https://digitalcloud.training/amazon-ec2/
A company runs several NFS file servers in an on-premises data center. The NFS
servers must run periodic backups to Amazon S3 using automatic synchronization
for small volumes of data.
○ Set up AWS Glue to extract the data from the NFS shares and load it into Amazon S3.
◉ Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon
S3.
○ Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to
Amazon S3.
○ Set up an AWS Direct Connect connection between the on-premises data center and AWS
and copy the data to Amazon S3.
Correct answer
Set up an AWS DataSync agent on the on-premises servers and sync the data to Amazon
S3.
Feedback
Explanation:
AWS DataSync is an online data transfer service that simplifies, automates, and
accelerates copying large amounts of data between on-premises systems and AWS
Storage services, as well as between AWS Storage services. DataSync can copy data
between Network File System (NFS) shares, or Server Message Block (SMB)
shares, self-managed object storage, AWS Snowcone, Amazon Simple Storage Service
(Amazon S3) buckets, Amazon Elastic File System (Amazon EFS) file systems, and
Amazon FSx for Windows File Server file systems.
This is the most cost-effective solution from the answer options available.
CORRECT: "Set up an AWS DataSync agent on the on-premises servers and sync the
data to Amazon S3" is the correct answer.
INCORRECT: "Set up an SFTP sync using AWS Transfer for SFTP to sync data from
on premises to Amazon S3" is incorrect. This solution does not provide the scheduled
synchronization features of AWS DataSync and is more expensive.
INCORRECT: "Set up AWS Glue to extract the data from the NFS shares and load it
into Amazon S3" is incorrect. AWS Glue is an ETL service and cannot be used for
copying data to Amazon S3 from NFS shares.
References:
https://aws.amazon.com/datasync/features/
https://digitalcloud.training/aws-migration-services/
Correct answers
Feedback
Explanation:
Multi-factor authentication (MFA) delete adds an additional step before an object can be
deleted from a versioning-enabled bucket.
With MFA delete the bucket owner must include the x-amz-mfa request header in
requests to permanently delete an object version or change the versioning state of the
bucket.
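For example, MFA Delete can be enabled together with versioning in a single call. A sketch only: the bucket name, MFA device ARN and token code are placeholders, and the call must be made with the bucket owner's (root) credentials.

```python
import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA Delete on the bucket. The MFA parameter is the
# device serial (or ARN), a space, and the current token code.
s3.put_bucket_versioning(
    Bucket="my-protected-bucket",                                             # placeholder bucket
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",      # "serial token"
    VersioningConfiguration={
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
)
```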
INCORRECT: "Create a lifecycle policy for the objects in the S3 bucket" is incorrect. A
lifecycle policy will move data to another storage class but does not protect against
deletion.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
Storage capacity has become an issue for a company that runs application servers
on-premises. The servers are connected to a combination of block storage and NFS
storage solutions. The company requires a solution that supports local caching
without re-architecting its existing applications.
Which combination of changes can the company make to meet these requirements?
(Select TWO.)
◉ Use an AWS Storage Gateway file gateway to replace the NFS storage.
○ Use the mount command on servers to mount Amazon S3 buckets using NFS.
○ Use AWS Direct Connect and mount an Amazon FSx for Windows File Server using iSCSI.
◉ Use an AWS Storage Gateway volume gateway to replace the block storage.
○ Use Amazon Elastic File System (EFS) volumes to replace the block storage.
Correct answers
Use an AWS Storage Gateway file gateway to replace the NFS storage.
Use an AWS Storage Gateway volume gateway to replace the block storage.
Feedback
Explanation:
In this scenario the company should use cloud storage to replace the existing storage
solutions that are running out of capacity. The on-premises servers mount the existing
storage using block protocols (iSCSI) and file protocols (NFS). As there is a
requirement to avoid re-architecting existing applications these protocols must be used
in the revised solution.
The AWS Storage Gateway volume gateway should be used to replace the block-based
storage systems as it is mounted over iSCSI and the file gateway should be used to
replace the NFS file systems as it uses NFS.
CORRECT: "Use an AWS Storage Gateway file gateway to replace the NFS storage" is
a correct answer.
CORRECT: "Use an AWS Storage Gateway volume gateway to replace the block
storage" is a correct answer.
INCORRECT: "Use AWS Direct Connect and mount an Amazon FSx for Windows File
Server using iSCSI" is incorrect. You cannot mount FSx for Windows File Server file
systems using iSCSI, you must use SMB.
INCORRECT: "Use Amazon Elastic File System (EFS) volumes to replace the block
storage" is incorrect. You cannot use EFS to replace block storage as it uses NFS
rather than iSCSI.
References:
https://docs.aws.amazon.com/storagegateway/
https://digitalcloud.training/aws-storage-gateway/
Correct answers
Feedback
Explanation:
None of the options present a good solution for specifying the permissions required to write
and modify objects, so that requirement needs to be taken care of separately. The other
requirements are to prevent accidental deletion and to ensure that all versions of the
document are available.
The two solutions for these requirements are versioning and MFA delete. Versioning will
retain a copy of each version of the document and multi-factor authentication delete
(MFA delete) will prevent any accidental deletion as you need to supply a second factor
when attempting a delete.
INCORRECT: "Set read-only permissions on the bucket" is incorrect as this will also
prevent any writing to the bucket which is not desired.
INCORRECT: "Attach an IAM policy to the bucket" is incorrect as users need to modify
documents which will also allow delete. Therefore, a method must be implemented to
just control deletes.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
https://digitalcloud.training/amazon-s3-and-glacier/
A solutions architect needs to backup some application log files from an online
ecommerce store to Amazon S3. It is unknown how often the logs will be accessed
or which logs will be accessed the most. The solutions architect must keep costs as
low as possible by using the appropriate S3 storage class.
○ S3 Glacier
◉ S3 Intelligent-Tiering
○ S3 Standard-Infrequent Access (S3 Standard-IA)
○ S3 One Zone-Infrequent Access (S3 One Zone-IA)
Correct answer
S3 Intelligent-Tiering
Feedback
Explanation:
S3 Intelligent-Tiering works by storing objects in two access tiers: one tier that is
optimized for frequent access and another lower-cost tier that is optimized for infrequent
access. This is an ideal use case for Intelligent-Tiering as the access patterns for the log
files are not known.
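As a quick illustration, objects can be written directly into the Intelligent-Tiering storage class at upload time; the bucket name, key and local file below are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Upload a log file directly into the Intelligent-Tiering storage class so that
# S3 moves it between access tiers automatically based on access patterns.
with open("app.log", "rb") as log_file:
    s3.put_object(
        Bucket="my-log-archive-bucket",    # placeholder bucket name
        Key="app/2023/10/app.log",         # placeholder object key
        Body=log_file,
        StorageClass="INTELLIGENT_TIERING",
    )
```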
INCORRECT: "S3 One Zone-Infrequent Access (S3 One Zone-IA)" is incorrect as if the
data is accessed often retrieval fees could become expensive.
INCORRECT: "S3 Glacier" is incorrect as if the data is accessed often retrieval fees
could become expensive. Glacier also requires more work in retrieving the data from
the archive and quick access requirements can add further costs.
References:
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
https://digitalcloud.training/amazon-s3-and-glacier/
A team are planning to run analytics jobs on log files each day and require a storage
solution. The size and number of logs is unknown and data will persist for 24 hours
only.
What is the MOST cost-effective solution?
Correct answer
Amazon S3 Standard
Feedback
Explanation:
S3 Standard is the best choice in this scenario for a short-term storage solution. In this
case the size and number of logs is unknown and it would be difficult to fully assess the
access patterns at this stage. Therefore, using S3 Standard is best as it is cost-effective,
provides immediate access, and there are no retrieval fees or minimum capacity charge
per object.
References:
https://aws.amazon.com/s3/storage-classes/
https://digitalcloud.training/amazon-s3-and-glacier/
A solutions architect is designing a new service that will use an Amazon API
Gateway API on the frontend. The service will need to persist data in a backend
database using key-value requests. Initially, the data requirements will be around 1
GB and future growth is unknown. Requests can range from 0 to over 800 requests
per second.
Which combination of AWS services would meet these requirements? (Select TWO.)
○ AWS Fargate
◉ AWS Lambda
◉ Amazon DynamoDB
○ Amazon EC2 Auto Scaling
○ Amazon RDS
Correct answers
AWS Lambda
Amazon DynamoDB
Feedback
Explanation:
In this case AWS Lambda can perform the computation and store the data in an
Amazon DynamoDB table. Lambda can scale concurrent executions to meet demand
easily and DynamoDB is built for key-value data storage requirements and is also
serverless and easily scalable. This is therefore a cost-effective solution for
unpredictable workloads.
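A minimal sketch of how such a Lambda function might persist key-value data to DynamoDB when invoked through an API Gateway proxy integration; the table name and item attributes are assumptions.

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")   # placeholder table name, partition key "order_id"

def lambda_handler(event, context):
    """Invoked by API Gateway (proxy integration); persists a key-value item."""
    body = json.loads(event.get("body") or "{}")
    table.put_item(Item={
        "order_id": body["order_id"],                     # assumed key attribute
        "payload": json.dumps(body.get("payload", {})),   # store the value as a JSON string
    })
    return {"statusCode": 200, "body": json.dumps({"status": "saved"})}
```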
INCORRECT: "Amazon EC2 Auto Scaling" is incorrect as this uses EC2 instances
which will incur costs even when no requests are being made.
References:
https://aws.amazon.com/lambda/features/
https://aws.amazon.com/dynamodb/
https://digitalcloud.training/aws-lambda/
https://digitalcloud.training/amazon-dynamodb/
A company runs a web application that serves weather updates. The application
runs on a fleet of Amazon EC2 instances in a Multi-AZ Auto scaling group behind an
Application Load Balancer (ALB). The instances store data in an Amazon Aurora
database. A solutions architect needs to make the application more resilient to
sporadic increases in request rates.
Correct answers
Feedback
Explanation:
The architecture is already highly resilient but it may be subject to performance
degradation if there are sudden increases in request rates. To resolve this situation
Amazon Aurora Read Replicas can be used to serve read traffic, which offloads
requests from the main database. On the frontend an Amazon CloudFront distribution
can be placed in front of the ALB; this will cache content for better performance and
also offload requests from the backend.
CORRECT: "Add an Amazon CloudFront distribution in front of the ALB" is the correct
answer.
INCORRECT: "Add and AWS WAF in front of the ALB" is incorrect. A web application
firewall protects applications from malicious attacks. It does not improve performance.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.
html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.ht
ml
https://digitalcloud.training/amazon-aurora/
https://digitalcloud.training/amazon-cloudfront/
An Amazon RDS Read Replica is being deployed in a separate region. The master
database is not encrypted but all data in the new region must be encrypted. How can
this be achieved?
○ Enable encryption using Key Management Service (KMS) when creating the cross-region
Read Replica
○ Encrypt a snapshot from the master DB instance, create an encrypted cross-region Read
Replica from the snapshot
○ Enable encryption on the master DB instance, then create an encrypted cross-region Read
Replica
◉ Encrypt a snapshot from the master DB instance, create a new encrypted master DB
instance, and then create an encrypted cross-region Read Replica
Correct answer
Encrypt a snapshot from the master DB instance, create a new encrypted master DB
instance, and then create an encrypted cross-region Read Replica
Feedback
Explanation:
You cannot create an encrypted Read Replica from an unencrypted master DB instance,
and you cannot enable encryption on an existing unencrypted DB instance. The workaround
is to take a snapshot of the master, copy the snapshot with encryption enabled, restore the
encrypted snapshot as a new (encrypted) master DB instance, and then create an encrypted
cross-region Read Replica from it.
CORRECT: "Encrypt a snapshot from the master DB instance, create a new encrypted
master DB instance, and then create an encrypted cross-region Read Replica" is the
correct answer.
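A hedged boto3 sketch of the snapshot-copy-restore-replica workflow described above; all identifiers and the KMS key alias are placeholders, and waiters between the later steps are omitted for brevity.

```python
import boto3

rds_src = boto3.client("rds", region_name="us-east-1")   # source Region
rds_dst = boto3.client("rds", region_name="eu-west-1")   # destination Region

# 1. Snapshot the unencrypted master, then copy the snapshot with a KMS key
#    to produce an encrypted snapshot.
rds_src.create_db_snapshot(
    DBInstanceIdentifier="mydb",               # placeholder identifiers throughout
    DBSnapshotIdentifier="mydb-snap",
)
rds_src.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="mydb-snap")
rds_src.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-snap",
    TargetDBSnapshotIdentifier="mydb-snap-encrypted",
    KmsKeyId="alias/my-rds-key",               # placeholder KMS key in the source Region
)

# 2. Restore the encrypted snapshot as the new (encrypted) master DB instance.
rds_src.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-snap-encrypted",
)

# 3. Create an encrypted cross-region read replica from the encrypted master.
rds_dst.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-encrypted-replica",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:mydb-encrypted",
    KmsKeyId="alias/my-rds-key",               # placeholder KMS key in the destination Region
    SourceRegion="us-east-1",
)
```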
INCORRECT: "Enable encryption using Key Management Service (KMS) when creating
the cross-region Read Replica" is incorrect. All other options will not work due to the
limitations explained above.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
https://digitalcloud.training/amazon-rds/
An Amazon RDS PostgreSQL database is configured as Multi-AZ. A solutions
architect needs to scale read performance and the solution must be configured for
high availability. What is the most cost-effective solution?
Correct answer
Create a read replica as a Multi-AZ DB instance
Feedback
Explanation:
Amazon RDS Read Replicas can themselves be deployed as Multi-AZ DB instances.
Creating a single read replica as a Multi-AZ DB instance scales read performance while
keeping the replica highly available, which is the most cost-effective way to meet both
requirements.
References:
https://aws.amazon.com/about-aws/whats-new/2018/01/amazon-rds-read-replicas-now-
support-multi-az-deployments/
https://digitalcloud.training/amazon-rds/
A company has acquired another business and needs to migrate their 50TB of data
into AWS within 1 month. They also require a secure, reliable and private connection
to the AWS cloud.
○ Provision an AWS Direct Connect connection and migrate the data over the link
◉ Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct
Connect link
○ Launch a Virtual Private Gateway (VPG) and migrate the data over the AWS VPN
○ Provision an AWS VPN CloudHub connection and migrate the data over redundant links
Correct answer
Migrate data using AWS Snowball. Provision an AWS VPN initially and order a Direct
Connect link
Feedback
Explanation:
AWS Direct Connect provides a secure, reliable and private connection. However, lead
times are often longer than 1 month so it cannot be used to migrate data within the
timeframes. Therefore, it is better to use AWS Snowball to move the data and order a
Direct Connect connection to satisfy the other requirement later on. In the meantime the
organization can use an AWS VPN for secure, private access to their VPC.
CORRECT: "Migrate data using AWS Snowball. Provision an AWS VPN initially and
order a Direct Connect link" is the correct answer.
INCORRECT: "Provision an AWS Direct Connect connection and migrate the data over
the link" is incorrect due to the lead time for installation.
INCORRECT: "Launch a Virtual Private Gateway (VPG) and migrate the data over the
AWS VPN" is incorrect. A VPG is the AWS-side of an AWS VPN. A VPN does not
provide a private connection and is not reliable as you can never guarantee the latency
over the Internet
INCORRECT: "Provision an AWS VPN CloudHub connection and migrate the data over
redundant links" is incorrect. AWS VPN CloudHub is a service for connecting multiple
sites into your VPC over VPN connections. It is not used for aggregating links and the
limitations of Internet bandwidth from the company where the data is stored will still be
an issue. It also uses the public Internet so is not a private or reliable connection.
References:
https://aws.amazon.com/snowball/
https://aws.amazon.com/directconnect/
https://digitalcloud.training/aws-direct-connect/
https://digitalcloud.training/aws-migration-services/
An organization has a large amount of data on Windows (SMB) file shares in their
on-premises data center. The organization would like to move data into Amazon S3.
They would like to automate the migration of data over their AWS Direct Connect
link.
Correct answer
AWS DataSync
Feedback
Explanation:
AWS DataSync can be used to move large amounts of data online between on-
premises storage and Amazon S3 or Amazon Elastic File System (Amazon EFS).
DataSync automatically handles many of the tasks involved, including scripting
copy jobs, scheduling and monitoring transfers, validating data, and optimizing network
utilization. The source datastore can be Server Message Block (SMB) file servers.
References:
https://aws.amazon.com/datasync/faqs/
https://digitalcloud.training/aws-migration-services/
A company hosts an application on Amazon EC2 instances behind Application Load
Balancers in several AWS Regions. Distribution rights for the content require that
users in different geographies must be served content from specific regions.
Correct answer
Create Amazon Route 53 records with a geolocation routing policy.
Feedback
Explanation:
To protect the distribution rights of the content and ensure that users are directed to the
appropriate AWS Region based on the location of the user, the geolocation routing
policy can be used with Amazon Route 53.
Geolocation routing lets you choose the resources that serve your traffic based on the
geographic location of your users, meaning the location that DNS queries originate
from.
When you use geolocation routing, you can localize your content and present some or
all of your website in the language of your users. You can also use geolocation routing
to restrict distribution of content to only the locations in which you have distribution
rights.
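For illustration, a geolocation record could be created as in the following sketch; the hosted zone ID, record name, continent code and ALB DNS name are placeholders, and a default location record should also exist for unmatched queries.

```python
import boto3

route53 = boto3.client("route53")

# Create a geolocation record that sends European users to an ALB in eu-west-1.
route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",             # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "TTL": 60,
                "SetIdentifier": "europe",
                "GeoLocation": {"ContinentCode": "EU"},
                "ResourceRecords": [
                    {"Value": "my-alb-eu.eu-west-1.elb.amazonaws.com"}  # placeholder ALB DNS name
                ],
            },
        }]
    },
)
```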
CORRECT: "Create Amazon Route 53 records with a geolocation routing policy" is the
correct answer.
INCORRECT: "Create Amazon Route 53 records with a geoproximity routing policy" is
incorrect. Use this routing policy when you want to route traffic based on the location of
your resources and, optionally, shift traffic from resources in one location to resources
in another.
INCORRECT: "Configure Amazon CloudFront with multiple origins and AWS WAF" is
incorrect. AWS WAF protects against web exploits but will not assist with directing
users to different content (from different origins).
References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
https://digitalcloud.training/amazon-route-53/
○ Modify the ALB security group to deny incoming traffic from blocked countries
○ Modify the security group for EC2 instances to deny incoming traffic from blocked countries
◉ Use Amazon CloudFront to serve the application and deny access to blocked countries
○ Use a network ACL to block the IP address ranges associated with the specific countries
Correct answer
Use Amazon CloudFront to serve the application and deny access to blocked countries
Feedback
Explanation:
When a user requests your content, CloudFront typically serves the requested content
regardless of where the user is located. If you need to prevent users in specific
countries from accessing your content, you can use the CloudFront geo restriction
feature to do one of the following:
Allow your users to access your content only if they're in one of the countries on a
whitelist of approved countries.
Prevent your users from accessing your content if they're in one of the countries
on a blacklist of banned countries.
For example, if a request comes from a country where, for copyright reasons, you are
not authorized to distribute your content, you can use CloudFront geo restriction to
block the request.
This is the easiest and most effective way to implement a geographic restriction for the
delivery of content.
CORRECT: "Use Amazon CloudFront to serve the application and deny access to
blocked countries" is the correct answer.
INCORRECT: "Use a Network ACL to block the IP address ranges associated with the
specific countries" is incorrect as this would be extremely difficult to manage.
INCORRECT: "Modify the ALB security group to deny incoming traffic from blocked
countries" is incorrect as security groups cannot block traffic by country.
INCORRECT: "Modify the security group for EC2 instances to deny incoming traffic
from blocked countries" is incorrect as security groups cannot block traffic by country.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestriction
s.html
https://digitalcloud.training/amazon-cloudfront/
An organization wants to share regular updates about their charitable work using
static webpages. The pages are expected to generate a large number of views from
around the world. The files are stored in an Amazon S3 bucket. A solutions architect
has been asked to design an efficient and effective solution.
Correct answer
Use Amazon CloudFront with the S3 bucket as its origin
Feedback
Explanation:
Amazon CloudFront can be used to cache the files in edge locations around the world
and this will improve the performance of the webpages.
To serve a static website hosted on Amazon S3, you can deploy a CloudFront
distribution using one of these configurations:
Using a REST API endpoint as the origin with access restricted by an origin
access identity (OAI)
Using a website endpoint as the origin with anonymous (public) access allowed
Using a website endpoint as the origin with access restricted by a Referer header
CORRECT: "Use Amazon CloudFront with the S3 bucket as its origin" is the correct
answer.
INCORRECT: "Generate presigned URLs for the files" is incorrect as this is used to
restrict access which is not a requirement.
INCORRECT: "Use cross-Region replication to all Regions" is incorrect as this does not
provide a mechanism for directing users to the closest copy of the static webpages.
INCORRECT: "Use the geoproximity feature of Amazon Route 53" is incorrect as this
does not include a solution for having multiple copies of the data in different geographic
lcoations.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-
website/
https://digitalcloud.training/amazon-cloudfront/
◉ Create an Origin Access Identity (OAI) and associate it with the distribution
○ Use signed URLs or signed cookies to limit access to the content
○ Use a Network ACL to restrict access to the ELB
○ Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the
CloudFront internal service IP addresses when they change
Correct answer
Create a VPC Security Group for the ELB and use AWS Lambda to automatically update the
CloudFront internal service IP addresses when they change
Feedback
Explanation:
The only way to get this working is by using a VPC Security Group for the ELB that is
configured to allow only the internal service IP ranges associated with CloudFront. As
these are updated from time to time, you can use AWS Lambda to automatically update
the addresses. The Lambda function is triggered by the SNS notification that AWS
publishes whenever its IP address ranges change.
CORRECT: "Create a VPC Security Group for the ELB and use AWS Lambda to
automatically update the CloudFront internal service IP addresses when they change" is
the correct answer.
INCORRECT: "Create an Origin Access Identity (OAI) and associate it with the
distribution" is incorrect. You can use an OAI to restrict access to content in Amazon
S3 but not on EC2 or ELB.
INCORRECT: "Use signed URLs or signed cookies to limit access to the content" is
incorrect. Signed cookies and URLs are used to limit access to files but this does not
stop people from circumventing CloudFront and accessing the ELB directly.
References:
https://aws.amazon.com/blogs/security/how-to-automatically-update-your-security-
groups-for-amazon-cloudfront-and-aws-waf-by-using-aws-lambda/
https://digitalcloud.training/amazon-cloudfront/
A company has divested a single business unit and needs to move the AWS account
owned by the business unit to another AWS Organization. How can this be
achieved?
○ Create a new account in the destination AWS Organization and migrate resources
○ Create a new account in the destination AWS Organization and share the original resources
using AWS Resource Access Manager
○ Migrate the account using AWS CloudFormation
◉ Migrate the account using the AWS Organizations console
Correct answer
Migrate the account using the AWS Organizations console
Feedback
Explanation:
Accounts can be migrated between organizations. To do this you must have root or IAM
access to both the member and master accounts. Resources will remain under the
control of the migrated account.
CORRECT: "Migrate the account using the AWS Organizations console" is the correct
answer.
INCORRECT: "Create a new account in the destination AWS Organization and migrate
resources" is incorrect. You do not need to create a new account in the destination
AWS Organization as you can just migrate the existing account.
INCORRECT: "Create a new account in the destination AWS Organization and share
the original resources using AWS Resource Access Manager" is incorrect. You do not
need to create a new account in the destination AWS Organization as you can just
migrate the existing account.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/organizations-move-
accounts/
https://digitalcloud.training/aws-organizations/
◉ Create an origin access identity (OAI) and associate it with the distribution. Change the
permissions in the bucket policy so that only the OAI can read the objects.
○ Create an origin access identity (OAI) and associate it with the distribution. Generate
presigned URLs that limit access to the OAI.
◉ Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2
security group. Associate this new web ACL with the Amazon S3 bucket.
○ Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2
security group. Associate this new web ACL with the CloudFront distribution.
○ Attach the existing security group that contains the IP restrictions to the Amazon CloudFront
distribution.
Correct answers
Create an origin access identity (OAI) and associate it with the distribution. Change the
permissions in the bucket policy so that only the OAI can read the objects.
Create an AWS WAF web ACL that includes the same IP restrictions that exist in the
EC2 security group. Associate this new web ACL with the CloudFront distribution.
Feedback
Explanation:
After creating the origin access identity (OAI) and associating it with the distribution, the
next step is to change the permissions either on your Amazon S3 bucket or on the
files in your bucket so that only the origin access identity has read permission (or read
and download permission). This can be implemented through a bucket policy.
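As an illustration, a bucket policy granting read access only to the OAI might look like the following sketch; the bucket name and OAI ID are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Bucket policy granting read access only to the CloudFront origin access identity (OAI).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLEOAI"
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-protected-content-bucket/*",
    }],
}

s3.put_bucket_policy(Bucket="my-protected-content-bucket", Policy=json.dumps(policy))
```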
To control access at the CloudFront layer the AWS Web Application Firewall (WAF) can
be used. With WAF you must create an ACL that includes the IP restrictions required
and then associate the web ACL with the CloudFront distribution.
CORRECT: "Create an origin access identity (OAI) and associate it with the distribution.
Change the permissions in the bucket policy so that only the OAI can read the objects"
is a correct answer.
CORRECT: "Create an AWS WAF web ACL that includes the same IP restrictions that
exist in the EC2 security group. Associate this new web ACL with the CloudFront
distribution" is also a correct answer.
INCORRECT: "Create an origin access identity (OAI) and associate it with the
distribution. Generate presigned URLs that limit access to the OAI" is incorrect.
Presigned URLs can be used to protect access to CloudFront but they cannot be used
to limit access to an OAI.
INCORRECT: "Create an AWS WAF web ACL that includes the same IP restrictions
that exist in the EC2 security group. Associate this new web ACL with the Amazon S3
bucket" is incorrect. The Web ACL should be associated with CloudFront, not S3.
INCORRECT: "Attach the existing security group that contains the IP restrictions to the
Amazon CloudFront distribution" is incorrect. You cannot attach a security group to a
CloudFront distribution.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-
content-restricting-access-to-s3.html
https://docs.aws.amazon.com/waf/latest/developerguide/cloudfront-features.html
https://digitalcloud.training/aws-waf-shield/
Correct answer
Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL on the Application
Load Balancer.
Feedback
Explanation:
A rate-based rule tracks the rate of requests for each originating IP address, and
triggers the rule action on IPs with rates that go over a limit. You set the limit as the
number of requests per 5-minute time span.
You can use this type of rule to put a temporary block on requests from an IP address
that's sending excessive requests. By default, AWS WAF aggregates requests based
on the IP address from the web request origin, but you can configure the rule to use an
IP address from an HTTP header, like X-Forwarded-For, instead.
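A hedged boto3 (WAFv2) sketch of a web ACL with a rate-based rule associated with an ALB; the names, request limit and ALB ARN are placeholder values.

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Web ACL with a rate-based rule: block any source IP that exceeds 2,000
# requests in a 5-minute window.
acl = wafv2.create_web_acl(
    Name="rate-limit-acl",
    Scope="REGIONAL",                        # REGIONAL scope is used for an ALB
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rateLimitAcl",
    },
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 0,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "rateLimitPerIp",
        },
    }],
)

# Associate the web ACL with the Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
)
```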
CORRECT: "Configure an AWS WAF ACL with rate-based rules. Enable the WAF ACL
on the Application Load Balancer" is the correct answer.
INCORRECT: "Create a custom AWS Lambda function that monitors for suspicious
traffic and modifies a network ACL when a potential DDoS attack is identified" is
incorrect. There’s not description here of how Lambda is going to monitor for traffic.
INCORRECT: "Enable VPC Flow Logs and store them in Amazon S3. Use Amazon
Athena to parse the logs and identify and block potential DDoS attacks" is incorrect.
Amazon Athena is not able to block DDoS attacks, another service would be needed.
INCORRECT: "Enable access logs on the Application Load Balancer and configure
Amazon CloudWatch to monitor the access logs and trigger a Lambda function when
potential attacks are identified. Configure the Lambda function to modify the ALBs
security group and block the attack" is incorrect. Access logs are exported to S3 but not
to CloudWatch. Also, it would not be possible to block an attack from a specific IP using
a security group (while still allowing any other source access) as they do not support
deny rules.
References:
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-rate-
based.html
https://digitalcloud.training/aws-waf-shield/
○ Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP
address
◉ Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP
address
○ Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny
the malicious IP address
○ Modify the security groups for the EC2 instances in the target groups behind the ALB to deny
the malicious IP address.
Correct answer
Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP
address
Feedback
Explanation:
A new version of the AWS Web Application Firewall was released in November 2019.
With AWS WAF classic you create “IP match conditions”, whereas with AWS WAF (new
version) you create “IP set match statements”. Look out for wording on the exam.
The IP match condition / IP set match statement inspects the IP address of a web
request's origin against a set of IP addresses and address ranges. Use this to allow or
block web requests based on the IP addresses that the requests originate from.
AWS WAF supports all IPv4 and IPv6 address ranges. An IP set can hold up to 10,000
IP addresses or IP address ranges to check.
INCORRECT: "Modify the network ACL on the CloudFront distribution to add a deny
rule for the malicious IP address" is incorrect as CloudFront does not sit within a subnet
so network ACLs do not apply to it.
INCORRECT: "Modify the network ACL for the EC2 instances in the target groups
behind the ALB to deny the malicious IP address" is incorrect as the source IP
addresses of the data in the EC2 instances’ subnets will be the ELB IP addresses.
INCORRECT: "Modify the security groups for the EC2 instances in the target groups
behind the ALB to deny the malicious IP address." is incorrect as you cannot create
deny rules with security groups.
References:
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-ipset-
match.html
https://digitalcloud.training/aws-waf-shield/
○ Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda
function and then store the data in Amazon RedShift.
○ Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda
function and then store the data in Amazon DynamoDB.
○ Ingest the data into an Amazon SQS queue. Process the data using an AWS Lambda
function and then store the data in Amazon DynamoDB.
◉ Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda
function and then store the data in Amazon RedShift.
Correct answer
Ingest the data into an Amazon Kinesis Data Stream. Process the data with an AWS Lambda
function and then store the data in Amazon DynamoDB.
Feedback
Explanation:
A Kinesis data stream is a set of shards. Each shard contains a sequence of data
records. A consumer is an application that processes the data from a Kinesis data
stream. You can map a Lambda function to a shared-throughput consumer (standard
iterator), or to a dedicated-throughput consumer with enhanced fan-out.
Amazon DynamoDB is the best database for this use case as it supports near-real time
performance and millisecond responsiveness.
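For illustration, a Lambda consumer mapped to the stream might look like this sketch; the DynamoDB table name and record fields are assumptions.

```python
import base64
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SensorReadings")   # placeholder table name

def lambda_handler(event, context):
    """Triggered by a Kinesis Data Streams event source mapping."""
    for record in event["Records"]:
        # Kinesis record payloads are delivered base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item={
            "sensor_id": payload["sensor_id"],   # assumed partition key
            "timestamp": payload["timestamp"],   # assumed sort key
            "value": str(payload["value"]),
        })
```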
CORRECT: "Ingest the data into an Amazon Kinesis Data Stream. Process the data
with an AWS Lambda function and then store the data in Amazon DynamoDB" is the
correct answer.
INCORRECT: "Ingest the data into an Amazon Kinesis Data Stream. Process the data
with an AWS Lambda function and then store the data in Amazon RedShift" is incorrect.
Amazon RedShift cannot provide millisecond responsiveness.
INCORRECT: "Ingest the data into an Amazon SQS queue. Process the data using an
AWS Lambda function and then store the data in Amazon RedShift" is incorrect.
Amazon SQS does not provide near real-time performance and RedShift does not
provide millisecond responsiveness.
INCORRECT: "Ingest the data into an Amazon SQS queue. Process the data using an
AWS Lambda function and then store the data in Amazon DynamoDB" is incorrect.
Amazon SQS does not provide near real-time performance.
References:
https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
https://digitalcloud.training/amazon-kinesis/
An automotive company plans to implement IoT sensors in manufacturing equipment
that will send data to AWS in real time. The solution must receive events in an
ordered manner from each asset and ensure that the data is saved for future
processing.
◉ Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment
asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
○ Use Amazon Kinesis Data Streams for real-time events with a shard for each equipment
asset. Use Amazon Kinesis Data Firehose to save data to Amazon EBS.
○ Use an Amazon SQS FIFO queue for real-time events with one queue for each equipment
asset. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS.
○ Use an Amazon SQS standard queue for real-time events with one queue for each
equipment asset. Trigger an AWS Lambda function from the SQS queue to save data to
Amazon S3.
Correct answer
Use Amazon Kinesis Data Streams for real-time events with a partition for each equipment
asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3.
Feedback
Explanation:
Amazon Kinesis Data Streams is the ideal service for receiving streaming data. The
Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the
same record processor, making it easier to build multiple applications reading from the
same Amazon Kinesis data stream. Therefore, a separate partition key (rather than a
dedicated shard) should be used for each equipment asset.
Amazon Kinesis Firehose can be used to receive streaming data from Data Streams
and then load the data into Amazon S3 for future processing.
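A minimal producer-side sketch showing how using the equipment asset ID as the partition key keeps each asset's events ordered within a shard; the stream name and payload fields are assumptions.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_reading(asset_id: str, reading: dict) -> None:
    # Using the equipment asset ID as the partition key keeps all events for
    # one asset in the same shard, so they are consumed in order.
    kinesis.put_record(
        StreamName="equipment-telemetry",   # placeholder stream name
        PartitionKey=asset_id,
        Data=json.dumps(reading).encode("utf-8"),
    )

publish_reading("press-42", {"temperature_c": 81.5, "vibration_mm_s": 3.2})
```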
CORRECT: "Use Amazon Kinesis Data Streams for real-time events with a partition for
each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon S3"
is the correct answer.
INCORRECT: "Use Amazon Kinesis Data Streams for real-time events with a shard for
each equipment asset. Use Amazon Kinesis Data Firehose to save data to Amazon
EBS" is incorrect. A partition should be used rather than a shard as explained above.
INCORRECT: "Use an Amazon SQS FIFO queue for real-time events with one queue
for each equipment asset. Trigger an AWS Lambda function for the SQS queue to save
data to Amazon EFS" is incorrect. Amazon SQS cannot be used for real-time use
cases.
INCORRECT: "Use an Amazon SQS standard queue for real-time events with one
queue for each equipment asset. Trigger an AWS Lambda function from the SQS
queue to save data to Amazon S3" is incorrect. Amazon SQS cannot be used for real-
time use cases.
References:
https://aws.amazon.com/kinesis/data-streams/faqs/
https://aws.amazon.com/kinesis/data-firehose/
https://digitalcloud.training/amazon-kinesis/
Correct answer
Deploy Amazon MQ with active/standby brokers configured across two Availability Zones.
Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones.
Use an Amazon RDS MySQL database with Multi-AZ enabled.
Feedback
Explanation:
This architecture not only offers the highest availability but is also operationally simple,
as it maximizes the use of managed services.
References:
https://aws.amazon.com/architecture/well-architected/
https://digitalcloud.training/aws-application-integration-services/
https://digitalcloud.training/amazon-ec2-auto-scaling/
https://digitalcloud.training/amazon-rds/
A solutions architect is designing an application on AWS. The compute layer will run
in parallel across EC2 instances. The compute layer should scale based on the
number of jobs to be processed. The compute layer is stateless. The solutions
architect must ensure that the application is loosely coupled and the job items are
durably stored.
Which design should the solutions architect use?
○ Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon
EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling
group to add and remove nodes based on CPU usage
○ Create an Amazon SQS queue to hold the jobs that need to be processed. Create an
Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto
Scaling group to add and remove nodes based on network usage
◉ Create an Amazon SQS queue to hold the jobs that need to be processed. Create an
Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto
Scaling group to add and remove nodes based on the number of items in the SQS queue
○ Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon
EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling
group to add and remove nodes based on the number of messages published to the SNS topic
Correct answer
Create an Amazon SQS queue to hold the jobs that need to be processed. Create an
Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the
Auto Scaling group to add and remove nodes based on the number of items in the SQS
queue
Feedback
Explanation:
In this case we need to find a durable and loosely coupled solution for storing jobs.
Amazon SQS is ideal for this use case and can be configured to use dynamic scaling
based on the number of jobs waiting in the queue.
To configure this scaling you can use the backlog per instance metric with the target
value being the acceptable backlog per instance to maintain. You can calculate
these numbers as follows:
Backlog per instance: To calculate your backlog per instance, start with
the ApproximateNumberOfMessages queue attribute to determine the length of
the SQS queue (number of messages available for retrieval from the queue).
Divide that number by the fleet's running capacity, which for an Auto Scaling
group is the number of instances in the InService state, to get the backlog per
instance.
Acceptable backlog per instance: To calculate your target value, first determine
what your application can accept in terms of latency. Then, take the acceptable
latency value and divide it by the average time that an EC2 instance takes to
process a message.
This solution will scale EC2 instances using Auto Scaling based on the number of jobs
waiting in the SQS queue.
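For example, the target value could be derived as in this minimal sketch, which uses assumed, illustrative numbers (600 visible messages, 10 InService instances, 0.1 seconds of processing time per message, and 10 seconds of acceptable latency):

# Backlog per instance = ApproximateNumberOfMessages / running capacity
approximate_number_of_messages = 600   # from the SQS queue attribute (assumed value)
in_service_instances = 10              # InService instances in the ASG (assumed value)
backlog_per_instance = approximate_number_of_messages / in_service_instances  # 60 messages

# Acceptable backlog per instance = acceptable latency / average processing time per message
acceptable_latency_seconds = 10.0      # assumed requirement
average_processing_seconds = 0.1       # assumed measurement
acceptable_backlog_per_instance = acceptable_latency_seconds / average_processing_seconds  # 100

# Scale out when the actual backlog per instance exceeds the acceptable backlog.
print(backlog_per_instance, acceptable_backlog_per_instance)
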
CORRECT: "Create an Amazon SQS queue to hold the jobs that needs to be
processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on the
number of items in the SQS queue" is the correct answer.
INCORRECT: "Create an Amazon SQS queue to hold the jobs that need to be
processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on
network usage" is incorrect as scaling on network usage does not relate to the number
of jobs waiting to be processed.
INCORRECT: "Create an Amazon SNS topic to send the jobs that need to be
processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on CPU
usage" is incorrect. Amazon SNS is a notification service so it delivers notifications to
subscribers. It does not store messages durably for consumers to retrieve later and is less suitable than SQS for this use case.
Scaling on CPU usage is not the best solution as it does not relate to the number of
jobs waiting to be processed.
INCORRECT: "Create an Amazon SNS topic to send the jobs that need to be
processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set
the scaling policy for the Auto Scaling group to add and remove nodes based on the
number of messages published to the SNS topic" is incorrect. Amazon SNS is a
notification service so it delivers notifications to subscribers. It does not store messages durably for consumers to retrieve later and is less suitable than SQS for this use case. Scaling on the number of notifications in
SNS is not possible.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
https://digitalcloud.training/amazon-ec2-auto-scaling/
https://digitalcloud.training/aws-application-integration-services/
○ AWS Config
○ Amazon SNS
◉ Amazon MQ
○ Amazon Step Functions
Correct answer
Amazon SNS
Feedback
Explanation:
You can use a Lambda function to process Amazon Simple Notification Service
notifications. Amazon SNS supports Lambda functions as a target for messages sent to
a topic. This solution decouples the Amazon EC2 application from Lambda and ensures
the Lambda function is invoked.
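A minimal boto3 sketch of wiring this up is shown below; the topic and function ARNs are placeholders, and the function needs a resource-based permission so the topic can invoke it.

import boto3

sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:123456789012:app-events"                 # placeholder
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:handler"     # placeholder

# Subscribe the Lambda function to the SNS topic.
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)

# Allow the SNS topic to invoke the function.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="AllowSNSInvoke",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)
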
INCORRECT: "AWS Config" is incorrect. AWS Config is a service that is used for
continuous compliance, not application decoupling.
INCORRECT: "Amazon MQ" is incorrect. Amazon MQ is similar to SQS but is used for
existing applications that are being migrated into AWS. SQS should be used for new
applications being created in the cloud.
References:
https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html
https://aws.amazon.com/sns/features/
https://digitalcloud.training/aws-lambda/
https://digitalcloud.training/aws-glue/
https://digitalcloud.training/aws-application-integration-services/
A new application will run across multiple Amazon ECS tasks. Front-end application
logic will process data and then pass that data to a back-end ECS task to perform
further processing and write the data to a datastore. The Architect would like to
reduce interdependencies so that failures do not impact other components.
Which solution should the Architect use?
○ Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data
to the stream and the back-end to read data from the stream
○ Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3
bucket, configure the front-end to write data to the stream and the back-end to read data from
Amazon S3
○ Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-
end to add messages to the queue
◉ Create an Amazon SQS queue and configure the front-end to add messages to the queue
and the back-end to poll the queue for messages
Correct answer
Create an Amazon SQS queue and configure the front-end to add messages to the queue
and the back-end to poll the queue for messages
Feedback
Explanation:
This is a good use case for Amazon SQS. SQS is a service that is used for decoupling
applications, thus reducing interdependencies, through a message bus. The front-end
application can place messages on the queue and the back-end can then poll the
queue for new messages. Please remember that Amazon SQS is pull-based (polling)
not push-based (use SNS for push-based).
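A hedged sketch of this pattern with boto3 is shown below; the queue URL and message contents are placeholders.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/frontend-to-backend"  # placeholder

# Front-end: place a message on the queue.
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"order_id": "1234"}))

# Back-end: poll the queue (long polling), process, then delete each message.
response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for message in response.get("Messages", []):
    payload = json.loads(message["Body"])        # back-end processing would go here
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
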
CORRECT: "Create an Amazon SQS queue and configure the front-end to add
messages to the queue and the back-end to poll the queue for messages" is the correct
answer.
INCORRECT: "Create an Amazon Kinesis Firehose delivery stream and configure the
front-end to add data to the stream and the back-end to read data from the stream" is
incorrect. Amazon Kinesis Firehose is used for streaming data. With Firehose the data
is immediately loaded into a destination that can be Amazon S3, RedShift,
Elasticsearch, or Splunk. This is not an ideal use case for Firehose as this is not
streaming data and there is no need to load data into an additional AWS service.
INCORRECT: "Create an Amazon Kinesis Firehose delivery stream that delivers data to
an Amazon S3 bucket, configure the front-end to write data to the stream and the back-
end to read data from Amazon S3" is incorrect as per the previous explanation.
INCORRECT: "Create an Amazon SQS queue that pushes messages to the back-end.
Configure the front-end to add messages to the queue " is incorrect as SQS is pull-
based, not push-based. EC2 instances must poll the queue to find jobs to process.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/common_use_cases.
html
https://digitalcloud.training/amazon-kinesis/
https://digitalcloud.training/aws-application-integration-services/
A retail organization sends coupons out twice a week and this results in a
predictable surge in sales traffic. The application runs on Amazon EC2 instances
behind an Elastic Load Balancer. The organization is looking for ways to lower costs
while ensuring they meet the demands of their customers.
Correct answer
Use capacity reservations with savings plans
Feedback
Explanation:
On-Demand Capacity Reservations enable you to reserve compute capacity for your
Amazon EC2 instances in a specific Availability Zone for any duration. By creating
Capacity Reservations, you ensure that you always have access to EC2 capacity when
you need it, for as long as you need it. When used in combination with savings plans,
you can also gain the advantages of cost reduction.
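As a hedged illustration with assumed instance attributes, a reservation could be created ahead of the coupon-driven surge as below; the Savings Plan itself is purchased separately (for example, in the console) and applies automatically to matching usage.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is an assumption

# Reserve capacity in a specific AZ for the expected surge (values are illustrative).
ec2.create_capacity_reservation(
    InstanceType="m5.large",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=10,
    EndDateType="unlimited",   # keep the reservation until explicitly cancelled
)
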
CORRECT: " Use capacity reservations with savings plans" is the correct answer.
INCORRECT: "Increase the instance size of the existing EC2 instances" is incorrect.
This would add more cost all the time rather than catering for the temporary increases
in traffic.
INCORRECT: "Purchase Amazon EC2 dedicated hosts" is incorrect. This is not a way
to save cost as dedicated hosts are much more expensive than shared hosts.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-
reservations.html#capacity-reservations-differences
https://digitalcloud.training/amazon-ec2/
A Solutions Architect is designing an application that consists of AWS Lambda and
Amazon RDS Aurora MySQL. The Lambda function must use database credentials
to authenticate to MySQL and security policy mandates that these credentials must
not be stored in the function code.
How can the Solutions Architect securely store the database credentials and make
them available to the function?
○ Store the credentials in AWS Key Management Service and use environment variables in the
function code pointing to KMS
◉ Store the credentials in Systems Manager Parameter Store and update the function code and
execution role
○ Use the AWSAuthenticationPlugin and associate an IAM user account in the MySQL
database
○ Create an IAM policy and store the credentials in the policy. Attach the policy to the Lambda
function execution role
Correct answer
Store the credentials in Systems Manager Parameter Store and update the function code and
execution role
Feedback
Explanation:
In this case the scenario requires that credentials are used for authenticating to MySQL.
The credentials need to be securely stored outside of the function code. Systems
Manager Parameter Store provides secure, hierarchical storage for configuration data
management and secrets management.
You can easily reference the parameters from services including AWS Lambda.
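For example, a Lambda handler might read the credentials at runtime roughly as in the hedged sketch below; the parameter name is a placeholder, and the execution role needs ssm:GetParameter (plus kms:Decrypt for a SecureString) permissions.

import boto3

ssm = boto3.client("ssm")

def lambda_handler(event, context):
    # Retrieve the SecureString parameter and decrypt it with KMS.
    response = ssm.get_parameter(
        Name="/prod/db/credentials",   # placeholder parameter name
        WithDecryption=True,
    )
    db_password = response["Parameter"]["Value"]
    # ... connect to the Aurora MySQL database using the retrieved credentials ...
    return {"status": "connected"}
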
CORRECT: "Store the credentials in Systems Manager Parameter Store and update
the function code and execution role" is the correct answer.
INCORRECT: "Store the credentials in AWS Key Management Service and use
environment variables in the function code pointing to KMS" is incorrect. You cannot store credentials in KMS; it is used for creating and managing encryption keys.
INCORRECT: "Create an IAM policy and store the credentials in the policy. Attach the
policy to the Lambda function execution role" is incorrect. You cannot store credentials
in IAM policies.
References:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-
parameter-store.html
An application that runs a computational fluid dynamics workload uses a tightly-
coupled HPC architecture that uses the MPI protocol and runs across many nodes.
A service-managed deployment is required to minimize operational overhead.
Which deployment option is MOST suitable for provisioning and managing the
resources required for this use case?
Correct answer
Use AWS Batch to deploy a multi-node parallel job
Feedback
Explanation:
AWS Batch Multi-node parallel jobs enable you to run single jobs that span multiple
Amazon EC2 instances. With AWS Batch multi-node parallel jobs, you can run large-
scale, tightly coupled, high performance computing applications and distributed GPU
model training without the need to launch, configure, and manage Amazon EC2
resources directly.
An AWS Batch multi-node parallel job is compatible with any framework that supports
IP-based, internode communication, such as Apache MXNet, TensorFlow, Caffe2, or
Message Passing Interface (MPI).
This is the most efficient approach to deploy the resources required and supports the
application requirements most effectively.
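Assuming a multi-node parallel job definition has already been registered (the job name, queue, and definition below are placeholders), submitting the MPI job from boto3 might look roughly like this:

import boto3

batch = boto3.client("batch")

# Submit a job against a job definition registered with type "multinode";
# AWS Batch then launches and manages the EC2 nodes for the tightly coupled job.
batch.submit_job(
    jobName="cfd-simulation-run-42",        # placeholder
    jobQueue="hpc-job-queue",               # placeholder
    jobDefinition="cfd-mpi-multinode:1",    # placeholder multi-node parallel job definition
)
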
CORRECT: "Use AWS Batch to deploy a multi-node parallel job" is the correct answer.
INCORRECT: "Use Amazon EC2 Auto Scaling to deploy instances in multiple subnets "
is incorrect. This is not the best solution for a tightly-coupled HPC workload with specific
requirements such as MPI support.
INCORRECT: "Use AWS Elastic Beanstalk to provision and manage the EC2
instances" is incorrect. You can certainly provision and manage EC2 instances with
Elastic Beanstalk but this scenario is for a specific workload that requires MPI support
and managing a HPC deployment across a large number of nodes. AWS Batch is more
suitable.
References:
https://d1.awsstatic.com/whitepapers/architecture/AWS-HPC-Lens.pdf
https://docs.aws.amazon.com/batch/latest/userguide/multi-node-parallel-jobs.html
Correct answer
Store the data in S3 Standard-IA for 3 months, then transition to S3 Glacier
Feedback
Explanation:
The most cost-effective solution is to first store the data in S3 Standard-IA where it will
be infrequently accessed for the first three months. Then, after three months expires,
transition the data to S3 Glacier where it can be stored at lower cost for the remainder
of the seven year period. Expedited retrieval can bring retrieval times down to 1-5
minutes.
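One hedged way to express this, with a placeholder bucket name and object key, is to upload objects directly to Standard-IA and attach a lifecycle rule that transitions them to Glacier after roughly three months:

import boto3

s3 = boto3.client("s3")
bucket = "example-archive-bucket"   # placeholder bucket name

# Upload directly into S3 Standard-IA.
s3.put_object(Bucket=bucket, Key="reports/2024-q1.csv", Body=b"...", StorageClass="STANDARD_IA")

# Transition objects to S3 Glacier after ~3 months (90 days).
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "to-glacier-after-90-days",
                "Filter": {"Prefix": ""},        # apply to all objects
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
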
INCORRECT: "Store the data in S3 Intelligent Tiering for 3 months, then transition to S3
Standard-IA" is incorrect. Intelligent tiering moves data between tiers based on access
patterns, this is more costly and better suited to use cases that are unknown or
unpredictable.
References:
https://aws.amazon.com/s3/storage-classes/
https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-
steps.html#api-downloading-an-archive-two-steps-retrieval-options
Over 500 TB of data must be analyzed using standard SQL business intelligence
tools. The dataset consists of a combination of structured data and unstructured
data. The unstructured data is small and stored on Amazon S3. Which AWS
services are most suitable for performing analytics on the data?
Correct answer
Amazon Redshift with Amazon Redshift Spectrum
Feedback
Explanation:
Using Amazon Redshift Spectrum, you can efficiently query and retrieve structured and
semistructured data from files in Amazon S3 without having to load the data into
Amazon Redshift tables. Redshift Spectrum queries employ massive parallelism to
execute very fast against large datasets.
Used together, Amazon Redshift and Redshift Spectrum are suitable for running massive analytics jobs on both the structured (Redshift data warehouse) and unstructured (Amazon S3) data.
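As a hedged illustration, once an external (Spectrum) schema pointing at the S3 data exists, a query joining internal and external tables can be issued through the Redshift Data API; the cluster, database, user, table, and schema names below are all placeholders.

import boto3

redshift_data = boto3.client("redshift-data")

# The SQL joins an internal Redshift table with an external (Spectrum) table on S3.
redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",     # placeholder
    Database="analytics",                      # placeholder
    DbUser="analyst",                          # placeholder
    Sql="""
        SELECT s.region, COUNT(*)
        FROM sales s
        JOIN spectrum_schema.clickstream_events e   -- external table on Amazon S3
          ON s.customer_id = e.customer_id
        GROUP BY s.region;
    """,
)
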
CORRECT: "Amazon Redshift with Amazon Redshift Spectrum" is the correct answer.
INCORRECT: "Amazon RDS MariaDB with Amazon Athena" is incorrect. Amazon RDS
is not suitable for analytics (OLAP) use cases as it is designed for transactional (OLTP)
use cases. Athena can however be used for running SQL queries on data on S3.
INCORRECT: "Amazon ElastiCache for Redis with cluster mode enabled" is incorrect.
This is an example of an in-memory caching service. It is good for performance for
transactional use cases.
References:
https://docs.aws.amazon.com/redshift/latest/dg/c_redshift_system_overview.html
https://docs.aws.amazon.com/redshift/latest/dg/c-using-spectrum.html
https://digitalcloud.training/amazon-redshift/
○ Enable Amazon CloudWatch Logs. Configure an AWS Lambda function to monitor the log
files and record deleted item data to an Amazon S3 bucket
◉ Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and
record the modified item data to an Amazon S3 bucket
○ Enable Amazon CloudTrail. Configure an Amazon EC2 instance to monitor activity in the
CloudTrail log files and record changed items in another DynamoDB table
○ Enable DynamoDB Global Tables. Enable DynamoDB streams on the multi-region table and
save the output directly to an Amazon S3 bucket
Correct answer
Enable DynamoDB Streams. Configure an AWS Lambda function to poll the stream and
record the modified item data to an Amazon S3 bucket
Feedback
Explanation:
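DynamoDB Streams captures item-level changes, and an AWS Lambda function mapped to the stream can archive each modified item to Amazon S3. A minimal, hedged handler sketch is shown below; the bucket name and object key layout are assumptions for the example.

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "ddb-change-archive"   # placeholder bucket name

def lambda_handler(event, context):
    # Each record describes an item-level change captured by DynamoDB Streams.
    for record in event["Records"]:
        change = {
            "eventName": record["eventName"],            # INSERT, MODIFY or REMOVE
            "keys": record["dynamodb"].get("Keys"),
            "newImage": record["dynamodb"].get("NewImage"),
        }
        key = f"changes/{record['eventID']}.json"        # placeholder key layout
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(change).encode("utf-8"))
    return {"processed": len(event["Records"])}
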
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
https://digitalcloud.training/amazon-dynamodb/
○ Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)
◉ Create a gateway VPC endpoint and add an entry to the route table
○ Create a private Amazon DynamoDB endpoint and connect to it using an AWS VPN
○ Create a software VPN between DynamoDB and the application in the private subnet
Correct answer
Create a gateway VPC endpoint and add an entry to the route table
Feedback
Explanation:
A VPC endpoint enables you to privately connect your VPC to supported AWS services
and VPC endpoint services powered by AWS PrivateLink without requiring an internet
gateway, NAT device, VPN connection, or AWS Direct Connect connection.
There are two different types of VPC endpoint: interface endpoints and gateway endpoints. Gateway endpoints are used for Amazon S3 and Amazon DynamoDB.
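A hedged boto3 sketch of creating the gateway endpoint and associating it with a route table is shown below; the VPC ID, route table ID, and Region are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # Region is an assumption

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                       # placeholder
    ServiceName="com.amazonaws.us-east-1.dynamodb",      # DynamoDB gateway endpoint service
    RouteTableIds=["rtb-0123456789abcdef0"],             # route table of the private subnet (placeholder)
)
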
CORRECT: "Create a gateway VPC endpoint and add an entry to the route table" is the
correct answer.
INCORRECT: "Create an interface VPC endpoint in the VPC with an Elastic Network
Interface (ENI)" is incorrect. This would be used for services that are supported by
interface endpoints, not gateway endpoints.
INCORRECT: "Create a software VPN between DynamoDB and the application in the
private subnet" is incorrect. You cannot create a software VPN between DynamoDB
and an application.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
https://digitalcloud.training/amazon-vpc/
Correct answer
Redis AUTH command
Feedback
Explanation:
Redis authentication tokens enable Redis to require a token (password) before allowing
clients to execute commands, thereby improving data security.
You can require that users enter a token on a token-protected Redis server. To do this,
include the parameter --auth-token (API: AuthToken) with the correct token when you
create your replication group or cluster. Also include it in all subsequent commands to
the replication group or cluster.
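For example (a sketch with placeholder identifiers, assuming in-transit encryption is enabled, which the auth token requires):

import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="secure-redis",                   # placeholder
    ReplicationGroupDescription="Redis with AUTH",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    AuthToken="a-long-random-token-at-least-16-chars",   # the Redis AUTH token (password)
    TransitEncryptionEnabled=True,                       # required when using an auth token
)
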
INCORRECT: "AWS IAM Policy" is incorrect. You cannot use an IAM policy to enforce
a password on Redis.
References:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html
https://digitalcloud.training/amazon-elasticache/
○ Setup an AWS Direct Connect and migrate the database to Amazon DynamoDB using the
AWS Database Migration Service (DMS)
◉ Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball
Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to Amazon
DynamoDB
○ Enable compression on the MongoDB database and use the AWS Database Migration
Service (DMS) to directly migrate the database to Amazon DynamoDB
○ Use the AWS Database Migration Service (DMS) to extract and load the data to an AWS
Snowball Edge device. Complete the migration to Amazon DynamoDB using AWS DMS in the
AWS Cloud
Correct answer
Use the Schema Conversion Tool (SCT) to extract and load the data to an AWS Snowball
Edge device. Use the AWS Database Migration Service (DMS) to migrate the data to
Amazon DynamoDB
Feedback
Explanation:
Larger data migrations with AWS DMS can include many terabytes of information. This
process can be cumbersome due to network bandwidth limits or just the sheer amount
of data. AWS DMS can use Snowball Edge and Amazon S3 to migrate large databases
more quickly than by other methods.
When you're using an Edge device, the data migration process has the following
stages:
1. You use the AWS Schema Conversion Tool (AWS SCT) to extract the data locally
and move it to an Edge device.
2. You ship the Edge device or devices back to AWS.
3. After AWS receives your shipment, the Edge device automatically loads its data
into an Amazon S3 bucket.
4. AWS DMS takes the files and migrates the data to the target data store. If you are
using change data capture (CDC), those updates are written to the Amazon S3
bucket and then applied to the target data store.
CORRECT: "Use the Schema Conversion Tool (SCT) to extract and load the data to an
AWS Snowball Edge device. Use the AWS Database Migration Service (DMS) to
migrate the data to Amazon DynamoDB" is the correct answer.
INCORRECT: "Setup an AWS Direct Connect and migrate the database to Amazon
DynamoDB using the AWS Database Migration Service (DMS)" is incorrect as Direct
Connect connections can take several weeks to implement.
INCORRECT: "Enable compression on the MongoDB database and use the AWS
Database Migration Service (DMS) to directly migrate the database to Amazon
DynamoDB" is incorrect. It’s unlikely that compression is going to make the difference
and the company want to avoid the internet link as stated in the scenario.
INCORRECT: "Use the AWS Database Migration Service (DMS) to extract and load the
data to an AWS Snowball Edge device. Complete the migration to Amazon DynamoDB
using AWS DMS in the AWS Cloud" is incorrect. This is the wrong method, the
Solutions Architect should use the SCT to extract and load to Snowball Edge and then
AWS DMS in the AWS Cloud.
References:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_LargeDBs.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.DynamoDB.html
https://digitalcloud.training/aws-migration-services/
Correct answers
Create an AWS Global Accelerator and attach endpoints in each AWS Region
Migrate both public IP addresses to the AWS Global Accelerator
Feedback
Explanation:
AWS Global Accelerator uses static IP addresses as fixed entry points for your
application. You can migrate up to two /24 IPv4 address ranges and choose which /32
IP addresses to use when you create your accelerator.
This solution ensures the company can continue using the same IP addresses and they
are able to direct traffic to the application endpoint in the AWS Region closest to the
end user. Traffic is sent over the AWS global network for consistent performance.
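A hedged sketch of creating the accelerator with the migrated (BYOIP) addresses is shown below; the name and IP addresses are placeholders, and the address range must already have been provisioned for use with Global Accelerator.

import boto3

# Global Accelerator API calls are made against the us-west-2 Region.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

ga.create_accelerator(
    Name="game-accelerator",                          # placeholder
    IpAddressType="IPV4",
    IpAddresses=["198.51.100.10", "198.51.100.11"],   # the company's existing (BYOIP) addresses, placeholders
    Enabled=True,
)
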
CORRECT: "Create an AWS Global Accelerator and attach endpoints in each AWS
Region" is a correct answer.
CORRECT: "Migrate both public IP addresses to the AWS Global Accelerator" is also a
correct answer.
INCORRECT: "Assign new static anycast IP addresses and modify any existing
pointers" is incorrect. This is unnecessary as you can bring your own IP addresses to
AWS Global Accelerator and this is preferred in this scenario.
References:
https://aws.amazon.com/global-accelerator/features/
https://digitalcloud.training/aws-global-accelerator/
Three Amazon VPCs are used by a company in the same region. The company has
two AWS Direct Connect connections to two separate company offices and wishes
to share these with all three VPCs. A Solutions Architect has created an AWS Direct
Connect gateway. How can the required connectivity be configured?
Correct answer
Associate the Direct Connect gateway to a transit gateway
Feedback
Explanation:
You can manage a single connection for multiple VPCs or VPNs that are in the same
Region by associating a Direct Connect gateway to a transit gateway.
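A hedged sketch of the association call is shown below; the gateway IDs and CIDR are placeholders, and the allowed prefixes determine which VPC routes are advertised over Direct Connect.

import boto3

dx = boto3.client("directconnect")

dx.create_direct_connect_gateway_association(
    directConnectGatewayId="11111111-2222-3333-4444-555555555555",  # placeholder
    gatewayId="tgw-0123456789abcdef0",                              # the transit gateway (placeholder)
    addAllowedPrefixesToDirectConnectGateway=[
        {"cidr": "10.0.0.0/16"},   # placeholder VPC CIDR to advertise
    ],
)
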
CORRECT: "Associate the Direct Connect gateway to a transit gateway" is the correct
answer.
INCORRECT: "Associate the Direct Connect gateway to a virtual private gateway in
each VPC" is incorrect. For VPCs in the same region a VPG is not necessary. A transit
gateway can instead be configured.
INCORRECT: "Create a VPC peering connection between the VPCs and route entries
for the Direct Connect Gateway" is incorrect. You cannot add route entries for a Direct
Connect gateway to each VPC and enable routing. Use a transit gateway instead.
INCORRECT: "Create a transit virtual interface between the Direct Connect gateway
and each VPC" is incorrect. The transit virtual interface is attached to the Direct
Connect gateway on the connection side, not the VPC/transit gateway side.
References:
https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways-
intro.html
https://digitalcloud.training/aws-direct-connect/
Correct answer
Implement an IPSec VPN connection and use the same BGP prefix
Feedback
Explanation:
This is the most cost-effective solution. With this option both the Direct Connect
connection and IPSec VPN are active and being advertised using the Border Gateway
Protocol (BGP). The Direct Connect link will always be preferred unless it is
unavailable.
CORRECT: "Implement an IPSec VPN connection and use the same BGP prefix" is the
correct answer.
INCORRECT: "Configure an IPSec VPN connection over the Direct Connect link" is
incorrect. This is not a solution to the problem as the VPN connection is going over the
Direct Connect link. This is something you might do to add encryption to Direct Connect
but it doesn’t make it more resilient.
References:
https://docs.aws.amazon.com/whitepapers/latest/hybrid-connectivity/vpn-connection-as-
a-backup-to-aws-dx-connection-example.html
https://digitalcloud.training/aws-direct-connect/
A highly elastic application consists of three tiers. The application tier runs in an Auto
Scaling group and processes data and writes it to an Amazon RDS MySQL
database. The Solutions Architect wants to restrict access to the database tier to
only accept traffic from the instances in the application tier. However, instances in
the application tier are being constantly launched and terminated.
How can the Solutions Architect configure secure access to the database tier?
◉ Configure the database security group to allow traffic only from the application security group
○ Configure the database security group to allow traffic only from port 3306
○ Configure a Network ACL on the database subnet to deny all traffic to ports other than 3306
○ Configure a Network ACL on the database subnet to allow all traffic from the application
subnet
Correct answer
Configure the database security group to allow traffic only from the application security group
Feedback
Explanation:
The best option is to configure the database security group to only allow traffic that
originates from the application security group. You can also define the destination port
as the database port. This setup will allow any instance that is launched and attached to
this security group to connect to the database.
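For example (a sketch with placeholder security group IDs), the ingress rule on the database security group references the application security group rather than any IP range:

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0db0000000000001a",    # database tier security group (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # The source is the application tier security group, so any instance
            # launched into that group can reach the database on port 3306.
            "UserIdGroupPairs": [{"GroupId": "sg-0app000000000002b"}],  # placeholder
        }
    ],
)
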
CORRECT: "Configure the database security group to allow traffic only from the
application security group" is the correct answer.
INCORRECT: "Configure the database security group to allow traffic only from port
3306" is incorrect. Port 3306 for MySQL should be the destination port, not the source.
INCORRECT: "Configure a Network ACL on the database subnet to deny all traffic to
ports other than 3306" is incorrect. This does not restrict access specifically to the
application instances.
INCORRECT: "Configure a Network ACL on the database subnet to allow all traffic from
the application subnet" is incorrect. This does not restrict access specifically to the
application instances.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
https://digitalcloud.training/amazon-vpc/
Correct answer
Create an Amazon CloudWatch events rule that triggers an Amazon SNS topic
Feedback
Explanation:
A CloudWatch Events rule can be used to set up automatic email notifications for
Medium to High Severity findings to the email address of your choice. You simply create
an Amazon SNS topic and then associate it with an Amazon CloudWatch events rule.
Note: step by step procedures for how to set this up can be found in the article linked in
the references below.
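A hedged sketch of the rule and target is shown below; the topic ARN is a placeholder, and a severity filter could be added to the event pattern to restrict the rule to medium and high severity findings.

import json
import boto3

events = boto3.client("events")

# Match GuardDuty findings.
events.put_rule(
    Name="guardduty-findings-to-sns",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
    }),
    State="ENABLED",
)

# Send matching events to an SNS topic that emails the security team.
events.put_targets(
    Rule="guardduty-findings-to-sns",
    Targets=[{"Id": "sns-topic", "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],  # placeholder
)
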
CORRECT: "Create an Amazon CloudWatch events rule that triggers an Amazon SNS
topic" is the correct answer.
INCORRECT: "Create an Amazon CloudWatch Logs rule that triggers an AWS Lambda
function" is incorrect. CloudWatch logs is not the right CloudWatch service to use.
CloudWatch events is used for reacting to changes in service state.
References:
https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html
https://digitalcloud.training/amazon-cloudwatch/
Which AWS service can be used for the migrated message broker?
○ Amazon SQS
○ Amazon SNS
◉ Amazon MQ
○ AWS Step Functions
Correct answer
Amazon MQ
Feedback
Explanation:
Amazon MQ is a managed message broker service for Apache ActiveMQ that makes it
easy to set up and operate message brokers in the cloud. Connecting current
applications to Amazon MQ is easy because it uses industry-standard APIs and
protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and
WebSocket. Using standards means that in most cases, there’s no need to rewrite any
messaging code when you migrate to AWS.
References:
https://aws.amazon.com/amazon-mq/
https://digitalcloud.training/aws-application-integration-services/
Correct answer
Create an Amazon SQS FIFO queue
Feedback
Explanation:
Only FIFO queues guarantee the ordering of messages and therefore a standard queue
would not work. The FIFO queue supports up to 3,000 messages per second with
batching so this is a supported scenario.
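For example (the queue name is a placeholder; FIFO queue names must end in .fifo):

import boto3

sqs = boto3.client("sqs")

sqs.create_queue(
    QueueName="orders.fifo",   # FIFO queue names must end with .fifo (placeholder name)
    Attributes={
        "FifoQueue": "true",
        # Optional: deduplicate automatically based on a hash of the message body.
        "ContentBasedDeduplication": "true",
    },
)
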
References:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sq
s-quotas.html
○ Amazon SWF
◉ Amazon SNS
○ Amazon Kinesis
○ Amazon OpsWorks
Correct answer
Amazon SNS
Feedback
Explanation:
You can use a Lambda function to process Amazon Simple Notification Service
notifications. Amazon SNS supports Lambda functions as a target for messages sent to
a topic. This solution decouples the Amazon EC2 application from Lambda and ensures
the Lambda function is invoked.
INCORRECT: "Amazon SWF" is incorrect. The Simple Workflow Service (SWF) is used
for process automation. It is not well suited to this requirement.
INCORRECT: "Amazon Kinesis" is incorrect as this service is used for ingesting and
processing real time streaming data, it is not a suitable service to be used solely for
invoking a Lambda function.
References:
https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html
https://digitalcloud.training/aws-glue/
https://digitalcloud.training/aws-application-integration-services/
https://digitalcloud.training/aws-lambda/
A company is finalizing its disaster recovery plan. A limited set of core services will be replicated to the DR site, ready to seamlessly take over in the event of a disaster. All other services will be switched off.
Correct answer
Pilot light
Feedback
Explanation:
In this DR approach, you simply replicate part of your IT structure for a limited set of
core services so that the AWS cloud environment seamlessly takes over in the event of
a disaster.
Unlike a backup and recovery approach, you must ensure that your most critical core
elements are already configured and running in AWS (the pilot light). When the time
comes for recovery, you can rapidly provision a full-scale production environment
around the critical core.
INCORRECT: "Backup and restore" is incorrect. This is the lowest cost DR approach
that simply entails creating online backups of all data and applications.
INCORRECT: "Warm standby" is incorrect. The term warm standby is used to describe
a DR scenario in which a scaled-down version of a fully functional environment is
always running in the cloud.
References:
https://aws.amazon.com/blogs/publicsector/rapidly-recover-mission-critical-systems-in-
a-disaster/
A Solutions Architect has been tasked with building an application which stores
images to be used for a website. The website will be accessed by thousands of
customers. The images within the application need to be able to be transformed and
processed as they are being retrieved. The solutions architect would prefer to use
managed services to achieve this, and the solution should be highly available and
scalable, and be able to serve users from around the world with low latency.
○ Store the images in a DynamoDB table, with DynamoDB Global Tables enabled. Provision a
Lambda function to process the data on demand as it leaves the table.
◉ Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Event Notifications
to connect to a Lambda function to process and transform the images when a GET request is
initiated on an object.
○ Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to
transform and process the images whenever a GET request is initiated on an object.
○ Store the images in a DynamoDB table, with DynamoDB Accelerator enabled. Use Amazon
EventBridge to pass the data into an event bus as it is retrieved from DynamoDB and use AWS
Lambda to process the data.
Correct answer
Store the images in Amazon S3, behind a CloudFront distribution. Use S3 Object Lambda to
transform and process the images whenever a GET request is initiated on an object.
Feedback
Explanation:
With S3 Object Lambda you can add your own code to S3 GET requests to modify and
process data as it is returned to an application. For the first time, you can use custom
code to modify the data returned by standard S3 GET requests to filter rows,
dynamically resize images, redact confidential data, and much more. Powered by AWS
Lambda functions, your code runs on infrastructure that is fully managed by AWS,
eliminating the need to create and store derivative copies of your data or to run
expensive proxies, all with no changes required to your applications.
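A hedged skeleton of an S3 Object Lambda function is shown below: it fetches the original object via the presigned URL supplied in the event, applies a placeholder transform, and returns the result with WriteGetObjectResponse. The transform itself is an assumption; real image resizing would typically use a library such as Pillow.

import urllib.request
import boto3

s3 = boto3.client("s3")

def transform_image(data: bytes) -> bytes:
    # Placeholder: a real implementation might resize or watermark the image.
    return data

def lambda_handler(event, context):
    ctx = event["getObjectContext"]
    # Fetch the original image from S3 using the presigned URL in the event.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read()

    transformed = transform_image(original)

    # Return the transformed object to the requester.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=transformed,
    )
    return {"statusCode": 200}
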
CORRECT: "Store the images in Amazon S3, behind a CloudFront distribution. Use S3
Object Lambda to transform and process the images whenever a GET request is
initiated on an object” is the correct answer (as explained above.)
INCORRECT: "Store the images in a DynamoDB table, with DynamoDB Global Tables
enabled. Provision a Lambda function to process the data on demand as it leaves the
table” is incorrect. DynamoDB is not as well designed for Write Once Read Many
workloads and adding a Lambda function to the DynamoDB table takes more manual
provisioning of resources than using S3 Object Lambda.
INCORRECT: "Store the images in Amazon S3, behind a CloudFront distribution. Use
S3 Event Notifications to connect to a Lambda function to process and transform the
images when a GET request is initiated on an object” is incorrect. This would work;
however it is easier to use S3 Object Lambda as this manages the Lambda function for
you.
References:
https://aws.amazon.com/s3/features/object-lambda/
https://digitalcloud.training/aws-lambda/
The Chief Financial Officer of a large corporation is looking for an AWS native tool
which will help reduce their cloud spend. After receiving a budget alarm, the
company has decided that they need to reduce their spend across their different
areas of compute and need insights into their spend to decide where they can
reduce cost.
What is the easiest way to achieve this goal?
Correct answer
AWS Compute Optimizer
Feedback
Explanation:
AWS Compute Optimizer helps you identify the optimal AWS resource configurations,
such as Amazon Elastic Compute Cloud (EC2) instance types, Amazon Elastic Block
Store (EBS) volume configurations, and AWS Lambda function memory sizes, using
machine learning to analyze historical utilization metrics. AWS Compute Optimizer
provides a set of APIs and a console experience to help you reduce costs and increase
workload performance by recommending the optimal AWS resources for your AWS
workloads.
CORRECT: "AWS Compute Optimizer" is the correct answer (as explained above.)
INCORRECT: "AWS Trusted Advisor" is incorrect. Whilst you will get some cost
recommendations using Trusted Advisor, when the goal is to reduce compute costs specifically, AWS Compute Optimizer is a better choice.
INCORRECT: "Cost and Usage Reports" is incorrect. Cost and Usage Reports are a
highly detailed report of your spend and usage across your entire AWS Environment.
Whilst it can be used to understand cost, it does not make recommendations.
INCORRECT: "AWS Cost Explorer" is incorrect. Cost Explorer gives you insight into
your spend and usage in a graphical format, which can be filtered and grouped by
parameters like Region, instance type and can use Tags to further group resources. It
does not however make any recommendations on how to reduce spend.
References:
https://aws.amazon.com/compute-optimizer/faqs/
https://digitalcloud.training/aws-billing-and-pricing/
A large customer services company is planning to build a highly scalable and
durable application designed to aggregate data across their support
communications, and extract sentiment on how successfully they are helping their
customers. These communications are generated across chat, social media, emails
and more. They need a solution which stores output from these communication
channels, which then processes the text for sentiment analysis. The outputs must
then be stored in a data warehouse for future use.
Which series of AWS services will provide the functionality the company is looking
for?
○ Use an Amazon S3 Data Lake as the original data store for the output from the support
communications. Use Amazon Textract to process the text for sentiment analysis. Then store
the outputs in Amazon RedShift.
○ Use an Amazon S3 Data Lake as the original data store for the output from the support
communications. Use Amazon Comprehend to process the text for sentiment analysis. Then
store the outputs in Amazon RedShift.
◉ Use DynamoDB as the original data store for the output from the support communications.
Use Amazon Comprehend to process the text for sentiment analysis. Then store the outputs in
Amazon RedShift.
○ Use DynamoDB as the original data store for the output from the support communications.
Use Amazon Kendra to process the text for sentiment analysis. Then store the outputs in
Amazon RedShift.
Correct answer
Use an Amazon S3 Data Lake as the original data store for the output from the support
communications. Use Amazon Comprehend to process the text for sentiment analysis. Then
store the outputs in Amazon RedShift.
Feedback
Explanation:
You could easily use Amazon Comprehend to detect customer sentiment and analyze
customer interactions and automatically extract insights from customer surveys to
improve your products. An S3 Data Lake also acts as an ideal data repository for
Machine Learning data used by many different business units and applications.
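For instance, sentiment can be extracted from a single support message with a call like this (the message text is an example):

import boto3

comprehend = boto3.client("comprehend")

response = comprehend.detect_sentiment(
    Text="The support agent resolved my issue quickly, thank you!",  # example message
    LanguageCode="en",
)
print(response["Sentiment"])          # e.g. POSITIVE
print(response["SentimentScore"])     # confidence scores per sentiment class
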
CORRECT: "Use an Amazon S3 Data Lake as the original date store for the output
from the support communications. Use Amazon Comprehend to process the text for
sentiment analysis. Then store the outputs in Amazon RedShift” is the correct answer
(as explained above.)
INCORRECT: "Use an Amazon S3 Data Lake as the original date store for the output
from the support communications. Use Amazon Textract to process the text for
sentiment analysis. Then store the outputs in Amazon RedShift” is incorrect. Amazon
Textract is a machine learning (ML) service that automatically extracts text, handwriting,
and data from scanned documents, and does not output any sentiment.
INCORRECT: "Use DynamoDB as the original data store for the output from the
support communications. Use Amazon Comprehend to process the text for sentiment
analysis. Then store the outputs in Amazon RedShift” is incorrect. DynamoDB is not as suitable a data repository for machine learning data as an Amazon S3 Data Lake would be.
INCORRECT: "Use DynamoDB as the original data store for the output from the
support communications. Use Amazon Kendra to process the text for sentiment
analysis. Then store the outputs in Amazon RedShift” is incorrect. DynamoDB is not as suitable a data repository for machine learning data as an Amazon S3 Data Lake would be, and Amazon Kendra is a highly accurate intelligent search service powered
by machine learning and does not work to understand sentiment.
References:
https://aws.amazon.com/comprehend/
Which of the following groups of services will provide the solutions architect with the
best solution ?
○ Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on EC2.
Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
○ Run the NoSQL database on DynamoDB, and the compute layer on Amazon ECS on
Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
○ Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS
on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
◉ Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS
on Fargate. Use Amazon Aurora to host the second storage layer.
Correct answer
Run the NoSQL database on Amazon Keyspaces, and the compute layer on Amazon ECS
on Fargate. Use Amazon RDS for Microsoft SQL Server to host the second storage layer.
Feedback
Explanation:
CORRECT: "Run the NoSQL database on Amazon Keyspaces, and the compute layer
on Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the
second storage layer” is the correct answer (as explained above.)
INCORRECT: "Run the NoSQL database on DynamoDB, and the compute layer on
Amazon ECS on EC2. Use Amazon RDS for Microsoft SQL Server to host the second
storage layer” is incorrect. DynamoDB is not a managed version of Apache Cassandra, therefore it is not the correct answer.
INCORRECT: "Run the NoSQL database on DynamoDB, and the compute layer on
Amazon ECS on Fargate. Use Amazon RDS for Microsoft SQL Server to host the
second storage layer” is incorrect. DynamoDB is not a managed version of Apache Cassandra, therefore it is not the correct answer.
INCORRECT: "Run the NoSQL database on Amazon Keyspaces, and the compute
layer on Amazon ECS on Fargate. Use Amazon Aurora to host the second storage
layer” is incorrect. Amazon Aurora does not have an option to run a Microsoft SQL
Server database, therefore this answer is not correct.
References:
https://aws.amazon.com/keyspaces/
https://digitalcloud.training/category/aws-cheat-sheets/aws-database/
○ API Gateway
○ Amazon Athena
◉ AWS AppSync
○ AWS Lambda
Correct answer
AWS AppSync
Feedback
Explanation:
AWS AppSync is a serverless GraphQL and Pub/Sub API service that simplifies
building modern web and mobile applications.
INCORRECT: "API Gateway" is incorrect. You cannot create GraphQL APIs on API
Gateway.
References:
https://aws.amazon.com/appsync/
https://digitalcloud.training/category/aws-cheat-sheets/aws-networking-content-delivery/
○ AWS Wavelength
○ AWS Outposts
◉ AWS Private 5G
○ AWS CloudHSM
Correct answer
AWS Private 5G
Feedback
Explanation:
AWS Private 5G is a managed service that makes it easy to deploy, operate, and scale
your own private cellular network, with all required hardware and software provided by
AWS.
CORRECT: "AWS Private 5G" is the correct answer (as explained above.)
References:
https://aws.amazon.com/private5g/
○ Redirect traffic by running your code within a Lambda function using Lambda@Edge.
◉ At the Edge Location, run your code with CloudFront Functions.
○ Use Path Based Routing to route each user to the appropriate webpage behind an
Application Load Balancer.
○ Use Route 53 Geo Proximity Routing to route users’ traffic to your resources based on their
geographic location.
Correct answer
At the Edge Location, run your code with CloudFront Functions.
Feedback
Explanation:
With CloudFront Functions in Amazon CloudFront, you can write lightweight functions in
JavaScript for high-scale, latency-sensitive CDN customizations. Your functions can
manipulate the requests and responses that flow through CloudFront, perform basic
authentication and authorization, generate HTTP responses at the edge, and more.
CloudFront Functions is approximately 1/6th the cost of Lambda@Edge and is
extremely low latency as the functions are run on the host in the edge location, instead
of the running on a Lambda function elsewhere.
CORRECT: "At the Edge Location, run your code with CloudFront Functions” is the
correct answer (as explained above.)
INCORRECT: "Redirect traffic by running your code within a Lambda function using
Lambda@Edge” is incorrect. Although you could achieve this using Lambda@Edge, the
question states the need for the lowest latency possible, and comparatively the lowest
latency option is CloudFront Functions.
INCORRECT: "Use Path Based Routing to route each user to the appropriate webpage
behind an Application Load Balancer” is incorrect. This architecture does not account
for the fact that custom code needs to be run to make this happen.
INCORRECT: "Use Route 53 Geo Proximity Routing to route users’ traffic to your
resources based on their geographic location” is incorrect. This may work, however
again it does not account for the fact that custom code needs to be run to make this
happen.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-
functions.html
https://digitalcloud.training/amazon-cloudfront/
As part of a company’s shift to the AWS cloud, they need to gain an insight into their
total on-premises footprint. They have discovered that they are currently struggling
with managing their software licenses. They would like to maintain a hybrid cloud
setup, with some of their licenses stored in the cloud with some stored on-premises.
What actions should be taken to ensure they are managing the licenses
appropriately going forward?
○ Use AWS Secrets Manager to store the licenses as secrets to ensure they are stored
securely
○ Use the AWS Key Management Service to treat the license key safely and store it securely
◉ Use AWS License Manager to manage the software licenses
○ Use Amazon S3 with governance lock to manage the storage of the licenses
Correct answer
Use AWS License Manager to manage the software licenses
Feedback
Explanation:
AWS License Manager makes it easier to manage your software licenses from vendors
such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments.
AWS License Manager lets administrators create customized licensing rules that mirror
the terms of their licensing agreements.
CORRECT: "Use AWS License Manager to manage the software licenses" is the
correct answer (as explained above.)
INCORRECT: "Use AWS Secrets Manager to store the licenses as secrets to ensure
they are stored securely" is incorrect. AWS Secrets Manager helps you protect secrets
needed to access your applications, services, and IT resources. This does not include
license keys.
INCORRECT: "Use the AWS Key Management Service to treat the license key safely
and store it securely" is incorrect. AWS Key Management Service (AWS KMS) makes it
easy for you to create and manage cryptographic keys and control their use across a
wide range of AWS services and in your applications, not license keys.
INCORRECT: "Use Amazon S3 with governance lock to manage the storage of the
licenses" is incorrect. Amazon S3 is not designed to store software licenses.
References:
https://aws.amazon.com/license-manager/
A financial institution with many departments wants to migrate to the AWS Cloud
from their data center. Each department should have its own AWS account with preconfigured, limited access to authorized services based on each team's needs, following the principle of least privilege.
○ Use AWS CloudFormation to create new member accounts and networking and use IAM
roles to allow access to approved AWS services.
◉ Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use
the Landing Zone to create new member accounts and networking. Grant the department's AWS
power user permissions on the created accounts.
○ Configure AWS Organizations with SCPs and create new member accounts. Use AWS
CloudFormation templates to configure the member account networking.
○ Deploy a Landing Zone within AWS Organizations. Allow department administrators to use
the Landing Zone to create new member accounts and networking. Grant the department's AWS
power user permissions on the created accounts.
Correct answer
Deploy a Landing Zone within AWS Control Tower. Allow department administrators to use
the Landing Zone to create new member accounts and networking. Grant the department's
AWS power user permissions on the created accounts.
Feedback
Explanation:
AWS Control Tower automates the setup of a new landing zone using best practices
blueprints for identity, federated access, and account structure.
CORRECT: "Deploy a Landing Zone within AWS Control Tower. Allow department
administrators to use the Landing Zone to create new member accounts and
networking. Grant the department's AWS power user permissions on the created
accounts” is the correct answer (as explained above.)
INCORRECT: "Configure AWS Organizations with SCPs and create new member
accounts. Use AWS CloudFormation templates to configure the member account
networking” is incorrect. You can create new accounts using AWS Organizations; however, the easiest way to do this is by using the AWS Control Tower service.
References:
https://aws.amazon.com/controltower/
Which of the following combinations of services will deliver the best solution?
○ Use Amazon SageMaker to build the machine learning part of the application and use AWS
DataSync to gain access to the third-party telemetry data.
○ Use a TensorFlow AMI from the AWS Marketplace to build the machine learning part of the
application and use AWS DataSync to gain access to the third-party telemetry data.
○ Use a TensorFlow AMI from the AWS Marketplace to build the machine learning part of the
application and use AWS Data Exchange to gain access to the third-party telemetry data.
◉ Use Amazon SageMaker to build the machine learning part of the application and use AWS
Data Exchange to gain access to the third-party telemetry data.
Correct answer
Use Amazon SageMaker to build the machine learning part of the application and use AWS
Data Exchange to gain access to the third-party telemetry data.
Feedback
Explanation:
Amazon SageMaker allows you to build, train, and deploy machine learning models for
any use case with fully managed infrastructure, tools, and workflows. AWS Data
Exchange allows you to gain access to third party data sets across Automotive,
Financial Services, Gaming, Healthcare & Life Sciences, Manufacturing, Marketing,
Media & Entertainment, Retail, and many more industries.
CORRECT: "Use Amazon SageMaker to build the machine learning part of the
application and use AWS Data Exchange to gain access to the third-party telemetry
data” is the correct answer (as explained above.)
INCORRECT: "Use Amazon SageMaker to build the machine learning part of the
application and use AWS DataSync to gain access to the third-party telemetry data” is
incorrect. AWS DataSync is a secure, online service that automates and accelerates
moving data between on-premises and AWS storage services. It does not give access
to third party data.
INCORRECT: "Use a TensorFlow AMI from the AWS Marketplace to build the machine
learning part of the application and use AWS DataSync to gain access to the third-party
telemetry data” is incorrect. Building an EC2 instance from a TensorFlow AMI would not
involve using managed services and AWS DataSync is a secure, online service that
automates and accelerates moving data between on-premises and AWS storage
services. It does not give access to third party data.
INCORRECT: "Use a TensorFlow AMI from the AWS Marketplace to build the machine
learning part of the application and use AWS Data Exchange to gain access to the third-
party telemetry data” is incorrect. Building an EC2 instance from a TensorFlow AMI
would not involve using managed services.
References:
https://aws.amazon.com/data-exchange/
What is the simplest way to achieve this, whilst adhering to the principle of least
privilege?
○ Create a new AWS Organization. Assign each team to a different Organizational Unit and apply the appropriate permissions granting access to the appropriate resources in the bucket.
○ Copy the items from the bucket to create separate versions of each. Separate the items in the bucket into new buckets. Administer bucket policies to allow each account to access the appropriate bucket.
◉ Use S3 Access points to administer different access policies to each team, and control
access points using Service Control Policies within AWS Organizations.
○ Create the S3 Bucket in an individual account. Configure an IAM Role for each user to enable
cross account access for the S3 Bucket with a permissions policy to only access the appropriate
items within the bucket.
Correct answer
Use S3 Access points to administer different access policies to each team, and control
access points using Service Control Policies within AWS Organizations.
Feedback
Explanation:
Amazon S3 Access Points, a feature of S3, simplify data access for any AWS service or
customer application that stores data in S3. With S3 Access Points, customers can
create unique access control policies for each access point to easily control access to
shared datasets. You can also control access point usage using AWS Organizations
support for AWS SCPs.
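As a hedged illustration (the account ID, names, prefix, and policy are placeholders), each team could be given its own access point with a policy scoped to its prefix:

import json
import boto3

s3control = boto3.client("s3control")
account_id = "123456789012"   # placeholder account ID

# Create an access point for one team on the shared bucket.
s3control.create_access_point(
    AccountId=account_id,
    Name="team-a-ap",                 # placeholder
    Bucket="shared-datasets-bucket",  # placeholder
)

# Restrict the access point to the team's prefix (placeholder policy).
s3control.put_access_point_policy(
    AccountId=account_id,
    Name="team-a-ap",
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/TeamA"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:us-east-1:123456789012:accesspoint/team-a-ap/object/team-a/*",
        }],
    }),
)
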
CORRECT: "Use S3 Access points to administer different access policies to each team,
and control access points using Service Control Policies within AWS Organizations” is
the correct answer (as explained above.)
INCORRECT: "Create a new AWS Organizations. Assign each team to a different
Organizational Unit and apply to appropriate permissions granting access to the
appropriate resources in the bucket” is incorrect. This would not only be incredibly time
consuming but totally unnecessary as you can use the preexisting AWS Organizations
and the Service Control policies to control access via S3 Access Points.
INCORRECT: "Copy the items from the bucket to create separate versions of each
Separate the items in the bucket into new buckets. Administer Bucket policies to allow
each account to access the appropriate bucket” is incorrect. This involves a lot of
operational overhead and would be prone to significant error when administering the
correct permissions to each account.
References:
https://aws.amazon.com/s3/features/access-points/
https://digitalcloud.training/amazon-s3-and-glacier/