Interview-questions-Real Time PHP Project
A solutions architect is designing a new service that will use an Amazon API Gateway
API on the frontend. The service will need to persist data in a backend database using
key-value requests. Initially, the data requirements will be around 1 GB and future
growth is unknown. Requests can range from 0 to over 800 requests per second.
Which combination of AWS services would meet these requirements? (Select TWO.)
AWS Lambda
(Correct)
AWS Fargate
Amazon DynamoDB
(Correct)
Amazon RDS
Explanation
In this case AWS Lambda can perform the computation and store the data in an Amazon
DynamoDB table. Lambda can easily scale concurrent executions to meet demand, and
DynamoDB is built for key-value data storage requirements and is also serverless and easily
scalable. This is therefore a cost-effective solution for unpredictable workloads.
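For illustration, a minimal Lambda handler in Python (boto3) that persists a key-value item in DynamoDB is sketched below; the table name and the "pk" key attribute are assumptions for the example, not details from the question.
import json
import boto3

# Hypothetical table name used for illustration only
table = boto3.resource("dynamodb").Table("app-data")

def lambda_handler(event, context):
    # API Gateway proxy integration passes the request body as a JSON string
    body = json.loads(event.get("body") or "{}")
    # Key-value write; "pk" is an assumed partition key attribute
    table.put_item(Item={"pk": body["id"], "payload": event.get("body")})
    return {"statusCode": 200, "body": json.dumps({"stored": body["id"]})}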
INCORRECT: "AWS Fargate" is incorrect as containers run constantly and therefore incur
costs even when no requests are being made.
INCORRECT: "Amazon EC2 Auto Scaling" is incorrect as this uses EC2 instances which will
incur costs even when no requests are being made.
References:
https://aws.amazon.com/lambda/features/
https://aws.amazon.com/dynamodb/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/aws-lambda/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
database/amazon-dynamodb/
Question 2: Skipped
A company is migrating from an on-premises infrastructure to the AWS Cloud. One of
the company's applications stores files on a Windows file server farm that uses
Distributed File System Replication (DFSR) to keep data in sync. A solutions architect
needs to replace the file server farm.
Amazon EFS
Amazon S3
Amazon FSx
(Correct)
Explanation
Amazon FSx for Windows File Server provides fully managed, highly reliable file storage that
is accessible over the industry-standard Server Message Block (SMB) protocol.
Amazon FSx is built on Windows Server and provides a rich set of administrative features
that include end-user file restore, user quotas, and Access Control Lists (ACLs).
Additionally, Amazon FSx for Windows File Server supports Distributed File System
Replication (DFSR) in both Single-AZ and Multi-AZ deployments.
INCORRECT: "Amazon S3" is incorrect as this is not a suitable replacement for a Microsoft
filesystem.
INCORRECT: "AWS Storage Gateway" is incorrect as this service is primarily used for
connecting on-premises storage to cloud storage. It consists of a software device installed
on-premises and can be used with SMB shares, but it actually stores the data on S3. It is also
used for migration. However, in this case the company needs to replace the file server farm,
and Amazon FSx is the best choice for this job.
References:
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/high-availability-multiAZ.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-fsx/
Question 3: Skipped
An application runs on two EC2 instances in private subnets split between two AZs.
The application needs to connect to a CRM SaaS application running on the Internet.
The vendor of the SaaS application restricts authentication to a whitelist of source IP
addresses and only 2 IP addresses can be configured per customer.
What is the most appropriate and cost-effective solution to enable authentication to
the SaaS application?
Configure a NAT Gateway for each AZ with an Elastic IP address
(Correct)
Configure redundant Internet Gateways and update the routing tables for each subnet
Explanation
In this scenario you need to connect the EC2 instances to the SaaS application with a source
address of one of two whitelisted public IP addresses to ensure authentication works.
A NAT Gateway is created in a specific AZ and can have a single Elastic IP address associated
with it. NAT Gateways are deployed in public subnets and the route tables of the private
subnets where the EC2 instances reside are configured to forward Internet-bound traffic to
the NAT Gateway. You do pay for using a NAT Gateway based on hourly usage and data
processing, however this is still a cost-effective solution.
The diagram below depicts an instance in a private subnet using a NAT gateway to connect
out to the internet via an internet gateway.
CORRECT: "Configure a NAT Gateway for each AZ with an Elastic IP address" is the correct
answer.
INCORRECT: "Use a Network Load Balancer and configure a static IP for each AZ" is
incorrect. A Network Load Balancer can be configured with a single static IP address (the
other types of ELB cannot) for each AZ. However, using a NLB is not an appropriate solution
as the connections are being made outbound from the EC2 instances to the SaaS app and
ELBs are used for distributing inbound connection requests to EC2 instances (only return
traffic goes back through the ELB).
INCORRECT: "Configure redundant Internet Gateways and update the routing tables for
each subnet" is incorrect as you cannot create multiple Internet Gateways. An IGW is already
redundant.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/elastic-load-balancing/
Question 4: Skipped
A solutions architect needs to backup some application log files from an online
ecommerce store to Amazon S3. It is unknown how often the logs will be accessed or
which logs will be accessed the most. The solutions architect must keep costs as low as
possible by using the appropriate S3 storage class.
S3 Glacier
S3 Intelligent-Tiering
(Correct)
Explanation
S3 Intelligent-Tiering works by storing objects in two access tiers: one tier that is optimized
for frequent access and another lower-cost tier that is optimized for infrequent access. This
is an ideal use case for Intelligent-Tiering as the access patterns for the log files are not known.
INCORRECT: "S3 One Zone-Infrequent Access (S3 One Zone-IA)" is incorrect as if the data is
accessed often retrieval fees could become expensive.
INCORRECT: "S3 Glacier" is incorrect as if the data is accessed often retrieval fees could
become expensive. Glacier also requires more work in retrieving the data from the archive
and quick access requirements can add further costs.
References:
https://aws.amazon.com/s3/storage-classes/#Unknown_or_changing_access
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
Question 5: Skipped
A Linux instance running in your VPC requires some configuration changes to be
implemented locally and you need to run some commands. Which of the following
can be used to securely access the instance?
EC2 password
Key pairs
(Correct)
SSL/TLS certificate
Public key
Explanation
Amazon EC2 uses public key cryptography to encrypt and decrypt login information. Public
key cryptography uses a public key to encrypt a piece of data, and then the recipient uses
the private key to decrypt the data. The public and private keys are known as a key pair.
Public key cryptography enables you to securely access your instances using a private key
instead of a password.
A key pair consists of a public key that AWS stores, and a private key file that you store:
- For Windows AMIs, the private key file is required to obtain the password used to log into
your instance.
- For Linux AMIs, the private key file allows you to securely SSH into your instance.
INCORRECT: "Public key" is incorrect. You cannot login to an EC2 instance using
certificates/public keys.
INCORRECT: "EC2 password" is incorrect. The “EC2 password” might refer to the operating
system password. By default, you cannot login this way to Linux and must use a key pair.
However, this can be enabled by setting a password and updating the /etc/ssh/sshd_config
file.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
Question 6: Skipped
A Solutions Architect is developing an encryption solution. The solution requires that
data keys are encrypted using envelope protection before they are written to disk.
(Correct)
Explanation
When you encrypt your data, your data is protected, but you have to protect your
encryption key. One strategy is to encrypt it. Envelope encryption is the practice of
encrypting plaintext data with a data key, and then encrypting the data key under another
key.
When you encrypt a data key, you don't have to worry about storing the encrypted data
key, because the data key is inherently protected by encryption. You can safely store the
encrypted data key alongside the encrypted data.
Envelope encryption also makes it practical to encrypt the same data under multiple master
keys: encryption operations can be time consuming, particularly when the data being
encrypted are large objects. Instead of re-encrypting raw data multiple times with different
keys, you can re-encrypt only the data keys that protect the raw data.
In general, symmetric key algorithms are faster and produce smaller ciphertexts than public
key algorithms. But public key algorithms provide inherent separation of roles and easier
key management. Envelope encryption lets you combine the strengths of each strategy.
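A minimal boto3 sketch of the envelope encryption pattern with AWS KMS follows; the key alias is a placeholder and the local encryption step is only indicated in comments, since the question does not prescribe a cipher library.
import boto3

kms = boto3.client("kms")

# Generate a data key under a KMS key (the alias is hypothetical)
response = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
plaintext_key = response["Plaintext"]        # use this to encrypt the data locally
encrypted_key = response["CiphertextBlob"]   # store this alongside the encrypted data

# ... encrypt the data with plaintext_key using a symmetric cipher of your choice,
# then discard plaintext_key from memory ...

# Later, recover the plaintext data key by asking KMS to decrypt the stored copy
plaintext_key_again = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]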
INCORRECT: "API Gateway with STS" is incorrect. The AWS Security Token Service (STS) is a
web service that enables you to request temporary, limited-privilege credentials for AWS
Identity and Access Management (IAM) users or for users that you authenticate (federated
users).
INCORRECT: "IAM Access Key" is incorrect. IAM access keys are used for signing
programmatic requests you make to AWS.
References:
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
https://docs.aws.amazon.com/kms/latest/APIReference/Welcome.html
Question 7: Skipped
An Amazon VPC contains several Amazon EC2 instances. The instances need to make
API calls to Amazon DynamoDB. A solutions architect needs to ensure that the API
calls do not traverse the internet.
Create a new DynamoDB table that uses the endpoint
(Correct)
(Correct)
Create an ENI for the endpoint in each of the subnets of the VPC
Explanation
Amazon DynamoDB and Amazon S3 support gateway endpoints, not interface endpoints.
With a gateway endpoint you create the endpoint in the VPC, attach a policy allowing
access to the service, and then specify the route table to create a route table entry in.
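The sequence described above can be scripted with boto3 as in the sketch below; the VPC ID, route table ID, and Region are placeholders for this example.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint for DynamoDB; the route table entry is added for you
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                   # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],         # placeholder route table ID
)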
CORRECT: "Create a route table entry for the endpoint" is a correct answer.
INCORRECT: "Create a new DynamoDB table that uses the endpoint" is incorrect as it is not
necessary to create a new DynamoDB table.
INCORRECT: "Create an ENI for the endpoint in each of the subnets of the VPC" is incorrect
as an ENI is used by an interface endpoint, not a gateway endpoint.
INCORRECT: "Create a VPC peering connection between the VPC and DynamoDB" is
incorrect as you cannot create a VPC peering connection between a VPC and a public AWS
service as public services are outside of VPCs.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html
Question 8: Skipped
An organization has a large amount of data on Windows (SMB) file shares in their on-
premises data center. The organization would like to move data into Amazon S3. They
would like to automate the migration of data over their AWS Direct Connect link.
AWS Snowball
AWS DataSync
(Correct)
AWS CloudFormation
Explanation
AWS DataSync can be used to move large amounts of data online between on-premises
storage and Amazon S3 or Amazon Elastic File System (Amazon EFS). DataSync eliminates or
automatically handles many of these tasks, including scripting copy jobs, scheduling and
monitoring transfers, validating data, and optimizing network utilization. The source
datastore can be Server Message Block (SMB) file servers.
INCORRECT: "AWS Snowball" is incorrect. AWS Snowball is a hardware device that is used
for migrating data into AWS. The organization plan to use their Direct Connect link for
migrating data rather than sending it in via a physical device. Also, Snowball will not
automate the migration.
References:
https://aws.amazon.com/datasync/faqs/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
migration/aws-datasync/
Question 9: Skipped
A company's application is running on Amazon EC2 instances in a single Region. In the
event of a disaster, a solutions architect needs to ensure that the resources can also be
deployed to a second Region.
Which combination of actions should the solutions architect take to accomplish this?
(Select TWO.)
Detach a volume on an EC2 instance and copy it to an Amazon S3 bucket in the second
Region
Launch a new EC2 instance from an Amazon Machine Image (AMI) in the second Region
(Correct)
Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the second Region
for the destination
(Correct)
Launch a new EC2 instance in the second Region and copy a volume from Amazon S3 to the
new instance
Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3 and launch an
EC2 instance in the second Region using that EBS volume
Explanation
You can copy an Amazon Machine Image (AMI) within or across AWS Regions using the
AWS Management Console, the AWS Command Line Interface or SDKs, or the Amazon EC2
API, all of which support the CopyImage action.
Using the copied AMI, the solutions architect can then launch an identical instance in the
second Region.
Note: AMIs are stored on Amazon S3; however, you cannot view them in the S3
management console or work with them programmatically using the S3 API.
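A hedged boto3 sketch of the two actions (copy the AMI into the second Region, then launch from it there) is shown below; all Regions and IDs are placeholders.
import boto3

# Call CopyImage in the destination (second) Region
ec2_dr = boto3.client("ec2", region_name="us-west-2")
copy = ec2_dr.copy_image(
    Name="app-server-dr",
    SourceImageId="ami-0123456789abcdef0",   # AMI in the source Region (placeholder)
    SourceRegion="us-east-1",
)

# Launch an instance from the copied AMI in the second Region
ec2_dr.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)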
CORRECT: "Copy an Amazon Machine Image (AMI) of an EC2 instance and specify the
second Region for the destination" is a correct answer.
CORRECT: "Launch a new EC2 instance from an Amazon Machine Image (AMI) in the
second Region" is also a correct answer.
INCORRECT: "Launch a new EC2 instance in the second Region and copy a volume from
Amazon S3 to the new instance" is incorrect. You cannot create an EBS volume directly from
Amazon S3.
INCORRECT: "Copy an Amazon Elastic Block Store (Amazon EBS) volume from Amazon S3
and launch an EC2 instance in the second Region using that EBS volume" is incorrect. You
cannot create an EBS volume directly from Amazon S3.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/CopyingAMIs.html
Use CloudFormation with securely configured templates
(Correct)
Explanation
CORRECT: "Use CloudFormation with securely configured templates" is the correct answer.
INCORRECT: "Remove the ability for staff to deploy applications" is incorrect. Removing the
ability of staff to deploy resources does not help you to deploy applications securely as it
does not solve the problem of how to do this in an operationally efficient manner.
INCORRECT: "Manually check all application configurations before deployment" is
incorrect. Manual checking of all application configurations before deployment is not
operationally efficient.
References:
https://aws.amazon.com/cloudformation/resources/templates/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
management-tools/aws-cloudformation/
Elastic Fabric Adapter (EFA)
(Correct)
Elastic IP Address
Explanation
An Elastic Fabric Adapter (EFA) is an AWS Elastic Network Adapter (ENA) with added
capabilities. The EFA lets you apply the scale, flexibility, and elasticity of the AWS Cloud to
tightly coupled HPC applications. It is ideal for tightly coupled applications as it supports the
Message Passing Interface (MPI).
INCORRECT: "Elastic Network Interface (ENI)" is incorrect. The ENI is a basic type of adapter
and is not the best choice for this use case.
INCORRECT: "Elastic Network Adapter (ENA)" is incorrect. The ENA, which provides
Enhanced Networking, does provide high bandwidth and low inter-instance latency but it
does not support the features for a tightly-coupled app that the EFA does.
References:
https://aws.amazon.com/blogs/aws/now-available-elastic-fabric-adapter-efa-for-tightly-
coupled-hpc-workloads/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/amazon-ec2/
Use Auto Scaling to scale down the instance outside of business hours. Scale up the
instance when required.
Hibernate the instance outside business hours. Start the instance again when required.
(Correct)
Terminate the instance outside business hours. Recover the instance again when required.
Stop the instance outside business hours. Start the instance again when required.
Explanation
When you hibernate an instance, Amazon EC2 signals the operating system to perform
hibernation (suspend-to-disk). Hibernation saves the contents from the instance memory
(RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume. Amazon EC2 persists
the instance's EBS root volume and any attached EBS data volumes. When you start your
instance:
- The processes that were previously running on the instance are resumed
- Previously attached data volumes are reattached and the instance retains its instance ID
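Hibernation must be enabled when the instance is launched and is then triggered in place of a plain stop; a boto3 sketch follows (the AMI and instance IDs are placeholders, and the root volume must be an encrypted EBS volume).
import boto3

ec2 = boto3.client("ec2")

# Hibernation support is configured at launch time
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",           # placeholder AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},   # requires an encrypted EBS root volume
)

# Outside business hours: hibernate instead of a normal stop
ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], Hibernate=True)

# When required again: a normal start resumes the saved RAM state
ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])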
CORRECT: "Hibernate the instance outside business hours. Start the instance again when
required" is the correct answer.
INCORRECT: "Stop the instance outside business hours. Start the instance again when
required" is incorrect. When an instance is stopped the operating system is shut down and
the contents of memory will be lost.
INCORRECT: "Use Auto Scaling to scale down the instance outside of business hours. Scale
out the instance when required" is incorrect. Auto Scaling scales does not scale up and
down, it scales in by terminating instances and out by launching instances. When scaling out
new instances are launched and no state will be available from terminated instances.
INCORRECT: "Terminate the instance outside business hours. Recover the instance again
when required" is incorrect. You cannot recover terminated instances, you can recover
instances that have become impaired in some circumstances.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html
Use a network ACL to block the IP address ranges associated with the specific countries
Use Amazon CloudFront to serve the application and deny access to blocked countries
(Correct)
Modify the ALB security group to deny incoming traffic from blocked countries
Modify the security group for EC2 instances to deny incoming traffic from blocked countries
Explanation
When a user requests your content, CloudFront typically serves the requested content
regardless of where the user is located. If you need to prevent users in specific countries
from accessing your content, you can use the CloudFront geo restriction feature to do one
of the following:
Allow your users to access your content only if they're in one of the countries on a whitelist
of approved countries.
Prevent your users from accessing your content if they're in one of the countries on a
blacklist of banned countries.
For example, if a request comes from a country where, for copyright reasons, you are not
authorized to distribute your content, you can use CloudFront geo restriction to block the
request.
This is the easiest and most effective way to implement a geographic restriction for the
delivery of content.
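In the CloudFront API, geo restriction is expressed as a whitelist or blacklist of country codes inside the DistributionConfig. The fragment below (Python dict, country codes illustrative only) shows the shape of that setting as it would be passed to create_distribution or update_distribution via boto3.
# Fragment of a CloudFront DistributionConfig; the country codes are examples only.
geo_restriction_fragment = {
    "Restrictions": {
        "GeoRestriction": {
            "RestrictionType": "blacklist",   # or "whitelist"
            "Quantity": 2,
            "Items": ["CU", "KP"],            # ISO 3166-1 alpha-2 country codes
        }
    }
}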
CORRECT: "Use Amazon CloudFront to serve the application and deny access to blocked
countries" is the correct answer.
INCORRECT: "Use a Network ACL to block the IP address ranges associated with the
specific countries" is incorrect as this would be extremely difficult to manage.
INCORRECT: "Modify the ALB security group to deny incoming traffic from blocked
countries" is incorrect as security groups cannot block traffic by country.
INCORRECT: "Modify the security group for EC2 instances to deny incoming traffic from
blocked countries" is incorrect as security groups cannot block traffic by country.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/
georestrictions.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
networking-and-content-delivery/amazon-cloudfront/
Generate a pre-signed URL and distribute it to the consumers
(Correct)
Configure an allow rule in the Security Group for the IP addresses of the consumers
Explanation
All objects by default are private. Only the object owner has permission to access these
objects. However, the object owner can optionally share objects with others by creating a
presigned URL, using their own security credentials, to grant time-limited permission to
download the objects.
When you create a presigned URL for your object, you must provide your security
credentials, specify a bucket name, an object key, specify the HTTP method (GET to
download the object) and expiration date and time. The presigned URLs are valid only for
the specified duration.
Anyone who receives the presigned URL can then access the object. For example, if you
have a video in your bucket and both the bucket and the object are private, you can share
the video with others by generating a presigned URL.
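Generating a presigned URL with boto3 looks like the sketch below; the bucket and object key are placeholders for this example.
import boto3

s3 = boto3.client("s3")

# Time-limited GET URL for a private object; valid for one hour
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "videos/launch.mp4"},  # placeholders
    ExpiresIn=3600,
)
print(url)  # distribute this URL to the authorized consumers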
CORRECT: "Generate a pre-signed URL and distribute it to the consumers" is the correct
answer.
INCORRECT: "Enable public read access for the S3 bucket" is incorrect. Enabling public read
access does not restrict the content to authorized consumers.
INCORRECT: "Use CloudFront to distribute the files using authorization hash tags" is
incorrect. You cannot use CloudFront as hash tags are not a CloudFront authentication
mechanism.
INCORRECT: "Configure an allow rule in the Security Group for the IP addresses of the
consumers" is incorrect. Security Groups do not apply to S3 buckets.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
Question 15: Skipped
A company needs to store data for 5 years. The company will need to have immediate
and highly available access to the data at any point in time but will not require
frequent access.
Which lifecycle action should be taken to meet the requirements while reducing costs?
Transition objects from Amazon S3 Standard to Amazon S3 Standard-Infrequent Access (S3
Standard-IA)
(Correct)
Transition objects from Amazon S3 Standard to Amazon S3 One Zone-Infrequent Access (S3
One Zone-IA)
Explanation
This is a good use case for S3 Standard-IA which provides immediate access and 99.9%
availability.
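A lifecycle rule implementing this transition might look like the boto3 sketch below; the bucket name and the 30-day transition point are assumptions for the example (only the 5-year retention comes from the question).
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",                     # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "standard-ia-after-30-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},           # apply to all objects
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
                "Expiration": {"Days": 1825},       # optional clean-up after ~5 years
            }
        ]
    },
)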
INCORRECT: "Transition objects from Amazon S3 Standard to the GLACIER storage class" is
incorrect. The Glacier storage class does not provide immediate access. You can retrieve
within hours or minutes, but you do need to submit a job to retrieve the data.
INCORRECT: "Transition objects to expire after 5 years" is incorrect. Expiring the objects
after 5 years is going to delete them at the end of the 5-year period, but you still need to
work out the best storage solution to use before then, and this answer does not provide a
solution.
References:
https://aws.amazon.com/s3/storage-classes/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
Reserved Instances
(Correct)
On-Demand Instances
Explanation
Spot Instances with a defined duration (also known as Spot blocks) are designed not to be
interrupted and will run continuously for the duration you select. This makes them ideal for
jobs that take a finite time to complete, such as batch processing, encoding and rendering,
modeling and analysis, and continuous integration.
Spot Block is the best solution for this job as it only runs once a quarter for 5 days and
therefore reserved instances would not be beneficial. Note that the maximum duration of a
Spot Block is 6 hours.
INCORRECT: "Scheduled Reserved Instances" is incorrect. These reserved instances are ideal
for workloads that run for a certain number of hours each day, but not for just 5 days per
quarter.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html#fixed-
duration-spot-instances
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/amazon-ec2/
Attach an IAM policy to the bucket
(Correct)
(Correct)
Explanation
None of the options present a good solution for specifying the permissions required to write
and modify objects, so that requirement needs to be taken care of separately. The other
requirements are to prevent accidental deletion and to ensure that all versions of the
document are available.
The two solutions for these requirements are versioning and MFA delete. Versioning will
retain a copy of each version of the document and multi-factor authentication delete (MFA
delete) will prevent any accidental deletion as you need to supply a second factor when
attempting a delete.
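Both settings are applied through the bucket versioning configuration; a boto3 sketch is below. Note that MFA delete can only be enabled by the root account, and the bucket name, MFA device serial, and token shown are placeholders.
import boto3

s3 = boto3.client("s3")

# Enable versioning and MFA delete in one call. The MFA parameter is
# "<device-serial-arn> <current-token>" from the root account's MFA device.
s3.put_bucket_versioning(
    Bucket="my-document-bucket",                                             # placeholder
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",     # placeholder
)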
INCORRECT: "Set read-only permissions on the bucket" is incorrect as this will also prevent
any writing to the bucket which is not desired.
INCORRECT: "Attach an IAM policy to the bucket" is incorrect as users need to modify
documents which will also allow delete. Therefore, a method must be implemented to just
control deletes.
INCORRECT: "Encrypt the bucket using AWS SSE-S3" is incorrect as encryption doesn’t stop
you from deleting an object.References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
How can you assign these permissions only to the specific ECS task that is running the
application?
Create an IAM policy with permissions to DynamoDB and attach it to the container instance
Use a security group to allow outbound connections to DynamoDB and assign it to the
container instance
Create an IAM policy with permissions to DynamoDB and assign it to a task using
the taskRoleArn parameter
(Correct)
Explanation
To specify permissions for a specific task on Amazon ECS you should use IAM Roles for
Tasks. The permissions policy can be applied to tasks when creating the task definition, or
by using an IAM task role override using the AWS CLI or SDKs. The taskRoleArn parameter is
used to specify the policy.
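Registering a task definition with a task role might look like the boto3 sketch below; the family name, role ARN, and container image are placeholders for this example.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="orders-app",                                              # placeholder
    taskRoleArn="arn:aws:iam::123456789012:role/OrdersDynamoDBRole",  # role granting DynamoDB access
    containerDefinitions=[
        {
            "name": "orders",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders:latest",  # placeholder
            "memory": 512,
            "essential": True,
        }
    ],
)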
CORRECT: "Create an IAM policy with permissions to DynamoDB and assign It to a task
using the taskRoleArn parameter" is the correct answer.
INCORRECT: "Create an IAM policy with permissions to DynamoDB and attach it to the
container instance" is incorrect. You should not apply the permissions to the container
instance as they will then apply to all tasks running on the instance as well as the instance
itself.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/amazon-ecs/
Create an Amazon EFS file system and connect the backup applications using the NFS
protocol.
Connect the backup applications to an AWS Storage Gateway using an iSCSI-virtual tape
library (VTL).
(Correct)
Create an Amazon EFS file system and connect the backup applications using the iSCSI
protocol.
Connect the backup applications to an AWS Storage Gateway using the iSCSI protocol.
Explanation
The AWS Storage Gateway Tape Gateway enables you to replace using physical tapes on
premises with virtual tapes in AWS without changing existing backup workflows. Tape
Gateway emulates physical tape libraries, removes the cost and complexity of managing
physical tape infrastructure, and provides more durability than physical tapes.
CORRECT: "Connect the backup applications to an AWS Storage Gateway using an iSCSI-
virtual tape library (VTL)" is the correct answer.
INCORRECT: "Create an Amazon EFS file system and connect the backup applications using
the NFS protocol" is incorrect. The NFS protocol is used by AWS Storage Gateway File
Gateways but these do not provide virtual tape functionality that is suitable for replacing the
existing backup infrastructure.
INCORRECT: "Create an Amazon EFS file system and connect the backup applications using
the iSCSI protocol" is incorrect. The NFS protocol is used by AWS Storage Gateway File
Gateways but these do not provide virtual tape functionality that is suitable for replacing the
existing backup infrastructure.
INCORRECT: "Connect the backup applications to an AWS Storage Gateway using the NFS
protocol" is incorrect. The iSCSI protocol is used by AWS Storage Gateway Volume
Gateways but these do not provide virtual tape functionality that is suitable for replacing the
existing backup infrastructure.
References:
https://aws.amazon.com/storagegateway/vtl/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/aws-storage-gateway/
Question 20: Skipped
An organization wants to share regular updates about their charitable work using static
webpages. The pages are expected to generate a large amount of views from around the
world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to
design an efficient and effective solution.
Use Amazon CloudFront with the S3 bucket as its origin
(Correct)
Explanation
Amazon CloudFront can be used to cache the files in edge locations around the world and
this will improve the performance of the webpages.
To serve a static website hosted on Amazon S3, you can deploy a CloudFront distribution
using one of these configurations:
Using a REST API endpoint as the origin with access restricted by an origin access identity
(OAI)
Using a website endpoint as the origin with anonymous (public) access allowed
Using a website endpoint as the origin with access restricted by a Referer header
CORRECT: "Use Amazon CloudFront with the S3 bucket as its origin" is the correct answer.
INCORRECT: "Generate presigned URLs for the files" is incorrect as this is used to restrict
access which is not a requirement.
INCORRECT: "Use cross-Region replication to all Regions" is incorrect as this does not
provide a mechanism for directing users to the closest copy of the static webpages.
INCORRECT: "Use the geoproximity feature of Amazon Route 53" is incorrect as this does
not include a solution for having multiple copies of the data in different geographic
lcoations.
References:
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-
website/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
networking-and-content-delivery/amazon-cloudfront/
What can a Solutions Architect use to collect page clicks for the website and process
them sequentially for each user?
Amazon Kinesis Data Streams
(Correct)
Explanation
This is a good use case for Amazon Kinesis streams as it is able to scale to the required load,
allow multiple applications to access the records and process them sequentially.
Amazon Kinesis Data Streams enables real-time processing of streaming big data. It
provides ordering of records, as well as the ability to read and/or replay records in the same
order to multiple Amazon Kinesis Applications.
Amazon Kinesis Data Streams allows writes of up to 1 MiB of data per second or 1,000
records per second per shard. There is no limit on the number of shards, so you can easily
scale Kinesis Data Streams to accept 50,000 records per second.
The Amazon Kinesis Client Library (KCL) delivers all records for a given partition key to the
same record processor, making it easier to build multiple applications reading from the
same Amazon Kinesis data stream.
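A producer sketch using boto3 is shown below; the stream name is a placeholder, and using the user ID as the partition key is what keeps each user's clicks in order on a single shard.
import json
import boto3

kinesis = boto3.client("kinesis")

def record_click(user_id, page):
    # Records sharing a partition key land on the same shard, so the KCL can
    # process each user's clicks sequentially.
    kinesis.put_record(
        StreamName="page-clicks",                 # placeholder stream name
        Data=json.dumps({"user": user_id, "page": page}).encode("utf-8"),
        PartitionKey=user_id,
    )

record_click("user-42", "/checkout")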
INCORRECT: "Amazon SQS FIFO queue" is incorrect as SQS is not best suited to streaming
data and Kinesis is a better solution.
INCORRECT: "AWS CloudTrail trail" is incorrect. CloudTrail is used for auditing and is not
useful here.
INCORRECT: "Amazon SQS standard queue" is incorrect. Standard SQS queues do not
ensure that messages are processed sequentially and FIFO SQS queues do not scale to the
required number of transactions a second.
References:
https://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html
https://aws.amazon.com/kinesis/data-streams/faqs/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
analytics/amazon-kinesis/
Create an Amazon EC2 instance farm behind an ELB to store the data in Amazon EBS Cold
HDD volumes
Create an Amazon SQS queue, and have the machines write to the queue
Create an Auto Scaling Group of Amazon EC2 instances behind ELBs to write data into
Amazon RDS
Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon S3
(Correct)
Explanation
Kinesis Data Firehose is the easiest way to load streaming data into data stores and analytics
tools. It captures, transforms, and loads streaming data and you can deliver the data to
“destinations” including Amazon S3 buckets for later analysis.
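Writing a record to a Firehose delivery stream that targets S3 is a single API call; a boto3 sketch with a placeholder stream name and payload follows.
import json
import boto3

firehose = boto3.client("firehose")

# Firehose buffers the records and delivers them to the configured S3 bucket
firehose.put_record(
    DeliveryStreamName="machine-telemetry",   # placeholder delivery stream
    Record={"Data": (json.dumps({"sensor": "a1", "temp": 21.5}) + "\n").encode("utf-8")},
)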
CORRECT: "Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon
S3" is the correct answer.
INCORRECT: "Create an Amazon EC2 instance farm behind an ELB to store the data in
Amazon EBS Cold HDD volumes" is incorrect. Storing the data in EBS wold be expensive and
as EBS volumes cannot be shared by multiple instances you would have a bottleneck of a
single EC2 instance writing the data.
INCORRECT: "Create an Amazon SQS queue, and have the machines write to the queue" is
incorrect. Using an SQS queue to store the data is not possible as the data needs to be
stored long-term and SQS queues have a maximum retention time of 14 days.
INCORRECT: "Create an Auto Scaling Group of Amazon EC2 instances behind ELBs to write
data into Amazon RDS" is incorrect. Writing data into RDS via a series of EC2 instances and
a load balancer is more complex and more expensive. RDS is also not an ideal data store for
this data.
References:
https://aws.amazon.com/kinesis/data-firehose/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
analytics/amazon-kinesis/
Which type of Amazon EC2 instances should be used to reduce the cost of the system?
Spot Instances
On-Demand Instances
Scheduled Reserved Instances
(Correct)
Explanation
Scheduled Reserved Instances (Scheduled Instances) enable you to purchase capacity
reservations that recur on a daily, weekly, or monthly basis, with a specified start time and
duration, for a one-year term. You reserve the capacity in advance, so that you know it is
available when you need it. You pay for the time that the instances are scheduled, even if
you do not use them.
Scheduled Instances are a good choice for workloads that do not run continuously, but do
run on a regular schedule. For example, you can use Scheduled Instances for an application
that runs during business hours or for batch processing that runs at the end of the week.
INCORRECT: "Standard Reserved Instances" is incorrect as the workload only runs for 4
hours a day this would be more expensive.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-scheduled-instances.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/amazon-ec2/
Modify the configuration of AWS WAF to add an IP match condition to block the malicious
IP address
(Correct)
Modify the security groups for the EC2 instances in the target groups behind the ALB to
deny the malicious IP address.
Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious
IP address
Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny
the malicious IP address
Explanation
A new version of the AWS Web Application Firewall was released in November 2019. With
AWS WAF classic you create “IP match conditions”, whereas with AWS WAF (new version)
you create “IP set match statements”. Look out for wording on the exam.
The IP match condition / IP set match statement inspects the IP address of a web request's
origin against a set of IP addresses and address ranges. Use this to allow or block web
requests based on the IP addresses that the requests originate from.
AWS WAF supports all IPv4 and IPv6 address ranges. An IP set can hold up to 10,000 IP
addresses or IP address ranges to check.
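With the newer AWS WAF (WAFv2) API, the equivalent of an IP match condition is an IP set that a block rule in the web ACL references; creating the IP set with boto3 might look like the sketch below (the name, scope, and CIDR are placeholders).
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Create an IP set holding the malicious address; a rule in the web ACL then
# references this set with a block action. Scope is REGIONAL for an ALB
# (use CLOUDFRONT, in us-east-1, for a CloudFront distribution).
wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="REGIONAL",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.25/32"],   # placeholder malicious IP
)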
CORRECT: "Modify the configuration of AWS WAF to add an IP match condition to block
the malicious IP address" is the correct answer.
INCORRECT: "Modify the network ACL on the CloudFront distribution to add a deny rule for
the malicious IP address" is incorrect as CloudFront does not sit within a subnet so network
ACLs do not apply to it.
INCORRECT: "Modify the network ACL for the EC2 instances in the target groups behind
the ALB to deny the malicious IP address" is incorrect as the source IP addresses of the data
in the EC2 instances’ subnets will be the ELB IP addresses.
INCORRECT: "Modify the security groups for the EC2 instances in the target groups behind
the ALB to deny the malicious IP address." is incorrect as you cannot create deny rules with
security groups.
References:
https://docs.aws.amazon.com/waf/latest/developerguide/waf-rule-statement-type-ipset-
match.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/security-
identity-compliance/aws-waf-and-shield/
(Correct)
Configure an Amazon API Gateway API in front of an Auto Scaling group to deploy
instances to multiple Availability Zones
Explanation
The Amazon EC2-based application must be highly available and elastically scalable. Auto
Scaling can provide the elasticity by dynamically launching and terminating instances based
on demand. This can take place across availability zones for high availability.
Incoming connections can be distributed to the instances by using an Application Load
Balancer (ALB).
INCORRECT: "Configure an Amazon API Gateway API in front of an Auto Scaling group to
deploy instances to multiple Availability Zones" is incorrect as API gateway is not used for
load balancing connections to Amazon EC2 instances.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-
scaling.html
https://aws.amazon.com/elasticloadbalancing/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/aws-auto-scaling/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/elastic-load-balancing/
Use Amazon Kinesis Data Streams for real-time events with a shard for each device. Use
Amazon Kinesis Data Firehose to save data to Amazon EBS
Use an Amazon SQS FIFO queue for real-time events with one queue for each device.
Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS
Use an Amazon SQS standard queue for real-time events with one queue for each device.
Trigger an AWS Lambda function from the SQS queue to save data to Amazon S3
Use Amazon Kinesis Data Streams for real-time events with a partition key for each device.
Use Amazon Kinesis Data Firehose to save data to Amazon S3
(Correct)
Explanation
Amazon Kinesis Data Streams collect and process data in real time. A Kinesis data stream is
a set of shards. Each shard has a sequence of data records. Each data record has a sequence
number that is assigned by Kinesis Data Streams. A shard is a uniquely identified sequence
of data records in a stream.
A partition key is used to group data by shard within a stream. Kinesis Data Streams
segregates the data records belonging to a stream into multiple shards. It uses the partition
key that is associated with each data record to determine which shard a given data record
belongs to.
For this scenario, the solutions architect can use a partition key for each device. This will
ensure the records for that device are grouped by shard and the shard will ensure ordering.
Amazon S3 is a valid destination for saving the data records.
CORRECT: "Use Amazon Kinesis Data Streams for real-time events with a partition key for
each device. Use Amazon Kinesis Data Firehose to save data to Amazon S3" is the correct
answer.
INCORRECT: "Use Amazon Kinesis Data Streams for real-time events with a shard for each
device. Use Amazon Kinesis Data Firehose to save data to Amazon EBS" is incorrect as you
cannot save data to EBS from Kinesis.
INCORRECT: "Use an Amazon SQS FIFO queue for real-time events with one queue for each
device. Trigger an AWS Lambda function for the SQS queue to save data to Amazon EFS" is
incorrect as SQS is not the most efficient service for streaming, real time data.
INCORRECT: "Use an Amazon SQS standard queue for real-time events with one queue for
each device. Trigger an AWS Lambda function from the SQS queue to save data to Amazon
S3" is incorrect as SQS is not the most efficient service for streaming, real time data.
References:
https://docs.aws.amazon.com/streams/latest/dev/key-concepts.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
analytics/amazon-kinesis/
Create Amazon Route 53 records with a geolocation routing policy
(Correct)
Explanation
To protect the distribution rights of the content and ensure that users are directed to the
appropriate AWS Region based on the location of the user, the geolocation routing policy
can be used with Amazon Route 53.
Geolocation routing lets you choose the resources that serve your traffic based on the
geographic location of your users, meaning the location that DNS queries originate from.
When you use geolocation routing, you can localize your content and present some or all of
your website in the language of your users. You can also use geolocation routing to restrict
distribution of content to only the locations in which you have distribution rights.
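A geolocation record set created with boto3 might look like the sketch below; the hosted zone ID, domain name, and IP address are placeholders for this example.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",             # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "media.example.com",
                    "Type": "A",
                    "SetIdentifier": "europe-viewers",
                    "GeoLocation": {"ContinentCode": "EU"},   # serve EU users from this record
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ]
    },
)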
CORRECT: "Create Amazon Route 53 records with a geolocation routing policy" is the
correct answer.
INCORRECT: "Configure Amazon CloudFront with multiple origins and AWS WAF" is
incorrect. AWS WAF protects against web exploits but will not assist with directing users to
different content (from different origins).
References:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
networking-and-content-delivery/amazon-route-53/
Use Scheduled Reserved Instances for the development environment
(Correct)
Use Reserved Instances for the production environment
(Correct)
Explanation
Scheduled Instances are a good choice for workloads that do not run continuously but do
run on a regular schedule. This is ideal for the development environment.
Reserved instances are a good choice for workloads that run continuously. This is a good
option for the production environment.
CORRECT: "Use Reserved instances for the production environment" is also a correct
answer.
INCORRECT: "Use Spot instances for the development environment" is incorrect. Spot
Instances are a cost-effective choice if you can be flexible about when your applications run
and if your applications can be interrupted. Spot instances are not suitable for the
development environment as important work may be interrupted.
References:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instance-purchasing-
options.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/amazon-ec2/
Amazon MQ
AWS Config
Amazon SNS
(Correct)
Explanation
You can use a Lambda function to process Amazon Simple Notification Service notifications.
Amazon SNS supports Lambda functions as a target for messages sent to a topic. This
solution decouples the Amazon EC2 application from Lambda and ensures the Lambda
function is invoked.
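Wiring an SNS topic to a Lambda function requires a subscription plus permission for SNS to invoke the function; a boto3 sketch with placeholder ARNs follows.
import boto3

sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:123456789012:app-events"                    # placeholder
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:process-event"  # placeholder

# Subscribe the Lambda function to the topic
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)

# Allow SNS to invoke the function
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="sns-invoke",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)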
INCORRECT: "AWS Config" is incorrect. AWS Config is a service that is used for continuous
compliance, not application decoupling.
INCORRECT: "Amazon MQ" is incorrect. Amazon MQ is similar to SQS but is used for
existing applications that are being migrated into AWS. SQS should be used for new
applications being created in the cloud.
INCORRECT: "AWS Step Functions" is incorrect. AWS Step Functions is a workflow service. It
is not the best solution for this scenario.
References:
https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html
https://aws.amazon.com/sns/features/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/aws-lambda/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
application-integration/amazon-sns/
Which architecture offers the highest availability and low operational complexity?
Deploy Amazon MQ with active/standby brokers configured across two Availability Zones.
Launch an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS
for MySQL with Multi-AZ enabled.
Deploy a second Active MQ server to another Availability Zone. Launch an additional
consumer EC2 instance in another Availability Zone. Use MySQL database replication to
another Availability Zone.
Deploy Amazon MQ with active/standby brokers configured across two Availability Zones.
Create an Auto Scaling group for the consumer EC2 instances across two Availability Zones.
Use an Amazon RDS MySQL database with Multi-AZ enabled.
(Correct)
Deploy Amazon MQ with active/standby brokers configured across two Availability Zones.
Launch an additional consumer EC2 instance in another Availability Zone. Use MySQL
database replication to another Availability Zone.
Explanation
The correct answer offers the highest availability as it includes Amazon MQ active/standby
brokers across two AZs, an Auto Scaling group across two AZs, and a Multi-AZ Amazon RDS
MySQL database deployment.
This architecture not only offers the highest availability but is also operationally simple, as it
maximizes the usage of managed services.
References:
https://aws.amazon.com/architecture/well-architected/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
application-integration/amazon-mq/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/aws-auto-scaling/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
database/amazon-rds/
Encrypt a snapshot from the master DB instance, create a new encrypted master DB
instance, and then create an encrypted cross-region Read Replica
(Correct)
Encrypt a snapshot from the master DB instance, create an encrypted cross-region Read
Replica from the snapshot
Enable encryption using Key Management Service (KMS) when creating the cross-region
Read Replica
Enable encryption on the master DB instance, then create an encrypted cross-region Read
Replica
Explanation
You cannot create an encrypted Read Replica from an unencrypted master DB instance. You
also cannot enable encryption after launch time for the master DB instance. Therefore, you
must create a new master DB by taking a snapshot of the existing DB, encrypting it, and
then creating the new DB from the snapshot. You can then create the encrypted cross-
region Read Replica of the master DB.
CORRECT: "Encrypt a snapshot from the master DB instance, create a new encrypted master
DB instance, and then create an encrypted cross-region Read Replica" is the correct answer.
INCORRECT: "Enable encryption using Key Management Service (KMS) when creating the
cross-region Read Replica" is incorrect. All other options will not work due to the limitations
explained above.
INCORRECT: "Encrypt a snapshot from the master DB instance, create an encrypted cross-
region Read Replica from the snapshot" is incorrect. All other options will not work due to
the limitations explained above.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
database/amazon-rds/
Create an Amazon CloudFront distribution for the site and redirect user traffic to the
distribution
(Correct)
Re-deploy the application in a new VPC that is closer to the users making the requests
Store the contents on Amazon EFS instead of the EC2 root volume
Implement Amazon Redshift to create a repository of the content closer to the users
Explanation
This is a good use case for CloudFront. CloudFront is a content delivery network (CDN) that
caches content closer to users. You can cache the static content on CloudFront using the
EC2 instances as origins for the content. This will improve performance (as the content is
closer to the users) and reduce the need for the ASG to scale (as you don’t need the
processing power of the EC2 instances to serve the static content).
CORRECT: "Create an Amazon CloudFront distribution for the site and redirect user traffic
to the distribution" is the correct answer.
INCORRECT: "Store the contents on Amazon EFS instead of the EC2 root volume" is
incorrect. Using EFS instead of the EC2 root volume does not solve either problem.
INCORRECT: "Re-deploy the application in a new VPC that is closer to the users making the
requests" is incorrect. Re-deploying the application in a VPC closer to the users may reduce
latency (and therefore improve performance), but it doesn’t solve the problem of reducing
the need for the ASG to scale.
References:
https://aws.amazon.com/caching/cdn/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
networking-and-content-delivery/amazon-cloudfront/
Amazon Aurora
Amazon Athena
(Correct)
Explanation
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon
S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and
you pay only for the queries that you run – this satisfies the requirement to minimize
infrastructure costs for infrequent queries.
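Running an ad-hoc query against data in S3 with Athena via boto3 might look like the sketch below; the database, table, and results location are placeholders for this example.
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",   # placeholder table
    QueryExecutionContext={"Database": "analytics"},                          # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},        # placeholder bucket
)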
CORRECT: "Amazon Athena" is the correct answer.
INCORRECT: "Amazon Aurora" is incorrect. Amazon RDS and Aurora are not suitable
solutions for analyzing datasets on S3 – these are both relational databases typically used
for transactional (not analytical) workloads.
INCORRECT: "Amazon RDS for MySQL" is incorrect as per the previous explanation.
References:
https://docs.aws.amazon.com/athena/latest/ug/what-is.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
analytics/amazon-athena/
Kinesis Data Streams and Kinesis Data Analytics
(Correct)
Explanation
Kinesis Data Streams enables you to build custom applications that process or analyze
streaming data for specialized needs. The diagram below shows the architecture of a Kinesis
Data Streams application:
Amazon Kinesis Data Analytics is the easiest way to process and analyze real-time,
streaming data. Kinesis Data Analytics can use standard SQL queries to process Kinesis data
streams and can ingest data from Kinesis Streams and Kinesis Firehose.
CORRECT: "Kinesis Data Streams and Kinesis Data Analytics" is the correct answer.
INCORRECT: "DynamoDB and EMR" is incorrect. DynamoDB is a NoSQL database that can
be used for storing data from a stream but cannot be used to process or analyze the data or
to query it with SQL queries. Elastic Map Reduce (EMR) is a hosted Hadoop framework and
is not used for analytics on streaming data.
INCORRECT: "Kinesis Data Streams and Kinesis Firehose" is incorrect. Firehose cannot be
used for running SQL queries.
References:
https://aws.amazon.com/kinesis/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
analytics/amazon-kinesis/
Configure an Auto Scaling group to add or remove instances in multiple Availability Zones
automatically
(Correct)
Increase the number of instances and use smaller EC2 instance types
Configure a Network Load Balancer in front of the EC2 instances
(Correct)
Configure an Auto Scaling group to add or remove instances in the Availability Zone
automatically
Explanation
The solutions architect must enable high availability for the architecture and ensure it is
cost-effective. To enable high availability an Amazon EC2 Auto Scaling group should be
created to add and remove instances across multiple availability zones.
In order to distribute the traffic to the instances the architecture should use a Network Load
Balancer which operates at Layer 4. This architecture will also be cost-effective as the Auto
Scaling group will ensure the right number of instances are running based on demand.
CORRECT: "Configure a Network Load Balancer in front of the EC2 instances" is a correct
answer.
CORRECT: "Configure an Auto Scaling group to add or remove instances in multiple
Availability Zones automatically" is also a correct answer.
INCORRECT: "Increase the number of instances and use smaller EC2 instance types" is
incorrect as this is not the most cost-effective option. Auto Scaling should be used to
maintain the right number of active instances.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/amazon-ec2/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/elastic-load-balancing/
(Correct)
(Correct)
Explanation
Multi-factor authentication (MFA) delete adds an additional step before an object can be
deleted from a versioning-enabled bucket.
With MFA delete the bucket owner must include the x-amz-mfa request header in requests
to permanently delete an object version or change the versioning state of the bucket.
INCORRECT: "Create a bucket policy on the S3 bucket" is incorrect. A bucket policy is not
required to enable MFA delete.
INCORRECT: "Create a lifecycle policy for the objects in the S3 bucket" is incorrect. A
lifecycle policy will move data to another storage class but does not protect against
deletion.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
Question 37: Skipped
A new application is to be published in multiple regions around the world. The
Architect needs to ensure only 2 IP addresses need to be whitelisted. The solution
should intelligently route traffic for lowest latency and provide fast regional failover.
Launch EC2 instances into multiple regions behind an ALB and use a Route 53 failover
routing policy
Launch EC2 instances into multiple regions behind an NLB and use AWS Global Accelerator
(Correct)
Launch EC2 instances into multiple regions behind an ALB and use Amazon CloudFront with
a pair of static IP addresses
Launch EC2 instances into multiple regions behind an NLB with a static IP address
Explanation
AWS Global Accelerator uses the vast, congestion-free AWS global network to route TCP
and UDP traffic to a healthy application endpoint in the closest AWS Region to the user.
This means it will intelligently route traffic to the closest point of presence (reducing
latency). Seamless failover is ensured as AWS Global Accelerator uses anycast IP addresses,
which means the IP addresses do not change when failing over between regions, so there
are no issues with client caches having incorrect entries that need to expire.
CORRECT: "Launch EC2 instances into multiple regions behind an NLB and use AWS Global
Accelerator" is the correct answer.
INCORRECT: "Launch EC2 instances into multiple regions behind an NLB with a static IP
address" is incorrect. An NLB with a static IP is a workable solution as you could configure a
primary and secondary address in applications. However, this solution does not intelligently
route traffic for lowest latency.
INCORRECT: "Launch EC2 instances into multiple regions behind an ALB and use a Route
53 failover routing policy" is incorrect. A Route 53 failover routing policy uses a primary and
standby configuration. Therefore, it sends all traffic to the primary until it fails a health check
at which time it sends traffic to the secondary. This solution does not intelligently route
traffic for lowest latency.
INCORRECT: "Launch EC2 instances into multiple regions behind an ALB and use Amazon
CloudFront with a pair of static IP addresses" is incorrect. Amazon CloudFront cannot be
configured with “a pair of static IP addresses”.
References:
https://aws.amazon.com/global-accelerator/
https://aws.amazon.com/global-accelerator/faqs/
https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
networking-and-content-delivery/aws-global-accelerator/
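As a rough sketch of how the recommended design could be wired up with boto3: create an accelerator, a TCP listener, and one endpoint group per Region pointing at that Region's NLB. All names, Regions and ARNs below are placeholders, and the assumption that the Global Accelerator API is called from us-west-2 should be verified against current documentation.

import boto3

# Global Accelerator's control-plane API is assumed to be served from us-west-2,
# regardless of where the application endpoints actually run.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="example-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)["Listener"]

# One endpoint group per Region, each pointing at that Region's NLB (placeholder ARNs).
for region, nlb_arn in [
    ("us-east-1", "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/app1/abc"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/app1/def"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )

# The two static anycast IP addresses that would be whitelisted by the other party.
print(accelerator["IpSets"][0]["IpAddresses"])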
“Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote
resource”
You have been asked to resolve the problem, what is the most likely solution?
Enable CORS on the APIs resources using the selected methods under the API Gateway
(Correct)
Explanation
Cross-origin resource sharing (CORS) is a browser security feature that restricts cross-origin
HTTP requests that are initiated from scripts running in the browser. If your REST API's
resources receive non-simple cross-origin HTTP requests, you need to enable CORS support.
To support CORS, therefore, a REST API resource needs to implement an OPTIONS method
that can respond to the OPTIONS preflight request with at least the following response
headers mandated by the Fetch standard:
• Access-Control-Allow-Methods
• Access-Control-Allow-Headers
• Access-Control-Allow-Origin
CORRECT: "Enable CORS on the APIs resources using the selected methods under the API
Gateway" is the correct answer.
INCORRECT: "The IAM policy does not allow access to the API" is incorrect. IAM policies are
not used to control CORS and there is no ACL on the API to update.
INCORRECT: "The ACL on the API needs to be updated" is incorrect. There is no ACL on an
API.
INCORRECT: "The request is not secured with SSL/TLS" is incorrect. This error would display
whether using SSL/TLS or not.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
networking-and-content-delivery/amazon-api-gateway/
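To make the fix concrete, here is a minimal boto3 sketch that adds an OPTIONS method backed by a MOCK integration and returns the three headers listed above. The REST API ID, resource ID, stage name, allowed methods and header values are placeholders, not values from the scenario.

import boto3

apigw = boto3.client("apigateway")

rest_api_id = "a1b2c3d4e5"   # placeholder REST API ID
resource_id = "abc123"       # placeholder resource ID needing CORS

# 1. OPTIONS method with no auth, answered by a MOCK integration (no backend call).
apigw.put_method(restApiId=rest_api_id, resourceId=resource_id,
                 httpMethod="OPTIONS", authorizationType="NONE")
apigw.put_integration(restApiId=rest_api_id, resourceId=resource_id,
                      httpMethod="OPTIONS", type="MOCK",
                      requestTemplates={"application/json": '{"statusCode": 200}'})

# 2. Declare and populate the three headers mandated by the Fetch standard.
cors_headers = {
    "method.response.header.Access-Control-Allow-Methods": "'GET,POST,OPTIONS'",
    "method.response.header.Access-Control-Allow-Headers": "'Content-Type,Authorization'",
    "method.response.header.Access-Control-Allow-Origin": "'*'",
}
apigw.put_method_response(restApiId=rest_api_id, resourceId=resource_id,
                          httpMethod="OPTIONS", statusCode="200",
                          responseParameters={k: True for k in cors_headers})
apigw.put_integration_response(restApiId=rest_api_id, resourceId=resource_id,
                               httpMethod="OPTIONS", statusCode="200",
                               responseParameters=cors_headers)

# 3. Deploy so the change takes effect (stage name is a placeholder).
apigw.create_deployment(restApiId=rest_api_id, stageName="prod")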
Configure CloudFront to require users to access the files using a signed URL, create an
origin access identity (OAI) and restrict access to the files in the Amazon S3 bucket to the
OAI
(Correct)
Configure CloudFront to require users to access the files using signed cookies, create an
origin access identity (OAI) and instruct users to login with the OAI
Configure CloudFront to require users to access the files using signed cookies, and move
the files to an encrypted EBS volume
Configure CloudFront to require users to access the files using a signed URL, and configure
the S3 bucket as a website endpoint
Explanation
A signed URL includes additional information, for example, an expiration date and time, that
gives you more control over access to your content. You can also specify the IP address or
range of IP addresses of the users who can access your content.
If you use CloudFront signed URLs (or signed cookies) to limit access to files in your Amazon
S3 bucket, you may also want to prevent users from directly accessing your S3 files by using
Amazon S3 URLs. To achieve this you can create an origin access identity (OAI), which is a
special CloudFront user, and associate the OAI with your distribution.
You can then change the permissions either on your Amazon S3 bucket or on the files in
your bucket so that only the origin access identity has read permission (or read and
download permission).
CORRECT: "Configure CloudFront to require users to access the files using a signed URL,
create an origin access identity (OAI) and restrict access to the files in the Amazon S3 bucket
to the OAI" is the correct answer.
INCORRECT: "Configure CloudFront to require users to access the files using signed
cookies, create an origin access identity (OAI) and instruct users to login with the OAI" is
incorrect. Users cannot login with an OAI.
INCORRECT: "Configure CloudFront to require users to access the files using signed
cookies, and move the files to an encrypted EBS volume" is incorrect. You cannot use
CloudFront to pull data directly from an EBS volume.
INCORRECT: "Configure CloudFront to require users to access the files using a signed URL,
and configure the S3 bucket as a website endpoint" is incorrect. You cannot use CloudFront
and an OAI when your S3 bucket is configured as a website endpoint.
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-
restricting-access-to-s3.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
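For illustration, a minimal sketch of generating a CloudFront signed URL using botocore's CloudFrontSigner together with the cryptography library. The key-pair ID, private key path, distribution domain and object path are placeholders; CloudFront expects an RSA SHA-1 signature with PKCS#1 v1.5 padding.

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Private key matching the CloudFront public key / trusted key group (placeholder path).
    with open("private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # placeholder key-pair ID

# URL is valid for one hour; the S3 origin itself is locked down to the OAI.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/report.pdf",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(signed_url)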
Modify the Auto Scaling group cool-down timers
(Correct)
Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy
(Correct)
Modify the Auto Scaling group termination policy to terminate the oldest instance first
Modify the Auto Scaling group termination policy to terminate the newest instance first
Explanation
The cooldown period is a configurable setting for your Auto Scaling group that helps to
ensure it doesn't launch or terminate additional instances before the previous scaling
activity takes effect, so increasing it would help here. After the Auto Scaling group
dynamically scales using a simple scaling policy, it waits for the cooldown period to
complete before resuming scaling activities.
The CloudWatch Alarm Evaluation Period is the number of the most recent data points to
evaluate when determining alarm state. This would help as you can increase the number of
datapoints required to trigger an alarm.
CORRECT: "Modify the CloudWatch alarm period that triggers your Auto Scaling scale
down policy" is the correct answer.
CORRECT: "Modify the Auto Scaling group cool-down timers" is the correct answer.
INCORRECT: "Modify the Auto Scaling group termination policy to terminate the newest
instance first" is incorrect. The order in which Auto Scaling terminates instances is not the
issue here, the problem is that the workload is dynamic and Auto Scaling is constantly
reacting to change, and launching or terminating instances.
INCORRECT: "Modify the Auto Scaling group termination policy to terminate the oldest
instance first" is incorrect. As per the previous explanation, the order of termination is not
the issue here.
INCORRECT: "Modify the Auto Scaling policy to use scheduled scaling actions" is incorrect.
Using scheduled scaling actions may not be cost-effective and also affects elasticity as it is
less dynamic.
References:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/
AlarmThatSendsEmail.html#alarm-evaluation
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
compute/aws-auto-scaling/
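As a hedged example of both changes, the sketch below lengthens the group's default cooldown and recreates the scale-down policy and its alarm with a longer period and more evaluation periods, so brief dips no longer trigger a scale-in. The group name, policy name, thresholds and periods are placeholder assumptions.

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

group = "example-asg"  # placeholder Auto Scaling group name

# 1. Lengthen the default cooldown so a new scaling activity waits for the previous one.
autoscaling.update_auto_scaling_group(AutoScalingGroupName=group, DefaultCooldown=600)

# 2. Simple scale-in policy that removes one instance at a time.
policy_arn = autoscaling.put_scaling_policy(
    AutoScalingGroupName=group,
    PolicyName="scale-down",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=-1,
    Cooldown=600,
)["PolicyARN"]

# 3. Alarm that only fires after a sustained drop in load.
cloudwatch.put_metric_alarm(
    AlarmName="example-scale-down-alarm",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": group}],
    Statistic="Average",
    Period=300,            # 5-minute datapoints
    EvaluationPeriods=6,   # 30 minutes below threshold before scaling in
    Threshold=30.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[policy_arn],
)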
Add an Amazon CloudFront distribution in front of the ALB
(Correct)
Explanation
The architecture is already highly resilient but it may be subject to performance
degradation if there are sudden increases in request rates. To resolve this situation Amazon
Aurora Read Replicas can be used to serve read traffic which offloads requests from the
main database. On the frontend an Amazon CloudFront distribution can be placed in front
of the ALB and this will cache content for better performance and also offloads requests
from the backend.
CORRECT: "Add an Amazon CloudFront distribution in front of the ALB" is the correct
answer.
INCORRECT: "Add and AWS WAF in front of the ALB" is incorrect. A web application firewall
protects applications from malicious attacks. It does not improve performance.
INCORRECT: "Add an AWS Transit Gateway to the Availability Zones" is incorrect as this is
used to connect on-premises networks to VPCs.
INCORRECT: "Add an AWS Global Accelerator endpoint" is incorrect as this service is used
for directing users to different instances of the application in different regions based on
latency.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
database/amazon-aurora/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
networking-and-content-delivery/amazon-cloudfront/
Store the files on Amazon Glacier, and create a lifecycle policy to remove the files after three
months
Store the files on Amazon S3, and create a lifecycle policy to remove the files after three
months
(Correct)
Store the files on Amazon EFS, and create a lifecycle policy to remove the files after three
months
Store the files on Amazon EBS, and create a lifecycle policy to remove the files after three
months
Explanation
To manage your objects so that they are stored cost effectively throughout their lifecycle,
configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define
actions that Amazon S3 applies to a group of objects. There are two types of actions:
Transition actions—Define when objects transition to another storage class. For example,
you might choose to transition objects to the S3 Standard-IA storage class 30 days after you
created them, or archive objects to the S3 Glacier storage class one year after creating them.
There are costs associated with the lifecycle transition requests.
Expiration actions—Define when objects expire. Amazon S3 deletes expired objects on your
behalf. The lifecycle expiration costs depend on when you choose to expire objects.
The solutions architect can create a lifecycle action using the “expiration action element”
which expires objects (deletes them) at the specified time.
CORRECT: "Store the files on Amazon S3, and create a lifecycle policy to remove the files
after three months" is the correct answer.
INCORRECT: "Store the files on Amazon EBS, and create a lifecycle policy to remove the
files after three months" is incorrect. There is no lifecycle policy available for deleting files on
EBS. The Amazon Data Lifecycle Manager (DLM) feature automates the creation, retention,
and deletion of EBS snapshots but not the individual files within an EBS volume.
INCORRECT: "Store the files on Amazon Glacier, and create a lifecycle policy to remove the
files after three months" is incorrect. S3 lifecycle actions apply to any storage class, including
Glacier, however Glacier would not allow immediate download.
INCORRECT: "Store the files on Amazon EFS, and create a lifecycle policy to remove the
files after three months" is incorrect. There is no lifecycle policy available for deleting files on
EFS
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
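A minimal boto3 sketch of the expiration action described above, assuming a placeholder bucket name and treating "three months" as 90 days; the rule ID and prefix are also assumptions.

import boto3

s3 = boto3.client("s3")

# Expire (delete) every object 90 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-three-months",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},     # empty prefix applies the rule to all objects
                "Expiration": {"Days": 90},
            }
        ]
    },
)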
Create an internal Application Load Balancer in the service provider VPC and put application
servers behind it
Create a VPC endpoint service and grant permissions to specific service consumers to create
a connection
(Correct)
Create a virtual private gateway connection between each pair of service provider VPCs and
service consumer VPCs
Create a proxy server in the service provider VPC to route requests from service consumers
to the application servers
Explanation
What you need to do here is offer the service through a service provider offering. This is a
great use case for a VPC endpoint service using AWS PrivateLink (referred to as an endpoint
service). Other AWS principals can then create a connection from their VPC to your endpoint
service using an interface VPC endpoint.
You are acting as the service provider and offering the service to service consumers. This
configuration uses a Network Load Balancer and can be fault-tolerant by configuring
multiple subnets in which the EC2 instances are running.
CORRECT: "Create a VPC endpoint service and grant permissions to specific service
consumers to create a connection" is the correct answer.
INCORRECT: "Create a virtual private gateway connection between each pair of service
provider VPCs and service consumer VPCs" is incorrect. Creating a virtual private gateway
connection between each pair of service provider VPCs and service consumer VPCs would
be extremely cumbersome and is not the best option.
INCORRECT: "Create a proxy server in the service provider VPC to route requests from
service consumers to the application servers" is incorrect. Using a proxy service is possible
but would not scale as well and would present a single point of failure unless there is some
load balancing to multiple proxies (not mentioned).
INCORRECT: "Create an internal Application Load Balancer in the service provider VPC and
put application servers behind it" is incorrect. Creating an internal ALB would not work as
you need consumers from outside your VPC to be able to connect.
References:
https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
networking-and-content-delivery/amazon-vpc/
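A rough boto3 sketch of the provider-side setup: create the endpoint service from an existing Network Load Balancer and whitelist a specific consumer account. The NLB ARN and consumer account ID are placeholders, not values from the scenario.

import boto3

ec2 = boto3.client("ec2")

# Create the endpoint service backed by an existing NLB (placeholder ARN).
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/provider-nlb/abc"
    ],
    AcceptanceRequired=True,   # the provider approves each consumer connection request
)["ServiceConfiguration"]

# Grant a specific consumer account permission to create an interface endpoint.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=service["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::111122223333:root"],  # placeholder consumer account
)

# Consumers use this service name when creating their interface VPC endpoint.
print(service["ServiceName"])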
What is the MOST secure means of granting the Lambda function access to the
DynamoDB table?
Create an identity and access management (IAM) role allowing access from AWS Lambda
and assign the role to the DynamoDB table
Create an identity and access management (IAM) role with the necessary permissions to
access the DynamoDB table, and assign the role to the Lambda function
(Correct)
Create an identity and access management (IAM) user and create access and secret keys for
the user. Give the user the necessary permissions to access the DynamoDB table. Have the
Developer use these keys to access the resources
Create a DynamoDB username and password and give them to the Developer to use in the
Lambda function
Explanation
The most secure method is to use an IAM role so you don’t need to embed any credentials
in code and can tightly control the services that your Lambda function can access. You need
to assign the role to the Lambda function, NOT to the DynamoDB table.
CORRECT: "Create an identity and access management (IAM) role with the necessary
permissions to access the DynamoDB table, and assign the role to the Lambda function" is
the correct answer.
INCORRECT: "Create a DynamoDB username and password and give them to the Developer
to use in the Lambda function" is incorrect. You cannot create a user name and password
for DynamoDB and it would be bad practice to store these in the function code if you could.
INCORRECT: "Create an identity and access management (IAM) user and create access and
secret keys for the user. Give the user the necessary permissions to access the DynamoDB
table. Have the Developer use these keys to access the resources" is incorrect. You should
not use an access key and secret ID to access DynamoDB. Again, this means embedding
credentials in code which should be avoided.
INCORRECT: "Create an identity and access management (IAM) role allowing access from
AWS Lambda and assign the role to the DynamoDB table" is incorrect as the role should be
assigned to the Lambda function so it can access the table.
References:
https://aws.amazon.com/blogs/security/how-to-create-an-aws-iam-policy-to-grant-aws-
lambda-access-to-an-amazon-dynamodb-table/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
database/amazon-dynamodb/
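To illustrate the recommended approach, a boto3 sketch that creates such a role with a Lambda trust policy and an inline policy scoped to a single table. The role name, policy name, table ARN and action list are placeholder assumptions; the resulting role ARN would then be supplied as the function's execution role.

import json
import boto3

iam = boto3.client("iam")

# Trust policy letting the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="example-lambda-dynamodb-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)["Role"]

# Inline policy scoped to the one table the function needs (placeholder ARN and actions).
table_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/example-table",
    }],
}
iam.put_role_policy(
    RoleName=role["RoleName"],
    PolicyName="example-table-access",
    PolicyDocument=json.dumps(table_policy),
)

print(role["Arn"])  # pass this as the Role parameter when creating the Lambda function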
Store the data in an Amazon EFS filesystem. Mount the file system on the application
instances
(Correct)
Store the data in an Amazon EBS volume. Mount the EBS volume on the application
instances
Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the
application instances
Store the data in AWS Storage Gateway. Setup AWS Direct Connect between the Gateway
appliance and the EC2 instances
Explanation
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic
NFS file system for use with AWS Cloud services and on-premises resources. It is built to
scale on-demand to petabytes without disrupting applications, growing and shrinking
automatically as you add and remove files, eliminating the need to provision and manage
capacity to accommodate growth.
Amazon EFS supports the Network File System version 4 (NFSv4.1 and NFSv4.0) protocol.
Multiple Amazon EC2 instances can access an Amazon EFS file system at the same time,
providing a common data source for workloads and applications running on more than one
instance or server.
For this scenario, EFS is a great choice as it will provide a scalable file system that can be
mounted by multiple EC2 instances and accessed simultaneously.
CORRECT: "Store the data in an Amazon EFS filesystem. Mount the file system on the
application instances" is the correct answer.
INCORRECT: "Store the data in an Amazon EBS volume. Mount the EBS volume on the
application instances" is incorrect. Though there is a new feature that allows (EBS multi-
attach) that allows attaching multiple Nitro instances to a volume, this is not on the exam
yet, and has some specific constraints.
INCORRECT: "Store the data in Amazon S3 Glacier. Update the vault policy to allow access
to the application instances" is incorrect as S3 Glacier is not a suitable storage location for
live access to data, it is used for archival.
INCORRECT: "Store the data in AWS Storage Gateway. Setup AWS Direct Connect between
the Gateway appliance and the EC2 instances" is incorrect. There is no reason to store the
data on-premises in a Storage Gateway, using EFS is a much better solution.
References:
https://docs.aws.amazon.com/efs/latest/ug/whatisefs.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-efs/
(Correct)
Explanation
Amazon DynamoDB global tables provide a fully managed solution for deploying a multi-
region, multi-master database. This is the only solution presented that provides an active-
active configuration where reads and writes can take place in multiple regions with full bi-
directional synchronization.
INCORRECT: "AWS Database Migration Service with change data capture" is incorrect as
the DMS is used for data migration from a source to a destination. However, in this example
we need a multi-master database and DMS will not allow this configuration.
References:
https://aws.amazon.com/blogs/database/how-to-use-amazon-dynamodb-global-tables-to-
power-multiregion-architectures/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
database/amazon-dynamodb/
What services would be used to deliver this solution in the MOST cost-effective
manner? (Select TWO.)
Configure S3 event notifications to trigger a Lambda function when data is uploaded and
use the Lambda function to trigger the ETL job
(Correct)
Use AWS Glue to extract, transform and load the data into the target data store
(Correct)
Configure CloudFormation to provision a Kinesis data stream to transform the data and load
it into S3
Explanation
The Amazon S3 notification feature enables you to receive notifications when certain events
happen in your bucket. To enable notifications, you must first add a notification
configuration that identifies the events you want Amazon S3 to publish and the destinations
where you want Amazon S3 to send the notifications. You store this configuration in
the notification subresource that is associated with a bucket.
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for
customers to prepare and load their data for analytics.
CORRECT: "Use AWS Glue to extract, transform and load the data into the target data
store" is also a correct answer.
References:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
https://aws.amazon.com/glue/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
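As a sketch of the event-driven pattern, here is a Lambda handler that reads the S3 event records and starts a Glue job run for each uploaded object. The Glue job name and the argument key passed to it are placeholder assumptions for this illustration.

import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Invoked by an S3 event notification; starts the ETL job for each uploaded object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Pass the uploaded object's location to the (placeholder) Glue job.
        glue.start_job_run(
            JobName="example-etl-job",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
    return {"status": "started"}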
Create an Application Load Balancer with Auto Scaling groups across multiple Availability
Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
Create an Application Load Balancer with Auto Scaling groups across multiple Availability
Zones. Mount an instance store on each EC2 instance
Create an Application Load Balancer with Auto Scaling groups across multiple Availability
Zones. Store data on Amazon EFS and mount a target on each instance
(Correct)
Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to
each EC2 instance
Explanation
To increase the resiliency of the application the solutions architect can use Auto Scaling
groups to launch and terminate instances across multiple availability zones based on
demand. An application load balancer (ALB) can be used to direct traffic to the web
application running on the EC2 instances.
Lastly, the Amazon Elastic File System (EFS) can assist with increasing the resilience of the
application by providing a shared file system that can be mounted by multiple EC2 instances
from multiple availability zones.
CORRECT: "Create an Application Load Balancer with Auto Scaling groups across multiple
Availability Zones. Store data on Amazon EFS and mount a target on each instance" is the
correct answer.
INCORRECT: "Launch the application on EC2 instances in each Availability Zone. Attach EBS
volumes to each EC2 instance" is incorrect as the EBS volumes are single points of failure
which are not shared with other instances.
INCORRECT: "Create an Application Load Balancer with Auto Scaling groups across multiple
Availability Zones. Mount an instance store on each EC2 instance" is incorrect as instance
stores are ephemeral data stores which means data is lost when powered down. Also,
instance stores cannot be shared between instances.
INCORRECT: "Create an Application Load Balancer with Auto Scaling groups across multiple
Availability Zones. Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-
IA)" is incorrect as there are data retrieval charges associated with this S3 tier. It is not a
suitable storage tier for application files.
References:
https://docs.aws.amazon.com/efs/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-efs/
Amazon S3 Intelligent-Tiering
Amazon S3 Standard
(Correct)
Explanation
S3 standard is the best choice in this scenario for a short term storage solution. In this case
the size and number of logs is unknown and it would be difficult to fully assess the access
patterns at this stage. Therefore, using S3 standard is best as it is cost-effective, provides
immediate access, and there are no retrieval fees or minimum capacity charge per object.
INCORRECT: "Amazon S3 Glacier Deep Archive" is incorrect as this storage class is used for
archiving data. There are retrieval fees and it take hours to retrieve data from an archive.
References:
https://aws.amazon.com/s3/storage-classes/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
storage/amazon-s3/
(Correct)
Explanation
ElastiCache is a web service that makes it easy to deploy and run Memcached or Redis
protocol-compliant server nodes in the cloud. The in-memory caching provided by
ElastiCache can be used to significantly improve latency and throughput for many read-
heavy application workloads or compute-intensive workloads
There are two different engines available, Memcached and Redis, each with different characteristics.
The correct choice for this scenario is Redis as Redis provides the persistency that is
required.
INCORRECT: "Kinesis Data Streams" is incorrect. Kinesis Data Streams is used for processing
streams of data, it is not a persistent data store.
References:
https://aws.amazon.com/elasticache/redis/
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
database/amazon-elasticache/
Migrate the database to an Amazon Aurora global database in MySQL compatibility mode.
Configure read replicas in ap-southeast-2
(Correct)
Migrate the database to Amazon DynamoDB. Use DynamoDB global tables to enable
replication to additional Regions
Deploy MySQL instances in each Region. Deploy an Application Load Balancer in front of
MySQL to reduce the load on the primary instance
Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in the Australian
Region
Explanation
The issue here is latency, with read queries being directed from Australia to the UK, which is
a great physical distance. A solution is required for improving read performance in Australia.
An Aurora global database consists of one primary AWS Region where your data is
mastered, and up to five read-only, secondary AWS Regions. Aurora replicates data to the
secondary AWS Regions with typical latency of under a second. You issue write operations
directly to the primary DB instance in the primary AWS Region.
This solution will provide better performance for users in the Australia Region for queries.
Writes must still take place in the UK Region but read performance will be greatly improved.
CORRECT: "Migrate the database to an Amazon Aurora global database in MySQL
compatibility mode. Configure read replicas in ap-southeast-2" is the correct answer.
INCORRECT: "Migrate the database to Amazon RDS for MySQL. Configure Multi-AZ in the
Australian Region" is incorrect. The database is located in UK. If the database is migrated to
Australia then the reverse problem will occur. Multi-AZ does not assist with improving query
performance across Regions.
INCORRECT: "Migrate the database to Amazon DynamoDB. Use DynamoDB global tables
to enable replication to additional Regions" is incorrect as a relational database running on
MySQL is unlikely to be compatible with DynamoDB.
INCORRECT: "Deploy MySQL instances in each Region. Deploy an Application Load
Balancer in front of MySQL to reduce the load on the primary instance" is incorrect as you
can only put ALBs in front of the web tier, not the DB tier.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-
database.html
https://digitalcloud.training/certification-training/aws-solutions-architect-associate/
database/amazon-aurora/
Which service should the solutions architect use to replace the file server farm?
Amazon EBS
Amazon EFS
Amazon FSx
(Correct)
Explanation
Amazon FSx for Windows File Server supports DFS namespaces and DFS replication. This is
the best solution for replacing the on-premises infrastructure; note that there are some
deployment limitations to be aware of (see the FSx documentation).
INCORRECT: "Amazon EFS" is incorrect. You cannot replace a Windows file server farm with
EFS as it uses a completely different protocol.
INCORRECT: "Amazon EBS" is incorrect. Amazon EBS provides block-based volumes that
are attached to EC2 instances. It cannot be used for replacing a shared Windows file server
farm using DFSR.
INCORRECT: "AWS Storage Gateway" is incorrect. This service is used for providing cloud
storage solutions for on-premises servers. In this case the infrastructure is being migrated
into the AWS Cloud.
References:
https://docs.aws.amazon.com/fsx/latest/WindowsGuide/high-availability-multiAZ.html
Sticky sessions on an Elastic Load Balancer (ELB)
(Correct)
A key/value store such as ElastiCache Redis
(Correct)
Explanation
In order to address scalability and to provide a shared data storage for sessions that can be
accessible from any individual web server, you can abstract the HTTP sessions from the web
servers themselves. A common solution for this is to leverage an in-memory key/value
store such as Redis or Memcached.
Sticky sessions, also known as session affinity, allow you to route a site user to the particular
web server that is managing that individual user’s session. The session’s validity can be
determined by a number of methods, including a client-side cookie or via configurable
duration parameters that can be set at the load balancer which routes requests to the web
servers. You can configure sticky sessions on Amazon ELBs.
CORRECT: "Sticky sessions on an Elastic Load Balancer (ELB)" is the correct answer.
CORRECT: "A key/value store such as ElastiCache Redis" is the correct answer.
INCORRECT: "A block storage service such as Elastic Block Store (EBS)" is incorrect. In this
instance the question states that a caching layer is being implemented and EBS volumes
would not be suitable for creating an independent caching layer as they must be attached
to EC2 instances.
INCORRECT: "A workflow service such as Amazon Simple Workflow Service (SWF)" is
incorrect. Workflow services such as SWF are used for carrying out a series of tasks in a
coordinated task flow. They are not suitable for storing session state data.
INCORRECT: "A relational data store such as Amazon RDS" is incorrect. Relational
databases are not typically used for storing session state data due to their rigid schema that
tightly controls the format in which data can be stored.
References:
https://aws.amazon.com/caching/session-management/
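As a sketch of the key/value approach, the snippet below stores and reads session state in Redis with a TTL using the redis-py client, so any web server behind the load balancer can serve any user. The ElastiCache endpoint, key layout and TTL are placeholder assumptions.

import json
import uuid

import redis  # redis-py client

# Placeholder ElastiCache Redis primary endpoint shared by all web servers.
cache = redis.Redis(host="example.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

def save_session(user_id, data, ttl_seconds=1800):
    """Write session state keyed by a generated session ID with a 30-minute TTL."""
    session_id = str(uuid.uuid4())
    cache.setex(f"session:{session_id}", ttl_seconds, json.dumps({"user_id": user_id, **data}))
    return session_id

def load_session(session_id):
    """Any web server can read the same session state regardless of which node wrote it."""
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

sid = save_session("user-42", {"cart_items": 3})
print(load_session(sid))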
Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB) spanning
multiple AZs
(Correct)
Add the existing web application instances to an Auto Scaling group behind an Application
Load Balancer (ALB)
Create new public and private subnets in the same VPC, each in a new AZ. Migrate the
database to an Amazon RDS multi-AZ deployment
(Correct)
Create new public and private subnets in the same AZ for high availability
Create new public and private subnets in a new AZ. Create a database using Amazon EC2 in
one AZ
Explanation
To add high availability to this architecture both the web tier and database tier require
changes. For the web tier an Auto Scaling group across multiple AZs with an ALB will ensure
there are always instances running and traffic is being distributed to them.
The database tier should be migrated from the EC2 instances to Amazon RDS to take
advantage of a managed database with Multi-AZ functionality. This will ensure that if there
is an issue preventing access to the primary database a secondary database can take over.
CORRECT: "Create an Amazon EC2 Auto Scaling group and Application Load Balancer (ALB)
spanning multiple AZs" is the correct answer.
CORRECT: "Create new public and private subnets in the same VPC, each in a new AZ.
Migrate the database to an Amazon RDS multi-AZ deployment" is the correct answer.
INCORRECT: "Create new public and private subnets in the same AZ for high availability" is
incorrect as this would not add high availability.
INCORRECT: "Add the existing web application instances to an Auto Scaling group behind
an Application Load Balancer (ALB)" is incorrect because the existing servers are in a single
subnet. For HA we need to instances in multiple subnets.
INCORRECT: "Create new public and private subnets in a new AZ. Create a database using
Amazon EC2 in one AZ" is incorrect because we also need HA for the database layer.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-load-balancer.html
https://aws.amazon.com/rds/features/multi-az/
Create an Amazon SQS queue and configure the front-end to add messages to the queue
and the back-end to poll the queue for messages
(Correct)
Create an Amazon Kinesis Firehose delivery stream and configure the front-end to add data
to the stream and the back-end to read data from the stream
Create an Amazon Kinesis Firehose delivery stream that delivers data to an Amazon S3
bucket, configure the front-end to write data to the stream and the back-end to read data
from Amazon S3
Create an Amazon SQS queue that pushes messages to the back-end. Configure the front-
end to add messages to the queue
Explanation
This is a good use case for Amazon SQS. SQS is a service that is used for decoupling
applications, thus reducing interdependencies, through a message bus. The front-end
application can place messages on the queue and the back-end can then poll the queue for
new messages. Please remember that Amazon SQS is pull-based (polling) not push-based
(use SNS for push-based).
CORRECT: "Create an Amazon SQS queue and configure the front-end to add messages to
the queue and the back-end to poll the queue for messages" is the correct answer.
INCORRECT: "Create an Amazon Kinesis Firehose delivery stream and configure the front-
end to add data to the stream and the back-end to read data from the stream" is incorrect.
Amazon Kinesis Firehose is used for streaming data. With Firehose the data is immediately
loaded into a destination such as Amazon S3, Amazon Redshift, Elasticsearch, or Splunk. This is
not an ideal use case for Firehose as this is not streaming data and there is no need to load
data into an additional AWS service.
INCORRECT: "Create an Amazon Kinesis Firehose delivery stream that delivers data to an
Amazon S3 bucket, configure the front-end to write data to the stream and the back-end to
read data from Amazon S3" is incorrect as per the previous explanation.
INCORRECT: "Create an Amazon SQS queue that pushes messages to the back-end.
Configure the front-end to add messages to the queue " is incorrect as SQS is pull-based,
not push-based. EC2 instances must poll the queue to find jobs to process.
References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/common_use_cases.html
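A minimal boto3 sketch of the decoupled pattern: the front-end sends a message to the queue and a back-end worker long-polls, processes and deletes it. The queue name and message shape are placeholders for this illustration.

import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="example-jobs")["QueueUrl"]

# Front-end: enqueue a job instead of calling the back-end directly.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"job_id": "123", "action": "resize"}))

# Back-end worker: long-poll the queue, process each message, then delete it.
while True:
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,   # long polling reduces empty receives and cost
    )
    for message in response.get("Messages", []):
        job = json.loads(message["Body"])
        print("processing", job)
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])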
How can a Solutions Architect design a managed solution that will align with open-
source software?
Launch the containers on Amazon Elastic Kubernetes Service (EKS) and EKS worker nodes.
(Correct)
Launch the containers on Amazon Elastic Container Service (ECS) with Amazon EC2 instance
worker nodes.
Launch the containers on a fleet of Amazon EC2 instances in a cluster placement group.
Launch the containers on Amazon Elastic Container Service (ECS) with AWS Fargate
instances.
Explanation
Amazon EKS is a managed service that can be used to run Kubernetes on AWS. Kubernetes
is an open-source system for automating the deployment, scaling, and management of
containerized applications. Applications running on Amazon EKS are fully compatible with
applications running on any standard Kubernetes environment, whether running in on-
premises data centers or public clouds. This means that you can easily migrate any standard
Kubernetes application to Amazon EKS without any code modification.
This solution ensures that the same open-source software is used for automating the
deployment, scaling, and management of containerized applications both on-premises and
in the AWS Cloud.
CORRECT: "Launch the containers on Amazon Elastic Kubernetes Service (EKS) and EKS
worker nodes" is the correct answer.
INCORRECT: "Launch the containers on Amazon Elastic Container Service (ECS) with AWS
Fargate instances" is incorrect
INCORRECT: "Launch the containers on Amazon Elastic Container Service (ECS) with
Amazon EC2 instance worker nodes" is incorrect
References:
https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html
Create an SCP with an allow rule that allows launching the specific instance types
Use AWS Resource Access Manager to control which launch types can be used
Create an IAM policy to deny launching all but the specific instance types
Create an SCP with a deny rule that denies all but the specific instance types
(Correct)
Explanation
To apply the restrictions across multiple member accounts you must use a Service Control
Policy (SCP) in the AWS Organization. The way you would do this is to create a deny rule
that applies to anything that does not equal the specific instance type you want to allow.
CORRECT: "Create an SCP with a deny rule that denies all but the specific instance types" is
the correct answer.
INCORRECT: "Create an SCP with an allow rule that allows launching the specific instance
types" is incorrect as a deny rule is required.
INCORRECT: "Create an IAM policy to deny launching all but the specific instance types" is
incorrect. With IAM you need to apply the policy within each account rather than centrally
so this would require much more effort.
INCORRECT: "Use AWS Resource Access Manager to control which launch types can be
used" is incorrect. AWS Resource Access Manager (RAM) is a service that enables you to
easily and securely share AWS resources with any AWS account or within your AWS
Organization. It is not used for restricting access or permissions.
References:
https://docs.aws.amazon.com/organizations/latest/userguide/
orgs_manage_policies_example-scps.html#example-ec2-instances
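For illustration, a boto3 sketch that creates and attaches such a deny-based SCP. The allowed instance types and the OU ID are placeholder assumptions, and the deny-with-StringNotEquals condition mirrors the pattern in the referenced AWS example.

import json
import boto3

organizations = boto3.client("organizations")

# Deny RunInstances for any instance type other than the approved ones (example types).
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnapprovedInstanceTypes",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}
        },
    }],
}

policy = organizations.create_policy(
    Name="restrict-ec2-instance-types",
    Description="Allow only approved EC2 instance types",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)["Policy"]

# Attach the SCP to an OU (or account) so it applies to all member accounts beneath it.
organizations.attach_policy(
    PolicyId=policy["PolicySummary"]["Id"],
    TargetId="ou-examplerootid-exampleouid",   # placeholder OU ID
)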
Create an interface VPC endpoint in the VPC with an Elastic Network Interface (ENI)
Create a gateway VPC endpoint and add an entry to the route table
(Correct)
Explanation
There are two different types of VPC endpoint: interface endpoint, and gateway endpoint.
With an interface endpoint you use an ENI in the VPC. With a gateway endpoint you
configure your route table to point to the endpoint. Amazon S3 and DynamoDB use
gateway endpoints. This solution means that all traffic will go through the VPC endpoint
straight to DynamoDB using private IP addresses.
CORRECT: "Create a gateway VPC endpoint and add an entry to the route table" is the
correct answer.
INCORRECT: "Create an interface VPC endpoint in the VPC with an Elastic Network Interface
(ENI)" is incorrect. As mentioned above, an interface endpoint is not used for DynamoDB,
you must use a gateway endpoint.
INCORRECT: "Create the Amazon DynamoDB table in the VPC" is incorrect. You cannot
create a DynamoDB table in a VPC, to connect securely using private addresses you should
use a gateway endpoint instead.
References:
https://docs.amazonaws.cn/en_us/vpc/latest/userguide/vpc-endpoints-ddb.html
https://aws.amazon.com/premiumsupport/knowledge-center/iam-restrict-calls-ip-
addresses/
https://aws.amazon.com/blogs/aws/new-vpc-endpoints-for-dynamodb/
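A short boto3 sketch of the gateway endpoint described above, assuming placeholder VPC and route table IDs and the us-east-1 DynamoDB service name; specifying the route tables adds the required route entries automatically.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for DynamoDB; traffic from the listed route tables stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                     # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.dynamodb",    # Region-specific service name
    RouteTableIds=["rtb-0123456789abcdef0"],           # placeholder route table ID
)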
Use Amazon DynamoDB Accelerator (DAX) in front of the DynamoDB backend tier.
Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on the SQS
queue depth.
(Correct)
Add an Amazon CloudFront distribution with a custom origin to cache the responses for the
web tier.
Replace the Amazon SQS queue with Amazon Kinesis Data Firehose.
Explanation
The most likely cause of the processing delays is insufficient instances in the middle tier
where the order processing takes place. The most effective solution to reduce processing
times in this case is to scale based on the backlog per instance (number of messages in the
SQS queue) as this reflects the amount of work that needs to be done.
CORRECT: "Use Amazon EC2 Auto Scaling to scale out the middle tier instances based on
the SQS queue depth" is the correct answer.
INCORRECT: "Replace the Amazon SQS queue with Amazon Kinesis Data Firehose" is
incorrect. The issue is not the efficiency of queuing messages but the processing of the
messages. In this case scaling the EC2 instances to reflect the workload is a better solution.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
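To make this concrete, a hedged boto3 sketch that attaches a step scaling policy to the middle-tier group and alarms on the queue's ApproximateNumberOfMessagesVisible metric. The group name, queue name, thresholds and step sizes are placeholders and would need tuning against the actual backlog-per-instance target.

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Step scaling policy that adds middle-tier instances as the backlog grows.
policy_arn = autoscaling.put_scaling_policy(
    AutoScalingGroupName="example-order-processing-asg",
    PolicyName="scale-out-on-queue-depth",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[
        {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 500, "ScalingAdjustment": 1},
        {"MetricIntervalLowerBound": 500, "ScalingAdjustment": 3},
    ],
)["PolicyARN"]

# Alarm on the number of visible messages waiting in the orders queue.
cloudwatch.put_metric_alarm(
    AlarmName="orders-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "example-orders-queue"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[policy_arn],
)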
What is the MOST efficient way for management to ensure that capacity requirements
are met?
Add a Scheduled Scaling action
(Correct)
Explanation
Scaling based on a schedule allows you to set your own scaling schedule for predictable
load changes. To configure your Auto Scaling group to scale based on a schedule, you
create a scheduled action. This is ideal for situations where you know when and for how
long you are going to need the additional capacity.
CORRECT: "Add a Scheduled Scaling action" is the correct answer.
INCORRECT: "Add a Step Scaling policy" is incorrect. Step scaling policies increase or
decrease the current capacity of your Auto Scaling group based on a set of scaling
adjustments, known as step adjustments. The adjustments vary based on the size of the
alarm breach. This is more suitable for situations where the load is unpredictable.
INCORRECT: "Add a Simple Scaling policy" is incorrect. AWS recommend using step over
simple scaling in most cases. With simple scaling, after a scaling activity is started, the policy
must wait for the scaling activity or health check replacement to complete and the
cooldown period to expire before responding to additional alarms (in contrast to step
scaling). Again, this is more suitable to unpredictable workloads.
INCORRECT: "Add Amazon EC2 Spot instances" is incorrect. Adding spot instances may
decrease EC2 costs but you still need to ensure they are available. The main requirement of
the question is that the performance issues are resolved rather than the cost being
minimized.
References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-
demand.html
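A brief boto3 sketch of scheduled actions that scale out before a known busy window and back in afterwards; the group name, recurrence times and capacities are placeholder assumptions.

import boto3

autoscaling = boto3.client("autoscaling")

# Scale out every weekday at 08:00 UTC (placeholder schedule and sizes).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="example-asg",
    ScheduledActionName="weekday-morning-scale-out",
    Recurrence="0 8 * * 1-5",
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=8,
)

# Scale back in every weekday at 20:00 UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="example-asg",
    ScheduledActionName="weekday-evening-scale-in",
    Recurrence="0 20 * * 1-5",
    MinSize=2,
    MaxSize=12,
    DesiredCapacity=2,
)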
Which combination of changes can the company make to meet these requirements?
(Select TWO.)
Use AWS Direct Connect and mount an Amazon FSx for Windows File Server using iSCSI.
Use the mount command on servers to mount Amazon S3 buckets using NFS.
Use an AWS Storage Gateway file gateway to replace the NFS storage.
(Correct)
Use Amazon Elastic File System (EFS) volumes to replace the block storage.
Use an AWS Storage Gateway volume gateway to replace the block storage.
(Correct)
Explanation
In this scenario the company should use cloud storage to replace the existing storage
solutions that are running out of capacity. The on-premises servers mount the existing
storage using block protocols (iSCSI) and file protocols (NFS). As there is a requirement to
avoid re-architecting existing applications these protocols must be used in the revised
solution.
The AWS Storage Gateway volume gateway should be used to replace the block-based
storage systems as it is mounted over iSCSI and the file gateway should be used to replace
the NFS file systems as it uses NFS.
CORRECT: "Use an AWS Storage Gateway file gateway to replace the NFS storage" is a
correct answer.
CORRECT: "Use an AWS Storage Gateway volume gateway to replace the block storage" is a
correct answer.
INCORRECT: "Use the mount command on servers to mount Amazon S3 buckets using
NFS" is incorrect. You cannot mount S3 buckets using NFS as it is an object-based storage
system (not file-based) and uses an HTTP REST API.
INCORRECT: "Use AWS Direct Connect and mount an Amazon FSx for Windows File Server
using iSCSI" is incorrect. You cannot mount FSx for Windows File Server file systems using
iSCSI, you must use SMB.
INCORRECT: "Use Amazon Elastic File System (EFS) volumes to replace the block storage" is
incorrect. You cannot use EFS to replace block storage as it uses NFS rather than iSCSI.
References:
https://docs.aws.amazon.com/storagegateway/
Question 62: Skipped
The database tier of a web application is running on a Windows server on-premises.
The database is a Microsoft SQL Server database. The application owner would like to
migrate the database to an Amazon RDS instance.
How can the migration be executed with minimal administrative effort and
downtime?
Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS
(Correct)
Use the AWS Database Migration Service (DMS) to directly migrate the database to RDS.
Use the Schema Conversion Tool (SCT) to enable conversion from Microsoft SQL Server to
Amazon RDS
Use AWS DataSync to migrate the data from the database to Amazon S3. Use AWS
Database Migration Service (DMS) to migrate the database to RDS
Use the AWS Server Migration Service (SMS) to migrate the server to Amazon EC2. Use AWS
Database Migration Service (DMS) to migrate the database to RDS
Explanation
You can directly migrate Microsoft SQL Server from an on-premises server into Amazon RDS
using the Microsoft SQL Server database engine. This can be achieved using the native
Microsoft SQL Server tools, or using AWS DMS as depicted below:
CORRECT: "Use the AWS Database Migration Service (DMS) to directly migrate the
database to RDS" is the correct answer.
INCORRECT: "Use the AWS Server Migration Service (SMS) to migrate the server to Amazon
EC2. Use AWS Database Migration Service (DMS) to migrate the database to RDS" is
incorrect. You do not need to use the AWS SMS service to migrate the server into EC2 first.
You can directly migrate the database online with minimal downtime.
INCORRECT: "Use AWS DataSync to migrate the data from the database to Amazon S3. Use
AWS Database Migration Service (DMS) to migrate the database to RDS" is incorrect. AWS
DataSync is used for migrating data, not databases.
INCORRECT: "Use the AWS Database Migration Service (DMS) to directly migrate the
database to RDS. Use the Schema Conversion Tool (SCT) to enable conversion from
Microsoft SQL Server to Amazon RDS" is incorrect. You do not need to use the SCT as you
are migrating into the same destination database engine (RDS is just the platform).
References:
https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/migrate-an-on-
premises-microsoft-sql-server-database-to-amazon-rds-for-sql-server.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.html
https://aws.amazon.com/dms/schema-conversion-tool/
Amazon CloudFront
Application Load Balancer (ALB)
(Correct)
Amazon Route 53
Explanation
An Application Load Balancer is a type of Elastic Load Balancer that can use layer 7
(HTTP/HTTPS) protocol data to make forwarding decisions. An ALB supports both path-
based (e.g. /images or /orders) and host-based routing (e.g. example.com).
In this scenario a single EC2 instance is listening for traffic for each application on a different
port. You can use a target group that listens on a single port (HTTP or HTTPS) and then uses
listener rules to selectively route to a different port on the EC2 instance based on the
information in the URL path. So you might have example.com/images going to one backend
port and example.com/orders going to a different backend port.
INCORRECT: "Amazon Route 53" is incorrect. Amazon Route 53 is a DNS service. It can be
used to load balance however it does not have the ability to route based on information in
the incoming request path.
INCORRECT: "Classic Load Balancer (CLB)" is incorrect. You cannot use host-based or path-
based routing with a CLB.
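For illustration, a boto3 sketch that adds two path-based listener rules, forwarding /images* and /orders* requests to target groups registered on different backend ports. The listener and target group ARNs, paths, ports and priorities are all placeholders.

import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs: an existing HTTPS listener plus one target group per backend port.
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/example-alb/abc/def"
images_tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/images-8081/abc"
orders_tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/orders-8082/def"

# /images* requests are forwarded to the target group registered on one backend port ...
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/images*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": images_tg_arn}],
)

# ... and /orders* requests to the target group registered on a different backend port.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=20,
    Conditions=[{"Field": "path-pattern", "Values": ["/orders*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": orders_tg_arn}],
)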
Use a Kinesis data stream to store the file, and use Lambda for processing
Upload files into an S3 bucket, and use the Amazon S3 event notification to invoke a
Lambda function to extract the metadata
(Correct)
Store the file in an EBS volume which can then be accessed by another EC2 instance for
processing
Place the files in an SQS queue, and use a fleet of EC2 instances to extract the metadata
Explanation
Storing the file in an S3 bucket is the most cost-efficient solution, and using S3 event
notifications to invoke a Lambda function works well for this unpredictable workload.
The following diagram depicts a similar architecture where users upload documents to an
Amazon S3 bucket and an event notification triggers a Lambda function that resizes the
image.
CORRECT: "Upload files into an S3 bucket, and use the Amazon S3 event notification to
invoke a Lambda function to extract the metadata" is the correct answer.
INCORRECT: "Use a Kinesis data stream to store the file, and use Lambda for processing" is
incorrect. Kinesis data streams runs on EC2 instances and you must, therefore, provision
some capacity even when the application is not receiving files. This is not as cost-efficient as
storing them in an S3 bucket prior to using Lambda for the processing.
INCORRECT: "Store the file in an EBS volume which can then be accessed by another EC2
instance for processing" is incorrect. Storing the file in an EBS volume and using EC2
instances for processing is not cost-efficient.
INCORRECT: "Place the files in an SQS queue, and use a fleet of EC2 instances to extract the
metadata" is incorrect. SQS queues have a maximum message size of 256KB. You can use
the extended client library for Java to use pointers to a payload on S3 but the maximum
payload size is 2GB.
Provision an IPSec VPN connection between your on-premises location and AWS and create
a CLB that uses cross-zone load balancing to distribute traffic across EC2 instances and on-
premises servers
Provision a Direct Connect connection between your on-premises location and AWS and
create a target group on an ALB to use Instance ID based targets for both your EC2
instances and on-premises server
This cannot be done, ELBs are an AWS service and can only distribute traffic within the AWS
cloud
Provision a Direct Connect connection between your on-premises location and AWS and
create a target group on an ALB to use IP based targets for both your EC2 instances and on-
premises servers
(Correct)
Explanation
The ALB (and NLB) supports IP addresses as targets as well as instance IDs as targets. When
you create a target group, you specify its target type, which determines how you specify its
targets. After you create a target group, you cannot change its target type.
Using IP addresses as targets allows load balancing any application hosted in AWS or on-
premises using IP addresses of the application back-ends as targets.
You must have a VPN or Direct Connect connection to enable this configuration to work.
CORRECT: "Provision a Direct Connect connection between your on-premises location and
AWS and create a target group on an ALB to use IP based targets for both your EC2
instances and on-premises servers" is the correct answer.
INCORRECT: "Provision an IPSec VPN connection between your on-premises location and
AWS and create a CLB that uses cross-zone load balancing to distributed traffic across EC2
instances and on-premises servers" is incorrect. The CLB does not support IP addresses as
targets.
INCORRECT: "This cannot be done, ELBs are an AWS service and can only distribute traffic
within the AWS cloud" is incorrect as this statement is incorrect.
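A short boto3 sketch of the IP-target approach described above: create an ip-type target group and register both an EC2 private IP and an on-premises address (which must be registered with AvailabilityZone="all"). The IP addresses, VPC ID, ports and name are placeholders and assume connectivity over Direct Connect or VPN is already in place.

import boto3

elbv2 = boto3.client("elbv2")

# Target group that addresses targets by IP rather than instance ID (placeholder values).
target_group = elbv2.create_target_group(
    Name="hybrid-web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=target_group["TargetGroupArn"],
    Targets=[
        {"Id": "10.0.1.15", "Port": 80},  # private IP of an EC2 instance in the VPC
        # An address outside the VPC CIDR (on-premises) must specify AvailabilityZone="all".
        {"Id": "192.168.10.20", "Port": 80, "AvailabilityZone": "all"},
    ],
)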