AWS Test
Question #1 Topic 1
A Solutions Architect is designing an application that will encrypt all data in an Amazon
Redshift cluster.
Which action will encrypt the data at rest?
• A. Amazon SQS
• B. Amazon EFS
• C. Amazon S3
• D. AWS Lambda
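As a study aid for the stem above: Redshift encryption at rest is enabled at cluster creation time, optionally with a KMS key. A minimal sketch of the `CreateCluster` parameters (as they would be passed to boto3's `redshift.create_cluster`; all identifiers below are hypothetical):

```python
# Parameters for the Redshift CreateCluster API call (boto3: redshift.create_cluster).
# Identifiers and the key alias are hypothetical examples.
create_cluster_params = {
    "ClusterIdentifier": "analytics-cluster",
    "NodeType": "dc2.large",
    "NumberOfNodes": 2,
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_ME",      # never hard-code credentials in real code
    "Encrypted": True,                       # encrypts all data blocks and snapshots at rest
    "KmsKeyId": "alias/redshift-at-rest",    # optional customer-managed CMK
}
```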
• A. AWS Snowball storage for the legacy application until the application can be re-
architected.
• B. AWS Storage Gateway in cached mode for the legacy application storage to write
data to Amazon S3.
• C. AWS Storage Gateway in stored mode for the legacy application storage to write
data to Amazon S3.
• D. An Amazon S3 volume mounted on the legacy application server locally using the
File Gateway service.
• A. Amazon Redshift
• B. Amazon DynamoDB
• C. Amazon RDS MySQL
• D. Amazon Aurora
• A. The application is reading parts of objects from Amazon S3 using a range header.
• B. The application is reading objects from Amazon S3 using parallel object requests.
• C. The application is updating records by writing new objects with unique keys.
• D. The application is updating records by overwriting existing objects with the same
keys.
• A. Use Amazon Kinesis with AWS CloudTrail for auditing the specific times when profile
photos are uploaded.
• B. Use Amazon EBS volumes with IAM policies restricting user access to specific time
periods.
• C. Use Amazon S3 with the default private access policy and generate pre-signed URLs
each time a new site profile is created.
• D. Use Amazon CloudFront with AWS CloudTrail for auditing the specific times when
profile photos are uploaded.
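Option C relies on S3 pre-signed URLs: the bucket stays private and the application hands out short-lived signed GET URLs. In practice boto3's `generate_presigned_url` does this in one call; the stdlib-only sketch below shows the SigV4 query-string signing it performs (bucket, key, and credentials are hypothetical):

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_s3_get(bucket, key, access_key, secret_key, region="us-west-2", expires=3600):
    """Build a SigV4 pre-signed GET URL for a private S3 object (illustrative sketch)."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    now = datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}" for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET", "/" + quote(key), canonical_query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    def sign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    k = sign(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = sign(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{quote(key)}?{canonical_query}&X-Amz-Signature={signature}"
```

The URL expires after `expires` seconds, so access is time-bounded without any per-user IAM configuration.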
• A. Amazon S3
• B. Amazon EFS
• C. Amazon EBS
• D. Amazon Glacier
• A. Create a Lambda function to move files older than 30 days to Amazon EBS and
move files older than 60 days to Amazon Glacier.
• B. Create a Lambda function to move files older than 30 days to Amazon Glacier and
move files older than 60 days to Amazon EBS.
• C. Create lifecycle rules to move files older than 30 days to Amazon S3 Standard
Infrequent Access and move files older than 60 days to Amazon Glacier.
• D. Create lifecycle rules to move files older than 30 days to Amazon Glacier and move
files older than 60 days to Amazon S3 Standard Infrequent Access.
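The lifecycle tiering in option C maps to an S3 bucket lifecycle configuration with two transitions. A sketch of the payload one would send via `PutBucketLifecycleConfiguration` (rule ID is hypothetical):

```python
# Lifecycle configuration body for s3.put_bucket_lifecycle_configuration.
lifecycle_config = {
    "Rules": [{
        "ID": "tiering-rule",        # hypothetical rule name
        "Status": "Enabled",
        "Filter": {"Prefix": ""},    # empty prefix: apply to every object
        "Transitions": [
            {"Days": 30, "StorageClass": "STANDARD_IA"},  # after 30 days
            {"Days": 60, "StorageClass": "GLACIER"},      # after 60 days
        ],
    }]
}
```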
Question #11 Topic 1
An organization is currently hosting a large amount of frequently accessed data
consisting of key-value pairs and semi-structured documents in their data center.
They are planning to move this data to AWS.
Which one of the following services MOST effectively meets their needs?
• A. Amazon Redshift
• B. Amazon RDS
• C. Amazon DynamoDB
• D. Amazon Aurora
• A. Modify the Redshift cluster and configure cross-region snapshots to the other region.
• B. Modify the Redshift cluster to take snapshots of the Amazon EBS volumes each day,
sharing those snapshots with the other region.
• C. Modify the Redshift cluster and configure the backup and specify the Amazon S3
bucket in the other region.
• D. Modify the Redshift cluster to use AWS Snowball in export mode with data delivered to
the other region.
• A. Configure the database security group to allow database traffic from the application
server IP addresses.
• B. Configure the database security group to allow database traffic from the application
server security group.
• C. Configure the database subnet network ACL to deny all inbound non-database traffic
from the application-tier subnet.
• D. Configure the database subnet network ACL to allow inbound database traffic from the
application-tier subnet.
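Referencing the application-tier security group (option B) keeps the rule valid as application instances are added or replaced, unlike IP-based rules. A sketch of the `AuthorizeSecurityGroupIngress` parameters (group IDs are hypothetical):

```python
# Parameters for ec2.authorize_security_group_ingress: allow MySQL traffic
# into the database-tier SG only from members of the app-tier SG.
ingress_params = {
    "GroupId": "sg-db000000",                 # database-tier security group (hypothetical)
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3306,                     # MySQL port
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-app00000"}],  # source = app-tier SG
    }],
}
```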
• A. Amazon EC2
• B. NAT instance
• C. ELB Classic Load Balancer
• D. Amazon RDS
• A. Change the Auto Scaling group's scale out event to scale based on network utilization.
• B. Create an Auto Scaling scheduled action to scale out the necessary resources at 8:30
AM every morning.
• C. Use Reserved Instances to ensure the system has reserved the right amount of
capacity for the scale-up events.
• D. Permanently keep a steady state of instances that is needed at 9:00 AM to guarantee
available resources, but leverage Spot Instances.
• A. Create an Auto Scaling group with a minimum of one instance and a maximum of two
instances, then use an Application Load Balancer to balance the traffic.
• B. Recreate the API using Amazon API Gateway and use AWS Lambda as the service
backend.
• C. Create an Auto Scaling group with a maximum of two instances, then use an
Application Load Balancer to balance the traffic.
• D. Recreate the API using Amazon API Gateway and integrate the new API with the
existing backend service.
Question #21 Topic 1
A Solutions Architect is designing an application that uses Amazon EBS volumes. The
volumes must be backed up to a different region.
How should the Architect meet this requirement?
Question #22 Topic 1
A company is using an Amazon S3 bucket located in us-west-2 to serve videos to their
customers. Their customers are located all around the world and the videos are
requested a lot during peak hours. Customers in Europe complain about experiencing
slow download speeds, and during peak hours, customers in all locations report
experiencing HTTP 500 errors.
What can a Solutions Architect do to address these issues?
• A. Place an elastic load balancer in front of the Amazon S3 bucket to distribute the load
during peak hours.
• B. Cache the web content with Amazon CloudFront and use all Edge locations for content
delivery.
• C. Replicate the bucket in eu-west-1 and use an Amazon Route 53 failover routing policy
to determine which bucket it should serve the request to.
• D. Use an Amazon Route 53 weighted routing policy for the CloudFront domain name to
distribute the GET request between CloudFront and the Amazon S3 bucket directly.
• A. an external service to ping the VPN endpoint from outside the VPC.
• B. AWS CloudTrail to monitor the endpoint.
• C. the CloudWatch TunnelState Metric.
• D. an AWS Lambda function that parses the VPN connection logs.
• A. Auto Scaling
• B. Amazon SQS
• C. Amazon ElastiCache
• D. ELB Application Load Balancer
• A. Enable AWS CloudTrail logging in each individual region. Repeat this for all future
regions.
• B. Enable Amazon CloudWatch logs for all AWS services across all regions and
aggregate them in a single Amazon S3 bucket.
• C. Enable AWS Trusted Advisor security checks and report all security incidents for all
regions.
• D. Enable AWS CloudTrail by creating a new trail and apply the trail to all regions.
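A multi-region trail (option D) captures events from every current and future region with a single resource. A sketch of the `CreateTrail` parameters (trail and bucket names are hypothetical):

```python
# Parameters for cloudtrail.create_trail: one trail applied to all regions.
trail_params = {
    "Name": "org-audit-trail",             # hypothetical trail name
    "S3BucketName": "central-audit-logs",  # hypothetical central log bucket
    "IsMultiRegionTrail": True,            # covers all regions, including future ones
    "EnableLogFileValidation": True,       # tamper-evident log digests
}
```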
• A. Amazon DynamoDB
• B. Amazon S3
• C. Amazon EBS
• D. Amazon EFS
Question #31 Topic 1
A company plans to use AWS for all new batch processing workloads. The company's
developers use Docker containers for the new batch processing. The system design
must accommodate critical and non-critical batch processing workloads 24/7.
How should a Solutions Architect design this architecture in a cost-efficient manner?
• A. Purchase Reserved Instances to run all containers. Use Auto Scaling groups to
schedule jobs.
• B. Host a container management service on Spot Instances. Use Reserved Instances to
run Docker containers.
• C. Use Amazon ECS orchestration and Auto Scaling groups: one with Reserved Instances,
one with Spot Instances.
• D. Use Amazon ECS to manage container orchestration. Purchase Reserved Instances to
run all batch workloads at the same time.
• A. Amazon S3
• B. Amazon RDS
• C. Amazon Redshift
• D. AWS Storage Gateway
• A. Lambda@Edge
• B. AWS Lambda
• C. Amazon API Gateway
• D. Amazon EC2 instances
• A. Amazon SQS
• B. Amazon SNS
• C. Amazon ECS
• D. AWS STS
• A. Host the website on an Amazon EC2 instance with ELB and Auto Scaling, and map a
Route 53 alias record to the ELB endpoint.
• B. Host the website using AWS Elastic Beanstalk, and map a Route 53 alias record to the
Beanstalk stack.
• C. Host the website on an Amazon EC2 instance, and map a Route 53 alias record to the
public IP address of the Amazon EC2 instance.
• D. Serve the website from an Amazon S3 bucket, and map a Route 53 alias record to the
website endpoint.
• E. Create a Route 53 hosted zone, and set the NS records of the domain to use Route 53
name servers.
• A. Create an Amazon Kinesis Firehose delivery stream to store the data in Amazon S3.
• B. Create an Auto Scaling group of Amazon EC2 servers behind ELBs to write the data
into Amazon RDS.
• C. Create an Amazon SQS queue, and have the machines write to the queue.
• D. Create an Amazon EC2 server farm behind an ELB to store the data in Amazon EBS
Cold HDD volumes.
• A. Amazon S3
• B. Amazon Aurora
• C. Amazon DynamoDB
• D. Amazon Redshift
Question #41 Topic 1
A Solutions Architect is designing a mobile application that will capture receipt images
to track expenses. The Architect wants to store the images on Amazon S3.
However, uploading images through the web server will create too much traffic.
What is the MOST efficient method to store images from a mobile application on
Amazon S3?
• A. Replace the Amazon EC2 reverse proxy with an ELB internal Classic Load Balancer.
• B. Add Auto Scaling to the Amazon EC2 backend fleet.
• C. Add Auto Scaling to the Amazon EC2 reverse proxy layer.
• D. Use t2 burstable instance types for the backend fleet.
• E. Replace both the frontend and reverse proxy layers with an ELB Application Load
Balancer.
• A. Amazon EC2
• B. Amazon Kinesis Firehose
• C. Amazon EBS
• D. Amazon API Gateway
• A. Attach an Elastic IP address to each Amazon EC2 instance and add a route from the
private subnet to the public subnet.
• B. Launch a NAT gateway in the public subnet and add a route to it from the private
subnet.
• C. Launch Amazon EC2 instances in the public subnet and change the security group to
allow outbound traffic on port 80.
• D. Launch a NAT gateway in the private subnet and deploy a NAT instance in the private
subnet.
• A. Create a private subnet for the Amazon EC2 instances and a public subnet for the
Amazon RDS cluster.
• B. Create a private subnet for the Amazon EC2 instances and a private subnet for the
Amazon RDS cluster.
• C. Create a public subnet for the Amazon EC2 instances and a private subnet for the
Amazon RDS cluster.
• D. Create a public subnet for the Amazon EC2 instances and a public subnet for the
Amazon RDS cluster.
Question #51 Topic 1
A Solutions Architect is designing a solution for a media company that will stream large
amounts of data from an Amazon EC2 instance. The data streams are typically large
and sequential, and must be able to support up to 500 MB/s.
Which storage type will meet the performance requirements of this application?
• A. Create an IAM role that allows access from the corporate network to Amazon S3.
• B. Configure a proxy on Amazon EC2 and use an Amazon S3 VPC endpoint.
• C. Use Amazon API Gateway to do IP whitelisting.
• D. Configure IP whitelisting on the customer's gateway.
• A. Create an IAM access and secret key, and store it in the Lambda function.
• B. Create an IAM role to the Lambda function with permissions to list all Amazon RDS
instances.
• C. Create an IAM role to Amazon RDS with permissions to list all Amazon RDS instances.
• D. Create an IAM access and secret key, and store it in an encrypted RDS database.
• A. Amazon S3
• B. Amazon DynamoDB
• C. Amazon Kinesis
• D. Amazon EFS
Topic 1
A Solutions Architect is designing a web application that is running on an Amazon EC2
instance. The application stores data in DynamoDB. The Architect needs to secure
access to the DynamoDB table.
What combination of steps does AWS recommend to achieve secure authorization?
(Select two.)
• A. Store an access key on the Amazon EC2 instance with rights to the DynamoDB table.
• B. Attach an IAM user to the Amazon EC2 instance.
• C. Create an IAM role with permissions to write to the DynamoDB table.
• D. Attach an IAM role to the Amazon EC2 instance.
• E. Attach an IAM policy to the Amazon EC2 instance.
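The recommended pattern (options C and D) is a least-privilege IAM role assumed by the instance via an instance profile, so no long-term keys live on the host. A sketch of the two policy documents involved (account ID, table name, and region are hypothetical):

```python
import json

# Trust policy: allows the EC2 service to assume the role
# (the role is attached to the instance through an instance profile).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: least-privilege write access to one table (hypothetical ARN).
table_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:BatchWriteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}

print(json.dumps(trust_policy, indent=2))
```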
• A. Create a network ACL on the web server's subnet, and allow HTTPS inbound and
MySQL outbound. Place both database and web servers on the same subnet.
• B. Open an HTTPS port on the security group for web servers and set the source to
0.0.0.0/0. Open the MySQL port on the database security group and attach it to the
MySQL instance. Set the source to Web Server Security Group.
• C. Create a network ACL on the web server's subnet, and allow HTTPS inbound, and
specify the source as 0.0.0.0/0. Create a network ACL on a database subnet, allow
MySQL port inbound for web servers, and deny all outbound traffic.
• D. Open the MySQL port on the security group for web servers and set the source to
0.0.0.0/0. Open the HTTPS port on the database security group and attach it to the
MySQL instance. Set the source to Web Server Security Group.
• A. AWS Lambda
• B. Auto Scaling
• C. AWS Elastic Beanstalk
• D. Elastic Load Balancing
• A. Amazon SQS
• B. Auto Scaling group
• C. Amazon EC2 security group
• D. Amazon ELB
Question #71 Topic 1
A company has an application that stores sensitive data. The company is required by
government regulations to store multiple copies of its data.
What would be the MOST resilient and cost-effective option to meet this requirement?
• A. Amazon EFS
• B. Amazon RDS
• C. AWS Storage Gateway
• D. Amazon S3
• A. Create an Amazon S3 bucket and store all of the documents in this bucket.
• B. Create an Amazon EBS volume and allow multiple users to mount that volume to their
EC2 instance(s).
• C. Use Amazon Glacier to store all of the documents.
• D. Create an Amazon Elastic File System (Amazon EFS) to store and share the
documents.
• A. Use AWS Key Management Service and move the encrypted data to Amazon S3.
• B. Use an application-specific encryption API with AWS server-side encryption.
• C. Use encrypted EBS storage volumes with AWS-managed keys.
• D. Use third-party tools to encrypt the EBS data volumes with Key Management Service
Bring Your Own Keys.
• A. Store the AWS Access Key ID/Secret Access Key combination in software comments.
• B. Assign an IAM user to the Amazon EC2 instance.
• C. Assign an IAM role to the Amazon EC2 instance.
• D. Enable multi-factor authentication for the AWS root account.
• A. Amazon Aurora
• B. Amazon Redshift
• C. Amazon DynamoDB
• D. Amazon RDS MySQL
Question #81 Topic 1
A company hosts a two-tier application that consists of a publicly accessible web server
that communicates with a private database. Only HTTPS port 443 traffic to the web
server must be allowed from the Internet.
Which of the following options will achieve these requirements? (Choose two.)
• A. Security group rule that allows inbound Internet traffic for port 443.
• B. Security group rule that denies all inbound Internet traffic except port 443.
• C. Network ACL rule that allows port 443 inbound and all ports outbound for Internet
traffic.
• D. Security group rule that allows Internet traffic for port 443 in both inbound and
outbound.
• E. Network ACL rule that allows port 443 for both inbound and outbound for all Internet
traffic.
• A. Amazon EFS
• B. Amazon S3
• C. Amazon EBS
• D. Amazon ElastiCache
• A. Amazon S3
• B. Amazon EFS
• C. Amazon EBS
• D. Cached Volumes
• A. Use AWS IAM authorization and add least-privileged permissions to each respective
IAM role.
• B. Use an API Gateway custom authorizer to invoke an AWS Lambda function to validate
each user's identity.
• C. Use Amazon Cognito user pools to provide built-in user management.
• D. Use Amazon Cognito user pools to integrate with external identity providers.
• A. Redesign the website to use Amazon API Gateway, and use AWS Lambda to deliver
content.
• B. Add server instances using Amazon EC2 and use Amazon Route 53 with a failover
routing policy.
• C. Serve the images and videos via an Amazon CloudFront distribution created using the
news site as the origin.
• D. Use Amazon ElastiCache for Redis for caching and reducing the request load on
the origin.
• A. Use the AWS CLI to create queues using AWS IAM Access Keys.
• B. Write a script to create the Amazon SQS queue using AWS Lambda.
• C. Use AWS Elastic Beanstalk to automatically create the Amazon SQS queues.
• D. Use AWS CloudFormation Templates to manage the Amazon SQS queue creation.
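Option D treats queue creation as infrastructure as code. A minimal sketch of a CloudFormation template declaring an SQS queue, written here as the equivalent JSON structure (logical and queue names are hypothetical):

```python
import json

# A minimal CloudFormation template (JSON form) that creates one SQS queue.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "OrderQueue": {                       # hypothetical logical ID
            "Type": "AWS::SQS::Queue",
            "Properties": {
                "QueueName": "order-queue",   # hypothetical queue name
                "VisibilityTimeout": 30,
            },
        }
    },
    "Outputs": {"QueueUrl": {"Value": {"Ref": "OrderQueue"}}},
}

print(json.dumps(template, indent=2))
```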
• A. One public subnet for the load balancer tier, one public subnet for the front-end tier,
and one private subnet for the backend tier.
• B. One shared public subnet for all tiers of the application.
• C. One public subnet for the load balancer tier and one shared private subnet for the
application tiers.
• D. One shared private subnet for all tiers of the application.
Question #91 Topic 1
Two Auto Scaling applications, Application A and Application B, currently run within a
shared set of subnets. A Solutions Architect wants to make sure that
Application A can make requests to Application B, but Application B should be denied
from making requests to Application A.
Which is the SIMPLEST solution to achieve this policy?
• A. Using security groups that reference the security groups of the other application
• B. Using security groups that reference the application server's IP addresses
• C. Using Network Access Control Lists to allow/deny traffic based on application IP
addresses
• D. Migrating the applications to separate subnets from each other
• A. Amazon SNS
• B. AWS STS
• C. Amazon SQS
• D. Amazon Route 53
• E. AWS Glue
• A. Amazon S3
• B. Amazon Glacier
• C. Amazon EFS
• D. AWS Storage Gateway
• A. Amazon RDS
• B. Amazon Redshift
• C. Amazon DynamoDB
• D. Amazon S3
• A. Create a bastion host that authenticates users against the corporate directory.
• B. Create a bastion host with security group rules that only allow traffic from the corporate
network.
• C. Attach an IAM role to the bastion host with relevant permissions.
• D. Configure the web servers' security group to allow SSH traffic from a bastion host.
• E. Deny all SSH traffic from the corporate network in the inbound network ACL.
• A. Continuously replicate the production database server to Amazon RDS. Use AWS
CloudFormation to deploy the application and any additional servers if necessary.
• B. Continuously replicate the production database server to Amazon RDS. Create one
application load balancer and register on-premises servers. Configure ELB Application
Load Balancer to automatically deploy Amazon EC2 instances for application and
additional servers if the on-premises application is down.
• C. Use a scheduled Lambda function to replicate the production database to AWS. Use
Amazon Route 53 health checks to deploy the application automatically to Amazon S3 if
production is unhealthy.
• D. Use a scheduled Lambda function to replicate the production database to AWS.
Register on-premises servers to an Auto Scaling group and deploy the application and
additional servers if production is unavailable.
• A. Amazon Aurora
• B. Amazon RDS MySQL with Multi-AZ enabled
• C. Amazon DynamoDB
• D. Amazon ElastiCache
• A. Add an S3 lifecycle rule to move any files from the bucket in us-east-1 to the bucket in
ap-southeast-2.
• B. Create a Lambda function to be triggered for every new file in us-east-1 that copies the
file to the bucket in ap-southeast-2.
• C. Use SNS to notify the bucket in ap-southeast-2 to create a file whenever the file is
created in the bucket in us-east-1.
• D. Enable versioning and configure cross-region replication from the bucket in us-east-1
to the bucket in ap-southeast-2.
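Option D's cross-region replication requires versioning on both buckets and a replication configuration naming the destination. A sketch of the payload for `PutBucketReplication` (role ARN and bucket names are hypothetical):

```python
# Replication configuration body for s3.put_bucket_replication on the
# us-east-1 source bucket. Versioning must already be enabled on both buckets.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",   # hypothetical IAM role
    "Rules": [{
        "ID": "us-east-1-to-ap-southeast-2",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": ""},                           # replicate every object
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {"Bucket": "arn:aws:s3:::reports-ap-southeast-2"},
    }],
}
```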
• A. Use Amazon CloudWatch Events to invoke an AWS Lambda function that can launch
On-Demand Instances.
• B. Regularly store data from the application on Amazon DynamoDB. Increase the
maximum number of instances in the AWS Auto Scaling group.
• C. Manually place a bid for additional Spot Instances at a higher price in the same AWS
Region and Availability Zone.
• D. Ensure that the Amazon Machine Image associated with the application has the latest
configurations for the launch configuration.
• A. Use asynchronous replication for standby to maximize throughput during peak demand.
• B. Offload SELECT queries that can tolerate stale data to READ replica.
• C. Offload SELECT and UPDATE queries to READ replica.
• D. Offload SELECT query that needs the most current data to READ replica.
Question #111 Topic 1
A Solutions Architect is deploying a new production MySQL database on AWS. It is
critical that the database is highly available.
What should the Architect do to achieve this goal with Amazon RDS?
• A. Create a read replica of the primary database and deploy it in a different AWS Region.
• B. Enable multi-AZ to create a standby database in a different Availability Zone.
• C. Enable multi-AZ to create a standby database in a different AWS Region.
• D. Create a read replica of the primary database and deploy it in a different Availability
Zone.
• A. Amazon S3
• B. Amazon DynamoDB
• C. Amazon RDS
• D. Amazon EBS
Question #114 Topic 1
A company hosts a website on premises. The website has a mix of static and dynamic
content, but users experience latency when loading static files.
Which AWS service can help reduce latency?
• A. Amazon DynamoDB
• B. Amazon Aurora MySQL
• C. Amazon RDS MySQL
• D. Amazon Redshift
• A. Amazon RDS
• B. Amazon Redshift
• C. Amazon DynamoDB Accelerator
• D. Amazon ElastiCache
• A. Use an Amazon Redshift database. Copy the production database into Redshift and allow
the team to query it.
• B. Use an Amazon RDS read replica of the production database and allow the team to
query against it.
• C. Use multiple Amazon EC2 instances running replicas of the production database,
placed behind a load balancer.
• D. Use an Amazon DynamoDB table to store a copy of the data.
Question #121 Topic 1
A company has a legal requirement to store point-in-time copies of its Amazon RDS
PostgreSQL database instance in facilities that are at least 200 miles apart.
Use of which of the following provides the easiest way to comply with this requirement?
• A. Dynamic
• B. Scheduled
• C. Manual
• D. Lifecycle
• A. Amazon S3
• B. Amazon EBS
• C. Amazon Glacier
• D. Amazon EFS
Question #124 Topic 1
An online company wants to conduct real-time sentiment analysis about its products
from its social media channels using SQL.
Which of the following solutions has the LOWEST cost and operational burden?
• A. Encrypt the files on the client side and store the files on Amazon Glacier, then decrypt
the reports on the client side.
• B. Move the files to Amazon ElastiCache and provide a username and password for
downloading the reports.
• C. Specify the use of AWS KMS server-side encryption at the time of an object creation
on Amazon S3.
• D. Store the files on Amazon S3 and use the application to generate S3 pre-signed URLs
to users.
• A. Amazon DynamoDB
• B. Amazon EBS Throughput Optimized HDD Volumes
• C. Amazon EBS Cold HDD Volumes
• D. Amazon ElastiCache
Question #131 Topic 1
An on-premises database is experiencing significant performance problems when
running SQL queries. With 10 users, the lookups are performing as expected.
As the number of users increases, the lookups take three times longer than expected to
return values to an application.
Which action should a Solutions Architect take to maintain performance as the user
count increases?
• A. AWS Config
• B. AWS CloudFormation
• C. Amazon CloudWatch
• D. AWS Service Catalog
• A. Launch an Amazon RDS instance with encryption enabled. Enable encryption for logs
and backups.
• B. Launch an Amazon RDS instance. Enable encryption for database, logs and backups.
• C. Launch an Amazon RDS instance with encryption enabled. Logs and backups are
automatically encrypted.
• D. Launch an Amazon RDS instance. Enable encryption for backups. Encrypt logs with a
database-engine feature.
• A. Amazon CloudWatch
• B. AWS CloudFormation
• C. AWS Lambda
• D. Amazon SQS
• A. AWS CloudHSM
• B. AWS Trusted Advisor
• C. Server Side Encryption (SSE-S3)
• D. Server Side Encryption (SSE-KMS)
Question #141 Topic 1
A Solutions Architect is designing a customer order processing application that will likely
have high usage spikes.
What should the Architect do to ensure that customer orders are not lost before being
written to an Amazon RDS database? (Choose two.)
• A. Use an Amazon Route 53 latency routing policy to route traffic to an Amazon EC2
instance with the least lag time.
• B. Use Amazon S3 to cache static elements of the website requests.
• C. Use an Auto Scaling group to scale the number of EC2 instances to match the site
traffic.
• D. Use Amazon CloudFront to serve static assets to decrease the load on the EC2
instances.
• A. Amazon SNS
• B. AWS Lambda with sequential dispatch
• C. A FIFO queue in Amazon SQS
• D. A standard queue in Amazon SQS
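An SQS FIFO queue (option C) preserves message order and provides exactly-once processing, which a standard queue does not. A sketch of the `CreateQueue` parameters (queue name is hypothetical):

```python
# Parameters for sqs.create_queue. FIFO queue names must end in ".fifo".
create_queue_params = {
    "QueueName": "orders.fifo",                # hypothetical name
    "Attributes": {
        "FifoQueue": "true",                   # strict ordering, exactly-once processing
        "ContentBasedDeduplication": "true",   # dedupe by SHA-256 of the message body
    },
}
```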
• A. Each front-end node should send votes to an Amazon SQS queue. Provision worker
instances to read the SQS queue and process the message information into the RDBMS
database.
• B. As the load on the database increases, horizontally-scale the RDBMS database with
additional memory-optimized instances. When voting has ended, scale down the
additional instances.
• C. Re-provision the RDBMS database with larger, memory-optimized instances. When
voting ends, re-provision the back-end database with smaller instances.
• D. Send votes from each front-end node to Amazon DynamoDB. Provision worker
instances to process the votes in DynamoDB into the RDBMS database.
• A. Subscribe an Amazon SQS queue to the Amazon SNS topic and trigger the Lambda
function from the queue.
• B. Configure Lambda to write failures to an SQS Dead Letter Queue.
• C. Configure a Dead Letter Queue for the Amazon SNS topic.
• D. Configure the Amazon SNS topic to invoke the Lambda function synchronously.
Question #151 Topic 1
A customer owns a MySQL database that is accessed by various clients who expect, at
most, 100 ms latency on requests. Once a record is stored in the database, it is rarely
changed. Clients only access one record at a time.
Database access has been increasing exponentially due to increased client demand.
The resultant load will soon exceed the capacity of the most expensive hardware
available for purchase. The customer wants to migrate to AWS, and is willing to change
database systems.
Which service would alleviate the database load issue and offer virtually unlimited
scalability for the future?
• A. Amazon RDS
• B. Amazon DynamoDB
• C. Amazon Redshift
• D. AWS Data Pipeline
• A. Amazon Redshift
• B. Amazon Aurora
• C. Amazon DynamoDB
• D. Amazon S3
• A. Amazon ECS
• B. Amazon EC2 Spot instances
• C. AWS Lambda functions
• D. AWS Elastic Beanstalk
• A. Generate an access key ID and a secret key, and assign an IAM role with least
privilege.
• B. Create an IAM policy granting access to all services and assign it to the Amazon EC2
instance profile.
• C. Create an IAM role granting least privilege and assign it to the Amazon EC2 instance
profile.
• D. Generate temporary access keys to grant users temporary access to the Amazon EC2
instance.
• A. Create a crontab job script in each instance to push the logs regularly to Amazon S3.
• B. Install and configure Amazon CloudWatch Logs agent in the Amazon EC2 instances.
• C. Enable Amazon CloudWatch Events in the AWS Management Console.
• D. Enable AWS CloudTrail to map all API calls invoked by the applications.
• A. Amazon S3
• B. Amazon DynamoDB
• C. Amazon RDS
• D. Amazon Redshift
• A. Create a bastion host in a public subnet, and use the bastion host to connect to the
database.
• B. Log in to the web servers in the public subnet to connect to the database.
• C. Perform DB maintenance after using SSH to connect to the NAT Gateway in a public
subnet.
• D. Create an IPSec VPN tunnel between the customer site and the VPC, and use the
VPN tunnel to connect to the database.
• E. Attach an Elastic IP address to the database.
Question #161 Topic 1
A web application running on Amazon EC2 instances writes data synchronously to an
Amazon DynamoDB table configured for 60 write capacity units. During normal
operation the application writes 50 KB/s to the table, but can scale up to 500 KB/s during
peak hours. The application is currently receiving throttling errors from the
DynamoDB table during peak hours.
What is the MOST cost-efficient change to support the increased traffic with minimal
changes to the application?
• A. Use Amazon SQS to manage the write operations to the DynamoDB table.
• B. Change DynamoDB table configuration to 600 write capacity units.
• C. Increase the number of Amazon EC2 instances to support the traffic.
• D. Configure Amazon DynamoDB Auto Scaling to handle the extra demand.
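The arithmetic behind the throttling, sketched under the standard rule that one write capacity unit covers one write per second of an item up to 1 KB:

```python
import math

def required_wcu(throughput_kb_per_s: float, item_size_kb: float = 1.0) -> int:
    """WCUs needed: each write of an item consumes ceil(item_size / 1 KB) units."""
    writes_per_second = throughput_kb_per_s / item_size_kb
    units_per_write = math.ceil(item_size_kb)
    return math.ceil(writes_per_second * units_per_write)

print(required_wcu(50))   # normal load: 50 WCU, under the 60 provisioned
print(required_wcu(500))  # peak load: 500 WCU, far above 60 -- hence the throttling
```

DynamoDB Auto Scaling (option D) adjusts provisioned capacity between a configured floor and ceiling, absorbing the peak without application changes and without paying for 600 WCU around the clock.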
• A. Bucket policy
• B. Object tagging
• C. CORS configuration
• D. Lifecycle policy
• A. Use one Amazon EC2 Reserved Instance and use an Auto Scaling group to add and
remove EC2 instances based on CPU utilization.
• B. Use one Amazon EC2 On-Demand instance and use an Auto Scaling group to add and
remove EC2 instances based on CPU utilization.
• C. Use one Amazon EC2 On-Demand instance and use an Auto Scaling Group scheduled
action to add three EC2 Spot instances at 7:30 AM and remove three instances at 6:10
PM.
• D. Use one Amazon EC2 Reserved Instance and use an Auto Scaling Group scheduled
action to add three EC2 On-Demand instances at 7:30 AM and remove three instances at
6:10 PM.
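Option D's scheduled actions map directly onto the Auto Scaling `PutScheduledUpdateGroupAction` API. A sketch of the two action parameter sets, assuming the baseline of one Reserved Instance plus three scheduled On-Demand Instances (group and action names are hypothetical, times in UTC):

```python
# Scale out to 4 instances at 7:30 AM (1 reserved baseline + 3 on-demand).
scale_out = {
    "AutoScalingGroupName": "web-asg",           # hypothetical group name
    "ScheduledActionName": "daily-scale-out",
    "Recurrence": "30 7 * * *",                  # cron: 7:30 AM every day
    "MinSize": 4,
    "DesiredCapacity": 4,
}

# Scale back to the single reserved baseline at 6:10 PM.
scale_in = {
    "AutoScalingGroupName": "web-asg",
    "ScheduledActionName": "daily-scale-in",
    "Recurrence": "10 18 * * *",                 # cron: 6:10 PM every day
    "MinSize": 1,
    "DesiredCapacity": 1,
}
```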
• A. Two public subnets for the elastic load balancer, two public subnets for the web
servers, and two public subnets for Amazon RDS.
• B. One public subnet for the elastic load balancer, two private subnets for the web
servers, and two private subnets for Amazon RDS.
• C. One public subnet for the elastic load balancer, one public subnet for the web servers,
and one private subnet for the database.
• D. Two public subnets for the elastic load balancer, two private subnets for the web
servers, and two private subnets for RDS.
Question #165 Topic 1
A workload in an Amazon VPC consists of a single web server launched from a custom
AMI. Session state is stored in a database.
How should the Solutions Architect modify this workload to be both highly available and
scalable?
• A. Create a launch configuration with a desired capacity of two web servers across
multiple Availability Zones. Create an Auto Scaling group with the AMI ID of the web
server image. Use Amazon Route 53 latency-based routing to balance traffic across the
Auto Scaling group.
• B. Create a launch configuration with the AMI ID of the web server image. Create an Auto
Scaling group using the newly-created launch configuration, and a desired capacity of two
web servers across multiple regions. Use an Application Load Balancer (ALB) to balance
traffic across the Auto Scaling group.
• C. Create a launch configuration with the AMI ID of the web server image. Create an Auto
Scaling group using the newly-created launch configuration, and a desired capacity of two
web servers across multiple Availability Zones. Use an ALB to balance traffic across the
Auto Scaling group.
• D. Create a launch configuration with the AMI ID of the web server image. Create an Auto
Scaling group using the newly-created launch configuration, and a desired capacity of two
web servers across multiple Availability Zones. Use Amazon Route 53 weighted routing to
balance traffic across the Auto Scaling group.
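Option C's combination of a launch configuration, a multi-AZ Auto Scaling group, and an ALB can be sketched as the parameters for `CreateAutoScalingGroup` (names, AZs, and the target group ARN are hypothetical):

```python
# Parameters for autoscaling.create_auto_scaling_group: two web servers
# spread across Availability Zones, registered with an ALB target group.
asg_params = {
    "AutoScalingGroupName": "web-asg",        # hypothetical group name
    "LaunchConfigurationName": "web-lc",      # created from the custom AMI ID
    "MinSize": 2,
    "MaxSize": 4,
    "DesiredCapacity": 2,                     # desired capacity of two web servers
    "AvailabilityZones": ["us-east-1a", "us-east-1b"],   # multiple AZs for HA
    "TargetGroupARNs": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
    ],
}
```

Because session state lives in a database, any instance can serve any request, which is what makes this horizontally scalable.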
• A. Auto Scaling
• B. Elastic Beanstalk
• C. EC2 Container Service
• D. CloudFormation
• A. Amazon S3
• B. Amazon DynamoDB
• C. Amazon EFS
• D. Amazon EBS
• A. Scale out the EC2 instances to ensure that the environment scales up and down based
on the highest load.
• B. Implement Amazon DynamoDB Accelerator to improve database performance and
remove the need to scale the read/write units.
• C. Use a scheduled job to scale out EC2 before 9:00 a.m. on Monday and to scale down
after 9:30 a.m.
• D. Use Amazon CloudFront to cache web requests and reduce the load on EC2 and
DynamoDB.
Question #171 Topic 1
As part of a migration strategy, a Solutions Architect needs to analyze workloads that
can be optimized for performance and cost. The Solutions Architect has identified a
stateless application that serves static content as a potential candidate to move to the
cloud. The Solutions Architect has the flexibility to choose an identity solution between
Facebook, Twitter, and Amazon.
Which AWS solution offers flexibility and ease of use, and the LEAST operational
overhead for this migration?
• A. Use AWS Identity and Access Management (IAM) for managing identities, and migrate
the application to run on Amazon S3, Amazon API Gateway, and AWS Lambda.
• B. Use a third-party solution for managing identities, and migrate the application to run on
Amazon S3, EC2 Spot Instances, and Amazon EC2.
• C. Use Amazon Cognito for managing identities, and migrate the application to run on
Amazon S3, Amazon API Gateway, and AWS Lambda.
• D. Use Amazon Cognito for managing identities, and migrate the application to run on
Amazon S3, EC2 Spot Instances, and Amazon EC2.
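Option C of Question #171 pairs Amazon Cognito federated identities with a serverless stack (S3, API Gateway, Lambda). A minimal sketch of the identity pool parameters such a setup might use — the pool name and app IDs are hypothetical placeholders; the provider keys are the domains Cognito expects for Facebook and Login with Amazon:

```python
# Sketch of Amazon Cognito identity pool parameters for social sign-in.
# Pool name and app/client IDs are placeholders, not real values.
identity_pool_params = {
    "IdentityPoolName": "StaticSitePool",          # hypothetical name
    "AllowUnauthenticatedIdentities": False,       # require a login
    "SupportedLoginProviders": {
        "graph.facebook.com": "FACEBOOK_APP_ID",   # placeholder app ID
        "www.amazon.com": "AMAZON_CLIENT_ID",      # placeholder client ID
    },
}

# In a real deployment these parameters would be passed to:
# boto3.client("cognito-identity").create_identity_pool(**identity_pool_params)

def validate_pool_params(params):
    """Basic sanity checks before calling the API."""
    assert params["IdentityPoolName"], "pool needs a name"
    assert isinstance(params["SupportedLoginProviders"], dict)
    return True
```

Keeping the parameters in a plain dict like this makes them easy to check before any API call is made.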
• A. Change the Availability Zones in which the instances were created to another
Availability Zone in the same region with a lower cost.
• B. Replace all On-Demand Instances with Spot Instances in the Auto Scaling group.
• C. Purchase Reserved Instances for the minimum number of Auto Scaling instances.
• D. Reduce the number of minimum instances to 0. New requests to the Application Load
Balancer create new instances.
• A. Amazon EFS
• B. Amazon S3
• C. Amazon Glacier
• D. Amazon EBS
• A. 2
• B. 3
• C. 4
• D. 6
• A. Create a lifecycle policy that moves Amazon S3 data to Amazon S3 One Zone-
Infrequent Access storage after 7 days. After 30 days, move the data to Amazon Glacier.
• B. Keep the data on Amazon S3, and create a lifecycle policy to move S3 data to Amazon
Glacier after 7 days.
• C. Move all Amazon S3 data to S3 Standard-Infrequent Access storage, and create a
lifecycle policy to move the data to Amazon Glacier after 7 days.
• D. Keep the data on Amazon S3, then create a lifecycle policy to move the data to S3
Standard-Infrequent Access storage after 7 days.
• A. Add an IAM policy for IAM database access to the Lambda execution role.
• B. Store a one-way hash of the password in the Lambda function.
• C. Have the Lambda function use the AWS Systems Manager Parameter Store.
• D. Connect to the Amazon RDS for SQL Server instance by using a role assigned to the
Lambda function.
• A. Replicate relevant data between Amazon Redshift and Amazon DynamoDB. Data
scientists use Redshift. Dashboards use DynamoDB.
• B. Configure auto-replication between Amazon Redshift and Amazon RDS. Data
scientists use Redshift. Dashboards use RDS.
• C. Use Amazon Redshift for both requirements, with separate query queues configured in
workload management.
• D. Use Amazon Redshift for Data Scientists. Run automated dashboard queries against
Redshift and store the results in Amazon ElastiCache. Dashboards query ElastiCache.
Question #181 Topic 1
A company has an application that uses Amazon CloudFront for content that is hosted
on an Amazon S3 bucket. After an unexpected refresh, the users are still seeing old
content.
Which step should the Solutions Architect take to ensure that new content is displayed?
• A. Perform a cache refresh on the CloudFront distribution that is serving the content.
• B. Perform an invalidation on the CloudFront distribution that is serving the content.
• C. Create a new cache behavior path with the updated content.
• D. Change the TTL value for removing the old objects.
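The fix in option B of Question #181 is an invalidation request against the distribution. A sketch of the request payload — the distribution ID is a placeholder, and note that `"/*"` invalidates every cached object, which counts against the invalidation quota:

```python
import time

# Payload for a CloudFront invalidation request. The distribution ID is a
# placeholder. CallerReference must be unique per request; a timestamp is a
# common choice.
distribution_id = "EDFDVBD6EXAMPLE"  # placeholder
invalidation_batch = {
    "Paths": {"Quantity": 1, "Items": ["/*"]},
    "CallerReference": str(int(time.time())),
}

# Real call would be:
# boto3.client("cloudfront").create_invalidation(
#     DistributionId=distribution_id, InvalidationBatch=invalidation_batch)
```

Narrower paths (e.g. `/images/logo.png`) are cheaper than `"/*"` when only a few objects changed.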
• A. Amazon EC2
• B. Amazon RDS
• C. AWS CloudTrail
• D. Amazon DynamoDB
• A. An inbound rule allowing traffic from the security group attached to the ALB
• B. An inbound rule allowing traffic from the network ACLs attached to the ALB
• C. An outbound rule allowing traffic to the security group attached to the ALB
• D. An outbound rule blocking all traffic to the Internet
• A. Re-host the application on Amazon EC2 with lift and shift of existing application code.
Configure an Elastic Load Balancing load balancer to handle incoming requests. Use
Amazon CloudWatch alarms to receive notification of scaling issues. Increase and
decrease the size of the Amazon EC2 instances using AWS CLI or AWS Management
Console as required.
• B. Re-architect the application as a three-tier application. Move the database to Amazon
RDS. Use read replicas and Amazon ElastiCache with RDS for better performance. Use
an Application Load Balancer to forward incoming requests to web and application servers
running on-premises.
• C. Re-platform the application as a three-tier application. Use Elastic Load Balancing for
incoming requests. Use EC2 for web and application tiers. Use RDS at the database tier.
Use CloudWatch alarms and Auto Scaling for horizontal scaling at the web tier.
• D. Re-architect the application as Service Oriented Architecture (SOA). Run database and
application servers on-premises. Run web-facing EC2 servers. Use an Enterprise Service
Bus to handle communications between different parts of the application running on-
premises and in the cloud.
• A. Configure a Network Load Balancer with listeners for appropriate path patterns for the
target groups.
• B. Configure an Application Load Balancer with host-based routing based on the domain
field in the HTTP header.
• C. Configure a Network Load Balancer and enable cross-zone load balancing to ensure
that all EC2 instances are used.
• D. Configure an Application Load Balancer with listeners for appropriate path patterns for
the target group.
• A. Amazon EC2
• B. Amazon API Gateway
• C. AWS Elastic Beanstalk
• D. Amazon EC2 Container Service
• A. Security groups
• B. Network ACL
• C. AWS WAF
• D. AWS Shield
Question #191 Topic 1
A customer is looking for a storage archival solution for 1,000 TB of data. The customer
requires that the solution be durable and data be available within a few hours of
requesting it, but not exceeding a day. The solution should be as cost-effective as
possible. To meet security compliance policies, data must be encrypted at rest. The
customer expects they will need to fetch the data two times in a year.
Which storage solution should a Solutions Architect recommend to meet these
requirements?
• A. Launch the instances in an Auto Scaling group with an Elastic Load Balancing health
check.
• B. Launch instances in multiple Availability Zones and set the load balancer to Multi-AZ.
• C. Add CloudWatch alarm actions for each instance to restart if the Status Check (Any)
fails.
• D. Add Route 53 records for each instance with an instance health check.
• A. Amazon S3, AWS Lambda, Amazon API Gateway, and Amazon DynamoDB
• B. Amazon CloudFront, AWS Lambda, API Gateway, and Amazon RDS
• C. Amazon CloudFront, Elastic Load Balancing, Amazon EC2, and Amazon RDS
• D. Amazon S3, Amazon CloudFront, AWS Lambda, Amazon API Gateway, and Amazon
DynamoDB.
• A. Create a security group for the web tier instances that allows inbound traffic only over
port 443.
• B. Enforce Transparent Data Encryption (TDE) on the RDS database.
• C. Create a network ACL that allows inbound traffic only over port 443.
• D. Configure the web servers to communicate with RDS by using SSL, and issue
certificates to the web tier EC2 instances.
• E. Create a customer master key in AWS KMS and apply it to encrypt the RDS instance.
• A. Use HTTPS for traffic over VPC peering between the VPC and the on-premises
datacenter.
• B. Use HTTPS for traffic over the Internet between the on-premises server and the
Amazon EC2 instance.
• C. Use HTTPS for traffic over a VPN connection between the VPC and the on-premises
datacenter.
• D. Use HTTPS for traffic over gateway VPC endpoints that have been configured for the
Amazon EC2 instance.
• A. AWS KMS
• B. HTTPS
• C. SFTP
• D. FTPS
Question #201 Topic 1
A Solutions Architect is trying to bring a data warehouse workload to an Amazon EC2
instance. The data will reside in Amazon EBS volumes and full table scans will be
executed frequently.
What type of Amazon EBS volume would be most suitable in this scenario?
• A. Increase the value for the health check interval set on the ELB load balancer.
• B. Change the thresholds set on the Auto Scaling group health check.
• C. Change the health check type to ELB for the Auto Scaling group.
• D. Change the health check set on the ELB load balancer to use TCP rather than HTTP
checks.
• A. Redis Auth
• B. AWS Single Sign-On
• C. IAM database authentication
• D. VPC security group for Redis
• A. Configure the CloudWatch Alarm to send the notification to an Amazon SNS topic
whenever there is an alarm.
• B. Configure the CloudWatch Alarm to send the notification to a mobile phone number
whenever there is an alarm.
• C. Configure the CloudWatch Alarm to send the notification to the email addresses
whenever there is an alarm.
• D. Create the platform endpoints for mobile devices and subscribe the SNS topic with
platform endpoints.
• E. Subscribe the SNS topic with an Amazon SQS queue, and poll the messages
continuously from the queue. Use each mobile platform's libraries to send the message to
the mobile application.
• A. Put an Amazon ElastiCache cluster in front of the database and use lazy loading to
limit database access during peak periods.
• B. Put an Amazon Elasticsearch domain in front of the database and use a Write-Through
cache to reduce database access during peak periods.
• C. Configure an Amazon RDS Auto Scaling group to automatically scale the RDS
instance during load spikes.
• D. Change the Amazon RDS instance storage type from General Purpose SSD to
provisioned IOPS SSD.
Question #211 Topic 1
A Solutions Architect must design an Amazon DynamoDB table to store data about
customer activities. The data is used to analyze recent customer behavior, so data that
is less than a week old is heavily accessed and older data is accessed infrequently.
Data that is more than one month old never needs to be referenced by the application,
but needs to be archived for year-end analytics.
What is the MOST cost-efficient way to meet these requirements? (Choose two.)
• A. Use DynamoDB time-to-live settings to expire items after a certain time period.
• B. Provision a higher write capacity unit to minimize the number of partitions.
• C. Create separate tables for each week's data with higher throughput for the current
week.
• D. Pre-process data to consolidate multiple records to minimize write operations.
• E. Export the old table data from DynamoDB to Amazon S3 using AWS Data Pipeline,
and delete the old table.
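Options A and E of Question #211 work together: TTL expires items automatically once they age out, and the old data is exported to S3 before deletion. DynamoDB's TTL feature expects an epoch-seconds attribute on each item; a sketch of stamping items with a 30-day expiry (the attribute name is a hypothetical choice and must match the table's TTL setting):

```python
import time

SECONDS_PER_DAY = 86_400

def with_ttl(item, days=30, now=None):
    """Return a copy of a DynamoDB item with an epoch-seconds TTL attribute.

    DynamoDB deletes items on a best-effort basis (typically within a couple
    of days) once the TTL attribute value is in the past. The attribute name
    'expires_at' is hypothetical.
    """
    now = int(now if now is not None else time.time())
    stamped = dict(item)
    stamped["expires_at"] = now + days * SECONDS_PER_DAY
    return stamped

# Fixed 'now' used here only so the result is deterministic.
record = with_ttl({"customer_id": "c-123", "event": "click"},
                  days=30, now=1_700_000_000)
```

Because TTL deletions are free, this avoids paying write capacity to purge old items manually.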
• A. Use an AWS Classic Load Balancer with a host-based routing option to route traffic to
the correct service.
• B. Use the AWS CLI to update Amazon Route 53 hosted zone to route traffic as services
get updated.
• C. Use an AWS Application Load Balancer with host-based routing option to route traffic
to the correct service.
• D. Use Amazon CloudFront to manage and route traffic to the correct service.
• A. Create a lifecycle rule to transition the documents from the STANDARD storage class
to the STANDARD_IA storage class after 15 days, and then to the GLACIER storage
class after an additional 15 days.
• B. Create a lifecycle rule to transition the documents from the STANDARD storage class
to the GLACIER storage class after 30 days.
• C. Create a lifecycle rule to transition documents from the STANDARD storage class to
the STANDARD_IA storage class after 30 days and then to the GLACIER storage class
after an additional 30 days.
• D. Create a lifecycle rule to transition the documents from the STANDARD storage class
to the GLACIER storage class after 15 days.
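The transitions described in option C above map directly onto an S3 lifecycle configuration. A sketch of the request body — the rule ID and prefix are placeholders; note that S3 requires objects to stay in STANDARD for at least 30 days before a STANDARD_IA transition, which is why options A and D are invalid:

```python
# S3 lifecycle configuration matching option C: STANDARD -> STANDARD_IA at
# 30 days, then -> GLACIER at 60 days (an additional 30). Prefix is a
# placeholder.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-documents",           # hypothetical rule name
            "Filter": {"Prefix": "documents/"},  # placeholder prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 60, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# Real call would be:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_configuration)

days = [t["Days"] for t in lifecycle_configuration["Rules"][0]["Transitions"]]
```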
• A. Set up an S3 bucket based in Paris, and enable cross-region replication from the
Oregon bucket to the Paris bucket.
• B. Create an Application Load Balancer that load balances data retrieval between the
Oregon S3 bucket and a new Paris S3 bucket.
• C. Create an Amazon CloudFront distribution with the bucket located in Oregon as the
origin and set the Maximum Time to Live (TTL) for cache behavior to 0.
• D. Set up an S3 bucket based in Paris, and enable a lifecycle management rule to
transition data from the Oregon bucket to the Paris bucket.
• A. Use time-based scaling to scale the number of instances based on periods of high
load.
• B. Modify the scaling triggers in Elastic Beanstalk to use the CPUUtilization metric.
• C. Swap the c4.large instances with the m4.large instance type.
• D. Create an additional Auto Scaling group, and configure Amazon EBS to use both Auto
Scaling groups to increase the scaling capacity.
• A. Use an API Gateway in proxy mode, and provide the API Gateway's IP address to the
external service provider.
• B. Associate a public elastic network interface to a published stage/endpoint in API
Gateway, exposing the AWS Lambda function, and provide the IP address for the public
network interface to the external party to whitelist.
• C. Deploy the Lambda function in private subnets and route outbound traffic through a
NAT gateway. Provide the NAT gateway's Elastic IP address to the external service
provider.
• D. Provide the external party the allocated AWS IP address range for Lambda functions,
and send change notifications by using a subscription to the AmazonIpSpaceChanged
SNS topic.
• A. Multiple EC2 instances in a database replication configuration that uses two Availability
Zones.
• B. A standalone Amazon EC2 instance with a selected database installed.
• C. Amazon RDS in a Multi-AZ configuration with Provisioned IOPS.
• D. Multiple EC2 instances in a replication configuration that uses two placement groups.
Question #221 Topic 1
An application has a web tier that runs on EC2 instances in a public subnet. The
application tier instances run in private subnets across two Availability Zones. All traffic
is IPv4 only, and each subnet has its own custom route table.
A new feature requires that application tier instances can call an external service over
the Internet; however, they must still not be accessible to Internet traffic.
What should be done to allow the application servers to connect to the Internet,
maintain high availability, and minimize administrative overhead?
• A. Add an Amazon egress-only internet gateway to each private subnet. Alter each private
subnet's route table to include a route from 0.0.0.0/0 to the egress-only internet gateway in
the same Availability Zone.
• B. Add an Amazon NAT Gateway to each public subnet. Alter each private subnet's route
table to include a route from 0.0.0.0/0 to the NAT Gateway in the same Availability Zone.
• C. Add an Amazon NAT instance to one of the public subnets. Alter each private subnet's
route table to include a route from 0.0.0.0/0 to the Internet gateway in the VPC.
• D. Add an Amazon NAT Gateway to each private subnet. Alter each private subnet's route
table to include a route from 0.0.0.0/0 to the NAT Gateway in the other Availability Zone.
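Option B of Question #221 is the usual high-availability pattern: one NAT gateway in each public subnet, with each private route table pointing at the gateway in its own Availability Zone, so the loss of one AZ does not strand traffic in the other. A sketch of the routes involved (all IDs are placeholders):

```python
# Per-AZ routing sketch for Question #221, option B. All IDs are placeholders.
# Each private subnet's route table sends internet-bound traffic (0.0.0.0/0)
# to the NAT gateway in the public subnet of the SAME Availability Zone.
nat_gateways = {"us-east-1a": "nat-0aaa", "us-east-1b": "nat-0bbb"}

def default_route(az):
    """Route entry for the private route table in the given AZ."""
    return {"DestinationCidrBlock": "0.0.0.0/0",
            "NatGatewayId": nat_gateways[az]}

routes = {az: default_route(az) for az in nat_gateways}
```

NAT gateways are managed by AWS within an AZ, which is what keeps the administrative overhead lower than self-managed NAT instances.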
• A. Amazon ECS for the web application, and an Amazon RDS for MySQL for the
database.
• B. AWS Elastic Beanstalk Docker Multi-container either for the web application or
database.
• C. AWS Elastic Beanstalk Docker Single Container for the web application, and an
Amazon RDS for MySQL for the database.
• D. AWS CloudFormation with Lambda Custom Resources without VPC for the web
application, and an Amazon RDS for MySQL database.
• E. AWS CloudFormation with Lambda Custom Resources running in a VPC for the web
application, and an Amazon RDS for MySQL database.
• A. Simple
• B. Failover
• C. Weighted
• D. Multivalue Answer
• A. Allow all inbound traffic, with explicit denies on non-HTTP and non-HTTPS ports.
• B. Allow incoming traffic to HTTP and HTTPS ports.
• C. Allow incoming traffic to HTTP and HTTPS ports, with explicit denies to all other ports.
• D. Deny all traffic to non-HTTP and non-HTTPS ports
• A. S3 buckets are replicated globally, allowing for large scalability. EBS volumes are
replicated only within a region.
• B. S3 is an origin for CloudFront. EBS volumes would need EC2 instances behind an
Elastic Load Balancing load balancer to be an origin.
• C. S3 buckets can be encrypted, allowing for secure storage of the web files. EBS
volumes cannot be encrypted.
• D. S3 buckets support object-level read throttling, preventing abuse. EBS volumes do not
provide object-level throttling.
Question #231 Topic 1
A company is moving to AWS. Management has identified a set of approved AWS
services that meet all deployment requirements. The company would like to restrict
access to all other unapproved services to which employees would have access.
Which solution meets these requirements with the LEAST amount of operational
overhead?
• A. Configure the AWS Trusted Advisor service utilization compliance report. Subscribe to
Amazon SNS notifications from Trusted Advisor. Create a custom AWS Lambda function
that can automatically remediate the use of unauthorized services.
• B. Use AWS Config to evaluate the configuration settings of AWS resources. Subscribe to
Amazon SNS notifications from AWS Config. Create a custom AWS Lambda function that
can automatically remediate the use of unauthorized services.
• C. Configure AWS Organizations. Create an organizational unit (OU) and place all AWS
accounts into the OU. Apply a service control policy (SCP) to the OU that denies the use
of certain services.
• D. Create a custom AWS IAM policy. Deploy the policy to each account using AWS
CloudFormation StackSets. Include deny statements in the policy to restrict the use of
certain services. Attach the policies to all IAM users in each account.
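Option C of Question #231 keeps operational overhead lowest because a service control policy applies to every account in the OU with no per-account deployment. A sketch of a deny-by-default SCP, where the approved-service list is hypothetical and `NotAction` denies everything outside it:

```python
import json

# Service control policy sketch for Question #231, option C. The approved
# service list is hypothetical; NotAction denies every action outside it.
APPROVED = ["ec2:*", "s3:*", "rds:*", "cloudwatch:*"]

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedServices",
            "Effect": "Deny",
            "NotAction": APPROVED,
            "Resource": "*",
        }
    ],
}

# AWS Organizations expects the policy document as a JSON string.
scp_document = json.dumps(scp)
```

Because SCPs are guardrails evaluated before IAM policies, no IAM user or role in a member account can exceed them, which is why this beats per-account IAM policies (option D).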
• A. Migrate the production and DR environments to different Availability Zones within the
same region. Let AWS manage failover between the environments.
• B. Migrate the production and DR environments to different regions. Let AWS manage
failover between the environments.
• C. Migrate the production environment to a single Availability Zone, and set up instance
recovery for Amazon EC2. Decommission the DR environment because it is no longer
needed.
• D. Migrate the production environment to span multiple Availability Zones, using Elastic
Load Balancing and Multi-AZ Amazon RDS. Decommission the DR environment because
it is no longer needed.
• A. Create another AWS account root user with permissions to the DynamoDB table.
• B. Create an IAM role and assign the role to the EC2 instance with permissions to the
DynamoDB table.
• C. Create an identity provider and assign the identity provider to the EC2 instance with
permissions to the DynamoDB table.
• D. Create identity federation with permissions to the DynamoDB table.
• A. Implement an AWS Auto Scaling group for the web server instances behind the
Application Load Balancer.
• B. Enable Amazon CloudFront for the website and specify the Application Load Balancer
as the origin.
• C. Move the photos into an Amazon S3 bucket and enable static website hosting.
• D. Enable Amazon ElastiCache in the web server subnet.
• A. Host the website data on Amazon S3 and set permissions to enable public read-only
access for users.
• B. Host the web server data on Amazon CloudFront and update the objects in the
Cloudfront distribution when they change.
• C. Host the application on EC2 instances across multiple Availability Zones. Use an Auto
Scaling group coupled with an Application Load Balancer.
• D. Host the application on EC2 instances in a single Availability Zone. Replicate the EC2
instances to a separate region, and use an Application Load Balancer for high availability.
• A. Enable EBS optimization on the instance and keep the temporary files on the existing
volume.
• B. Put the temporary database on a new 50-GB EBS gp2 volume.
• C. Move the temporary database onto instance storage.
• D. Put the temporary database on a new 50-GB EBS io1 volume provisioned with 3,000
IOPS.
• A. Create a similar RDS PostgreSQL instance and direct all traffic to it.
• B. Use the secondary instance of the Multiple Availability Zone for read traffic only.
• C. Create a read replica and send half of all traffic to it.
• D. Create a read replica and send all read traffic to it.
Question #241 Topic 1
A Security team reviewed their company's VPC Flow Logs and found that traffic is being
directed to the internet. The application in the VPC uses Amazon EC2 instances for
compute and Amazon S3 for storage. The company's goal is to eliminate internet
access and allow the application to continue to function.
What change should be made in the VPC before updating the route table?
• A. Amazon EFS
• B. Amazon S3
• C. Amazon ElastiCache
• D. Amazon EBS
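For Question #241, the change that removes internet access while keeping S3 reachable is typically a gateway VPC endpoint for S3: the route table then carries a prefix-list route to the endpoint and the 0.0.0.0/0 route can be removed. A sketch of the endpoint request parameters (VPC and route-table IDs are placeholders):

```python
# Parameters for creating a gateway VPC endpoint for S3 (Question #241).
# VPC and route-table IDs are placeholders. Once the endpoint exists,
# S3 traffic stays on the AWS network and the internet route can go.
endpoint_params = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0123",                          # placeholder
    "ServiceName": "com.amazonaws.us-east-1.s3",  # region-specific name
    "RouteTableIds": ["rtb-0aaa", "rtb-0bbb"],    # placeholders
}

# Real call: boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```

Gateway endpoints for S3 carry no hourly charge, unlike NAT gateways, so this also serves the cost goal.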
• A. AWS Auto Scaling with a Classic Load Balancer, and AWS CloudTrail
• B. Amazon Route 53, Auto Scaling with an Application Load Balancer, and Amazon
CloudFront
• C. A VPC, a NAT gateway and Auto Scaling with a Network Load Balancer
• D. CloudFront, Route 53, and Auto Scaling with a Classic Load Balancer
• A. Use an Amazon CloudWatch alarm on the EC2 CPU to scale the Auto Scaling group
up and down.
• B. Use an EC2 Auto Scaling health check for messages processed on the EC2
instances to scale up and down.
• C. Use an Amazon CloudWatch alarm based on the number of visible messages to
scale the Auto Scaling group up or down.
• D. Use an Amazon CloudWatch alarm based on the CPU to scale the Auto Scaling
group up or down.
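The options above revolve around scaling consumers on queue backlog rather than CPU; the alarm in option C typically watches the SQS metric ApproximateNumberOfMessagesVisible. A sketch of the alarm parameters — the alarm name, queue name, and threshold are hypothetical, while the namespace and metric name are the ones SQS publishes:

```python
# CloudWatch alarm parameters for scaling on SQS backlog. Alarm name, queue
# name, period, and threshold are hypothetical choices.
alarm_params = {
    "AlarmName": "orders-queue-backlog-high",     # hypothetical
    "Namespace": "AWS/SQS",
    "MetricName": "ApproximateNumberOfMessagesVisible",
    "Dimensions": [{"Name": "QueueName", "Value": "orders"}],  # placeholder
    "Statistic": "Average",
    "Period": 60,              # seconds per evaluation window
    "EvaluationPeriods": 2,    # require two consecutive breaches
    "Threshold": 100,          # hypothetical backlog threshold
    "ComparisonOperator": "GreaterThanThreshold",
    # "AlarmActions" would list the scale-out policy ARN.
}

# Real call: boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
```

Queue depth is a direct measure of unprocessed work, so it tracks demand better than instance CPU for consumer fleets.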
• A. Amazon EBS
• B. Amazon S3
• C. AWS Storage Gateway for files
• D. Amazon EFS
• A. Place the Amazon EC2 instances in the public subnet, with no EIPs; route outgoing
traffic through the internet gateway.
• B. Place the Amazon EC2 instances in a public subnet, and assign EIPs; route outgoing
traffic through the NAT gateway.
• C. Place the Amazon EC2 instances in a private subnet, and assign EIPs; route
outgoing traffic through the internet gateway.
• D. Place the Amazon EC2 instances in a private subnet, with no EIPs; route outgoing
traffic through the NAT gateway.
Question #251 Topic 1
A company processed 10 TB of raw data to generate quarterly reports. Although it is
unlikely to be used again, the raw data needs to be preserved for compliance and
auditing purposes.
What is the MOST cost-effective way to store the data in AWS?
• A. AWS Lambda
• B. Amazon ElastiCache
• C. Size EC2 instances to handle peak load
• D. An Auto Scaling group for EC2 instances
• A. Use an Application Load Balancer (ALB) in passthrough mode, then terminate SSL
on EC2 instances.
• B. Use an Application Load Balancer (ALB) with a TCP listener, then terminate SSL on
EC2 instances.
• C. Use a Network Load Balancer (NLB) with a TCP listener, then terminate SSL on EC2
instances.
• D. Use an Application Load Balancer (ALB) with an HTTPS listener, then install SSL
certificates on the ALB and EC2 instances.
• E. Use a Network Load Balancer (NLB) with an HTTPS listener, then install SSL
certificates on the NLB and EC2 instances.
Question #261 Topic 1
A company's Amazon RDS MySQL DB instance may be rebooted for maintenance and
to apply patches. This database is critical and potential user disruption must be
minimized.
What should the Solutions Architect do in this scenario?
• A. Move the images to the EC2 instances in the Auto Scaling group.
• B. Enable Transfer Acceleration for the S3 bucket.
• C. Configure an Amazon CloudFront distribution with the S3 bucket as the origin.
• D. Increase the number of minimum, desired, and maximum EC2 instances in the Auto
Scaling group.
• A. Create an identity and access management (IAM) role with the necessary permissions
to access the DynamoDB table, and assign the role to the Lambda function.
• B. Create a DynamoDB user name and password and give them to the Developer to use
in the Lambda function.
• C. Create an identity and access management (IAM) user, and create access and secret
keys for the user. Give the user the necessary permissions to access the DynamoDB
table. Have the Developer use these keys to access the resources.
• D. Create an identity and access management (IAM) role allowing access from AWS
Lambda and assign the role to the DynamoDB table.
• A. Re-deploy the application in a new VPC that is closer to the users making the requests.
• B. Create an Amazon CloudFront distribution for the site and redirect user traffic to the
distribution.
• C. Store the contents on Amazon EFS instead of the EC2 root volume.
• D. Implement Amazon Redshift to create a repository of the content closer to the users.
• A. Create network ACL rules for the private subnet to allow incoming traffic on ports
32768 through 61000 from the IP address of the ALB only.
• B. Update the EC2 cluster security group to allow incoming access from the IP address of
the ALB only.
• C. Modify the security group used by the EC2 cluster to allow incoming traffic from the
security group used by the ALB only.
• D. Enable AWS WAF on the ALB and enable the ECS rule.
• A. Create a CloudFront origin access identity and create a security group that allows
access from CloudFront.
• B. Create a CloudFront origin access identity and update the bucket policy to grant access
to it.
• C. Create a bucket policy restricting all access to the bucket to include CloudFront IPs
only.
• D. Enable the CloudFront option to restrict viewer access and update the bucket policy to
allow the distribution.
• A. Amazon Athena
• B. Amazon Redshift Spectrum
• C. Amazon RDS for PostgreSQL
• D. Amazon Aurora
• A. Create a VPC endpoint service and grant permissions to specific service consumers to
create a connection.
• B. Create a virtual private gateway connection between each pair of service provider
VPCs and service consumer VPCs.
• C. Create an internal Application Load Balancer in the service provider VPC and put
application servers behind it.
• D. Create a proxy server in the service provider VPC to route requests from service
consumers to the application servers.
• A. Set up a weighted routing policy, distributing the workload between the load balancer
and the on-premises environment.
• B. Set up an A record to point the DNS name to the IP address of the load balancer.
• C. Create multiple A records for the EC2 instances.
• D. Set up a geolocation routing policy to distribute the workload between the load
balancer and the on-premises environment.
• E. Set up a routing policy for failover using the on-premises environment as primary and
the load balancer as secondary.
• A. Put the requests into an Amazon SQS queue and configure Amazon EC2 instances to
poll the queue
• B. Publish the message to an Amazon SNS topic that an Amazon EC2 subscriber can
receive and process
• C. Save the requests to an Amazon DynamoDB table with a DynamoDB stream that
triggers an Amazon EC2 Spot Instance
• D. Use Amazon S3 to store the requests and configure an event notification to have
Amazon EC2 instances process the new object
• A. Use a NAT gateway and deny public access using security groups
• B. Attach an egress-only internet gateway and update the routing tables
• C. Use a NAT gateway and update the routing tables
• D. Attach an internet gateway and deny public access using security groups
• A. Create an Amazon EFS file system and run a shell script to copy the data
• B. Create an Amazon EBS snapshot using an Amazon CloudWatch Events rule
• C. Create an Amazon S3 snapshot policy to back up the Amazon EBS volumes
• D. Create a snapshot lifecycle policy that takes periodic snapshots of the Amazon EBS
volumes
• A. Change the backup so the data goes to Amazon S3 Standard-Infrequent Access (S3
Standard-IA) directly
• B. Create an S3 lifecycle policy that moves the data to the GLACIER storage class after 7
years
• C. Change the backup so the data goes to Amazon Glacier directly
• D. Create an S3 lifecycle policy that moves the data to Amazon S3 Standard-Infrequent
Access (S3 Standard-IA) after 35 days
• E. Create an S3 lifecycle policy that moves the data to the GLACIER storage class after
35 days
• A. Defining S3 buckets by item may cause partition distribution errors, which will impact
performance.
• B. Amazon S3 DELETE requests are eventually consistent, which may cause other users
to view items that have already been purchased
• C. Amazon S3 DELETE requests apply a lock to the S3 bucket during the operation,
causing other users to be blocked
• D. Using Amazon S3 for persistence exposes the application to a single point of failure
• A. Amazon EFS
• B. Amazon EBS Cold HDD (sc1)
• C. Amazon S3 Standard
• D. Amazon DynamoDB
Question #281 Topic 1
An application running on AWS Lambda requires an API key to access a third-party
service. The key must be stored securely with audited access to the Lambda function
only.
What is the MOST secure way to store the key?
• A. As an object in Amazon S3
• B. As a secure string in AWS Systems Manager Parameter Store
• C. Inside a file on an Amazon EBS volume attached to the Lambda function
• D. Inside a secrets file stored on Amazon EFS
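To make option B concrete: a Lambda function retrieves a SecureString from Systems Manager Parameter Store with a `GetParameter` call, and access is auditable via CloudTrail. The parameter name below is invented for illustration; with boto3 this dict would be passed as `ssm.get_parameter(**request)`.

```python
# Shape of a Parameter Store SecureString retrieval (option B).
# The parameter name is a hypothetical example.
request = {
    "Name": "/prod/third-party/api-key",  # hypothetical parameter path
    "WithDecryption": True,               # have SSM decrypt the SecureString via KMS
}
```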
• A. Store the data in Amazon S3 Standard storage with a lifecycle rule to transition the
data to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days, then
transition to the GLACIER storage class after 30 days
• B. Store the data in Amazon S3 Standard storage with a lifecycle rule to transition the
data to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days
• C. Store the data in Amazon S3 Standard storage with a lifecycle rule to transition the
data to the GLACIER storage class after 30 days
• D. Store the data in Amazon S3 Standard storage with a lifecycle rule to transition the
data to the GLACIER storage class after 7 days
• A. Use Amazon S3 for current invoices. Set up lifecycle rules to migrate invoices to the
GLACIER storage class after 30 days.
• B. Store the invoices as text files. Use Amazon CloudFront to convert the invoices from
text to PDF when customers download invoices.
• C. Store the invoices as binaries in an Amazon RDS database instance. Retrieve them
from the database when customers request invoices.
• D. Use Amazon S3 for current invoices. Set up lifecycle rules to migrate invoices to
Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
• A. Use an Amazon CloudFront distribution with an origin access identity (OAI). Configure
the distribution with an Amazon S3 origin to provide access to the file through signed
URLs. Design a Lambda function to remove data that is older than 14 days.
• B. Use an S3 bucket and provide direct access to the file. Design the application to track
purchases in a DynamoDB table. Configure a Lambda function to remove data that is
older than 14 days based on a query to Amazon DynamoDB.
• C. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an
Amazon S3 origin to provide access to the file through signed URLs. Design the
application to set an expiration of 14 days for the URL.
• D. Use an Amazon CloudFront distribution with an OAI. Configure the distribution with an
Amazon S3 origin to provide access to the file through signed URLs. Design the
application to set an expiration of 60 minutes for the URL, and recreate the URL as
necessary.
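The difference between options C and D is how long a signed URL stays valid. The sketch below illustrates the short-lived approach in option D with invented placeholders: the host name, query format, and "signing" are not CloudFront's real signed-URL scheme, only the expiry logic it relies on.

```python
# Sketch of option D: attach a 60-minute expiry and regenerate on lapse.
# URL format and signing are placeholders, not CloudFront's real scheme.
def make_signed_url(base_url: str, ttl_seconds: int, now: float):
    expires = now + ttl_seconds
    return f"{base_url}?Expires={int(expires)}", expires

def is_expired(expires: float, now: float) -> bool:
    return now >= expires

url, expires = make_signed_url("https://dxxxxxxxx.cloudfront.net/file.zip", 3600, now=0.0)
assert not is_expired(expires, now=1800)  # still valid at 30 minutes
assert is_expired(expires, now=3600)      # lapsed at 60 minutes: recreate the URL
```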
• A. Amazon S3
• B. Amazon EFS
• C. Amazon EBS volumes
• D. Amazon EC2 instance store
Question #291Topic 1
A Solutions Architect is designing an application that is expected to have millions of
users. The Architect needs options to store session data.
Which option is the MOST performant?
• A. Amazon ElastiCache
• B. Amazon RDS
• C. Amazon S3
• D. Amazon EFS
• A. Create a Network Load Balancer with an interface in each subnet, and assign a static
IP address to each subnet.
• B. Create additional EC2 instances and put them on standby. Remap an Elastic IP
address to a standby instance in the event of a failure.
• C. Use Amazon Route 53 with a weighted, round-robin routing policy across the Elastic IP
addresses to resolve one at a time.
• D. Add additional EC2 instances with Elastic IP addresses, and register them with
Amazon Route 53
• E. Switch the two existing EC2 instances for an Auto Scaling group, and register them
with the Network Load Balancer.
• A. AWS CloudHSM
• B. SSE-KMS: Server-side encryption with AWS KMS managed keys
• C. SSE-S3: Server-side encryption with Amazon-managed master key
• D. SSE-C: Server-side encryption with customer-provided encryption keys
• A. Use AWS Lambda to preprocess the data and transform the records into a simpler
format, such as CSV.
• B. Run the MergeShard command to reduce the number of shards that the consumer can
more easily process.
• C. Change the workflow to use Amazon Kinesis Data Firehose to gain a higher
throughput.
• D. Run the UpdateShardCount command to increase the number of shards in the stream
• A. Deploy six Amazon EC2 instances in sa-east-1a, six Amazon EC2 instances in sa-
east-1b, and six Amazon EC2 instances in sa-east-1c
• B. Deploy six Amazon EC2 instances in sa-east-1a, four Amazon EC2 instances in sa-
east-1b, and two Amazon EC2 instances in sa-east-1c
• C. Deploy three Amazon EC2 instances in sa-east-1a, three Amazon EC2 instances in sa-
east-1b, and three Amazon EC2 instances in sa-east-1c
• D. Deploy two Amazon EC2 instances in sa-east-1a, two Amazon EC2 instances in sa-
east-1b, and two Amazon EC2 instances in sa-east-1c
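The question stem for these options is not shown in this dump. Assuming the usual form of this question (keep six instances running even if any single Availability Zone fails), the surviving capacity of each layout is the total minus the largest single-AZ count, which can be checked directly:

```python
# Worked check of the AZ layouts above, assuming the requirement is
# six instances surviving the loss of any one Availability Zone.
def surviving_capacity(per_az: list) -> int:
    # worst case: the AZ with the most instances is the one that fails
    return sum(per_az) - max(per_az)

assert surviving_capacity([3, 3, 3]) == 6  # option C: 9 instances suffice
assert surviving_capacity([6, 4, 2]) == 6  # option B also survives, but costs 12
assert surviving_capacity([2, 2, 2]) == 4  # option D falls short
```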
• A. Use a NAT Gateway as the front end for the application tier and to enable the private
resources to have Internet access.
• B. Use an Amazon EC2-based proxy server as the front end for the application tier, and a
NAT Gateway to allow Internet access for private resources.
• C. Use an ELB Classic Load Balancer as the front end for the application tier, and an
Amazon EC2 proxy server to allow Internet access for private resources.
• D. Use an ELB Classic Load Balancer as the front end for the application tier, and a NAT
Gateway to allow Internet access for private resources.
Question #301Topic 1
A Solutions Architect is designing a multi-tier application consisting of an Application
Load Balancer, an Amazon RDS database instance, and an Auto Scaling group on
Amazon EC2 instances. Each tier is in a separate subnet. There are some EC2
instances in the subnet that belong to another application. The RDS database instance
should accept traffic only from the EC2 instances in the Auto Scaling group.
What should be done to meet these requirements?
• A. Configure the inbound network ACLs on the database subnet to accept traffic from
the IP addresses of the EC2 instances only.
• B. Configure the inbound rules on the security group associated with the RDS database
instance. Set the source to the security group associated with instances in the Auto
Scaling group.
• C. Configure the outbound rules on the security group associated with the Auto Scaling
group. Set the destination to the security group associated with the RDS database
instance.
• D. Configure the inbound network ACLs on the database subnet to accept traffic only
from the CIDR range of the subnet used by the Auto Scaling group.
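The security-group-to-security-group pattern in option B can be sketched as follows. Both group IDs are invented placeholders; with boto3 this dict would be passed to `ec2.authorize_security_group_ingress`. Because the source is a security group rather than a CIDR, membership in the Auto Scaling group's security group is what grants access.

```python
# Ingress rule shape for option B; both group IDs are hypothetical.
ingress_rule = {
    "GroupId": "sg-0aaa000000000000a",  # hypothetical RDS security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 3306,  # MySQL port, as an example database engine
            "ToPort": 3306,
            # source is a security group, not an IP range
            "UserIdGroupPairs": [{"GroupId": "sg-0bbb000000000000b"}],
        }
    ],
}
```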
• A. Use a custom Amazon S3 bucket policy to allow access only to users inside the
organization's country
• B. Use Amazon CloudFront and Geo Restriction to allow access only to users inside the
organization's country
• C. Use an Amazon S3 bucket ACL to allow access only to users inside the
organization's country
• D. Use file-based ACL permissions on each video file to allow access only to users
inside the organization's country
• A. Use DynamoDB replication and restore the table from the replica
• B. Use AWS Data Pipeline and create a scheduled job to back up the DynamoDB table
daily
• C. Use Amazon CloudWatch Events to trigger an AWS Lambda function that makes an
on-demand backup of the table
• D. Use AWS Batch to create a scheduled backup with the default template, then back
up to Amazon S3 daily.
• A. Use Auto Scaling groups to increase the number of Amazon EC2 instances
delivering the web application
• B. Use Auto Scaling groups to increase the size of the Amazon RDS instances
delivering the database
• C. Use Amazon DynamoDB strongly consistent reads to adjust for the increase in traffic
• D. Use Amazon DynamoDB Accelerator (DAX) to cache read operations to the
database
• A. Create an import package of the application code for upload to AWS Lambda, and
include a function to create another Lambda function to migrate data into an Amazon
RDS database
• B. Create an image of the user's desktop, migrate it to Amazon EC2 using VM Import,
and place the EC2 instance in an Auto Scaling group
• C. Pre-stage new Amazon EC2 instances running the application code on AWS behind
an Application Load Balancer and an Amazon RDS Multi-AZ DB instance
• D. Use AWS DMS to migrate the backend database to an Amazon RDS Multi-AZ DB
instance. Migrate the application code to AWS Elastic Beanstalk
• A. On-Demand Instances
• B. Scheduled Reserved Instances
• C. Reserved Instances
• D. Spot Instances
• A. Save the logs in an Amazon S3 bucket and enable Multi-Factor Authentication Delete
(MFA Delete) on the bucket.
• B. Save the logs in an Amazon EFS volume and use Network File System version 4
(NFSv4) locking with the volume.
• C. Save the logs in an Amazon Glacier vault and use the Vault Lock feature.
• D. Save the logs in an Amazon EBS volume and take monthly snapshots.
Question #311Topic 1
A Solutions Architect is creating an application running in an Amazon VPC that needs to
access AWS Systems Manager Parameter Store. Network security rules prohibit any
route table entry with a 0.0.0.0/0 destination.
What infrastructure addition will allow access to the AWS service while meeting the
requirements?
• A. VPC peering
• B. NAT instance
• C. NAT gateway
• D. AWS PrivateLink
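Option D works because an interface VPC endpoint (AWS PrivateLink) routes Parameter Store traffic privately, with no 0.0.0.0/0 route required. The request below shows the shape a boto3 `ec2.create_vpc_endpoint` call would take; the region, VPC ID, and subnet ID are invented placeholders.

```python
# Shape of an interface VPC endpoint request for Systems Manager (option D).
# VPC and subnet IDs are hypothetical; region in the service name is an example.
endpoint_request = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0abc0000000000000",
    "ServiceName": "com.amazonaws.us-east-1.ssm",
    "SubnetIds": ["subnet-0abc0000000000000"],
    "PrivateDnsEnabled": True,  # resolve the public SSM hostname to the endpoint
}
```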
• A. Amazon ECS
• B. Amazon EC2 Spot instances
• C. AWS Lambda functions
• D. AWS Elastic Beanstalk
Question #2Topic 2
A Solutions Architect is designing the architecture for a web application that will be
hosted on AWS. Internet users will access the application using HTTP and
HTTPS.
How should the Architect design the traffic control requirements?
• A. Use a network ACL to allow outbound ports for HTTP and HTTPS. Deny other traffic for
inbound and outbound.
• B. Use a network ACL to allow inbound ports for HTTP and HTTPS. Deny other traffic for
inbound and outbound.
• C. Allow inbound ports for HTTP and HTTPS in the security group used by the web
servers.
• D. Allow outbound ports for HTTP and HTTPS in the security group used by the web
servers.
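Security groups are stateful, so allowing the inbound ports (as in option C) is sufficient: return traffic for established connections is permitted automatically. A sketch of the two ingress rules, in the dict shape boto3 uses:

```python
# Ingress rules admitting HTTP and HTTPS from anywhere (option C's approach).
# Return traffic needs no explicit rule because security groups are stateful.
ingress_rules = [
    {"IpProtocol": "tcp", "FromPort": 80,  "ToPort": 80,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
```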
• A. Configure the S3 bucket policy to allow only CloudFront IP addresses to read objects.
• B. Create IAM users in a group that has read access to the S3 bucket. Configure
CloudFront to pass credentials to the S3 bucket.
• C. Create a CloudFront origin access identity (OAI), then update the S3 bucket policy to
allow the OAI read access.
• D. Convert the S3 bucket to an EC2 instance, then give CloudFront access to the instance
by using security groups.
• A. Store the files securely on Amazon S3 and have the application generate an Amazon
S3 pre-signed URL for the user to download.
• B. Store the files in an encrypted Amazon EBS volume, and use a separate set of servers
to serve the downloads.
• C. Have the application encrypt the files and store them in the local Amazon EC2 Instance
Store prior to serving them up for download.
• D. Create an Amazon CloudFront distribution to distribute and cache the files.
Question #8Topic 2
An application runs in a VPC on Amazon EC2 instances behind an Application Load
Balancer. Traffic to the Amazon EC2 instances must be limited to traffic from the
Application Load Balancer.
Based on these requirements, the security group configuration should only allow traffic
from:
• A. Install AWS SDK on the application instances. Design the application to use the AWS
SDK to log events directly to an Amazon S3 bucket.
• B. Install the Amazon Inspector agent on the application instances. Design the
application to store events in application log files.
• C. Install the Amazon CloudWatch Logs agent on the application instances. Design the
application to store events in application log files.
• D. Install AWS SDK on the application instances. Design the application to use AWS
SDK to log sensitive events directly to AWS CloudTrail.
• A. Use ELB Classic Load Balancer with the web tier. Deploy EC2 instances in two
Availability Zones and enable Multi-AZ RDS. Deploy a NAT gateway in one Availability
Zone.
• B. Use ELB Classic Load Balancer with the web tier. Deploy EC2 instances in two
Availability Zones and enable Multi-AZ RDS. Deploy NAT gateways in both Availability
Zones.
• C. Use ELB Classic Load Balancer with the database tier. Deploy Amazon EC2
instances in two Availability Zones and enable Multi-AZ RDS. Deploy NAT gateways in
both Availability Zones.
• D. Use ELB Classic Load Balancer with the database tier. Deploy Amazon EC2
instances in two Availability Zones and enable Multi-AZ RDS. Deploy a NAT gateway in
one Availability Zone.
• A. Reduce the number of EC2 instances behind each Classic Load Balancer.
• B. Change instance types in the Auto Scaling group launch configuration.
• C. Change the maximum size but leave the desired capacity of the Auto Scaling groups.
• D. Replace the Classic Load Balancers with a single Application Load Balancer.
• A. Amazon RDS
• B. Amazon Redshift
• C. Amazon DynamoDB
• D. Amazon Aurora
• A. Add an internet gateway to the private subnet and update the private subnet route
table.
• B. Add a NAT gateway to the public subnet and update the public subnet route table.
• C. Add an internet gateway to the VPC and update the private subnet route table.
• D. Add a NAT gateway to the public subnet and update the private subnet route table.
• A. Implement an AWS Auto Scaling group for the website to ensure it grows with use.
• B. Use cross-region replication to copy the website to an additional S3 bucket in a
different region.
• C. Create an Amazon CloudFront distribution, with the S3 bucket as the origin server.
• D. Move the website to large compute-optimized Amazon EC2 instances.
Question #18Topic 2
A company has a web application that makes requests to a backend API service. The
API service is behind an Elastic Load Balancer running on Amazon EC2 instances.
Most backend API service endpoint calls finish very quickly, but one endpoint that
makes calls to create objects in an external service takes a long time to complete.
These long-running calls are causing client timeouts and increasing overall system
latency.
What should be done to minimize the system throughput impact of the slow-running
endpoint?
• A. Change the EC2 instance size to increase memory and compute capacity.
• B. Use Amazon SQS to offload the long-running requests for asynchronous processing by separate workers.
• C. Increase the load balancer idle timeout to allow the long-running requests to complete.
• D. Use Amazon ElastiCache for Redis to cache responses from the external service.
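Option B's offload pattern can be sketched with Python's standard-library queue standing in for Amazon SQS: the API handler enqueues the slow work and returns immediately, while a separate worker drains the queue asynchronously, so the slow endpoint no longer ties up request threads.

```python
import queue
import threading
from typing import Optional

# Stand-ins: work_queue plays the role of an SQS queue, worker() a consumer.
work_queue: "queue.Queue[Optional[str]]" = queue.Queue()
results = []

def api_handler(request_id: str) -> str:
    work_queue.put(request_id)  # offload the slow call instead of blocking
    return "202 Accepted"       # the client gets an immediate response

def worker() -> None:
    while True:
        item = work_queue.get()
        if item is None:        # shutdown sentinel
            break
        results.append(f"created:{item}")  # stands in for the slow external call

t = threading.Thread(target=worker)
t.start()
assert api_handler("req-1") == "202 Accepted"
work_queue.put(None)
t.join()
assert results == ["created:req-1"]
```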
• A. AZ-a with six EC2 instances, AZ-b with six EC2 instances, and AZ-c with no EC2
instances.
• B. AZ-a with four EC2 instances, AZ-b with two EC2 instances, and AZ-c with two EC2
instances.
• C. AZ-a with two EC2 instances, AZ-b with two EC2 instances, and AZ-c with two EC2
instances.
• D. AZ-a with three EC2 instances, AZ-b with three EC2 instances, and AZ-c with no EC2
instances.
• E. AZ-a with three EC2 instances, AZ-b with three EC2 instances, and AZ-c with three
EC2 instances.
• A. Create a Read Replica of the RDS PostgreSQL database and point the dashboards at
the Read Replica.
• B. Move data from the RDS PostgreSQL database to Amazon Redshift nightly and point
the dashboards at Amazon Redshift.
• C. Monitor the database with Amazon CloudWatch and increase the instance size, as
necessary. Make no changes to the dashboards.
• D. Take an hourly snapshot of the RDS PostgreSQL database, and load the hourly
snapshots to another database to which the dashboards are pointed.
• A. NAT gateway
• B. Elastic IP address
• C. AWS Direct Connect
• D. Virtual private gateway
• A. Create a NAT gateway attached to the VPC. Add a route to the gateway to each private
subnet route table
• B. Configure an internet gateway. Add a route to the gateway to each private subnet route
table.
• C. Create a NAT instance in the private subnet of each AZ. Update the route tables for
each private subnet to direct internet-bound traffic to the NAT instance.
• D. Create a NAT gateway in each AZ. Update the route tables for each private subnet to
direct internet-bound traffic to the NAT gateway.
Question #25Topic 2
A company plans to use Amazon GuardDuty to detect unexpected and potentially
malicious activity. The company wants to use Amazon CloudWatch to ensure that when
findings occur, remediation takes place automatically.
Which CloudWatch feature should be used to trigger an AWS Lambda function to
perform the remediation?
• A. Events
• B. Dashboards
• C. Metrics
• D. Alarms
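CloudWatch Events (now Amazon EventBridge) is the feature that matches GuardDuty findings and routes them to a Lambda target. The event pattern below uses GuardDuty's documented source and detail-type values:

```python
# Event pattern for a CloudWatch Events / EventBridge rule that matches
# GuardDuty findings and can target a remediation Lambda function.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
}
```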
• A. Create a database user to run the GRANT statement with a short-lived token.
• B. Create the user account to use the AWS-provided AWSAuthenticationPlugin with IAM.
• C. Use AWS Systems Manager to securely save the connection secrets, and use the
secrets while connecting.
• D. Use AWS KMS to securely save the connection secrets, and use the secrets while
connecting.
• A. Amazon SNS
• B. Amazon SQS
• C. Amazon MQ
• D. Amazon SWF
• A. Deploy the database on multiple Amazon EC2 instances backed by Amazon EBS
across multiple Availability Zones.
• B. Use Amazon RDS with a multiple Availability Zone option.
• C. Use RDS with a single Availability Zone option and schedule periodic database snapshots.
• D. Use Amazon DynamoDB.
• A. An Aurora instance as the primary database with a read replica in the DR region.
• B. Inter-region VPC peering between the primary workload VPC and the DR VPC
• C. A cross-region Amazon EC2 Amazon Machine Image (AMI) copy
• D. Amazon S3 cross-region replication of application-tier installers
• E. Amazon CloudWatch Events in the primary region that trigger the failover to the DR
region
Question #31Topic 2
A website keeps a record of user actions using a globally unique identifier (GUID)
retrieved from Amazon Aurora in place of the user name within the audit record.
Security protocols state that the GUID content must not leave the company's Amazon
VPC.
As the web traffic has increased, the number of web servers and Aurora read replicas
has also increased to keep up with the user record reads for the GUID.
What should be done to reduce the number of read replicas required while improving
performance?
• A. Keep the user name and GUID in memory on the web server instance so that the
association can be remade on demand. Remove the record after 30 minutes.
• B. Deploy an Amazon ElastiCache for Redis server into the infrastructure and store the
user name and GUID there. Retrieve the GUID from ElastiCache when required.
• C. Encrypt the GUID using Base64 and store it in the user's session cookie. Decrypt the
GUID when an audit record is needed.
• D. Change the GUID to an MD5 hash of the user name, so that the value can be
calculated on demand without referring to the database.
• A. Create a multi-VPC peering mesh with network access rules limiting communications
to specific ports. Implement an internet gateway on each VPC for external connectivity.
• B. Place all instances in a single Amazon VPC with AWS WAF as the web front-end
communication conduit. Configure a NAT gateway for external communications.
• C. Use VPC peering to peer with on-premises hardware. Direct enterprise traffic through
the VPC peer connection to the instances hosted in the private VPC.
• D. Deploy the web and application instances in a private subnet. Provision an
Application Load Balancer in the public subnet. Install an internet gateway and use
security groups to control communications between the layers.