A. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the
private subnets and associate it with the Application Load Balancer
B. Configure the Network Load Balancer in the public subnets. Configure the Auto Scaling group in the
public subnets and associate it with the Application Load Balancer
C. Configure the Application Load Balancer in the public subnets. Configure the Auto Scaling group in
the private subnets and associate it with the Application Load Balancer
D. Configure the Application Load Balancer in the private subnets. Configure the Auto Scaling group in
the private subnets and associate it with the Application Load Balancer
Answer: C
2. A company is running a media store across multiple Amazon EC2 instances distributed across
multiple Availability Zones in a single VPC. The company wants a high-performing solution to
share data between all the EC2 Instances, and prefers to keep the data within the VPC only.
A. Create an Amazon S3 bucket and call the service APIs from each instance's application.
B. Create an Amazon S3 bucket and configure all instances to access it as a mounted volume.
C. Configure an Amazon Elastic Block Store (Amazon EBS) volume and mount it across all instances.
D. Configure an Amazon Elastic File System (Amazon EFS) file system and mount it across all instances
Answer: D
3. A company has implemented one of its microservices on AWS Lambda that accesses an
Amazon DynamoDB table named Books. A solutions architect is designing an IAM policy to be
attached to the Lambda function's IAM role, giving it access to put, update, and delete items in
the Books table. The IAM policy must prevent the function from performing any other actions on
the Books table or any other table.
Which IAM policy would fulfill these needs and provide the LEAST privileged access?
A)
B)
C)
D)
A. Option A
B. Option B
C. Option C
D. Option D
Answer: A
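The policy documents for options A-D are not reproduced in this dump. As an illustration only (not necessarily the wording of option A), a least-privilege policy for this scenario might look like the sketch below; the Region and account ID in the ARN are placeholders:

```python
# Hypothetical least-privilege IAM policy for the Lambda function's role.
# The Region and account ID in the ARN are placeholders, not from the source.
books_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Only the three item-level actions the function needs
            "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
            ],
            # Scoped to the Books table only, not all tables
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Books",
        }
    ],
}
```

A policy with broader actions (such as `dynamodb:*`) or a wildcard resource would work but would not be least privilege.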
4. A company has an application that generates a large number of files, each approximately 5
MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for
4 years before they can be deleted. Immediate accessibility is always required as the files
contain critical business data that is not easy to reproduce. The files are frequently accessed in
the first 30 days of the object creation but are rarely accessed after the first 30 days.
A. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object
creation. Delete the files 4 years after object creation.
B. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access
(S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation.
C. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent
Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation.
D. Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access
(S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation.
Answer: C
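Option C's lifecycle rule can be sketched as the configuration document below, in the shape accepted by the S3 `PutBucketLifecycleConfiguration` API; the rule ID is a placeholder, and 4 years is approximated as 1460 days:

```python
# Sketch of the lifecycle rule from option C. The rule ID is a placeholder.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "archive-then-delete",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects in the bucket
            "Transitions": [
                # Frequently accessed for 30 days, then rarely accessed
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
            # Company policy: delete 4 years (~1460 days) after creation
            "Expiration": {"Days": 1460},
        }
    ]
}
```

S3 Standard-IA keeps objects immediately accessible (unlike Glacier) while charging less for storage, which is why option C fits the requirements.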
5. A company has a three-tier environment on AWS that ingests sensor data from its users'
devices. The traffic flows through a Network Load Balancer (NLB) then to Amazon EC2 instances
for the web tier, and finally to EC2 instances for the application tier that makes database calls
What should a solutions architect do to improve the security of data in transit to the web tier?
A. Configure a TLS listener and add the server certificate on the NLB.
B. Configure AWS Shield Advanced and enable AWS WAF on the NLB
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS Key
Management Service (AWS KMS)
Answer: A
6. An application is running on an Amazon EC2 instance and must have millisecond latency
when running the workload. The application makes many small reads and writes to the file
system, but the file system itself is small. Which Amazon Elastic Block Store (Amazon EBS)
volume type should a solutions architect attach to their EC2 instance?
Answer: B
7. A company built a food ordering application that captures user data and stores it for future
analysis. The application's static front end is deployed on an Amazon EC2 instance. The front-end
application sends the requests to the backend application running on a separate EC2 instance. The
backend application then stores the data in Amazon RDS.
What should a solutions architect do to decouple the architecture and make it scalable?
A. Use Amazon S3 to serve the front-end application, which sends requests to Amazon EC2 to execute
the backend application. The backend application will process and store the data in Amazon RDS.
B. Use Amazon S3 to serve the front-end application and write requests to an Amazon Simple
Notification Service (Amazon SNS) topic. Subscribe Amazon EC2 instances to the HTTP/HTTPS endpoint of
the topic, and process and store the data in Amazon RDS.
C. Use an EC2 instance to serve the front end and write requests to an Amazon SQS queue. Place the
backend instances in an Auto Scaling group, and scale based on the queue depth to process and store the
data in Amazon RDS.
D. Use Amazon S3 to serve the static front-end application and send requests to Amazon API Gateway,
which writes the requests to an Amazon SQS queue. Place the backend instances in an Auto Scaling
group, and scale based on the queue depth to process and store the data in Amazon RDS.
Answer: D
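The queue-depth scaling described in options C and D can be sketched as a simple target-tracking calculation; the messages-per-instance target and the group bounds below are illustrative assumptions, not values from the source:

```python
def desired_backend_instances(queue_depth: int,
                              messages_per_instance: int = 100,
                              min_instances: int = 1,
                              max_instances: int = 10) -> int:
    """Target-tracking style calculation: aim for one backend instance per
    `messages_per_instance` visible SQS messages, clamped to the Auto
    Scaling group's bounds. The thresholds here are illustrative only."""
    needed = -(-queue_depth // messages_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))
```

An empty queue keeps the group at its minimum; a deep backlog scales it out toward the maximum.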
8. A company is migrating a three-tier application to AWS. The application requires a MySQL
database. In the past, the application users reported poor application performance when
creating new entries. These performance issues were caused by users generating different
real-time reports from the application during working hours.
Which solution will improve the performance of the application when it is moved to AWS?
A. Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application
to use DynamoDB for reports.
B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources
exceed the on-premises database.
C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the
application reader endpoint for reports.
D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup
instance of the cluster as an endpoint for the reports.
Answer: C
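Option C works because Aurora exposes separate writer and reader endpoints, and the reader endpoint load-balances across the read replicas. A minimal routing sketch (the hostnames are placeholders) is:

```python
# Hypothetical Aurora cluster endpoints; hostnames are placeholders.
WRITER_ENDPOINT = "app-cluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "app-cluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(operation: str) -> str:
    # Reports are read-only, so they are routed to the reader endpoint,
    # keeping the report load off the writer that handles new entries.
    return READER_ENDPOINT if operation == "report" else WRITER_ENDPOINT
```

Routing the reports away from the writer is what removes the contention that slowed down entry creation.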
9. A company has an application workflow that uses an AWS Lambda function to download and
decrypt files from Amazon S3 These files are encrypted using AWS Key Management Service
Customer Master Keys (AWS KMS CMKs) A solutions architect needs to design a solution that
will ensure the required permissions are set correctly.
Answer: B, E
10. A company runs a static website through its on-premises data center. The company has
multiple servers that handle all of its traffic, but on busy days, services are interrupted and the
website becomes unavailable. The company wants to expand its presence globally and plans to
triple its website traffic.
A. Migrate the website content to Amazon S3 and host the website on Amazon CloudFront.
B. Migrate the website content to Amazon EC2 instances with public Elastic IP addresses in multiple AWS
Regions.
C. Migrate the website content to Amazon EC2 instances and vertically scale as the load increases.
D. Use Amazon Route 53 to distribute the loads across multiple Amazon CloudFront distributions for
each AWS Region that exists globally.
Answer: A
11. A company runs a web service on Amazon EC2 instances behind an Application Load
Balancer. The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones.
The company needs a minimum of four instances at all times to meet the required service level
agreement (SLA) while keeping costs low.
If an Availability Zone fails, how can the company remain compliant with the SLA?
Answer: D
12. An ecommerce website is deploying its web application as Amazon Elastic Container Service
(Amazon ECS) container instances behind an Application Load Balancer (ALB). During periods of
high activity, the website slows down and availability is reduced. A solutions architect uses
Amazon CloudWatch alarms to receive notifications whenever there is an availability issue so
they can scale out resources. Company management wants a solution that automatically
responds to such events.
A. Set up AWS Auto Scaling to scale out the ECS service when there are timeouts on the ALB Set up
AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
B. Set up AWS Auto Scaling to scale out the ECS service when the ALB CPU utilization is too high. Set up
AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
C. Set up AWS Auto Scaling to scale out the ECS service when the service's CPU utilization is too high. Set
up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.
D. Set up AWS Auto Scaling to scale out the ECS service when the ALB target group CPU utilization is too
high. Set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too
high.
Answer: C
13. A solutions architect is designing a VPC with public and private subnets. The VPC and
subnets use IPv4 CIDR blocks. There is one public subnet and one private subnet in each of
three Availability Zones (AZs) for high availability. An internet gateway is used to provide
internet access for the public subnets. The private subnets require access to the internet to
allow Amazon EC2 instances to download software updates. What should the solutions
architect do to enable internet access for the private subnets?
A. Create three NAT gateways, one for each public subnet in each AZ. Create a private route table for
each AZ that forwards non-VPC traffic to the NAT gateway in its AZ
B. Create three NAT instances, one for each private subnet in each AZ. Create a private route table for
each AZ that forwards non-VPC traffic to the NAT instance in its AZ
C. Create a second internet gateway on one of the private subnets. Update the route table for the
private subnets that forward non-VPC traffic to the private internet gateway
D. Create an egress only internet gateway on one of the public subnets. Update the route table for the
private subnets that forward non-VPC traffic to the egress only internet gateway
Answer: A
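Option A's per-AZ routing can be sketched as one private route table per Availability Zone, each sending non-VPC traffic to the NAT gateway in its own AZ; the VPC CIDR and gateway IDs below are placeholders:

```python
# One private route table per AZ, each pointing at the NAT gateway in its
# own AZ (CIDR and gateway IDs are placeholders). Intra-VPC traffic stays
# on the implicit "local" route; everything else goes out via NAT.
vpc_cidr = "10.0.0.0/16"
private_route_tables = {
    "us-east-1a": [
        {"Destination": vpc_cidr, "Target": "local"},
        {"Destination": "0.0.0.0/0", "Target": "nat-gw-az-a"},
    ],
    "us-east-1b": [
        {"Destination": vpc_cidr, "Target": "local"},
        {"Destination": "0.0.0.0/0", "Target": "nat-gw-az-b"},
    ],
    "us-east-1c": [
        {"Destination": vpc_cidr, "Target": "local"},
        {"Destination": "0.0.0.0/0", "Target": "nat-gw-az-c"},
    ],
}
```

Keeping one NAT gateway per AZ avoids cross-AZ data charges and ensures an AZ failure only affects that AZ's private subnet.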
14. A company has an application running on Amazon EC2 instances in a private subnet. The
application needs to store and retrieve data in Amazon S3. To reduce costs, the company wants
to configure its AWS resources in a cost-effective manner.
Answer: B
15. A recently created startup built a three-tier web application. The front end has static
content. The application layer is based on microservices. User data is stored as JSON documents
that need to be accessed with low latency. The company expects regular traffic to be low during
the first year, with peaks in traffic when it publicizes new features every month. The startup
team needs to minimize operational overhead costs.
A. Use Amazon S3 static website hosting to store and serve the front end. Use AWS Elastic Beanstalk for
the application layer. Use Amazon DynamoDB to store user data.
B. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon Elastic
Kubernetes Service (Amazon EKS) for the application layer. Use Amazon DynamoDB to store user data.
C. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway
and AWS Lambda functions for the application layer. Use Amazon DynamoDB to store user data.
D. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and
AWS Lambda functions for the application layer. Use Amazon RDS with read replicas to store user data.
Answer: C
16. A company uses Amazon S3 as its object storage solution. The company has thousands of S3
buckets it uses to store data. Some of the S3 buckets have data that is accessed less frequently than
others. A solutions architect found that lifecycle policies are not consistently implemented or
are implemented only partially, resulting in data being stored in high-cost storage.
Which solution will lower costs without compromising the availability of objects?
A. Use S3 ACLs
B. Use Amazon Elastic Block Store (Amazon EBS) automated snapshots.
C. Use S3 Intelligent-Tiering storage.
D. Use S3 One Zone-Infrequent Access (S3 One Zone-IA).
Answer: C
17. A company hosts a training site on a fleet of Amazon EC2 instances. The company
anticipates that its new course, which consists of dozens of training videos on the site, will be
extremely popular when it is released in 1 week.
A. Store the videos in Amazon ElastiCache for Redis. Update the web servers to serve the videos using
the ElastiCache API.
B. Store the videos in Amazon Elastic File System (Amazon EFS). Create a user data script for the web
servers to mount the EFS volume.
C. Store the videos in an Amazon S3 bucket. Create an Amazon CloudFront distribution with an origin
access identity (OAI) for that S3 bucket. Restrict Amazon S3 access to the OAI.
D. Store the videos in an Amazon S3 bucket. Create an AWS Storage Gateway file gateway to access the
S3 bucket. Create a user data script for the web servers to mount the file gateway.
Answer: C
18. A company has applications hosted on Amazon EC2 instances with IPv6 addresses. The
applications must initiate communications with other external applications using the internet.
However, the company's security policy states that any external service cannot initiate a
connection to the EC2 instances.
A. Create a NAT gateway and make it the destination of the subnet's route table
B. Create an internet gateway and make it the destination of the subnet's route table
C. Create a virtual private gateway and make it the destination of the subnet's route table
D. Create an egress-only internet gateway and make it the destination of the subnet's route table
Answer: D
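Option D's routing can be sketched as the IPv6 route table below; the subnet's IPv6 CIDR and gateway ID are placeholders. An egress-only internet gateway is stateful: it allows outbound IPv6 connections initiated by the instances but rejects connections initiated from the internet, which matches the security policy:

```python
# Hypothetical IPv6 route table for the instances' subnet; the CIDR and
# gateway ID are placeholders. The "::/0" default route sends all
# non-local IPv6 traffic through the egress-only internet gateway.
ipv6_route_table = [
    {"Destination": "2001:db8:1234:1a00::/56", "Target": "local"},
    {"Destination": "::/0", "Target": "eigw-0123456789abcdef0"},
]
```

This is the IPv6 analogue of a NAT gateway, which only handles IPv4 and is why option A does not apply here.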
19. A company is building a document storage application on AWS. The application runs on
Amazon EC2 instances in multiple Availability Zones. The company requires the document store
to be highly available. The documents need to be returned immediately when requested. The
lead engineer has configured the application to use Amazon Elastic Block Store (Amazon EBS) to
store the documents, but is willing to consider other options to meet the availability
requirement.
A. Snapshot the EBS volumes regularly and build new volumes using those snapshots in additional
Availability Zones.
B. Use Amazon EBS for the EC2 instance root volumes. Configure the application to build the
document store on Amazon S3.
C. Use Amazon EBS for the EC2 instance root volumes. Configure the application to build the document
store on Amazon S3 Glacier.
D. Use at least three Provisioned IOPS EBS volumes for EC2 instances. Mount the volumes to the EC2
instances in RAID 5 configuration.
Answer: B
20. A company has two applications it wants to migrate to AWS. Both applications process a
large set of files by accessing the same files at the same time. Both applications need to read
the files with low latency.
A. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an
instance store volume to store the data.
B. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an
Amazon Elastic Block Store (Amazon EBS) volume to store the data.
C. Configure one memory optimized Amazon EC2 instance to run both applications simultaneously.
Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned IOPS to store the data.
D. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File
System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to
store the data.
Answer: D
21. A company is migrating a Linux-based web server group to AWS. The web servers must
access files in a shared file store for some content. To meet the migration date, minimal changes
can be made. What should a solutions architect do to meet these requirements?
A. Create an Amazon S3 Standard bucket with access to the web server.
B. Configure an Amazon CloudFront distribution with an Amazon S3 bucket as the origin
C. Create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers
D. Configure Amazon Elastic Block Store (Amazon EBS) Provisioned IOPS SSD (io1) volumes and mount
them on all web servers.
Answer: C
22. An online shopping application accesses an Amazon RDS Multi-AZ DB instance. Database
performance is slowing down the application. After upgrading to the next-generation instance
type, there was no significant performance improvement. Analysis shows approximately 700
IOPS are sustained, common queries run for long durations and memory utilization is high.
A. Migrate the RDS instance to an Amazon Redshift cluster and enable weekly garbage collection
B. Separate the long-running queries into a new Multi AZ RDS database and modify the application to
query whichever database is needed
C. Deploy a two-node Amazon ElastiCache cluster and modify the application to query the cluster first
and query the database only if needed
D. Create an Amazon Simple Queue Service (Amazon SQS) FIFO queue for common queries and query
it first and query the database only if needed
Answer: C
23. A company has an application that runs on Amazon EC2 instances within a private subnet in
a VPC. The instances access data in an Amazon S3 bucket in the same AWS Region. The VPC
contains a NAT gateway in a public subnet to access the S3 bucket. The company wants to
reduce costs by replacing the NAT gateway without compromising security or redundancy.
Answer: C
24. A company's application hosted on Amazon EC2 instances needs to access an Amazon S3
bucket. Due to data sensitivity, traffic cannot traverse the internet. How should a solutions
architect configure access?
Answer: A
26. A company uses Application Load Balancers (ALBs) in different AWS Regions. The ALBs
receive inconsistent traffic that can spike and drop throughout the year. The company's
networking team needs to allow the IP addresses of the ALBs in the on-premises firewall to
enable connectivity.
A. Write an AWS Lambda script to get the IP addresses of the ALBs in different Regions. Update the
on-premises firewall's rule to allow the IP addresses of the ALBs.
B. Migrate all ALBs in different Regions to Network Load Balancers (NLBs). Update the on-premises
firewall's rule to allow the Elastic IP addresses of all the NLBs.
C. Launch AWS Global Accelerator. Register the ALBs in different Regions to the accelerator. Update the
on-premises firewall's rule to allow the static IP addresses associated with the accelerator.
D. Launch a Network Load Balancer (NLB) in one Region. Register the private IP addresses of the ALBs in
different Regions with the NLB. Update the on-premises firewall's rule to allow the Elastic IP address
attached to the NLB.
Answer: C
27. A company wants to build a scalable key management infrastructure to support developers
who need to encrypt data in their applications. What should a solutions architect do to reduce
the operational burden?
Answer: B
28. A group requires permissions to list an Amazon S3 bucket and delete objects from that
bucket. An administrator has created the following IAM policy to provide access to the bucket
and applied that policy to the group. The group is not able to delete objects in the bucket. The
company follows least privilege access rules.
A)
A. Option A
B. Option B
C. Option C
D. Option D
Answer: A
29. A company is developing a mobile game that streams score updates to a backend processor
and then posts results on a leaderboard. A solutions architect needs to design a solution that
can handle large traffic spikes, process the mobile game updates in order of receipt, and store
the processed updates in a highly available database. The company also wants to minimize the
management overhead required to maintain the solution.
A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams
with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2
instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS
Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL
database running on Amazon EC2.
D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon
EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates
in an Amazon RDS Multi-AZ DB instance.
Answer: A
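The option A pipeline can be sketched as a Kinesis-triggered handler: Kinesis preserves per-shard ordering, Lambda decodes each base64-encoded record, and the result lands in DynamoDB. The record fields follow the standard Kinesis event shape; a plain dict stands in for the DynamoDB table so the sketch stays self-contained:

```python
import base64
import json

def handler(event, context=None, table=None):
    """Sketch of the option A pipeline. `table` stands in for a DynamoDB
    Table resource; a dict-backed stub keeps the sketch runnable without
    AWS credentials. Real code would call table.put_item(...) instead."""
    store = table if table is not None else {}
    for record in event["Records"]:
        # Kinesis delivers record payloads base64-encoded
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Records arrive in order per shard, so last write wins per player
        store[payload["player_id"]] = payload["score"]
    return store
```

The field names `player_id` and `score` are hypothetical; the game's actual payload schema is not given in the source.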
30. A company’s website is using an Amazon RDS MySQL Multi-AZ DB instance for its
transactional data storage. There are other internal systems that query this DB instance to fetch
data for internal batch processing. The RDS DB instance slows down significantly when the internal
systems fetch data. This impacts the website’s read and write performance, and the users
experience slow response times.
Answer: D
31. A company has a live chat application running on its on-premises servers that use
WebSockets. The company wants to migrate the application to AWS. Application traffic is
inconsistent, and the company expects there to be more traffic with sharp spikes in the future.
The company wants a highly scalable solution with no server maintenance and no advanced
capacity planning.
A. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store.
Configure the DynamoDB table for provisioned capacity.
B. Use Amazon API Gateway and AWS Lambda with an Amazon DynamoDB table as the data store.
Configure the DynamoDB table for on-demand capacity.
C. Run Amazon EC2 instances behind an Application Load Balancer in an Auto Scaling group with an
Amazon DynamoDB table as the data store. Configure the DynamoDB table for on-demand capacity.
D. Run Amazon EC2 instances behind a Network Load Balancer in an Auto Scaling group with an Amazon
DynamoDB table as the data store. Configure the DynamoDB table for provisioned capacity.
Answer: B
32. A company is deploying a multi-instance application within AWS that requires minimal
latency between the instances.
Answer: A
33. A company has an on-premises application that generates a large amount of time-sensitive
data that is backed up to Amazon S3. The application has grown, and there are user complaints
about internet bandwidth limitations. A solutions architect needs to design a long-term solution
that allows for both timely backups to Amazon S3 and minimal impact on internet
connectivity for internal users.
A. Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint.
B. Establish a new AWS Direct Connect connection and direct backup traffic through this new
connection.
C. Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to
AWS each day.
D. Submit a support ticket through the AWS Management Console. Request the removal of S3 service
limits from the account.
Answer: B
34. A company has a highly dynamic batch processing job that uses many Amazon EC2 instances
to complete it. The job is stateless in nature, can be started and stopped at any given time with
no negative impact, and typically takes upwards of 60 minutes total to complete. The company
has asked a solutions architect to design a scalable and cost-effective solution that meets the
requirements of the job.
Answer: A
35. A company that hosts its web application on AWS wants to ensure all Amazon EC2
instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags.
The company wants to minimize the effort of configuring and operating this check.
Answer: C
36. A solutions architect is designing a solution to access a catalog of images and provide users
with the ability to submit requests to customize images. Image customization parameters will be
in any request sent to an Amazon API Gateway API. The customized image will be generated on
demand, and users will receive a link they can click to view or download their customized image.
The solution must be highly available for viewing and customizing images.
A. Use Amazon EC2 instances to manipulate the original image into the requested customization. Store
the original and manipulated images in Amazon S3. Configure an Elastic Load Balancer in front of the EC2
instances.
B. Use AWS Lambda to manipulate the original image to the requested customization. Store the
original and manipulated images in Amazon S3. Configure an Amazon CloudFront distribution with the
S3 bucket as the origin.
C. Use AWS Lambda to manipulate the original image to the requested customization. Store the original
images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an Elastic Load
Balancer in front of the Amazon EC2 instances.
D. Use Amazon EC2 instances to manipulate the original image into the requested customization. Store
the original images in Amazon S3 and the manipulated images in Amazon DynamoDB. Configure an
Amazon CloudFront distribution with the S3 bucket as the origin.
Answer: B
37. A company's web application is using multiple Linux Amazon EC2 instances and storing data
on Amazon EBS volumes. The company is looking for a solution to increase the resiliency of the
application in case of a failure and to provide storage that complies with atomicity, consistency,
isolation, and durability (ACID).
A. Launch the application on EC2 instances in each Availability Zone. Attach EBS volumes to each EC2
instance.
B. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones.
Mount an instance store on each EC2 instance.
C. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones.
Store data on Amazon EFS and mount a target on each instance.
D. Create an Application Load Balancer with Auto Scaling groups across multiple Availability Zones.
Store data using Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).
Answer: C
38. A company has a build server that is in an Auto Scaling group and often has multiple Linux
instances running. The build server requires consistent and mountable shared NFS storage for
jobs and configurations.
A. Amazon S3
B. Amazon FSx
C. Amazon Elastic Block Store (Amazon EBS)
D. Amazon Elastic File System (Amazon EFS)
Answer: D
39. A company is designing a message-driven order processing application on AWS. The
application consists of many services and needs to communicate the results of its processing to
multiple consuming services. Each of the consuming services may take up to 5 days to receive
the messages.
A. The application sends the results of its processing to an Amazon Simple Notification Service (Amazon
SNS) topic. Each consuming service subscribes to this SNS topic and consumes the results.
B. The application sends the results of its processing to an Amazon Simple Notification Service (Amazon
SNS) topic. Each consuming service consumes the messages directly from its corresponding SNS topic.
C. The application sends the results of its processing to an Amazon Simple Queue Service (Amazon
SQS) queue. Each consuming service runs as an AWS Lambda function that consumes this single SQS
queue.
D. The application sends the results of its processing to an Amazon Simple Notification Service (Amazon
SNS) topic. An Amazon Simple Queue Service (Amazon SQS) queue is created for each service, and each
queue is configured to be a subscriber of the SNS topic.
Answer: D
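Option D's fan-out can be sketched as below; the topic and queue names are hypothetical. The key property is that SQS retains messages for up to 14 days, so a consumer that takes up to 5 days still receives every message, whereas SNS alone does not hold messages for slow consumers:

```python
# Hypothetical SNS-to-SQS fan-out topology; all names are placeholders.
# Each consuming service gets its own queue subscribed to the one topic,
# so every service receives its own durable copy of each result.
SQS_MAX_RETENTION_DAYS = 14  # SQS message retention ceiling
CONSUMER_MAX_DAYS = 5        # from the question's requirements

fanout = {
    "topic": "order-results",
    "subscriptions": [
        {"protocol": "sqs", "queue": "billing-service-queue"},
        {"protocol": "sqs", "queue": "shipping-service-queue"},
        {"protocol": "sqs", "queue": "analytics-service-queue"},
    ],
}
```

A single shared queue (option C) would not work here, because each message can only be consumed once per queue, not once per service.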
40. A company is preparing to migrate its on-premises application to AWS. The application
consists of application servers and a Microsoft SQL Server database. The database cannot be
migrated to a different engine because SQL Server features are used in the application's .NET
code. The company wants to attain the greatest availability possible while minimizing
operational and management overhead.
What should a solutions architect do to accomplish this?
Answer: B
41. A company wants to share forensic accounting data that is stored in an Amazon RDS DB
instance with an external auditor. The auditor has its own AWS account and requires its own
copy of the database.
How should the company securely share the database with the auditor?
A. Create a read replica of the database and configure IAM standard database authentication to grant
the auditor access.
B. Copy a snapshot of the database to Amazon S3 and assign an IAM role to the auditor to grant access
to the object in that bucket.
C. Export the database contents to text files, store the files in Amazon S3, and create a new IAM user
for the auditor with access to that bucket.
D. Make an encrypted snapshot of the database, share the snapshot, and allow access to the AWS Key
Management Service (AWS KMS) encryption key.
Answer: D
42. A company is building a payment application that must be highly available even during
regional service disruptions. A solutions architect must design a data storage solution that can
be easily replicated and used in other AWS Regions. The application also requires low-latency
atomicity, consistency, isolation, and durability (ACID) transactions that need to be immediately
available to generate reports. The development team also needs to use SQL.
Answer: C
43. A company wants to optimize the cost of its data storage for data that is accessed quarterly.
The company requires high throughput, low latency, and rapid access when needed. Which
Amazon S3 storage class should a solutions architect recommend?
A. Amazon S3 Glacier (S3 Glacier)
B. Amazon S3 Standard (S3 Standard)
C. Amazon S3 Intelligent-Tiering (S3 Intelligent-Tiering)
D. Amazon S3 Standard-Infrequent Access (S3 Standard-IA)
Answer: D
44. A company wants to deploy a shared file system for its .NET application servers and
Microsoft SQL Server databases running on Amazon EC2 instances with Windows Server 2016.
The solution must be able to be integrated into the corporate Active Directory domain, be
highly durable, be managed by AWS, and provide high levels of throughput and IOPS.
Answer: A
45. A company is planning to migrate its virtual server-based workloads to AWS. The company
has internet-facing load balancers backed by application servers. The application servers rely on
patches from an internet-hosted repository.
Which services should a solutions architect recommend be hosted on the public subnets?
(Select TWO.)
A. NAT gateway
B. Amazon RDS DB instances
C. Application Load Balancers
D. Amazon EC2 application servers
E. Amazon Elastic File System (Amazon EFS) volumes
Answer: A, C
46. A company has an AWS Direct Connect connection from its corporate data center to its VPC in the
us-east-1 Region. The company recently acquired a corporation that has several VPCs and a Direct
Connect connection between its on-premises data center and the eu-west-2 Region. The CIDR blocks for
the VPCs of the company and the corporation do not overlap. The company requires connectivity
between the two Regions and the data centers. The company needs a solution that is scalable while
reducing operational overhead.
47. A company has an AWS account used for software engineering. The AWS account has access to the
company's on-premises data center through a pair of AWS Direct Connect connections. All non-VPC
traffic routes to the virtual private gateway.
A development team recently created an AWS Lambda function through the console. The development
team needs to allow the function to access a database that runs in a private subnet in the company's
data center.
A. Configure the Lambda function to run in the VPC with the appropriate security group.
B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda function
through the VPN.
C. Update the route tables in the VPC to allow the Lambda function to access the on-premises data
center through Direct Connect
D. Create an Elastic IP address. Configure the Lambda function to send traffic through the Elastic IP
address without an elastic network interface.
48. A company needs to build a reporting solution on AWS. The solution must support SQL queries that
data analysts run on the data. The data analysts will run fewer than 10 total queries each day. The
company generates 3 GB of new data daily in an on-premises relational database. This data needs to be
transferred to AWS to perform reporting tasks.
What should a solutions architect recommend to meet these requirements at the LOWEST cost?
A. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database
into Amazon S3. Use Amazon Athena to query the data.
B. Use an Amazon Kinesis Data Firehose delivery stream to deliver the data into an Amazon
Elasticsearch Service (Amazon ES) cluster. Run the queries in Amazon ES.
C. Export a daily copy of the data from the on-premises database. Use an AWS Storage Gateway file
gateway to store and copy the export into Amazon S3. Use an Amazon EMR cluster to query the data.
D. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises
database and load it into an Amazon Redshift cluster. Use the Amazon Redshift cluster to query the
data.
49. A company finds that, as its use of Amazon EC2 instances grows, its Amazon Elastic Block Store
(Amazon EBS) storage costs are increasing faster than expected.
Which EBS management practices would help reduce costs? (Select TWO)
A. Convert the EBS volumes to an EC2 instance store.
B. Monitor and enforce that the DeleteOnTermination attribute is set to true for all EBS volumes,
unless persistence requirements dictate otherwise.
C. Purchase an EC2 Instance Savings Plan for all EBS volumes that are serving persistent business
requirements.
D. For EBS volumes needed for retention purposes that are not being actively used, take a snapshot and
terminate the instance and volume.
E. Convert the existing EBS volumes to EBS Provisioned IOPS SSD (io1).
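As background for option B: the DeleteOnTermination attribute is set per volume in an instance's block-device mapping. A minimal sketch of such a mapping, in the shape boto3's EC2 RunInstances accepts (the device name and volume sizing here are illustrative assumptions, not values from the question):

```python
# Sketch of a block-device mapping that enables DeleteOnTermination for the
# root volume, so the EBS volume is removed when the instance terminates.
# Device name, size, and volume type are illustrative placeholders.
block_device_mappings = [
    {
        "DeviceName": "/dev/xvda",
        "Ebs": {
            "DeleteOnTermination": True,  # volume deleted with the instance
            "VolumeSize": 8,              # GiB
            "VolumeType": "gp3",
        },
    }
]

# This structure would be passed as the BlockDeviceMappings parameter of
# ec2_client.run_instances(...); it is shown here only as data.
print(block_device_mappings[0]["Ebs"]["DeleteOnTermination"])
```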
50. A company is using Amazon CloudFront with its website. The company has enabled logging on the
CloudFront distribution, and the logs are saved in one of the company's Amazon S3 buckets. The
company needs to perform advanced analyses on the logs and build visualizations.
A. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 bucket.
Visualize the results with AWS Glue.
B. Use standard SQL queries in Amazon Athena to analyze the CloudFront logs in the S3 Bucket. Visualize
the results with Amazon QuickSight.
C. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket.
Visualize the results with AWS Glue.
D. Use standard SQL queries in Amazon DynamoDB to analyze the CloudFront logs in the S3 bucket.
Visualize the results with Amazon QuickSight.
51. A company is planning to make a series of schema changes to tables on its Amazon Aurora DB
cluster. A solutions architect needs to test the changes in the most cost-effective manner possible.
A. Create a clone of the current Aurora DB cluster. Perform the schema changes on the clone. Once the
changes are tested and performance is acceptable, apply the same changes on the original cluster.
Delete the clone.
B. Create an Amazon RDS for MySQL replica. Perform the schema changes on the replica. Once the
changes are tested and performance is acceptable, apply the same changes on the primary DB instance
Delete the replica.
C. Create an additional Aurora Replica. Perform the schema changes on the Aurora Replica. Once the
changes are tested and performance is acceptable apply the same changes on the primary DB instance.
Delete the Aurora Replica.
D. Take a snapshot of the current Aurora DB cluster. Restore the snapshot of the cluster to a new
cluster. Perform the schema changes on the restored cluster. Once the changes are tested and
performance is acceptable, apply the same changes on the original cluster. Delete the restored cluster.
52. A company is deploying an application that processes large quantities of data in parallel. The
company plans to use Amazon EC2 instances for the workload. The network architecture must be
configurable to provide the lowest possible latency between nodes.
Which combination of network solutions will meet these requirements? (Select TWO)
A. Distribute the EC2 instances across multiple Availability Zones.
B. Attach an Elastic Fabric Adapter (EFA) to each EC2 instance.
C. Place the EC2 instances in a single Availability Zone.
E. Run the EC2 instances in a cluster placement group.
53. A company wants to run a static website served through Amazon CloudFront.
What is an advantage of storing the website content in an Amazon S3 bucket instead of an Amazon
Elastic Block Store (Amazon EBS) volume?
A. S3 buckets are replicated globally, allowing for large scalability. EBS volumes are replicated only
within an AWS Region.
B. S3 is an origin for CloudFront. EBS volumes would need an EC2 instance behind an Elastic Load
Balancing load balancer to be an origin.
C. S3 buckets can be encrypted, allowing for secure storage of the web files. EBS volumes cannot be
encrypted.
D. S3 buckets support object-level read throttling, preventing abuse. EBS volumes do not provide object-
level throttling.
54. A company's cloud operations team wants to standardize resource remediation. The company wants
to provide standard governance evaluations and remediations to all member accounts in its
organization in AWS Organizations.
Which self-managed AWS service can the company use to meet these requirements with the LEAST
amount of operational overhead?
55. A company runs a fleet of web servers using an Amazon RDS for PostgreSQL DB instance. After a
routine compliance check, the company sets a standard that requires a recovery point objective (RPO) of
less than 1 second for all its production databases.
56. A company has concerns about its Amazon RDS database. The workload is unpredictable, and
periodic floods of new data can cause the company to run out of storage. The database runs on a
General Purpose instance with 300 GiB of storage.
What should a solutions architect recommend to the company?
57. A company has a web application for travel ticketing. The application is based on a database that
runs in a single data center in North America. The company wants to expand the application to serve a
global user base. The company needs to deploy the application in multiple AWS Regions. Average
latency must be less than 1 second on updates to the reservation database.
The company wants to have separate deployments of its web platform across multiple Regions.
However, the company must maintain a single primary reservation database that is globally consistent.
A. Convert the application to use Amazon DynamoDB. Use a global table for the reservation
table. Use the correct Regional endpoint in each Regional deployment.
B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in each
Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each
Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the database
to each Region. Use the correct Regional endpoint in each Regional deployment to access the database.
Use AWS Lambda functions to process event streams in each Region to synchronize the databases.
58. A solutions architect is designing an architecture that includes web, application, and database tiers.
The web tier must use Auto Scaling. The solutions architect has decided to separate each tier into its
own subnets. The design includes two public and four private subnets.
The security team requires that tiers be able to communicate with each other only when there is a
business need and that all other network traffic be blocked.
59. A company's security policy requires that all AWS API activity in its AWS accounts be recorded for
periodic auditing. The company needs to ensure that AWS CloudTrail is enabled on all of its current and
future AWS accounts using AWS Organizations.
Which solution is MOST secure?
A. At the organization's root, define and attach a service control policy (SCP) that permits enabling
CloudTrail on
B. Create IAM groups in the organization's master account as needed. Define and attach an IAM policy
to the groups that prevents users from disabling CloudTrail.
C. Organize accounts into organizational units (OUs). At the organization's root, define and attach a
service control policy (SCP) that prevents users from disabling CloudTrail.
D. Add all existing accounts under the organization's root. Define and attach a service control policy
(SCP) to every account that prevents users from disabling CloudTrail.
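As context for option C: an SCP is an IAM-style JSON policy document attached to an OU or to the organization's root. A minimal sketch of an SCP that denies the CloudTrail-disabling APIs (the statement ID is illustrative; the action names are real CloudTrail operations):

```python
import json

# Sketch of a service control policy (SCP) denying the API calls that would
# disable or remove CloudTrail in member accounts. The Sid is illustrative.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCloudTrailDisable",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
            ],
            "Resource": "*",
        }
    ],
}

# An SCP limits the maximum available permissions; it applies to all users
# and roles in the accounts it is attached to, including account admins.
print(json.dumps(scp, indent=2))
```

The document would be attached with AWS Organizations (for example, via the console or the CreatePolicy/AttachPolicy APIs); attaching it at an OU, as option C describes, covers every current and future account in that OU.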