
SAA-3 Total points 42/65

Malpi

0 of 0 points

Name *

Gokul Upadhyay Guragain

Email: *

[email protected]

SET 4 42 of 65 points
Q.1 A media company uses Amazon ElastiCache Redis to enhance the *1/1
performance of its RDS database layer. The company wants a robust
disaster recovery strategy for its caching layer that guarantees minimal
downtime as well as minimal data loss while ensuring good application
performance.

Which of the following solutions will you recommend to address the given use-case?

Schedule manual backups using Redis append-only file (AOF)

Opt for Multi-AZ configuration with automatic failover functionality to help mitigate failure

Schedule daily automatic backups at a time when you expect low resource
utilization for your cluster

Add read-replicas across multiple availability zones to reduce the risk of potential
data loss because of failure

Feedback

Correct option:

Opt for Multi-AZ configuration with automatic failover functionality to help mitigate failure
- Multi-AZ is the best option when data retention, minimal downtime, and application
performance are a priority.

Data-loss potential - Low. Multi-AZ provides fault tolerance for every scenario, including
hardware-related issues.

Performance impact - Low. Of the available options, Multi-AZ provides the fastest time to
recovery, because there is no manual procedure to follow after the process is
implemented.

Cost - Low to high. Multi-AZ is the lowest-cost option. Use Multi-AZ when you can't risk
losing data because of hardware failure or you can't afford the downtime required by other
options in your response to an outage.
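For illustration only, a minimal boto3 sketch of the recommended setup: a Redis replication group with one replica, Multi-AZ and automatic failover enabled. All identifiers and node sizes below are placeholder values, not taken from the question.

import boto3

elasticache = boto3.client("elasticache")

# Hypothetical replication group; Multi-AZ requires automatic failover
# and at least one replica in a different Availability Zone.
elasticache.create_replication_group(
    ReplicationGroupId="media-cache",                # placeholder name
    ReplicationGroupDescription="Redis cache with Multi-AZ failover",
    Engine="redis",
    CacheNodeType="cache.r6g.large",                 # placeholder node size
    NumCacheClusters=2,                              # primary plus one replica
    AutomaticFailoverEnabled=True,
    MultiAZEnabled=True,
)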
Q.2 As part of the on-premises data center migration to AWS Cloud, a *1/1
company is looking at using multiple AWS Snow Family devices to move
their on-premises data.

Which Snow Family service offers the feature of storage clustering?

AWS Snowcone

AWS Snowball Edge Compute Optimized

AWS Snowmobile Storage Compute

AWS Snowmobile

Feedback

AWS Snowball Edge Compute Optimized - AWS Snowball is a data migration and edge
computing device that comes in two device options: Compute Optimized and Storage
Optimized. Snowball Edge Storage Optimized devices provide 40 vCPUs of compute
capacity coupled with 80 terabytes of usable block or Amazon S3-compatible object
storage. It is well-suited for local storage and large-scale data transfer. Snowball Edge
Compute Optimized devices provide 52 vCPUs, 42 terabytes of usable block or object
storage, and an optional GPU for use cases such as advanced machine learning and full-
motion video analysis in disconnected environments.
Q.3 A company has migrated its application from a monolith architecture *1/1
to a microservices based architecture. The development team has
updated the Route 53 simple record to point "myapp.mydomain.com"
from the old Load Balancer to the new one.
The users are still not redirected to the new Load Balancer. What has
gone wrong in the configuration?

The TTL is still in effect

The CNAME Record is misconfigured

The Alias Record is misconfigured

The health checks are failing

Feedback

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS)
web service. Amazon Route 53 effectively connects user requests to infrastructure
running in AWS – such as Amazon EC2 instances, Elastic Load Balancing load balancers,
or Amazon S3 buckets – and can also be used to route users to infrastructure outside of
AWS.

You can use Amazon Route 53 to configure DNS health checks to route traffic to healthy
endpoints or to independently monitor the health of your application and its endpoints.
Amazon Route 53 Traffic Flow makes it easy for you to manage traffic globally through a
variety of routing types, including Latency Based Routing, Geo DNS, Geoproximity, and
Weighted Round Robin—all of which can be combined with DNS Failover to enable a
variety of low-latency, fault-tolerant architectures.

The TTL is still in effect - TTL (time to live) is the amount of time, in seconds, that you
want DNS recursive resolvers to cache information about a record. If you specify a longer
value (for example, 172800 seconds, or two days), you reduce the number of calls that
DNS recursive resolvers must make to Route 53 to get the latest information for the
record. This has the effect of reducing latency and reducing your bill for Route 53 service.

However, if you specify a longer value for TTL, it takes longer for changes to the record (for
example, a new IP address) to take effect because recursive resolvers use the values in
their cache for longer periods before they ask Route 53 for the latest information. If you're
changing settings for a domain or subdomain that's already in use, AWS recommends that
you initially specify a shorter value, such as 300 seconds, and increase the value after you
confirm that the new settings are correct.

For this use-case, the most likely issue is that the TTL is still in effect so you have to wait
until it expires for the new request to perform another DNS query and get the value for the
new Load Balancer.
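To illustrate the advice above, a hedged boto3 sketch that re-points a CNAME record at the new load balancer while temporarily using a short 300-second TTL; the hosted zone ID and DNS names are placeholders, and an alias record would be configured differently.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Point the app at the new load balancer with a short TTL",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "myapp.mydomain.com",
                "Type": "CNAME",
                "TTL": 300,  # short TTL so resolvers pick up the change quickly
                "ResourceRecords": [{"Value": "new-alb-123.us-east-1.elb.amazonaws.com"}],
            },
        }],
    },
)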
Q.4 A systems administrator is creating IAM policies and attaching them *1/1
to IAM identities. After creating the necessary identity-based policies, the
administrator is now creating resource-based policies.

Which is the only resource-based policy that the IAM service supports?

Permissions boundary

AWS Organizations Service Control Policies (SCP)

Trust policy

Access control list (ACL)

Feedback

Correct option:

You manage access in AWS by creating policies and attaching them to IAM identities
(users, groups of users, or roles) or AWS resources. A policy is an object in AWS that,
when associated with an identity or resource, defines their permissions. Resource-based
policies are JSON policy documents that you attach to a resource such as an Amazon S3
bucket. These policies grant the specified principal permission to perform specific actions
on that resource and define under what conditions this applies.

Trust policy - Trust policies define which principal entities (accounts, users, roles, and
federated users) can assume the role. An IAM role is both an identity and a resource that
supports resource-based policies. For this reason, you must attach both a trust policy and
an identity-based policy to an IAM role. The IAM service supports only one type of
resource-based policy called a role trust policy, which is attached to an IAM role.
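A minimal sketch of that idea, assuming a role that EC2 instances are allowed to assume; the trust policy, the only resource-based policy IAM supports, is passed as the AssumeRolePolicyDocument. The role name is a placeholder.

import boto3
import json

iam = boto3.client("iam")

# Role trust policy: defines which principal may assume the role (here, EC2).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="example-ec2-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)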
Q.5 You are working as an AWS architect for a weather tracking facility. *0/1
You are asked to set up a Disaster Recovery (DR) mechanism with
minimum costs. In case of failure, the facility can only bear data loss of a
few minutes without jeopardizing the forecasting models.

As a Solutions Architect, which DR method will you suggest?

Multi-Site

Warm Standby

Pilot Light

Backup and Restore

Correct answer

Pilot Light

Feedback

Incorrect options:

Backup and Restore - In most traditional environments, data is backed up to tape and sent
off-site regularly. If you use this method, it can take a long time to restore your system in
the event of a disruption or disaster. Amazon S3 is an ideal destination for backup data
that might be needed quickly to perform a restore. Transferring data to and from Amazon
S3 is typically done through the network and is therefore accessible from any location.
There are many commercial and open-source backup solutions that integrate with
Amazon S3. You can use AWS Import/Export to transfer very large data sets by shipping
storage devices directly to AWS. For longer-term data storage where retrieval times of
several hours are adequate, there is Amazon Glacier, which has the same durability model
as Amazon S3. Amazon Glacier is a low-cost alternative starting from $0.01/GB per
month. Amazon Glacier and Amazon S3 can be used in conjunction to produce a tiered
backup solution. Even though the Backup and Restore method is cheaper, it has an RPO in hours, so this option is not the right fit.

Warm Standby - The term warm standby is used to describe a DR scenario in which a
scaled-down version of a fully functional environment is always running in the cloud. A
warm standby solution extends the pilot light elements and preparation. It further
decreases the recovery time because some services are always running. By identifying
your business-critical systems, you can fully duplicate these systems on AWS and have
them always on. This option is more costly compared to Pilot Light.

Multi-Site - A multi-site solution runs on AWS as well as on your existing on-site infrastructure in an active-active configuration. The data replication method that you employ will be determined by the recovery objectives that you choose, either the Recovery Time Objective (the maximum allowable downtime before operations are restored) or the Recovery Point Objective (the maximum allowable time window for which you will accept the loss of transactions during the DR process). This option is more costly compared to Pilot Light.

References:

https://aws.amazon.com/blogs/publicsector/rapidly-recover-mission-critical-systems-in-a-disaster/

https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/plan-for-disaster-recovery-dr.html

https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/disaster-recovery-dr-objectives.html

Q.6 A Pharmaceuticals company is looking for a simple solution to *1/1
connect its VPCs and on-premises networks through a central hub.

As a Solutions Architect, which of the following would you suggest as the solution that requires the LEAST operational overhead?

Partially meshed VPC peering can be used to connect the Amazon VPCs to the on-
premises networks

Fully meshed VPC peering can be used to connect the Amazon VPCs to the on-
premises networks

Use AWS Transit Gateway to connect the Amazon VPCs to the on-premises
networks

Use Transit VPC Solution to connect the Amazon VPCs to the on-premises
networks
Q.7 A company uses Application Load Balancers (ALBs) in multiple AWS *1/1
Regions. The ALBs receive inconsistent traffic that varies throughout the
year. The engineering team at the company needs to allow the IP
addresses of the ALBs in the on-premises firewall to enable connectivity.

Which of the following represents the MOST scalable solution with minimal configuration changes?

Set up AWS Global Accelerator. Register the ALBs in different Regions to the
Global Accelerator. Configure the on-premises firewall's rule to allow static IP
addresses associated with the Global Accelerator

Develop an AWS Lambda script to get the IP addresses of the ALBs in different
Regions. Configure the on-premises firewall's rule to allow the IP addresses of the
ALBs

Migrate all ALBs in different Regions to the Network Load Balancer (NLBs).
Configure the on-premises firewall's rule to allow the Elastic IP addresses of all the
NLBs

Set up a Network Load Balancer (NLB) in one Region. Register the private IP
addresses of the ALBs in different Regions with the NLB. Configure the on-
premises firewall's rule to allow the Elastic IP address attached to the NLB
Q.8 The engineering team at a leading e-commerce company is *0/1
anticipating a surge in the traffic because of a flash sale planned for the
weekend. You have estimated the web traffic to be 10x. The content of
your website is highly dynamic and changes very often.

As a Solutions Architect, which of the following options would you recommend to make sure your infrastructure scales for that day?

Use a Route53 Multi Value record

Deploy the website on S3

Use a CloudFront distribution in front of your website

Use an Auto Scaling Group

Correct answer

Use an Auto Scaling Group


Q.9 An enterprise has decided to move its secondary workloads such as *1/1
backups and archives to AWS cloud. The CTO wishes to move the data
stored on physical tapes to Cloud, without changing their current tape
backup workflows. The company holds petabytes of data on tapes and
needs a cost-optimized solution to move this data to cloud.

What is an optimal solution that meets these requirements while keeping the costs to a minimum?

Use AWS VPN connection between the on-premises datacenter and your Amazon
VPC. Once this is established, you can use Amazon Elastic File System (Amazon
EFS) to get a scalable, fully managed elastic NFS file system for use with AWS
Cloud services and on-premises resources

Use Tape Gateway, which can be used to move on-premises tape data onto
AWS Cloud. Then, Amazon S3 archiving storage classes can be used to store
data cost-effectively for years

Use AWS DataSync, which makes it simple and fast to move large amounts of data
online between on-premises storage and AWS Cloud. Data moved to Cloud can
then be stored cost-effectively in Amazon S3 archiving storage classes

Use AWS Direct Connect, a cloud service solution that makes it easy to establish a
dedicated network connection from on-premises to AWS to transfer data. Once this
is done, Amazon S3 can be used to store data at lesser costs

Q.10 A development team has configured an Elastic Load Balancer for *1/1
host-based routing. The idea is to support multiple subdomains and
different top-level domains.

The rule *.example.com matches which of the following?

EXAMPLE.COM

example.com

example.test.com

test.example.com
Q.11 An e-commerce company has copied 1 PB of data from its on- *1/1
premises data center to an Amazon S3 bucket in the us-west-1 Region
using an AWS Direct Connect link. The company now wants to set up a
one-time copy of the data to another S3 bucket in the us-east-1 Region.
The on-premises data center does not allow the use of AWS Snowball.

As a Solutions Architect, which of the following options can be used to accomplish this goal? (Select two)

Set up S3 Transfer Acceleration to copy objects across S3 buckets in different Regions using S3 console

Copy data from the source S3 bucket to a target S3 bucket using the S3 console

Copy data from the source bucket to the destination bucket using the aws S3
sync command

Set up S3 batch replication to copy objects across S3 buckets in another Region using S3 console and then delete the replication configuration

Use Snowball Edge device to copy the data from one Region to another Region

Q.12 Amazon Route 53 is configured to route traffic to two Network Load *1/1
Balancer (NLB) nodes belonging to two Availability Zones (AZs): AZ-A
and AZ-B. Cross-zone load balancing is disabled. AZ-A has four targets
and AZ-B has six targets.

Which of the below statements is true about traffic distribution to the target instances from Route 53?

Each of the four targets in AZ-A receives 12.5% of the traffic

Each of the six targets in AZ-B receives 10% of the traffic

Each of the four targets in AZ-A receives 8% of the traffic

Each of the four targets in AZ-A receives 10% of the traffic
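Working through the arithmetic behind the marked answer: with cross-zone load balancing disabled, Route 53 resolves to both NLB nodes, so each node receives roughly 50% of the traffic and can forward it only to targets in its own Availability Zone. AZ-A therefore splits its 50% across 4 targets (50% / 4 = 12.5% each), while AZ-B splits its 50% across 6 targets (50% / 6 ≈ 8.3% each), so each AZ-A target receives 12.5% of the traffic.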


Q.13 A small rental company had 5 employees, all working under the *1/1
same AWS cloud account. These employees deployed their applications
built for various functions- including billing, operations, finance, etc. Each
of these employees has been operating in their own VPC. Now, there is a
need to connect these VPCs so that the applications can communicate
with each other.

Which of the following is the MOST cost-effective solution for this use-
case?

Use VPC Peering

Use a Direct Connect

Use an Internet Gateway

Use a NAT Gateway

Q.14 As a solutions architect, you have created a solution that utilizes an *1/1
Application Load Balancer with stickiness and an Auto Scaling Group
(ASG). The ASG spawns across 2 Availability Zones (AZ). AZ-A has 3 EC2
instances and AZ-B has 4 EC2 instances. The ASG is about to go into a
scale-in event due to the triggering of a CloudWatch alarm.

What will happen under the default ASG configuration?

An instance in the AZ-A will be created

A random instance will be terminated in AZ-B

A random instance in the AZ-A will be terminated

The instance with the oldest launch configuration will be terminated in AZ-B
Q.15 A startup's cloud infrastructure consists of a few Amazon EC2 *0/1
instances, Amazon RDS instances and Amazon S3 storage. A year into
their business operations, the startup is incurring costs that seem too
high for their business requirements.

Which of the following options represents a valid cost-optimization solution?

Use Amazon S3 Storage class analysis to get recommendations for transitions of objects to S3 Glacier storage classes to reduce storage costs. You can also automate moving these objects into lower-cost storage tier using Lifecycle Policies

Use AWS Compute Optimizer recommendations to help you choose the optimal
Amazon EC2 purchasing options and help reserve your instance capacities at
reduced costs

Use AWS Trusted Advisor checks on Amazon EC2 Reserved Instances to automatically renew Reserved Instances. Trusted Advisor also suggests Amazon RDS idle DB instances

Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that
are either idle or have low utilization and use AWS Compute Optimizer to look at
instance type recommendations

Correct answer

Use AWS Cost Explorer Resource Optimization to get a report of EC2 instances that
are either idle or have low utilization and use AWS Compute Optimizer to look at
instance type recommendations
Q.16 A company has recently created a new department to handle their *1/1
services workload. An IT team has been asked to create a custom VPC to
isolate the resources created in this new department. They have set up
the public subnet and internet gateway (IGW). However, they are not able
to ping the Amazon EC2 instances with Elastic IP launched in the newly
created VPC.

As a Solutions Architect, the team has requested your help. How will you
troubleshoot this scenario? (Select two)

Check if the route table is configured with IGW

Create a secondary IGW to attach with public subnet and move the current IGW to
private and write route tables

Contact AWS support to map your VPC with subnet

Disable Source / Destination check on the EC2 instance

Check if the security groups allow ping from the source


Q.17 Your company is deploying a website running on Elastic Beanstalk. *1/1
The website takes over 45 minutes for the installation and contains both
static as well as dynamic files that must be generated during the
installation process.

As a Solutions Architect, you would like to bring the time to create a new
instance in your Elastic Beanstalk deployment to be less than 2 minutes.
Which of the following options should be combined to build a solution for
this requirement? (Select two)

Use EC2 user data to customize the dynamic installation parts at boot time

Use EC2 user data to install the application at boot time

Store the installation files in S3 so they can be quickly retrieved

Create a Golden AMI with the static installation components already setup

Use Elastic Beanstalk deployment caching feature


Q.18 A junior developer has downloaded a sample Amazon S3 bucket *0/1
policy to make changes to it based on new company-wide access
policies. He has requested your help in understanding this bucket policy.

As a Solutions Architect, which of the following would you identify as the correct description for the given policy?

It ensures EC2 instances that have inherited a security group can access the bucket

It ensures the S3 bucket is exposing an external IP within the CIDR range specified, except one IP

It authorizes an entire CIDR except one IP address to access the S3 bucket

It authorizes an IP address and a CIDR to access the S3 bucket

Correct answer

It authorizes an entire CIDR except one IP address to access the S3 bucket


Q.19 You have an S3 bucket that contains files in two different folders - *0/1
s3://my-bucket/images and s3://my-bucket/thumbnails. When an image
is first uploaded and new, it is viewed several times. But after 45 days,
analytics prove that image files are on average rarely requested, but the
thumbnails still are. After 180 days, you would like to archive the image
files and the thumbnails. Overall you would like the solution to remain
highly available to prevent disasters happening against a whole AZ.

How can you implement an efficient cost strategy for your S3 bucket?
(Select two)

Create a Lifecycle Policy to transition objects to Glacier using a prefix after 180
days

Create a Lifecycle Policy to transition all objects to S3 Standard IA after 45 days

Create a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days

Create a Lifecycle Policy to transition objects to S3 One Zone IA using a prefix after
45 days

Create a Lifecycle Policy to transition all objects to Glacier after 180 days

Correct answer

Create a Lifecycle Policy to transition objects to S3 Standard IA using a prefix after 45 days

Create a Lifecycle Policy to transition all objects to Glacier after 180 days
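A minimal boto3 sketch of the two lifecycle rules described above, assuming the image files live under the images/ prefix; the bucket name and rule IDs are placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                # Rarely read images move to Standard-IA after 45 days.
                "ID": "images-to-standard-ia",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 45, "StorageClass": "STANDARD_IA"}],
            },
            {
                # Everything (images and thumbnails) is archived to Glacier after 180 days.
                "ID": "archive-all-to-glacier",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
            },
        ]
    },
)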
Q.20 For security purposes, a development team has decided to deploy *1/1
the EC2 instances in a private subnet. The team plans to use VPC
endpoints so that the instances can access some AWS services securely.
The members of the team would like to know about the two AWS
services that support Gateway Endpoints.

As a solutions architect, which of the following services would you suggest for this requirement? (Select two)

Amazon Simple Notification Service (SNS)

Amazon Simple Queue Service (SQS)

Amazon S3

DynamoDB
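For reference, a hedged boto3 sketch of creating a Gateway endpoint for S3; DynamoDB works the same way with its own service name. The VPC ID, route table ID and Region are placeholders.

import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",  # placeholder Region in the service name
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder route table ID
)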

Q.21 A social media company wants the capability to dynamically alter *1/1
the size of a geographic area from which traffic is routed to a specific
server resource.

Which feature of Route 53 can help achieve this functionality?

Weighted routing

Geolocation routing

Geoproximity routing

Latency-based routing
Q.22 A company wants to grant access to an S3 bucket to users in its *0/1
own AWS account as well as to users in another AWS account. Which of
the following options can be used to meet this requirement?

Use a bucket policy to grant permission to users in its account as well as to users
in another account

Use permissions boundary to grant permission to users in its account as well as to users in another account

Use either a bucket policy or a user policy to grant permission to users in its
account as well as to users in another account

Use a user policy to grant permission to users in its account as well as to users in
another account

Correct answer

Use a bucket policy to grant permission to users in its account as well as to users in
another account
Q.23 A company runs a popular dating website on the AWS Cloud. As a *0/1
Solutions Architect, you've designed the architecture of the website to
follow a serverless pattern on the AWS Cloud using API Gateway and
AWS Lambda. The backend uses an RDS PostgreSQL database. Currently,
the application uses a username and password combination to connect
the Lambda function to the RDS database.

You would like to improve the security at the authentication level by leveraging short-lived credentials. What will you choose? (Select two)

Attach an AWS Identity and Access Management (IAM) role to AWS Lambda

Restrict the RDS database security group to the Lambda's security group

Use IAM authentication from Lambda to RDS PostgreSQL

Embed a credential rotation logic in the AWS Lambda, retrieving them from SSM

Deploy AWS Lambda in a VPC

Correct answer

Attach an AWS Identity and Access Management (IAM) role to AWS Lambda

Use IAM authentication from Lambda to RDS PostgreSQL
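A minimal sketch of how the short-lived credential could be obtained inside the Lambda function, assuming IAM database authentication is enabled on the RDS instance; the endpoint, user and Region are placeholders.

import boto3

rds = boto3.client("rds")

# Generates a short-lived (15-minute) token that is used as the database
# password when connecting over SSL; no long-term password is stored.
token = rds.generate_db_auth_token(
    DBHostname="mydb.xxxxxxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    Port=5432,
    DBUsername="app_user",                                   # placeholder DB user
    Region="us-east-1",
)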


Q.24 An e-commerce company wants to migrate its on-premises *0/1
application to AWS. The application consists of application servers and a
Microsoft SQL Server database. The solution should result in the
maximum possible availability for the database layer while minimizing
operational and management overhead.

As a solutions architect, which of the following would you recommend to meet the given requirements?

Migrate the data to EC2 instance hosted SQL Server database. Deploy the EC2
instances in a Multi-AZ configuration

Migrate the data to Amazon RDS for SQL Server database in a cross-region read-
replica configuration

Migrate the data to Amazon RDS for SQL Server database in a cross-region
Multi-AZ deployment

Migrate the data to Amazon RDS for SQL Server database in a Multi-AZ deployment

Correct answer

Migrate the data to Amazon RDS for SQL Server database in a Multi-AZ deployment
Q.25 As an e-sport tournament hosting company, you have servers that *1/1
need to scale and be highly available. Therefore you have deployed an
Elastic Load Balancer (ELB) with an Auto Scaling group (ASG) across 3
Availability Zones (AZs). When e-sport tournaments are running, the
servers need to scale quickly. And when tournaments are done, the
servers can be idle. As a general rule, you would like to be highly
available, have the capacity to scale and optimize your costs.

What do you recommend? (Select two)

Use Dedicated hosts for the minimum capacity

Set the minimum capacity to 1

Use Reserved Instances for the minimum capacity

Set the minimum capacity to 3

Set the minimum capacity to 2

Q.26 A financial services firm has traditionally operated with an on- *1/1
premise data center and would like to create a disaster recovery strategy
leveraging the AWS Cloud.

As a Solutions Architect, you would like to ensure that a scaled-down version of a fully functional environment is always running in the AWS cloud, and in case of a disaster, the recovery time is kept to a minimum. Which disaster recovery strategy is that?

Multi Site

Warm Standby

Pilot Light

Backup and Restore


Q.27 A digital media company needs to manage uploads of around 1TB *1/1
each from an application being used by a partner company.

As a Solutions Architect, how will you handle the upload of these files to
Amazon S3?

Use multi-part upload feature of Amazon S3

Use Direct Connect to provide extra bandwidth

Use Amazon S3 Versioning

Use AWS Snowball


Q.28 What does this CloudFormation snippet do? (Select three) * 1/1

It prevents traffic from reaching on HTTP unless from the IP 192.168.1.1

It allows any IP to pass through on the HTTP port

It configures a security group's outbound rules

It lets traffic flow from one IP on port 22

It configures an NACL's inbound rules

It configures a security group's inbound rules

It only allows the IP 0.0.0.0 to reach HTTP


Q.29 The engineering team at a global e-commerce company is currently *1/1
reviewing their disaster recovery strategy. The team has outlined that
they need to be able to quickly recover their application stack with a
Recovery Time Objective (RTO) of 5 minutes, in all of the AWS Regions
that the application runs. The application stack currently takes over 45
minutes to install on a Linux system.

As a Solutions Architect, which of the following options would you recommend as the disaster recovery strategy?

Use Amazon EC2 user data to speed up the installation process

Create an AMI after installing the software and use this AMI to run the recovery
process in other Regions

Create an AMI after installing the software and copy the AMI across all Regions.
Use this Region-specific AMI to run the recovery process in the respective
Regions

Store the installation files in Amazon S3 for quicker retrieval

Q.30 An IT company runs a high-performance computing (HPC) workload *1/1
on AWS. The workload requires high network throughput and low-latency
network performance along with tightly coupled node-to-node
communications. The EC2 instances are properly sized for compute and
storage capacity and are launched using default options.

Which of the following solutions can be used to improve the performance of the workload?

Select dedicated instance tenancy while launching EC2 instances

Select an Elastic Inference accelerator while launching EC2 instances

Select the appropriate capacity reservation while launching EC2 instances

Select a cluster placement group while launching EC2 instances


Q.31 You are working as a Solutions Architect for a photo processing *0/1
company that has a proprietary algorithm to compress an image without
any loss in quality. Because of the efficiency of the algorithm, your clients
are willing to wait for a response that carries their compressed images
back. You also want to process these jobs asynchronously and scale
quickly, to cater to the high demand. Additionally, you also want the job to
be retried in case of failures.

Which combination of choices do you recommend to minimize cost and comply with the requirements? (Select two)

Amazon Simple Notification Service (SNS)

EC2 Spot Instances

EC2 Reserved Instances

Amazon Simple Queue Service (SQS)

EC2 On-Demand Instances

Correct answer

EC2 Spot Instances

Amazon Simple Queue Service (SQS)


Q.32 As a Solutions Architect, you are tasked to design a distributed *1/1
application that will run on various EC2 instances. This application needs
to have the highest performance local disk to cache data. Also, data is
copied through an EC2 to EC2 replication mechanism. It is acceptable if
the instance loses its data when stopped or terminated.

Which storage solution do you recommend?

Amazon Elastic Block Store (EBS)

Instance Store

Amazon Elastic File System (Amazon EFS)

Amazon Simple Storage Service (Amazon S3)

Q.33 A CRM company has a SaaS (Software as a Service) application *1/1
that feeds updates to other in-house and third-party applications. The
SaaS application and the in-house applications are being migrated to use
AWS services for this inter-application communication.

As a Solutions Architect, which of the following would you suggest to asynchronously decouple the architecture?

Use Amazon Simple Notification Service (SNS) to communicate between systems and decouple the architecture

Use Elastic Load Balancing for effective decoupling of system architecture

Use Amazon Simple Queue Service (SQS) to decouple the architecture

Use Amazon EventBridge to decouple the system architecture


Q.34 A Big Data processing company has created a distributed data *0/1
processing framework that performs best if the network performance
between the processing machines is high. The application has to be
deployed on AWS, and the company is only looking at performance as the
key measure.

As a Solutions Architect, which deployment do you recommend?

Use a Cluster placement group

Optimize the EC2 kernel using EC2 User Data

Use Spot Instances

Use a Spread placement group

Correct answer

Use a Cluster placement group

Q.35 A mobile gaming company is experiencing heavy read traffic to its *1/1
Amazon Relational Database Service (RDS) database that retrieves
player’s scores and stats. The company is using an RDS database
instance type that is not cost-effective for their budget. The company
would like to implement a strategy to deal with the high volume of read
traffic, reduce latency, and also downsize the instance size to cut costs.

Which of the following solutions do you recommend?

Setup ElastiCache in front of RDS

Move to Amazon Redshift

Switch application code to AWS Lambda for better performance

Setup RDS Read Replicas


Q.36 A retail company uses AWS Cloud to manage its technology *0/1
infrastructure. The company has deployed its consumer-focused web
application on EC2-based web servers and uses RDS PostgreSQL DB as
the data store. The PostgreSQL DB is set up in a private subnet that
allows inbound traffic from selected EC2 instances. The DB also uses
AWS KMS for encrypting data at rest.

Which of the following steps would you recommend to facilitate secure access to the database?

Create a new Network Access Control List (NACL) that blocks SSH from the entire
EC2 subnet into the DB

Use IAM authentication to access the DB instead of the database user's access
credentials

Configure RDS to use SSL for data in transit

Create a new security group that blocks SSH from the selected EC2 instances into
the DB

Correct answer

Configure RDS to use SSL for data in transit


Q.37 You are working for a SaaS (Software as a Service) company as a *1/1
solutions architect and help design solutions for the company's
customers. One of the customers is a bank and has a requirement to
whitelist up to two public IPs when the bank is accessing external
services across the internet.

Which architectural choice do you recommend to maintain high availability, support scaling-up to 10 instances and comply with the bank's requirements?

Use a Classic Load Balancer with an Auto Scaling Group (ASG)

Use an Auto Scaling Group (ASG) with Dynamic Elastic IPs attachment

Use an Application Load Balancer with an Auto Scaling Group (ASG)

Use a Network Load Balancer with an Auto Scaling Group (ASG)

Q.38 You started a new job as a solutions architect at a company that *0/1
has both AWS experts and people learning AWS. Recently, a developer
misconfigured a newly created RDS database which resulted in a
production outage.

How can you ensure that RDS specific best practices are incorporated
into a reusable infrastructure template to be used by all your AWS users?

Attach an IAM policy to interns preventing them from creating an RDS database

Create a Lambda function which sends emails when it finds misconfigured RDS
databases

Store your recommendations in a custom Trusted Advisor rule

Use CloudFormation to manage RDS databases

Correct answer

Use CloudFormation to manage RDS databases


Q.39 A ride-sharing company wants to improve the ride-tracking system *1/1
that stores GPS coordinates for all rides. The engineering team at the
company is looking for a NoSQL database that has single-digit
millisecond latency, can scale horizontally, and is serverless, so that they
can perform high-frequency lookups reliably.

As a Solutions Architect, which database do you recommend for their requirements?

Amazon Relational Database Service (Amazon RDS)

Amazon DynamoDB

Amazon Neptune

Amazon ElastiCache

Q.40 A company has grown from a small startup to an enterprise *1/1
employing over 1000 people. As the team size has grown, the company
has recently observed some strange behavior, with S3 buckets settings
being changed regularly.

How can you figure out what's happening without restricting the rights of
the users?

Use CloudTrail to analyze API calls

Implement an IAM policy to forbid users to change S3 bucket settings

Use S3 access logs to analyze user access using Athena

Implement a bucket policy requiring MFA for all operations
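As an illustrative sketch only, recent bucket-settings changes could be pulled from CloudTrail event history with boto3; PutBucketPolicy is just one example of such an API call.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up who recently called PutBucketPolicy; similar lookups work for
# other bucket-settings APIs (PutBucketAcl, PutPublicAccessBlock, ...).
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "PutBucketPolicy"}],
    MaxResults=50,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username"), event["EventName"])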


Q.41 An IT company has a large number of clients opting to build their *0/1
APIs by using Docker containers. To facilitate the hosting of these
containers, the company is looking at various orchestration services
available with AWS.
As a Solutions Architect, which of the following solutions will you
suggest? (Select two)

Use Amazon EKS with AWS Fargate for serverless orchestration of the
containerized services

Use Amazon ECS with AWS Fargate for serverless orchestration of the
containerized services

Use Amazon EMR for serverless orchestration of the containerized services

Use Amazon SageMaker for serverless orchestration of the containerized services

Use Amazon ECS with Amazon EC2 for serverless orchestration of the
containerized services

Correct answer

Use Amazon EKS with AWS Fargate for serverless orchestration of the
containerized services

Use Amazon ECS with AWS Fargate for serverless orchestration of the
containerized services
Q.42 A photo hosting service publishes a collection of beautiful mountain *1/1
images, every month, that aggregate over 50 GB in size and downloaded
all around the world. The content is currently hosted on EFS and
distributed by Elastic Load Balancing (ELB) and Amazon EC2 instances.
The website is experiencing high load each month and very high network
costs.

As a Solutions Architect, what can you recommend that won't force an application refactor and will reduce network costs and EC2 load drastically?

Create a CloudFront distribution

Upgrade the Amazon EC2 instances

Enable ELB caching

Host the master pack onto Amazon S3 for faster access


Q.43 An Elastic Load Balancer has marked all the EC2 instances in the *0/1
target group as unhealthy. Surprisingly, when a developer enters the IP
address of the EC2 instances in the web browser, he can access the
website.

What could be the reason the instances are being marked as unhealthy?
(Select two)

Your web-app has a runtime that is not supported by the Application Load
Balancer

The EBS volumes have been improperly mounted

The route for the health check is misconfigured

You need to attach Elastic IP to the EC2 instances

The security group of the EC2 instance does not allow for traffic from the
security group of the Application Load Balancer

Correct answer

The route for the health check is misconfigured

The security group of the EC2 instance does not allow for traffic from the security
group of the Application Load Balancer
Q.44 A retail company is using AWS Site-to-Site VPN connections for *1/1
secure connectivity to its AWS cloud resources from its on-premises data
center. Due to a surge in traffic across the VPN connections to the AWS
cloud, users are experiencing slower VPN connectivity.

Which of the following options will maximize the VPN throughput?

Create a transit gateway with equal cost multipath routing and add additional
VPN tunnels

Use AWS Global Accelerator for the VPN connection to maximize the throughput

Create a virtual private gateway with equal cost multipath routing and multiple
channels

Use Transfer Acceleration for the VPN connection to maximize the throughput
Q.45 The engineering team at an e-commerce company has been tasked *0/1
with migrating to a serverless architecture. The team wants to focus on
the key points of consideration when using Lambda as a backbone for
this architecture.

As a Solutions Architect, which of the following options would you identify as correct for the given requirement? (Select three)

If you intend to reuse code in more than one Lambda function, you should consider
creating a Lambda Layer for the reusable code

Serverless architecture and containers complement each other but you cannot
package and deploy Lambda functions as container images

Lambda allocates compute power in proportion to the memory you allocate to your
function. AWS, thus recommends to over provision your function time out settings
for the proper performance of Lambda functions

By default, Lambda functions always operate from an AWS-owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources

Since Lambda functions can scale extremely quickly, it's a good idea to deploy a
CloudWatch Alarm that notifies your team when function metrics such as
ConcurrentExecutions or Invocations exceeds the expected threshold

The bigger your deployment package, the slower your Lambda function will
cold-start. Hence, AWS suggests packaging dependencies as a separate
package from the actual Lambda package

Correct answer

If you intend to reuse code in more than one Lambda function, you should consider
creating a Lambda Layer for the reusable code

By default, Lambda functions always operate from an AWS-owned VPC and hence
have access to any public internet address or public AWS APIs. Once a Lambda
function is VPC-enabled, it will need a route through a NAT gateway in a public
subnet to access public resources

Since Lambda functions can scale extremely quickly, it's a good idea to deploy a
CloudWatch Alarm that notifies your team when function metrics such as
ConcurrentExecutions or Invocations exceeds the expected threshold
Q.46 A company has developed a popular photo-sharing website using a *0/1
serverless pattern on the AWS Cloud using API Gateway and AWS
Lambda. The backend uses an RDS PostgreSQL database. The website is
experiencing high read traffic and the Lambda functions are putting an
increased read load on the RDS database.

The architecture team is planning to increase the read throughput of the database, without changing the application's core logic. As a Solutions Architect, what do you recommend?

Use Amazon ElastiCache

Use Amazon RDS Multi-AZ feature

Use Amazon DynamoDB

Use Amazon RDS Read Replicas

Correct answer

Use Amazon RDS Read Replicas


Q.47 An Internet-of-Things (IoT) company would like to have a streaming *0/1
system that performs real-time analytics on the ingested IoT data. Once
the analytics is done, the company would like to send notifications back
to the mobile applications of the IoT device owners.

As a solutions architect, which of the following AWS technologies would you recommend to send these notifications to the mobile applications?

Amazon Kinesis with Simple Queue Service (SQS)

Amazon Kinesis with Amazon Simple Notification Service (SNS)

Amazon Kinesis with Simple Email Service (Amazon SES)

Amazon Simple Queue Service (SQS) with Amazon Simple Notification Service
(SNS)

Correct answer

Amazon Kinesis with Amazon Simple Notification Service (SNS)


Q.48 A company has noticed that its EBS storage volume (io1) accounts *0/1
for 90% of the cost and the remaining 10% cost can be attributed to the
EC2 instance. The CloudWatch metrics report that both the EC2 instance
and the EBS volume are under-utilized. The CloudWatch metrics also
show that the EBS volume has occasional I/O bursts. The entire
infrastructure is managed by AWS CloudFormation.

As a Solutions Architect, what do you propose to reduce the costs?

Convert the Amazon EC2 instance EBS volume to gp2

Change the Amazon EC2 instance type to something much smaller

Don't use a CloudFormation template to create the database as the CloudFormation service incurs greater service charges

Keep the EBS volume to io1 and reduce the IOPS

Correct answer

Convert the Amazon EC2 instance EBS volume to gp2


Q.49 A niche social media application allows users to connect with *1/1
sports athletes. As a solutions architect, you've designed the architecture
of the application to be fully serverless using API Gateway & AWS
Lambda. The backend uses a DynamoDB table. Some of the star athletes
using the application are highly popular, and therefore DynamoDB has
increased the RCUs. Still, the application is experiencing a hot partition
problem.

What can you do to improve the performance of DynamoDB and eliminate the hot partition problem without a lot of application refactoring?

Use Amazon ElastiCache

Use DynamoDB DAX

Use DynamoDB Streams

Use DynamoDB Global Tables

Q.50 A leading e-commerce company runs its IT infrastructure on AWS *1/1
Cloud. The company has a batch job running at 7 am daily on an RDS
database. It processes shipping orders for the past day, and usually gets
around 2000 records that need to be processed sequentially in a batch
job via a shell script. The processing of each record takes about 3
seconds.

What platform do you recommend to run this batch job?

Amazon EC2

AWS Lambda

AWS Glue

Amazon Kinesis Data Streams


Q.51 The development team at a social media company wants to handle *1/1
some complicated queries such as "What are the number of likes on the
videos that have been posted by friends of a user A?".

As a solutions architect, which of the following AWS database services would you suggest as the BEST fit to handle such use cases?

Amazon ElasticSearch

Amazon Aurora

Amazon Neptune

Amazon Redshift
Q.52 The engineering team at a social media company has recently *0/1
migrated to AWS Cloud from its on-premises data center. The team is
evaluating CloudFront to be used as a CDN for its flagship application.
The team has hired you as an AWS Certified Solutions Architect
Associate to advise on CloudFront capabilities on routing, security, and
high availability.

Which of the following would you identify as correct regarding CloudFront? (Select three)

Use field level encryption in CloudFront to protect sensitive data for specific
content

Use KMS encryption in CloudFront to protect sensitive data for specific content

Use geo restriction to configure CloudFront for high-availability and failover

Use an origin group with primary and secondary origins to configure CloudFront
for high-availability and failover

CloudFront can route to multiple origins based on the content type

CloudFront can route to multiple origins based on the price class

Correct answer

Use field level encryption in CloudFront to protect sensitive data for specific
content

Use an origin group with primary and secondary origins to configure CloudFront for
high-availability and failover

CloudFront can route to multiple origins based on the content type


Q.53 A company's business logic is built on several microservices that *1/1
are running in the on-premises data center. They currently communicate
using a message broker that supports the MQTT protocol. The company
is looking at migrating these applications and the message broker to
AWS Cloud without changing the application logic.

Which technology allows you to get a managed message broker that supports the MQTT protocol?

Amazon Kinesis Data Streams

Amazon Simple Notification Service (SNS)

Amazon MQ

Amazon Simple Queue Service (SQS)

Q.54 A healthcare company is evaluating storage options on Amazon S3 *1/1
to meet regulatory guidelines. The data should be stored in such a way on
S3 that it cannot be deleted until the regulatory time period has expired.

As a solutions architect, which of the following would you recommend for the given requirement?

Use S3 Object Lock

Activate MFA delete on the S3 bucket

Use S3 cross-Region Replication

Use S3 Glacier Vault Lock


Q.55 A developer in your company has set up a classic 2 tier architecture *0/1
consisting of an Application Load Balancer and an Auto Scaling group
(ASG) managing a fleet of EC2 instances. The ALB is deployed in a
subnet of size 10.0.1.0/24 and the ASG is deployed in a subnet of
size 10.0.4.0/22.

As a solutions architect, you would like to adhere to the security pillar of the well-architected framework. How do you configure the security group of the EC2 instances to only allow traffic coming from the ALB?

Add a rule to authorize the security group of the ASG

Add a rule to authorize the security group of the ALB

Add a rule to authorize the CIDR 10.0.4.0/22

Add a rule to authorize the CIDR 10.0.1.0/24

Correct answer

Add a rule to authorize the security group of the ALB
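A minimal boto3 sketch of the chosen approach, where the EC2 instances' security group allows HTTP only from the ALB's security group; both group IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0aaaaaaaaaaaaaaaa",  # placeholder: the EC2 instances' security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        # Reference the ALB's security group instead of a CIDR range.
        "UserIdGroupPairs": [{"GroupId": "sg-0bbbbbbbbbbbbbbbb"}],  # placeholder ALB SG
    }],
)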

Q.56 A ride-sharing company wants to use an Amazon DynamoDB table *1/1
for data storage. The table will not be used during the night hours
whereas the read and write traffic will often be unpredictable during day
hours. When traffic spikes occur they will happen very quickly.

Which of the following will you recommend as the best-fit solution?

Set up a DynamoDB table in the on-demand capacity mode

Set up a DynamoDB table with a global secondary index

Set up a DynamoDB global table in the provisioned capacity mode

Set up a DynamoDB table in the provisioned capacity mode with auto-scaling enabled
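A hedged boto3 sketch of creating such a table in on-demand capacity mode; the table name and key schema are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="rides",  # placeholder table name
    AttributeDefinitions=[{"AttributeName": "ride_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "ride_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity mode
)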
Q.57 A CRM web application was written as a monolith in PHP and is *0/1
facing scaling issues because of performance bottlenecks. The CTO
wants to re-engineer towards microservices architecture and expose their
application from the same load balancer, linked to different target groups
with different URLs: checkout.mycorp.com, www.mycorp.com,
yourcorp.com/profile and yourcorp.com/search. The CTO would like to
expose all these URLs as HTTPS endpoints for security purposes.

As a solutions architect, which of the following would you recommend as a solution that requires MINIMAL configuration effort?

Use SSL certificates with SNI

Use a wildcard SSL certificate

Use an HTTP to HTTPS redirect

Change the ELB SSL Security Policy

Correct answer

Use SSL certificates with SNI


Q.58 A music-sharing company uses a Network Load Balancer to direct *1/1
traffic to 5 EC2 instances managed by an Auto Scaling group. When a
very popular song is released, the Auto Scaling Group scales to 100
instances and the company incurs high network and compute fees.

The company wants a solution to reduce the costs without changing any
of the application code. What do you recommend?

Move the songs to Glacier

Leverage AWS Storage Gateway

Use a CloudFront distribution

Move the songs to S3


Q.59 A Big Data analytics company writes data and log files in Amazon *0/1
S3 buckets. The company now wants to stream the existing data files as
well as any ongoing file updates from Amazon S3 to Amazon Kinesis
Data Streams.

As a Solutions Architect, which of the following would you suggest as the fastest possible way of building a solution for this requirement?

Configure EventBridge events for the bucket actions on Amazon S3. An AWS
Lambda function can then be triggered from the EventBridge event that will
send the necessary data to Amazon Kinesis Data Streams

Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams

Amazon S3 bucket actions can be directly configured to write data into Amazon
Simple Notification Service (SNS). SNS can then be used to send the updates to
Amazon Kinesis Data Streams

Leverage S3 event notification to trigger a Lambda function for the file create event.
The Lambda function will then send the necessary data to Amazon Kinesis Data
Streams

Correct answer

Leverage AWS Database Migration Service (AWS DMS) as a bridge between Amazon S3 and Amazon Kinesis Data Streams
Q.60 An e-commerce company tracks user clicks on its flagship website *1/1
and performs analytics to provide near-real-time product
recommendations. An EC2 instance receives data from the website and
sends the data to an Aurora DB instance. Another EC2 instance
continuously checks the changes in the database and executes SQL
queries to provide recommendations. Now, the company wants a
redesign to decouple and scale the infrastructure. The solution must
ensure that data can be analyzed in real-time without any data loss even
when the company sees huge traffic spikes.

What would you recommend as an AWS Certified Solutions Architect Associate?

Leverage Amazon Kinesis Data Streams to capture the data from the website
and feed it into Amazon Kinesis Data Analytics which can query the data in real
time. Lastly, the analyzed feed is output into Kinesis Data Firehose to persist the
data on Amazon S3

Leverage Amazon SQS to capture the data from the website. Configure a fleet of
EC2 instances under an Auto scaling group to process messages from the SQS
queue and trigger the scaling policy based on the number of pending messages in
the queue. Perform real-time analytics using a third-party library on the EC2
instances

Leverage Amazon Kinesis Data Streams to capture the data from the website and
feed it into Kinesis Data Firehose to persist the data on Amazon S3. Lastly, use
Amazon Athena to analyze the data in real time

Leverage Amazon Kinesis Data Streams to capture the data from the website and
feed it into Amazon QuickSight which can query the data in real time. Lastly, the
analyzed feed is output into Kinesis Data Firehose to persist the data on Amazon
S3
Q.61 A company has built a serverless application using API Gateway *1/1
and AWS Lambda. The backend is leveraging an RDS Aurora MySQL
database. The web application was initially launched in the Americas and
the company would now like to expand it to Europe, where a read-only
version will be available to improve latency. You plan on deploying the API
Gateway and AWS Lambda using CloudFormation, but would like to have
a read-only copy of your data in Europe as well.

As a Solutions Architect, what do you recommend?

Use a DynamoDB Streams

Create a Lambda function to periodically back up and restore the Aurora database
in another region

Use Aurora Read Replicas

Use Aurora Multi-AZ

Q.62 You have developed a new REST API leveraging the API Gateway, *1/1
AWS Lambda and Aurora database services. Most of the workload on the
website is read-heavy. The data rarely changes and it is acceptable to
serve users outdated data for about 24 hours. Recently, the website has
been experiencing high load and the costs incurred on the Aurora
database have been very high.

How can you easily reduce the costs while improving performance, with
minimal changes?

Enable AWS Lambda In Memory Caching

Switch to using an Application Load Balancer

Add Aurora Read Replicas

Enable API Gateway Caching
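A minimal sketch of enabling stage-level caching with boto3; the API ID, stage name and cache size are placeholders, and per-method TTLs can be tuned separately.

import boto3

apigateway = boto3.client("apigateway")

apigateway.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="prod",        # placeholder stage name
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # cache size in GB
    ],
)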


Q.63 A company wants to adopt a hybrid cloud infrastructure where it *1/1
uses some AWS services such as S3 alongside its on-premises data
center. The company wants a dedicated private connection between the
on-premise data center and AWS. In case of failures though, the company
needs to guarantee uptime and is willing to use the public internet for an
encrypted connection.

What do you recommend? (Select two)

Use Direct Connect as a primary connection

Use Site to Site VPN as a primary connection

Use Site to Site VPN as a backup connection

Use Egress Only Internet Gateway as a backup connection

Use Direct Connect as a backup connection

Q.64 Your company runs a web portal to match developers to clients who *1/1
need their help. As a solutions architect, you've designed the architecture
of the website to be fully serverless with API Gateway & AWS Lambda.
The backend uses a DynamoDB table. You would like to automatically
congratulate your developers on important milestones, such as - their
first paid contract. All the contracts are stored in DynamoDB.

Which DynamoDB feature can you use to implement this functionality such that there is LEAST delay in sending automatic notifications?

DynamoDB DAX + API Gateway

EventBridge events + Lambda

DynamoDB Streams + Lambda

Amazon SQS + Lambda


Q.65 The engineering team at a company is running batch workloads on *0/1
AWS Cloud. The team has embedded RDS database connection strings
within each web server hosting the flagship application. After failing a
security audit, the team is looking at a different approach to store the
database secrets securely and automatically rotate the database
credentials.

Which of the following solutions would you recommend to meet this requirement?

KMS

SSM Parameter Store

Secrets Manager

Systems Manager

Correct answer

Secrets Manager
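For illustration, a minimal boto3 sketch of how an application could fetch the rotated database credentials at runtime; the secret name is a placeholder.

import boto3
import json

secrets = boto3.client("secretsmanager")

response = secrets.get_secret_value(SecretId="prod/app/db-credentials")  # placeholder name
credentials = json.loads(response["SecretString"])
# The secret typically contains keys such as "username" and "password".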
